But it doesn't use all 24 CPUs, and it is too slow for a big dataset. I can allocate up to 48 CPUs.
DADA2 has the same problem: it is an R package, and R doesn't generally support parallel computing.
I don't know what to do.
Looking forward to your help and reply!
This is typical of most tools that can use multiple cores / CPUs / threads: simply requesting more cores does not necessarily mean the software will run faster, or scale perfectly with an increasing number of cores. There is a point where adding more cores brings no benefit, because not all tools, or the tasks they perform, are perfectly parallelizable. Often a task run with 24 cores will actually be slower than the same task run with 12, as there is not enough work to spread around, and the cores end up spending more time communicating with each other than working on the task itself.
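For intuition, Amdahl's law describes this ceiling. If a fraction p of a task can be parallelized, the best possible speedup on n cores is

S(n) = 1 / ((1 − p) + p / n)

As a rough illustration (p = 0.8 here is a made-up number, not a measurement of any particular tool): S(12) ≈ 3.75× but S(24) ≈ 4.29×, so doubling from 12 to 24 cores buys only about 14% more speed, and that is before counting any communication overhead between the cores.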
Some tools are aware of this issue and will automatically scale to an appropriate number of cores regardless of how many you request. This occurs with vsearch, DADA2, and other tools. In fact, a perfect example of this can be observed with the phylogeny iqtree action. For example, run the following command with --p-n-cores set to auto instead of a numeric value like 4:
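Something along these lines (a sketch; the input and output filenames are placeholders for your own artifacts):

```
qiime phylogeny iqtree \
  --i-alignment masked-aligned-rep-seqs.qza \
  --p-n-cores auto \
  --o-tree iqt-tree.qza \
  --verbose
```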
This command will then keep adding cores until there is no longer a significant improvement in run time (you can see this occurring in the on-screen output if you use the --verbose flag). That is, it avoids the issue of using too many CPUs, where you would otherwise observe diminishing returns or an outright slowdown of the task. Does this make sense?