I have four batches of 16S sequencing data sets for one project.
First, I used cutadapt demux-paired and cutadapt trim-paired to demultiplex and trim the sequences, and then used DADA2 to denoise them. Although most of the sequences were removed, I thought the remaining reads would be enough for downstream analysis.
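For reference, the per-batch trimming and denoising described above might look like this. The primer sequences, truncation lengths, and file names here are placeholders, not values from this thread — substitute your own:

```shell
# Trim primers from one batch of demultiplexed paired-end reads
# (primer sequences below are placeholders; use your own primers).
qiime cutadapt trim-paired \
  --i-demultiplexed-sequences demux-batch1.qza \
  --p-front-f GTGYCAGCMGCCGCGGTAA \
  --p-front-r GGACTACNVGGGTWTCTAAT \
  --o-trimmed-sequences trimmed-batch1.qza

# Denoise that batch with DADA2 (truncation lengths are placeholders;
# choose them from your own quality plots). Repeat for each batch.
qiime dada2 denoise-paired \
  --i-demultiplexed-seqs trimmed-batch1.qza \
  --p-trunc-len-f 240 \
  --p-trunc-len-r 200 \
  --o-table table-batch1.qza \
  --o-representative-sequences rep-seqs-batch1.qza \
  --o-denoising-stats stats-batch1.qza
```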
Second, I used feature-table merge to merge the feature tables produced by DADA2, and feature-table merge-seqs to merge the rep-seqs produced by DADA2.
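A sketch of that merge step, assuming a recent QIIME 2 release where both actions accept multiple input artifacts (file names are placeholders for the four batches):

```shell
# Merge the four per-batch feature tables into one
qiime feature-table merge \
  --i-tables table-batch1.qza table-batch2.qza table-batch3.qza table-batch4.qza \
  --o-merged-table merged-table.qza

# Merge the four per-batch representative-sequence sets into one
qiime feature-table merge-seqs \
  --i-data rep-seqs-batch1.qza rep-seqs-batch2.qza rep-seqs-batch3.qza rep-seqs-batch4.qza \
  --o-merged-data merged-rep-seqs.qza
```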
Is it because my rep-seqs file contains too many sequences? I suspect the merge-seqs step produced a file with too many sequences, but merge-seqs has no parameter to reduce the number of sequences.
How can I fix this error?
I just want to analysis these data sets together, like what we have always done in QIIME1.
Thanks for putting together such a clear and detailed question, @Moon!
Good sleuthing! This is, indeed, an out-of-memory error. Here's an experiment you can try: when a process is parallelized, each thread can contribute a significant amount of memory usage.
If you're running out of memory, try decreasing the number of threads to 4. If you still get OOM errors, try dropping the thread count even further. This will likely mean a longer run time, but that's probably OK if it runs!
If this doesn't work for you, you could experiment with the --p-parttree option. This algorithm estimates the guide tree rather than fully calculating it, and is designed for use with large data sets. I don't know whether it will actually help with memory usage, but it may be worth trying.
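The two experiments above could be sketched like this, assuming the alignment is being run with q2-alignment's mafft action (file names are placeholders):

```shell
# Experiment 1: re-run the alignment with fewer threads to lower peak memory
qiime alignment mafft \
  --i-sequences merged-rep-seqs.qza \
  --p-n-threads 4 \
  --o-alignment aligned-rep-seqs.qza

# Experiment 2: if that still runs out of memory, enable MAFFT's PartTree
# guide-tree approximation, intended for very large sequence sets
qiime alignment mafft \
  --i-sequences merged-rep-seqs.qza \
  --p-n-threads 1 \
  --p-parttree \
  --o-alignment aligned-rep-seqs.qza
```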
Alternatively, you could try another alignment tool. I've had a good experience with fragment-insertion, but only you know whether that approach is right for your study.
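A minimal fragment-insertion sketch, assuming the SEPP reference artifact has been downloaded from the QIIME 2 data resources page (file names and the thread count are placeholders):

```shell
# Place the merged rep-seqs into a reference tree with SEPP instead of
# building a de novo alignment and tree
qiime fragment-insertion sepp \
  --i-representative-sequences merged-rep-seqs.qza \
  --i-reference-database sepp-refs-gg-13-8.qza \
  --p-threads 4 \
  --o-tree insertion-tree.qza \
  --o-placements insertion-placements.qza
```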
@Moon, your cutadapt and DADA2 questions were unrelated to your alignment question, and have been moved to separate topics. Please give unrelated questions their own topics - it helps keep things tidy.
Let us know how your experiments with parttree and fragment-insertion go.