VSEARCH and DADA2

Hi, I am processing MiSeq paired-end reads (2 × 251 bp) targeting the V3-V4 region of the 16S rRNA gene. I have run Cutadapt to trim out the adapter and primer sequences. This is the quality plot:

This is the command line:
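(The original command line was attached as an image and isn't reproduced here. Purely as an illustrative sketch, a primer-trimming call for the V3-V4 region might look like the one below; the artifact names and the 341F/805R primer sequences are assumptions, not taken from the post.)

```shell
# Hypothetical example -- substitute your own artifact names and primers.
# Trims V3-V4 primers (341F/805R shown as placeholders) off paired-end
# reads and discards pairs in which the forward primer was not found.
qiime cutadapt trim-paired \
  --i-demultiplexed-sequences demux.qza \
  --p-front-f CCTACGGGNGGCWGCAG \
  --p-front-r GACTACHVGGGTATCTAATCC \
  --p-discard-untrimmed \
  --o-trimmed-sequences trimmed.qza
```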

From this step onwards, I am confused about how to proceed. I have heard that merging the reads with vsearch and then running DADA2 denoise-single (single-end, because the reads were already merged by vsearch) helps retain a lot more sequence counts.
On the other hand, merging with DADA2 denoise-paired, without going through vsearch first, supposedly causes a huge loss of sequence reads.

Therefore, from this point onwards (after Cutadapt), I am unsure whether I should proceed with vsearch merging followed by DADA2 denoise-single, or just DADA2 denoise-paired.
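For concreteness, the two candidate pipelines look roughly like this in QIIME 2. All artifact names and truncation lengths below are placeholder assumptions, and option names can differ slightly between QIIME 2 releases (in older releases merge-pairs was called join-pairs):

```shell
# Option A (hypothetical file names): merge pairs with VSEARCH, then
# denoise the merged reads as if they were single-end.
qiime vsearch merge-pairs \
  --i-demultiplexed-seqs trimmed.qza \
  --o-merged-sequences merged.qza
qiime dada2 denoise-single \
  --i-demultiplexed-seqs merged.qza \
  --p-trunc-len 0 \
  --o-table table-a.qza \
  --o-representative-sequences rep-seqs-a.qza \
  --o-denoising-stats stats-a.qza

# Option B: let DADA2 denoise each read direction separately and merge
# internally (truncation lengths here are illustrative, not recommendations).
qiime dada2 denoise-paired \
  --i-demultiplexed-seqs trimmed.qza \
  --p-trunc-len-f 230 \
  --p-trunc-len-r 200 \
  --o-table table-b.qza \
  --o-representative-sequences rep-seqs-b.qza \
  --o-denoising-stats stats-b.qza
```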

Any suggestion will be much appreciated.


I read about the same approach and tried both options on different datasets. Each time I got better results with DADA2 paired than with VSEARCH followed by DADA2 single.
But maybe you should not just trust my outputs; try both as well to make sure that you use the best approach for your data.


Hi @timanix ,
Thanks for the information!
When you said better results, which metric are you referring to (the sequence counts, etc.)? Did you lose a lot of reads during the DADA2 step?
So, you performed Cutadapt first and then DADA2 paired to get the better results you mentioned, right?
Thank you!

Yeah, I mean a larger number of output reads.

I tried different trimming parameters in DADA2, and also tried merging reads with VSEARCH and then denoising them in Deblur or DADA2 as single reads. With some trimming parameters I got fewer output reads, with others more. So it is important to try several parameter settings.
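Trying several parameter settings can be sketched as a small shell loop (the truncation values below are hypothetical; each run writes its own denoising-stats artifact so read retention can be compared afterwards):

```shell
# Hypothetical sweep over forward/reverse truncation lengths.
for trunc in "240 200" "230 190" "220 180"; do
  set -- $trunc
  qiime dada2 denoise-paired \
    --i-demultiplexed-seqs trimmed.qza \
    --p-trunc-len-f "$1" \
    --p-trunc-len-r "$2" \
    --o-table "table-$1-$2.qza" \
    --o-representative-sequences "rep-seqs-$1-$2.qza" \
    --o-denoising-stats "stats-$1-$2.qza"
done
```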

Yeah, I got better results with DADA2 paired after Cutadapt.


According to the DADA2 paper,

fastqFilter() implements filtering of fastq files that largely recapitulates the usearch fastq_filter command. In short, this function trims sequences to a specified length, removes sequences shorter than that length, and filters based on the number of ambiguous bases, a minimum quality score, and the expected errors in a read. [18] fastqPairedFilter() implements the same trimming and filtering, but applies it to paired reads jointly, only outputting reads where both the forward and reverse reads pass the filter.
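The "expected errors" criterion in that quote has a simple definition: each Phred score Q encodes an error probability of 10^(-Q/10), and the expected number of errors in a read is the sum of those probabilities over its bases. A small Python sketch (the maxEE cutoff of 2 is just a common DADA2 default, used here as an assumption):

```python
def expected_errors(quality_scores):
    """Sum of per-base error probabilities implied by Phred scores:
    P(error) = 10 ** (-Q / 10)."""
    return sum(10 ** (-q / 10) for q in quality_scores)

# A read of 10 bases all at Q20 carries 10 * 0.01 = ~0.1 expected errors.
ee = expected_errors([20] * 10)

# Filtering keeps reads whose expected errors fall under a cutoff
# (maxEE = 2 is a common DADA2 default).
passes_filter = ee <= 2.0
```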

You’re probably getting similar filtering behavior from vsearch and from DADA2, but qiime dada2 denoise-paired handles denoising and joining in one QIIME 2 command, and uses default behaviors that are internally consistent and “make sense” for high-quality ASVs.

I’m not an expert on DADA2, @suetli19, but I thought I’d throw in my two cents. “Join them separately and then have DADA2 pretend they’re single-end reads” feels a little hacky to me. I’m not aware of any concrete reason it’s a bad idea; I just don’t like the semantics of it. Personally, I’d probably just go with DADA2, but as @timanix sagely suggested, you can always experiment with a couple of approaches and see what works for your data.


Ok, I circled up with another mod on this, @suetli19, and this approach is a bad idea. :laughing:

Each base in your paired-end reads gets its own quality score from the sequencer. When a software tool joins two paired ends into one read, it needs to make a choice about what quality score to assign to the positions where the two reads overlap, so quality data gets lost.

As discussed in this topic, assigning new quality scores like this can mess with DADA2’s error model (which uses quality scores for denoising). Your results won’t reflect the actual quality scores assigned to the reads during sequencing, and may be less meaningful because of that.
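To make the problem concrete, here is a deliberately simplified model of what a merger has to do at each overlapping position. This is an illustration of the general idea, not vsearch's exact posterior-quality formula: agreeing bases reinforce each other, so the merged score exceeds either input, while disagreeing bases undermine each other, so the winning base keeps only a discounted score. Either way, the merged score is no longer one the sequencer ever assigned.

```python
def merged_quality(q_fwd: int, q_rev: int, bases_agree: bool, cap: int = 41) -> int:
    """Simplified consensus-quality rule for one overlapping position.
    Illustrative only: real mergers (e.g. vsearch) use a proper
    posterior-probability calculation, but it shows the same two effects.
    """
    if bases_agree:
        # Two independent observations of the same base: evidence adds,
        # so the combined Phred score is (roughly) the sum, capped.
        return min(q_fwd + q_rev, cap)
    # Conflicting calls: keep the higher-quality base, but discount its
    # score by the competing observation.
    return abs(q_fwd - q_rev)

# Agreement at Q30/Q25 produces a capped Q41, above anything the
# sequencer emits for a single base; disagreement at the same scores
# collapses to a low Q5. Neither value is a real sequencer score.
q_agree = merged_quality(30, 25, bases_agree=True)   # 41 (capped)
q_clash = merged_quality(30, 25, bases_agree=False)  # 5
```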

Deblur doesn’t care about quality scores. Feel free to experiment with pre-joining and denoising with deblur, but I’d recommend you avoid running pre-joined reads through DADA2.
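If you do experiment with pre-joining plus Deblur, the call would look something like this (artifact names are hypothetical, and --p-trim-length should be chosen from your own merged-read length distribution rather than the placeholder used here):

```shell
# Hypothetical example: denoise pre-merged reads with Deblur.
qiime deblur denoise-16S \
  --i-demultiplexed-seqs merged.qza \
  --p-trim-length 400 \
  --p-sample-stats \
  --o-table deblur-table.qza \
  --o-representative-sequences deblur-rep-seqs.qza \
  --o-stats deblur-stats.qza
```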


Hi @ChrisKeefe ,

Your explanation is really clear, and yeah, it makes so much sense to me now. I’d probably just go with DADA2 denoise-paired since it’s much better.

Thanks a lot!

Suet Li


This topic was automatically closed 31 days after the last reply. New replies are no longer allowed.