I'm working with targeted amplicon data (~460 bp) for 18S, sequenced using a NextSeq 300 bp paired-end kit. However, I'm encountering significant read loss during the DADA2 denoising step.
Issue:
Almost all reads are lost during the denoising step.
PolyC and polyG tails are present in the majority of the reads.
Disabling truncation for both forward and reverse reads slightly improves the number of filtered reads, but all of these are subsequently lost at the chimera removal step.
Steps Taken:
I have tried running the DADA2 pipeline without truncation for both forward and reverse reads (a sketch of that command follows below).
Despite this, the issue persists with most reads being lost at the chimera removal step.
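For context, the no-truncation run looked roughly like this (a sketch; the artifact names are placeholders for my actual files):

```
# Setting --p-trunc-len-f/-r to 0 disables truncation of the
# forward/reverse reads, respectively.
qiime dada2 denoise-paired \
  --i-demultiplexed-seqs demux-trimmed.qza \
  --p-trunc-len-f 0 \
  --p-trunc-len-r 0 \
  --o-table table.qza \
  --o-representative-sequences rep-seqs.qza \
  --o-denoising-stats denoising-stats.qza
```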
Did you trim your adapters/primers from your reads before denoising? That should chop off anything preceding the primer/adapter sequence, and DADA2 requires that your reads contain only biological sequence, i.e. no adapters/primers.
I would use cutadapt (docs for cutadapt are here) like so for 16S data:
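Something along these lines (a sketch using the common EMP 515F/806R 16S primers; swap in your actual primer sequences and file names):

```
# --p-discard-untrimmed drops read pairs in which the primers were
# not found, so only primer-bearing biological reads reach DADA2.
qiime cutadapt trim-paired \
  --i-demultiplexed-sequences demux.qza \
  --p-front-f GTGYCAGCMGCCGCGGTAA \
  --p-front-r GGACTACNVGGGTWTCTAAT \
  --p-discard-untrimmed \
  --o-trimmed-sequences demux-trimmed.qza
```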
I have two separate sets of 16S and 18S data from the same samples, run at the same Illumina facility, and the results are the same for both, i.e. losing almost 90% of reads, with polyC and polyG sequences at the beginning and end of almost all merged reads.
I also thought that trimming the primers should remove the polyC/polyG, but it seems that is not happening. The output file (out-sub2-18S.txt) shows that primers/adapters have been trimmed from all reads.
Could this be related to sequencing errors introduced by the sequencer itself?
Oh, how strange; I would also have assumed the primer trimming would deal with it. Did you receive the data with the Illumina adapters already removed? I guess if you say this:
The trimming has worked as expected. It's odd to me that those tails sit inside the primer regions.
I think cutadapt outside of QIIME 2 has a specific setting for NextSeq data. The two-colour chemistry is known to produce poly-G tail ends because the absence of signal is called as a G base, so when the chemistry runs out toward the end of a read, the instrument thinks it is seeing G's. Cutadapt also has a poly-A trimmer; both are explained here.
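Standalone, that would look roughly like this (a sketch; file names and the quality cutoff are placeholders):

```
# --nextseq-trim quality-trims read ends while treating the spuriously
# high-confidence G calls from two-colour chemistry as low quality.
# (Recent cutadapt versions also offer --poly-a for poly-A tails.)
cutadapt \
  --nextseq-trim=20 \
  -o trimmed_R1.fastq.gz -p trimmed_R2.fastq.gz \
  input_R1.fastq.gz input_R2.fastq.gz
```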
I have tried cutadapt's --nextseq-trim=20 option.
It has removed all the polyC and polyG tails, but in doing so I am left with just under 0.5% of the total input reads at the final denoising step, primarily because the shortened forward and reverse reads no longer overlap and cannot be joined. So it looks to me like something is wrong with the sequencing platform, and I am now thinking of contacting the facility manager. I used all the positive controls and a few samples for this trial, and all I am left with is 18 sequences of varying lengths (~220 bp to 380 bp) for a region ~450 bp in size.
I am looking for your and other colleagues' opinions on the way forward in this scenario.
I agree that it seems like something went wrong with the sequencing run. Because you're losing a lot of reads at the merging step, if you want to move forward with these sequences you have the option of using the forward reads only (not an ideal outcome, but perhaps better than nothing).
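If it helps, a forward-reads-only run would look roughly like this (a sketch with placeholder names; as far as I know, denoise-single run on a paired-end artifact uses only the forward reads):

```
# Pick --p-trunc-len from the forward-read quality profile
# (0 disables truncation entirely).
qiime dada2 denoise-single \
  --i-demultiplexed-seqs demux-trimmed.qza \
  --p-trunc-len 0 \
  --o-table table-fwd.qza \
  --o-representative-sequences rep-seqs-fwd.qza \
  --o-denoising-stats stats-fwd.qza
```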