Thank you for the reply,
That makes more sense. I may have used the wrong parameters.
My DADA2 denoising step was as follows:
qiime dada2 denoise-paired \
  --i-demultiplexed-seqs $JOB_DIR/demux.qza \
  --o-table $JOB_DIR/table.qza \
  --o-representative-sequences $JOB_DIR/rep-seqs.qza \
  --o-denoising-stats $JOB_DIR/denoising-stats.qza \
  --p-trim-left-f 0 \
  --p-trim-left-r 0 \
  --p-trunc-len-f 0 \
  --p-trunc-len-r 0 \
  --p-n-threads 5
I didn't truncate any sequences because the quality wasn't great and I was worried I might lose too many reads.
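For what it's worth, here is a rough sketch of how one might pick truncation positions from a quality profile instead of leaving them at 0. The per-position mean Phred scores below are made-up illustrative values, not from my actual data (the interactive quality plot in the demux summary would give the real ones), and `pick_trunc_len` and the cutoff of 25 are my own hypothetical choices, not anything built into QIIME 2:

```python
# Hypothetical heuristic: truncate where mean quality first drops below a cutoff.
def pick_trunc_len(mean_quality, min_q=25):
    """Return the read length to keep (position before mean quality first
    falls below min_q); return 0 to mean 'no truncation needed'."""
    for pos, q in enumerate(mean_quality):
        if q < min_q:
            return pos  # truncate just before the first low-quality position
    return 0  # quality never dropped below the cutoff

# Fabricated forward-read profile that decays toward the 3' end.
forward_q = [38] * 200 + [30] * 30 + [22] * 20
print(pick_trunc_len(forward_q))  # 230 with these made-up values
```

The returned value would then go into --p-trunc-len-f (and similarly for the reverse reads).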
I went back to my QIIME 1 steps and found that I used the default parameters for the split_libraries_fastq.py step, which sets "-q" (--phred_quality_threshold, the maximum unacceptable Phred quality score) to 3. Open-reference OTU picking with this data gave me a ridiculously high number of OTUs (~80,000) even with singletons and chimeras removed. I'm thinking -q 3 may have been too lenient compared to the QIIME 2 filters.
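To illustrate why -q 3 is so permissive, here is my rough understanding of the QIIME 1 defaults as a sketch (truncate a read at the first run of consecutive bases at or below the -q threshold, then drop it if too little of the read survives, per --max_bad_run_length and --min_per_read_length_fraction). This is my paraphrase of the documented behavior, not the actual split_libraries_fastq.py code:

```python
# Sketch of QIIME 1-style quality truncation (my interpretation of the defaults).
def q1_style_kept_length(quals, max_bad_q=3, max_bad_run=3, min_frac=0.75):
    """Truncate at the first run of max_bad_run bases with Phred <= max_bad_q;
    return the kept length, or 0 if the kept fraction is below min_frac."""
    run, cut = 0, len(quals)
    for i, q in enumerate(quals):
        if q <= max_bad_q:
            run += 1
            if run == max_bad_run:
                cut = i - max_bad_run + 1  # cut just before the bad run starts
                break
        else:
            run = 0
    return cut if cut / len(quals) >= min_frac else 0

# With -q 3, even a long stretch of quality-5 bases passes untouched.
print(q1_style_kept_length([5] * 100))  # 100: nothing is trimmed at q<=3
```

So almost everything survives at -q 3, which would be consistent with the inflated OTU counts.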
Please let me know if I should change the parameters.
Cheers,