@colinbrislawn thank you so much for your quick response!
Unfortunately, the 8 samples I lose at the DADA2 filtering step are important for my study. They all have over 30k reads but drop to fewer than 1k after the DADA2 filtering step.
There's no biological reason: the failed samples are random, from different treatments, while their replicates are OK. But with those failing I end up with only one replicate per treatment!
I tried --p-max-ee 7 with DADA2 but got the same results. Does this mean it is not a sequencing-quality issue? If so, then what is it?
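For reference, this is roughly the command I ran (a sketch; the truncation lengths and file names are placeholders, not my exact values, and on recent QIIME 2 versions the expected-error flag is split into --p-max-ee-f/--p-max-ee-r):

```
qiime dada2 denoise-paired \
  --i-demultiplexed-seqs demux-paired.qza \
  --p-trunc-len-f 250 \
  --p-trunc-len-r 200 \
  --p-max-ee-f 7 \
  --p-max-ee-r 7 \
  --o-table dada2-table.qza \
  --o-representative-sequences dada2-rep-seqs.qza \
  --o-denoising-stats dada2-stats.qza
```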
Using only the forward reads runs OK, but then my taxonomic resolution gets worse, right?
I tried Deblur instead of DADA2, following this tutorial. The joining itself works, and nearly all reads are retained after joining, but I lose about 90% of them after denoising with Deblur.
When I tried --p-trim-length 385, since that was the minimum read length at the subsampling step, I got much better results, but I'm still not sure that is the best choice to move forward with! deblur-table_385.qzv (465.2 KB)
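For context, this is roughly the workflow I followed (a sketch with placeholder file names; on older QIIME 2 versions the joining and quality-filter steps are named `join-pairs` and `q-score-joined` instead):

```
# Join the paired reads, quality-filter them, then denoise with Deblur.
qiime vsearch merge-pairs \
  --i-demultiplexed-seqs demux-paired.qza \
  --o-merged-sequences demux-joined.qza

qiime quality-filter q-score \
  --i-demux demux-joined.qza \
  --o-filtered-sequences demux-joined-filtered.qza \
  --o-filter-stats demux-joined-filter-stats.qza

qiime deblur denoise-16S \
  --i-demultiplexed-seqs demux-joined-filtered.qza \
  --p-trim-length 385 \
  --p-sample-stats \
  --o-table deblur-table_385.qza \
  --o-representative-sequences deblur-rep-seqs_385.qza \
  --o-stats deblur-stats_385.qza
```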
Yeah, I don't see the data columns either. Perhaps you can try to run that command again and see if the rerun fixes the file?
I'm not very familiar with the deblur plugin, so I will not offer any advice here.
I think trying DADA2 denoise-single for just your forward reads is a good idea.
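Something along these lines should work (a sketch with placeholder file names; pick --p-trunc-len from your quality plots, and note that denoise-single will accept a paired-end demux artifact and use only the forward reads):

```
qiime dada2 denoise-single \
  --i-demultiplexed-seqs demux-paired.qza \
  --p-trunc-len 240 \
  --o-table dada2-single-table.qza \
  --o-representative-sequences dada2-single-rep-seqs.qza \
  --o-denoising-stats dada2-single-stats.qza
```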
The taxonomic resolution may be reduced, but it's not 'bad'. The first amplicon studies using the Illumina MiSeq used <100 bp reads and they got published. Your forward reads alone are more than twice that.