I have generated tables of ASVs with DADA2 and OTUs using open-reference clustering with Vsearch from the same dataset. Does anyone know of a quick way to cross-reference the results of the two to determine clustering similarity?
Thanks for getting back to me - that figure is awesome, something to aspire to!! Do you know of any way to generate visualizations like that?
I am interested in the cluster topology, i.e. how certain features identified by the OTU/ASV algorithms are clustered differently between methods. Basically, I am trying to determine the most biologically meaningful clustering for a set of samples, and I hope to use cross-referencing to show which level of resolution (species, genus, strain, etc.) best distinguishes true sequence variants. I am using DADA2 and Vsearch. DADA2 gives me ASVs to the species level, whereas Vsearch has more customizable steps with more parameters, so it offers more opportunity to fine-tune filtering and quality control, albeit at the cost of using arbitrary thresholds to cluster into OTUs.
Once I have figured out the best clustering for my sample set, I will move into diversity metrics.
Yeah, that’s basically a Sankey diagram turned sideways. There are lots of ways to make these, for example with the R package alluvial or with Google Charts.
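Whatever plotting tool you pick, the input is the same: a table of "flows" counting how many sequences go from each ASV into each OTU. Here is a minimal Python sketch of building that table; the per-sequence assignments are a toy stand-in (the IDs are hypothetical), and in practice you would export them from your DADA2 and Vsearch outputs.

```python
# Sketch of the data behind an alluvial/Sankey plot: tally how sequences
# "flow" from the ASV they were assigned to into the OTU they clustered into.
from collections import Counter

assignments = [          # (ASV id, OTU id) for each input sequence (toy data)
    ("ASV1", "OTU1"), ("ASV1", "OTU1"),
    ("ASV2", "OTU1"),
    ("ASV3", "OTU2"), ("ASV3", "OTU2"),
    ("ASV4", "OTU2"),
]

# Each (asv, otu) pair with its count is one ribbon in the Sankey diagram.
flows = Counter(assignments)
for (asv, otu), n in sorted(flows.items()):
    print(f"{asv} -> {otu}: {n} sequences")
```

When several ASVs flow into the same OTU (as ASV1 and ASV2 do above), the diagram will show ribbons merging, which is exactly the "clustering granularity" comparison you described.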
I’m still not entirely sure I understand your question… It sounds like you are trying to compare the features created by DADA2 and Vsearch, but I’m not sure how you would determine which ones are best. That Sankey diagram could help show which features are most similar, which would be a great first step.
I’ve wanted to make a graph like this for a long time, so if you do compare q2-dada2 and q2-vsearch in this way, I would love to see the result!
Not really. I think you are on a great path and asking all the right questions!
I did want to clear up this common misconception, if you are interested.
DADA2 gives you ASVs at the sequence-variant level. Hopefully, this means it can distinguish even a single base-pair difference between two real amplicons!
This has nothing to do with taxonomy.
Vsearch, on the other hand, gives you OTUs defined by percent identity to a cluster centroid, commonly 97% or 99% these days.
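A toy example makes the distinction concrete: two 250 bp amplicons that differ at exactly one position are still well above a 97% identity threshold, so an OTU method merges them while DADA2 can keep them separate. This is a sketch with made-up sequences, not your real data.

```python
# Sketch: why a 97% OTU threshold merges what ASVs can keep apart.
# Two toy amplicons of length 250 that differ at exactly one position.
seq_a = "A" * 250
seq_b = "A" * 120 + "G" + "A" * 129

identity = sum(a == b for a, b in zip(seq_a, seq_b)) / len(seq_a)
print(f"{identity:.1%}")   # 99.6% identical

# One real base difference -> two distinct ASVs for DADA2,
# but at a 97% radius both sequences fall inside the same OTU.
same_otu_at_97 = identity >= 0.97
print(same_otu_at_97)
```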
Thanks a lot for clearing all that up for me - very useful! I would like to take this further, though…
I am experimenting with different filtering parameters to try to get the most biologically meaningful representation of the features in each sample; however, I am struggling to decide which variables and values produce the best feature library. With DADA2 I get the following outputs:
The associated commands are in the attached file (filtering commands.txt - please ignore the file names; they are just different iterations of filtering).
The taxonomic classifications produced by the two command workflows (DADA2 vs Vsearch) are dramatically different (see screenshots and files) and seem at odds with the filtering statistics: DADA2 retains a huge amount MORE diversity (as in individual ASVs), whereas the Vsearch files contain only a relatively small number of OTU classifications considering how many sequences are retained through filtering. How can this be?

Any suggestions on how to investigate what is driving this difference? Is it the filtering parameters or the clustering? Could a difference of this magnitude result purely from the difference in clustering algorithm (sequence variants with an error-correction model vs a 98% similarity threshold)? Or do the Vsearch files simply retain a huge number of replicates/error repeats or noisy sequences that are all grouped under the same taxonomic affiliation/consensus sequence?

In that case it would look like the samples contain lots of trash, and DADA2 is simply removing the noise and retaining sequence diversity correctly relative to Vsearch (which would suggest the Vsearch filtering is done under the wrong parameters). But the initial quality statistics show the data are of good quality, and without examining it in detail I do not know how to answer these questions or figure out what is happening. Do you have any suggestions on how to investigate this anomaly further, or how I should go about addressing the problem?
Additionally, I ran the DADA2 ASV file through the Vsearch open-reference clustering command to cross-reference clustering granularity, and I get a reduction in feature count from 4,944 to 4,249 while retaining feature abundance. I assume this is due to the different strategies for dealing with sequences within 2% similarity (using a 98% threshold), but does this help clarify what is happening with the filtering? This seems like quite a large reduction in ASVs - is this magnitude of reduction normal (i.e. would we expect >700/4,944 sequences to be within 2% similarity in samples like ours/microbiome samples)? Does this indicate that the different filtering parameters, rather than the clustering, are generating the difference in taxonomies? filtering commands.txt (5.7 KB)
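To put the numbers from the re-clustering experiment in context, the collapse can be expressed as a fraction of the original ASV set:

```python
# Sketch: how much the 98% re-clustering collapsed the ASV set.
n_asvs, n_otus = 4944, 4249     # feature counts before/after, from the post
collapsed = n_asvs - n_otus
print(collapsed, f"{collapsed / n_asvs:.1%}")   # 695 features, ~14.1%
```

So roughly one in seven ASVs sits within 2% of another feature, which is the quantity to compare against expectations for your sample type.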
Thank you for your detailed post. There’s a lot to discuss here, and I know just where to start!
Yes! You need a positive control.
The core idea is to use an artificial sample with a known mix of microbes. For example, in this paper, they simulate microbial samples so they know the exact composition of each sample.
Then they analyse these samples in different ways, and just like you, they observed that different analysis methods produced slightly different results. But importantly, because they know the true composition of their samples, they can find the analysis method that most closely matches the ground truth.
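The scoring step is simple once you have a known composition. Here is a minimal Python sketch; the taxa and abundances are entirely hypothetical, and the error metric (total absolute deviation) is just one simple choice among many (Bray-Curtis, etc.).

```python
# Sketch: scoring two pipelines against a mock community with known composition.
expected = {"Bacillus": 0.25, "Escherichia": 0.25,
            "Listeria": 0.25, "Staphylococcus": 0.25}

# Hypothetical relative abundances recovered by each workflow.
observed_dada2   = {"Bacillus": 0.24, "Escherichia": 0.27,
                    "Listeria": 0.23, "Staphylococcus": 0.26}
observed_vsearch = {"Bacillus": 0.40, "Escherichia": 0.30,
                    "Listeria": 0.20, "Staphylococcus": 0.10}

def l1_error(obs, exp):
    """Total absolute deviation from the known composition (0 = perfect)."""
    return sum(abs(obs[taxon] - exp[taxon]) for taxon in exp)

print(l1_error(observed_dada2, expected))
print(l1_error(observed_vsearch, expected))
```

Whichever pipeline (and whichever filtering parameters) gives the smaller error is the one that best recovers the ground truth.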
Each of the questions you raise here could be answered by using a mock community. Heck, you could even use a mock community made by someone else!
I think running this comparison on mock data would also be helpful because it could help us untangle which steps are causing the changes in the final results. For example, is the difference introduced during filtering, during denoising/clustering, or during taxonomy assignment? Once we know the right answer, we can check our work every step of the way.
If you have specific questions, I can try to answer them too. But the only way I can think of to address all the questions raised here is to get some positive controls!