What would you make of a dataset where, running the OTU table through vegan to calculate distances with the defaults (method = "bray", binary = FALSE), you get very different pictures depending on whether you filter on per-sample ASV abundances?
For example:
Data is processed with Cutadapt and DADA2 defaults, then...
Situation-1. Keep all ASVs with per-sample abundance > 1
Situation-2. Require per-sample ASV abundance > 100
Now calculate distances, and run metaMDS...
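For concreteness, a minimal sketch of what I'm running (`asv` is a placeholder name for the samples x ASVs count matrix out of DADA2; everything else is vegan defaults):

```r
library(vegan)

# asv: samples x ASVs count matrix from DADA2 (placeholder name)

# Situation 1: keep everything with per-sample abundance > 1
asv1 <- asv
asv1[asv1 <= 1] <- 0

# Situation 2: require per-sample abundance > 100
asv2 <- asv
asv2[asv2 <= 100] <- 0

# Bray-Curtis with vegan defaults (method = "bray", binary = FALSE)
d1 <- vegdist(asv1)
d2 <- vegdist(asv2)

# NMDS on each (metaMDS defaults to Bray-Curtis)
nmds1 <- metaMDS(asv1)
nmds2 <- metaMDS(asv2)
```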
The distances for (1) all look super close to 0. The NMDS plot looks like a dart board from a professional dart player.
The distances for (2) are varied, and the NMDS plot is scattered as if I were throwing said darts after too many gin and tonics.
My initial suspicion was pervasive low-level contamination: by filtering out the low-abundance data, I'm getting rid of the shared background that's pushing my distances down. But Bray-Curtis incorporates abundance information, so shared low-abundance ASVs shouldn't dominate it, and I'm not convinced that's what's going on.
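As a sanity check on that reasoning, here's a toy example (counts entirely made up): two samples that share a block of low-count ASVs but differ completely in their dominant ASV. If shared low-level contamination were enough to drag Bray-Curtis toward 0, the first distance should be small; instead it stays near 1, because the metric is dominated by the high-count ASVs:

```r
library(vegan)

# Two samples: each has its own dominant ASV (1000 reads),
# plus 50 shared "contaminant" ASVs at 2 reads apiece.
shared <- rep(2, 50)
toy <- rbind(a = c(1000, 0, shared),
             b = c(0, 1000, shared))

vegdist(toy)        # Bray-Curtis including the shared low-count ASVs (~0.91)
vegdist(toy[, 1:2]) # Bray-Curtis after dropping them (1.0)

# The two values barely differ, so low-count shared ASVs alone
# shouldn't collapse an abundance-weighted distance to ~0.
```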
Perhaps it's worth trying a few other distance metrics? Something like Morisita-Horn weights the dominant (by abundance) ASVs more heavily, but short of that, I'm not sure how else to explain what I'm observing.
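If it helps, vegan already ships Morisita-Horn as vegdist's "horn" method, so it's cheap to compare a few metrics on the same table (again, `asv` is a placeholder for the count matrix):

```r
library(vegan)

d_bray <- vegdist(asv, method = "bray")  # abundance-weighted default
d_horn <- vegdist(asv, method = "horn")  # Morisita-Horn: driven by dominant ASVs
d_jacc <- vegdist(asv, method = "jaccard", binary = TRUE)  # presence/absence only
```

If the presence/absence metric behaves like one filtering situation while the abundance-weighted metrics behave like the other, that would at least point at where the signal lives: in the rare/shared ASVs versus the dominant ones.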
Thanks for your thoughts!