I am working on 16S rRNA data from 25 samples collected from standing dead trees colonized by either white-rot or brown-rot fungi.
I didn't observe any clear significant differences between the two categories in alpha diversity. However, I did see some significance in beta diversity: the PCoA plot showed separation, though not a very clear one. The same samples were used for ITS-based analyses by someone else, who found that 8 of the 25 samples contain no ITS sequences representing the brown- or white-rot fungi mentioned above. I removed those 8 samples from my analyses altogether to see whether the 16S data would become less fuzzy and a bit more revealing.
I did the following:
1. Removed the raw reads of those 8 samples and re-ran the analyses on the remaining 17 samples.
I found that the alpha diversity differences did not change much in terms of statistical significance. To my surprise, however, the beta diversity changed: I could no longer see any statistical significance, and the PCoA plot was all over the place.
2. Instead of removing the raw reads and re-doing the demux and DADA2 steps, I made a new metadata file with only 17 samples and performed the downstream analyses using the table.qza representing all 25 samples (i.e. I took the table.qza generated after the DADA2 step and filtered it so that only the 17 samples remain for downstream analyses).
Here the alpha-diversity significance did not change either, but the beta diversity was significant. The PCoA plot very clearly showed separation of the groups, and the data appeared good enough to show some clear differences.
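For reference, the filtering used in approach 2 (what `qiime feature-table filter-samples --m-metadata-file` does) can be sketched in plain Python on a feature table. The sample and feature IDs below are invented placeholders, not from the actual dataset:

```python
# Sketch of metadata-based sample filtering on a feature table
# (features as rows, samples as columns). All IDs are hypothetical.

feature_table = {
    "ASV1": {"S1": 10, "S2": 0, "S3": 5},
    "ASV2": {"S1": 0, "S2": 7, "S3": 0},
    "ASV3": {"S1": 0, "S2": 3, "S3": 0},
}

# Sample IDs listed in the new metadata file (e.g. the 17 retained samples).
kept_samples = {"S1", "S3"}

# 1) Drop the columns for removed samples.
filtered = {
    asv: {s: n for s, n in counts.items() if s in kept_samples}
    for asv, counts in feature_table.items()
}

# 2) Drop features whose total count is now zero, since they
#    occurred only in the removed samples.
filtered = {asv: counts for asv, counts in filtered.items()
            if sum(counts.values()) > 0}

print(sorted(filtered))  # ['ASV1'] -- ASV2 and ASV3 occurred only in S2
```

Note that step 2 matters: features seen only in the removed 8 samples vanish from the table, but the counts of the retained samples are untouched.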
Note: the 25 samples comprised 12 brown-rot and 13 white-rot samples; after removal of the 8 samples, the remaining 17 comprised 10 brown-rot and 7 white-rot samples. Sample collection, experimental design, DNA extraction, etc. were done by someone else; I only received the .fastq files of the sequences.
My questions:
Which method is scientifically correct? What would be your advice to make sense of the data in the best possible way?
DADA2 breaks this quadratic scaling by processing samples independently. This is possible because DADA2 infers exact sequence variants, and exact sequences are consistent labels that can be directly compared across separately processed samples. This isn't the case for OTUs, as the boundaries and membership of de novo OTUs depend on the rest of the dataset.
So your results will be identical no matter the number of samples. Unless...
--p-pooling-method TEXT Choices('independent', 'pseudo')
What setting did you use on these runs?
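To illustrate the point about exact sequences being consistent labels: with independent pooling, each sample's ASVs can be combined into one table simply by matching identical sequences, so dropping a sample cannot change what is inferred in the others. A toy sketch, with invented (shortened) sequences and counts:

```python
# Toy illustration of why independently denoised samples merge cleanly:
# exact sequences are global labels, so per-sample results can be
# combined by key. Sequences and counts here are invented.

per_sample_asvs = {
    "sampleA": {"ACGT": 100, "TTGA": 40},
    "sampleB": {"ACGT": 80, "GGCC": 12},
}

def merge(per_sample):
    """Union of all exact sequences; zero where a sample lacks one."""
    all_seqs = sorted(set().union(*per_sample.values()))
    return {seq: {s: counts.get(seq, 0) for s, counts in per_sample.items()}
            for seq in all_seqs}

table = merge(per_sample_asvs)

# Removing sampleB and re-merging leaves sampleA's entries untouched --
# unlike de novo OTUs, whose boundaries depend on the whole dataset,
# and unlike pseudo-pooling, where each sample's priors come from the rest.
table_without_B = merge({"sampleA": per_sample_asvs["sampleA"]})
print(table_without_B["ACGT"])  # {'sampleA': 100}
```

Under `--p-pooling-method pseudo`, by contrast, the set of samples does influence each sample's denoising, which is why the answer to the question above matters.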
For number 2... choosing the right cohort is super hard, for a number of reasons (as shown through comics: 1, 2). Let's hope it's an easy technical issue, not one of those murky biological/statistical questions.
There are a number of reasons that DADA2 could produce slightly different results when samples are removed. For one thing, the error model would be built only from reads in the remaining samples, which would change the denoising process. The chimera-checking step could also be affected.
I asked Mehrbod_Estaki about this and he had some advice:
Perhaps you can compare the # of unique sequences between the tables to see if there are major discrepancies.
Also possible that the original result was very 'borderline' significant and a different random subsampling during rarefying created different results (we see this in the Parkinson's mouse tutorial), or they may have used different parameters during DADA2?
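The unique-sequence comparison suggested above amounts to diffing the feature ID sets of the two tables. A sketch with invented IDs (in practice you'd pull the IDs from each table's summarize visualization or an exported BIOM):

```python
# Sketch: compare the feature (ASV) IDs of two tables to spot
# discrepancies between the re-denoised 17-sample run and the
# filtered 25-sample table. IDs here are invented placeholders.

features_rerun = {"asv01", "asv02", "asv03"}     # 17-sample DADA2 re-run
features_filtered = {"asv01", "asv02", "asv04"}  # filtered 25-sample table

only_in_rerun = features_rerun - features_filtered
only_in_filtered = features_filtered - features_rerun
shared = features_rerun & features_filtered

print(len(shared), sorted(only_in_rerun), sorted(only_in_filtered))
# 2 ['asv03'] ['asv04']
```

A large symmetric difference would point to the denoising itself (error model, chimera checking, or pooling) rather than the downstream diversity steps.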
This is a great way to check for typos or changed settings, to make sure the difference is biological, not technical.
Keep up the great detective work and let us know what you find next! :qiime2: