best practices in 16S analyses

Hi Everyone,

I am working on 16S rRNA data representing 25 samples collected from dead standing trees with either white- or brown-rot fungi growing on them.

I didn't observe any clear, significant differences between the two categories in terms of alpha diversity. However, I did see some significance in beta diversity, and the PCoA plot showed separation, though not a clear one. The same samples were used for ITS-based analyses by someone else, who found that 8 of the 25 samples don't contain any ITS sequences representing the above-mentioned brown- or white-rot fungi. I removed those 8 samples from my analyses altogether to see whether the 16S data would become less fuzzy and a bit more revealing.

I did the following:

1. Removed the raw reads of the 8 samples and re-ran the entire pipeline (demux, DADA2, and downstream analyses) on the remaining 17 samples.

I found that the alpha-diversity differences did not change much in terms of statistical significance. However, to my surprise, the beta diversity changed: I could not see any statistical significance at all, and the PCoA plot was all over the place.

2. Instead of removing the raw reads and re-doing the demux and DADA2 steps, I made a new metadata file with only the 17 samples and performed the downstream analyses using the table.qza representing all 25 samples (i.e., I took the table.qza generated after the DADA2 step for all 25 samples and filtered it, as sketched below, so that only the 17 samples were available for downstream analyses).

I found that the alpha-diversity significance did not change but the beta-diversity was significant. The PCoA was very clear showing separation of groups. The data appeared good enough to show some clear differences.
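In QIIME 2 terms, that filtering step can be done along these lines (a sketch; metadata-17.tsv stands in for my new 17-sample metadata file, and the other filenames are placeholders):

qiime feature-table filter-samples \
  --i-table table.qza \
  --m-metadata-file metadata-17.tsv \
  --o-filtered-table table-17.qza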

Note: the 25 samples comprised 12 brown-rot and 13 white-rot samples; after removing the 8 samples, I was left with 17 samples (10 brown-rot and 7 white-rot). DNA extraction, experimental design, sample collection, etc. were done by someone else; I only received the .fastq files of the sequences.

My questions:

Which approach is scientifically correct? And what would you advise for making the best sense of these data?

Thank you all,


Hello @Irshad,

This sounds like a cool study! :mushroom: :microbe: :evergreen_tree:

Just to make sure I am clear, are these the two cohorts you are working with now?

| Cohort | Rot color | Number of samples | Total samples in cohort |
| --- | --- | --- | --- |
| All samples | brown | 12 | 25 |
| All samples | white | 13 | 25 |
| Subset with target fungi | brown | 10 | 17 |
| Subset with target fungi | white | 7 | 17 |

And are these your initial results?

| Process | Alpha significant? | Beta significant? |
| --- | --- | --- |
| Use all samples | no | yes? :thinking: |
| 1. Remove and reprocess | same (no) | no! :sob: |
| 2. Remove from metadata only | same (no) | yes! :star_struck: |

Thanks for explaining your study so well. I hope I got everything.


Let's get to the real question, which I would frame as two questions:

  1. What data processing method will lead to the most correct and accurate results?
  2. What biological cohort will yield the most useful and interesting results?

Number 1 should be easy. From the DADA2 workflow for Big Data:

DADA2 breaks this quadratic scaling by processing samples independently. This is possible because DADA2 infers exact sequence variants, and exact sequences are consistent labels that can be directly compared across separately processed samples. This isn't the case for OTUs, as the boundaries and membership of de novo OTUs depend on the rest of the dataset.

So your results will be identical no matter the number of samples. Unless...

--p-pooling-method TEXT Choices('independent', 'pseudo')

What setting did you use on these runs?
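For reference, pseudo-pooling would be requested along these lines (just a sketch; filenames and parameter values are placeholders, and omitting the flag gives the default, 'independent'):

qiime dada2 denoise-single \
  --i-demultiplexed-seqs demux.qza \
  --p-trunc-len 250 \
  --p-pooling-method pseudo \
  --o-table table.qza \
  --o-representative-sequences rep-seqs.qza \
  --o-denoising-stats stats.qza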


For number 2... choosing the right cohort is super hard, for a number of reasons (as shown through comics: 1, 2). Let's hope it's an easy technical issue, not one of those murky biological/statistical questions. :microscope: :chart_with_upwards_trend:

Colin


Thank you for your time! Yes, your dissection of the cohorts is correct. I used the following command:

qiime dada2 denoise-single \
  --i-demultiplexed-seqs demux.qza \
  --p-trim-left 0 \
  --p-trunc-len 280 \
  --o-table single-end-table.qza \
  --o-representative-sequences single-end-rep-seqs.qza \
  --o-denoising-stats single-end-stats.qza

Thanks

Hello again @Irshad,

There are a number of reasons that DADA2 could produce slightly different results when samples are removed. For one thing, the error model is learned from the reads that are actually present, so dropping 8 samples changes the training data and can change the denoising. The chimera-checking step, which by default uses a consensus across samples, could also be affected.
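A quick first check is to compare the denoising stats between your two runs; a sketch, reusing the stats filename from your command above (run it on the stats artifact from each run):

qiime metadata tabulate \
  --m-input-file single-end-stats.qza \
  --o-visualization single-end-stats.qzv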

I asked Mehrbod_Estaki about this and he had some advice:

Perhaps you can compare the number of unique sequences between the tables to see if there are major discrepancies.
It's also possible that the original results were very 'borderline' significant and a different random subsampling during rarefying created different results (we see this in the Parkinson's mouse tutorial), or they may have used different parameters during DADA2?

These are great ways to check for typos or changed settings, to make sure the difference is biological, not technical.
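For the first check, qiime feature-table summarize reports the number of features in a table; a sketch, to be run on each of your two tables:

qiime feature-table summarize \
  --i-table single-end-table.qza \
  --o-visualization single-end-table.qzv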

Keep up the great detective work and let us know what you find next!
:female_detective: :male_detective: :qiime2:

Colin