Deep sequencing

Dear all,
Is there a minimum number of sequences (sequencing depth) per sample for 16S sequencing needed for the results to be robust?
Are there any papers that discuss this? I ask because I have read good papers that use few sequences per sample and others that use many.
Thanks for your attention.


Hi @Alinne_Castro,

That is a great question! Here are my thoughts on the topic, though I look forward to hearing others' as well. Sorry about the length of the reply!

The question is rather loaded and I don’t think there is really a simple answer for this, so I’ll touch on a few points that might simplify it a bit.

  1. In my opinion, the most important factor is the sample source. If you are looking at a sample that you expect to have low 16S diversity (for example, a wine sample), then you might get all the information you need with very few reads (e.g. 1,000 sequences; I'm kind of guessing here, so I hope the wine people aren't offended), whereas more complex communities, say from soil or the ocean, might demand more reads to truly capture the community diversity. The global patterns paper from 2010 suggested that 2,000 single-end reads was sufficient for most communities, and the more recent Earth Microbiome Project paper is a great follow-up to that as well. These estimates are both good and bad in my opinion: in the right hands they are useful guidelines at best, but without a proper understanding of the whole picture they can lead to misleading tests and interpretations. In general, the higher the number of reads the better, though the benefits of more reads plateau after a while. If you have, let's say, a gut sample with 10,000 reads, that is great; 20,000 reads is even better; but at 100,000 reads it is unlikely you are going to reveal anything of significance that you didn't already reveal at the lower depths, with the possible exception of some alpha diversity measures. Which leads me into the second point.

  2. What questions are you trying to answer? This is also very important. Sampling depth can have a significant effect on measures of alpha diversity, in particular measures that emphasize rare taxa, but perhaps not as much on beta diversity. Many people use rarefaction curves of alpha diversity scores to demonstrate that sufficient sequencing depth has been reached in their experiments, though that approach isn't without its own critiques either. The idea is that if the diversity scores reach a plateau prior to your minimum sampling depth, then you are likely OK (see the first sketch after this list for how such a curve is built). For beta diversity analyses, especially those accounting for phylogenetic distances (like UniFrac) and taking abundances into account, the overall patterns seem to be more forgiving of sampling depth. So if the question you are trying to answer relates to the overall profile of a community, for example the microbiome of healthy vs. IBD patients, you might reveal these patterns quite robustly with as little as 1,000-2,000 sequences (any samples below 1,000 reads or so should, in my opinion, be discarded right from the get-go). But if you are trying to do something more precise, for example comparing the abundance of some rare taxa between two similar treatment groups, then you will need greater depth to have confidence in your tests. Which leads us to the third point.

  3. The choice of your downstream analysis. Unequal sequencing depth is certainly an issue that can't be ignored. The two main approaches to dealing with it currently are a) rarefying to an even sampling depth or b) normalizing the data to stabilize the variance. This is a rather big and loaded topic on its own, which I won't dive into here, but the choice is also data-dependent, as elegantly covered by this paper. Some downstream statistical methods are sensitive to low-abundance and rare taxa, which can create noise in the data without actually providing much meaningful information. As such, it is often recommended to remove rare features and/or those appearing in only a few samples (the second sketch below shows what these two pre-processing steps can look like). Sometimes this is necessary, but the downside is that you are discarding real data, and if those taxa happen to be part of the group you are interested in, then you might not be able to infer much from those tests anyway. So again, what you are looking for can greatly affect how you process your data, and the tests/models you were planning on using should also factor into your choices.
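
To make the rarefaction-curve idea from point 2 a bit more concrete, here is a rough sketch in plain Python/numpy (the counts are completely made up, and in QIIME 2 the `qiime diversity alpha-rarefaction` visualizer will build these curves across your whole table for you): subsample a sample's counts without replacement at increasing depths and watch where the number of observed features stops climbing.

```python
import numpy as np

rng = np.random.default_rng(42)

def rarefy(counts, depth, rng):
    """Subsample `depth` reads without replacement from a vector of feature counts."""
    reads = np.repeat(np.arange(counts.size), counts)  # one entry per individual read
    picked = rng.choice(reads, size=depth, replace=False)
    return np.bincount(picked, minlength=counts.size)

# Made-up single sample: 300 features with a long-tailed abundance distribution
counts = rng.multinomial(50_000, rng.dirichlet(np.full(300, 0.2)))

for depth in (500, 1000, 2000, 5000, 10_000, 20_000):
    observed = [(rarefy(counts, depth, rng) > 0).sum() for _ in range(10)]
    print(f"depth {depth:>6}: ~{np.mean(observed):.0f} observed features")
# Once observed features stop increasing with depth, extra reads mostly add
# resolution for rare taxa rather than changing the overall picture.
```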

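And for point 3, a minimal sketch of the two pre-processing steps mentioned above, run on a made-up pandas feature table (the thresholds are placeholders, not recommendations; in QIIME 2, `qiime feature-table filter-features` and `qiime feature-table rarefy` do this on your artifacts, so this is only to show the logic):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Made-up feature table: rows = samples, columns = features (ASVs/OTUs)
lam = rng.gamma(shape=0.3, scale=20.0, size=40)   # long-tailed "true" abundances
table = pd.DataFrame(
    rng.poisson(lam, size=(6, 40)),
    index=[f"sample{i}" for i in range(1, 7)],
    columns=[f"ASV{i}" for i in range(1, 41)],
)

# 1) Drop rare features: total count < 10, or present in fewer than 2 samples
#    (placeholder thresholds -- what is sensible depends on your data and question)
keep = (table.sum(axis=0) >= 10) & ((table > 0).sum(axis=0) >= 2)
filtered = table.loc[:, keep]

# 2) Rarefy every sample to the depth of the shallowest remaining sample
#    (the alternative route is to keep all counts and normalize instead)
even_depth = int(filtered.sum(axis=1).min())

def rarefy_row(row, depth, rng):
    reads = np.repeat(np.arange(row.size), row.to_numpy())
    picked = rng.choice(reads, size=depth, replace=False)
    return pd.Series(np.bincount(picked, minlength=row.size), index=row.index)

rarefied = filtered.apply(rarefy_row, axis=1, depth=even_depth, rng=rng)
print(rarefied.sum(axis=1))  # every sample now has the same total read count
```
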
An example: I deal with a lot of 16S data from fecal and intestinal tissues. I almost always discard any samples that have fewer than ~1,000-1,500 reads right away. If a couple of samples still have a low depth of ~1,500-2,500 reads but the next-lowest sample has 5,000-6,000 reads, then I weigh the benefit of keeping those samples based on which treatment group they come from and what the remaining n-values would be if I discarded them. Again, this all depends on your experimental design and the questions being asked. With a minimum of 5,000-6,000 reads per sample I am usually pretty happy and comfortable with most downstream analyses, but if sample sizes are already low, then I would retain those samples with 1,500-2,500 reads, approach my downstream analysis accordingly, and interpret my findings with more caution.
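
In case it helps, here is a tiny sketch of that bookkeeping (pandas, with invented read counts and a hypothetical `group` column): check each sample's total reads against a hard floor, see which samples get dropped, and what the per-group n-values would look like afterwards.

```python
import pandas as pd

# Invented per-sample read totals and treatment groups, for illustration only
samples = pd.DataFrame({
    "reads": [12000, 8000, 2100, 6500, 900, 5400, 1800, 7300],
    "group": ["control", "control", "control", "control",
              "treatment", "treatment", "treatment", "treatment"],
}, index=[f"sample{i}" for i in range(1, 9)])

MIN_READS = 1500  # placeholder hard floor; anything below this goes right away

kept = samples[samples["reads"] >= MIN_READS]
print("dropped:", list(samples.index.difference(kept.index)))
print(kept.groupby("group").size())           # remaining n per treatment group
print(kept.groupby("group")["reads"].min())   # how costly would rarefying to the minimum be?
```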

I know there are a lot of other factors at play, but hopefully this gives you an idea, or at least some reading material so you can dig further on your own. At the end of the day, get to know your data and be able to defend your choices!

-Bod


Dear Mehrbod,
Thank you for your quick reply!
Have a good day,
Cheers

