In our lab, we sent fecal samples from WT and APP transgenic rats to Illumina for 16S sequencing.
Illumina returned the reads together with reports annotating those reads (after performing their own quality control).
At the same time, we sent the reads to a bioinformatics group in Spain, and they sent us their own reports annotating these reads. According to them, Illumina uses an automatic, unsupervised program, which is not so reliable. The Spanish group submitted the FASTQ data to the standardized qiime2 protocol. In addition, unlike Illumina (which uses its own private database), the Spanish group used the 16S SILVA database.
The annotations in the two reports differ enormously (each analysis leads to very different hypotheses and conclusions for our experiment). That is why it troubles me that they can differ so much.
Is it usual for Illumina's 16S reports to be inaccurate?
Welcome to the forum!
I think yours is a tricky question. I personally have never compared the Illumina BaseSpace 16S app (I am assuming this is the Illumina pipeline you are referring to) with QIIME 2.
However, when you say they are giving very different annotations, can you clarify what you mean? Is this true across many different taxonomic levels? I assume that at species level the differences between the databases will prevail, but maybe at family/order level the results are more similar than you think?
I understand you don't like the automatic process proposed by Illumina, which I may agree with up to a point. However, I do not think you can call it a 'standardized qiime2' protocol. There are so many things you can do differently that I'd say every researcher here has their own pipeline. The QC steps and trimming settings can strongly affect the results, so I would ask the Spanish group whether the QIIME 2 pipeline was tailored to your data, e.g. whether the trimming/QC settings were checked against the quality profile of your reads.
The only real thing you can do is use one or more control samples with a known taxonomic profile to evaluate the different pipelines.
Keep in mind that any read dataset is somewhat unique in its sequence length, quality profile, and duplication level, so technically you should not trust a one-off comparison but instead include a positive control in each of your runs.
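To make the positive-control idea concrete, here is a minimal sketch of how you might score each pipeline's family-level output against the known composition of a mock community. All family names and counts below are made up for illustration; they are not from any real run or kit.

```python
# Sketch: score a pipeline's family-level output against a known mock
# community using Bray-Curtis dissimilarity. All numbers are hypothetical.

def relative_abundance(counts):
    """Convert raw read counts to relative abundances."""
    total = sum(counts.values())
    return {taxon: n / total for taxon, n in counts.items()}

def bray_curtis(profile_a, profile_b):
    """Bray-Curtis dissimilarity between two relative-abundance profiles
    (each summing to 1). 0 = identical composition, 1 = no shared taxa."""
    taxa = set(profile_a) | set(profile_b)
    shared = sum(min(profile_a.get(t, 0.0), profile_b.get(t, 0.0)) for t in taxa)
    return 1.0 - shared  # both profiles sum to 1, so this is 1 - 2*shared/2

# Known composition of the mock community (hypothetical).
expected = {"Lactobacillaceae": 0.5, "Bacteroidaceae": 0.3, "Lachnospiraceae": 0.2}

# Family-level counts reported by two pipelines (hypothetical).
pipeline_a = relative_abundance(
    {"Lactobacillaceae": 480, "Bacteroidaceae": 320, "Lachnospiraceae": 200})
pipeline_b = relative_abundance(
    {"Lactobacillaceae": 700, "Enterobacteriaceae": 300})

print(f"pipeline A vs mock: {bray_curtis(expected, pipeline_a):.3f}")
print(f"pipeline B vs mock: {bray_curtis(expected, pipeline_b):.3f}")
```

Whichever pipeline sits consistently closer to the expected profile across runs is the one to trust for your real samples.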
I am sorry I cannot give a straight answer; let's see what others suggest.
Sorry, I think @llenzi and I were drafting at the same time, so hopefully it's okay that I add to his answer!
Without full details about the pipelines, it is difficult to assess why there might be differences between the Illumina report and the other group's results. Differences throughout the pipelines can lead to differences in results, including:
Denoising or clustering (including the approach and algorithm)
The type of taxonomic annotation
The database used
The metrics chosen for analysis
The statistical tests
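One simple check along these lines is to collapse each report's full taxonomy strings to a single rank, say family, before comparing them, since species-level disagreements between databases often disappear higher up. This is a sketch using SILVA-style rank prefixes; the taxonomy strings and counts below are invented for illustration, not taken from your reports.

```python
# Sketch: collapse SILVA-style taxonomy strings (d__/f__/g__/s__ prefixes)
# to the family level and compare two reports. All data below are made up.

def collapse_to_family(assignments):
    """Aggregate read counts by the family (f__) field of each taxonomy string."""
    families = {}
    for taxonomy, count in assignments.items():
        family = "Unassigned"
        for field in taxonomy.split(";"):
            field = field.strip()
            if field.startswith("f__") and len(field) > 3:
                family = field[3:]
        families[family] = families.get(family, 0) + count
    return families

# Two reports that disagree at species level (hypothetical).
report_a = {
    "d__Bacteria; f__Lactobacillaceae; g__Lactobacillus; s__gasseri": 120,
    "d__Bacteria; f__Lactobacillaceae; g__Lactobacillus; s__johnsonii": 80,
}
report_b = {
    "d__Bacteria; f__Lactobacillaceae; g__Lactobacillus; s__acidophilus": 195,
}

print(collapse_to_family(report_a))  # the two species merge into one family
print(collapse_to_family(report_b))
```

In this toy case the reports look very different at species level but nearly identical at family level; if your reports disagree even after collapsing to family, the discrepancy likely lies upstream in denoising/clustering rather than in database granularity.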
Right now, "right" or "accurate" is hard to attribute to microbiome data; we aim for "consistent" and "reproducible". That's why clear documentation of methods is so critical.
First of all, thank you very much for your answers.
As for the differences, I am referring to the family level, which I find strange. Not only is the list of families completely different, but the counts of some families also differ between the control rats and the other groups (which can lead to totally opposite conclusions).
As for the processes that were applied to our data, we asked Illumina, but they did not give us much information about it. On the other hand, I will find out from the Spanish group what processes they applied.
Thank you very much for the advice and suggestions!