Batch correction, library prep, and alpha diversity

Hi,

I am having trouble combining a dataset made from 3 different sequencing runs and, in particular, 2 different library preparations, which, based on my RNA-seq experience, is the major cause of batch effects.

I have just run my microbiome pipeline starting from the FASTQs; however, when I look at alpha diversity I can clearly see the effect of the two library preparations. In addition, if I run the Wilcoxon test on alpha diversity I see a significant result.
The same happens when I compute beta diversity and run the adonis test on this category (sequencing library preparation).
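For reference, the alpha-diversity comparison I ran is essentially the following (a minimal Python sketch with made-up observed-OTU counts, not my real data; SciPy's Mann-Whitney U is the unpaired Wilcoxon rank-sum test):

```python
# Compare alpha diversity (observed OTU counts) between two library preps
# with a rank-based test. All numbers are made-up placeholders.
from scipy.stats import mannwhitneyu

prep_a = [210, 195, 230, 188, 205, 217]  # observed OTUs, library prep A
prep_b = [150, 162, 141, 170, 158, 149]  # observed OTUs, library prep B

# Mann-Whitney U == unpaired Wilcoxon rank-sum test
stat, pvalue = mannwhitneyu(prep_a, prep_b, alternative="two-sided")
print(f"U = {stat}, p = {pvalue:.4f}")
if pvalue < 0.05:
    print("alpha diversity differs significantly between preparations")
```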

I considered using abundance values and correcting for batch in a linear model, but I am not sure this would be sufficient; in any case, the differences I see in alpha diversity suggest that I count more OTUs in one preparation than in the other.

Could you give me advice on how to handle this situation?

Thanks a lot,

Michela

Hi @MichelaRiba,
Batch correction of microbiome data is a challenging topic and not fully solved (in part because compositionality and related issues limit the application of techniques from other fields).

q2-perc-norm is designed to address batch effects specifically in case-control studies. It is the only QIIME 2 plugin I am aware of that targets batch effects in microbiome data.

The paper for that method also compares it to batch control methods for RNAseq applied to microbiome data so would be a useful read: https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1006102
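The core transformation in that paper can be sketched in a few lines: within each batch, a case sample's relative abundance of a feature is replaced by its percentile within that batch's control distribution. This is only an illustration of the idea with toy numbers, not the q2-perc-norm implementation, and the function name is made up:

```python
import numpy as np

def percentile_of_controls(control_values, case_values):
    """For each case value, return the fraction of control values <= it."""
    controls = np.sort(np.asarray(control_values, dtype=float))
    cases = np.asarray(case_values, dtype=float)
    # searchsorted(..., side="right") counts controls <= each case value
    ranks = np.searchsorted(controls, cases, side="right")
    return ranks / len(controls)

# one feature in one batch: control vs. case relative abundances
controls = [0.01, 0.02, 0.03, 0.05, 0.08]
cases = [0.04, 0.09]
print(percentile_of_controls(controls, cases))  # fractions in [0, 1]
```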

In some cases, batch effects in microbiome data appear to be driven by contaminants, e.g., from reagents, so you might also try using decontam (an R package, not yet a QIIME 2 plugin but we plan to add it some time this year). That will identify and remove putative contaminants, which others have shown can fix apparent batch effects. This is a paper demonstrating that decontam is effective at removing batch contamination effects for multi-batch comparisons: https://www.biorxiv.org/content/10.1101/2020.04.20.052035v1.full
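To illustrate the prevalence idea behind that kind of approach (decontam itself is an R package and uses a different score, so treat this purely as a toy sketch): a feature that is much more prevalent in negative controls than in true samples is a contaminant candidate.

```python
# Flag a feature as a putative contaminant if it is significantly more
# prevalent in negative controls than in true samples. Toy counts only.
from scipy.stats import fisher_exact

present_in_controls, absent_in_controls = 5, 1   # 5 of 6 negative controls
present_in_samples, absent_in_samples = 4, 26    # 4 of 30 true samples

odds, p = fisher_exact(
    [[present_in_controls, absent_in_controls],
     [present_in_samples, absent_in_samples]],
    alternative="greater",  # enriched in controls
)
print(f"odds ratio = {odds:.1f}, p = {p:.4f}")
if p < 0.05:
    print("putative contaminant")
```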

You could also include batch as a main effect in, e.g., an ANOVA test for alpha diversity differences (see q2-longitudinal for multi-factor ANOVA) to control for this effect and test if there is an underlying effect of biology! But I would recommend doing this after at least some attempts at batch correction (e.g., with decontam to remove contaminants).
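The batch-as-main-effect idea can be sketched with nested linear models: fit alpha diversity on batch alone, then on batch plus the biological group, and F-test the improvement. This is a toy illustration (made-up Shannon values) of the logic behind a multi-factor ANOVA, not q2-longitudinal itself:

```python
import numpy as np
from scipy.stats import f as f_dist

# made-up Shannon diversity: two batches x two biological groups
shannon = np.array([4.1, 4.3, 4.0, 4.6, 4.8, 4.5,
                    3.5, 3.7, 3.4, 4.0, 4.2, 3.9])
batch = np.array([0]*6 + [1]*6)           # library preparation
group = np.tile([0, 0, 0, 1, 1, 1], 2)    # biology of interest

def rss(X, y):
    """Residual sum of squares of an ordinary least-squares fit."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return resid @ resid

n = len(shannon)
X_reduced = np.column_stack([np.ones(n), batch])        # batch only
X_full = np.column_stack([np.ones(n), batch, group])    # batch + group

rss_r, rss_f = rss(X_reduced, shannon), rss(X_full, shannon)
df_den = n - X_full.shape[1]
F = (rss_r - rss_f) / (rss_f / df_den)    # 1 extra parameter in full model
p = f_dist.sf(F, 1, df_den)
print(f"F = {F:.2f}, p = {p:.4f}  (group effect, controlling for batch)")
```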

Please let us know what you find!


Hi,

thanks a lot for the suggested papers and methods. Indeed, I am somewhere between these situations:

  • Starting from the last paper you proposed, which also mentions designing the experiment so that batches of sample preparation can ultimately be corrected, for example using mock communities: I cannot do this at the moment, because we added mock communities to only one of the two batches.
    The only point I have been able to fix is processing all the samples together, i.e., with the same parameters.
  • Regarding the first approach you propose, quantile normalization (to use microarray language) or percentile normalization: I have read that this kind of approach is not suited to count-based measurements such as alpha diversity (OTU number), and I would like to combine the data precisely to reinforce the results we have at the level of alpha diversity.

Following suggestions on the web, I have tried to:

  • calculate beta diversity on collapsed (family-level) data instead of OTU tables, using non-phylogenetic metrics (e.g., Bray-Curtis), but I still see a significant adonis result for exactly the batch parameter
  • calculate alpha diversity on those values (which are counts, just collapsed counts, not percentages);
    however, again the pairwise comparisons using the Wilcoxon rank-sum test show a significant result for batch.
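For completeness, the Bray-Curtis step on collapsed counts is essentially this (toy family-level counts, not my data; computed directly on counts, no phylogeny needed):

```python
import numpy as np

def bray_curtis(u, v):
    """Bray-Curtis dissimilarity between two count vectors."""
    u, v = np.asarray(u, dtype=float), np.asarray(v, dtype=float)
    return np.abs(u - v).sum() / (u + v).sum()

# rows = samples, columns = families (collapsed counts)
table = np.array([
    [120, 30,  0, 50],   # sample, library prep A
    [110, 25,  5, 60],   # sample, library prep A
    [ 10, 80, 90, 20],   # sample, library prep B
])

n = table.shape[0]
dm = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        dm[i, j] = bray_curtis(table[i], table[j])
print(np.round(dm, 3))  # distance matrix fed to adonis/PERMANOVA
```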

I found additional inspiration among R packages: MMUPHin (https://www.bioconductor.org/packages/release/bioc/html/MMUPHin.html), which tries to mathematically subtract the batch effect while preserving the wanted differences. However, I saw that the batch effect did not disappear completely, which suggests to me that the two cohorts differ in composition. I was wondering whether we could try to fix the analysis by considering only OTU features measured in both batches (for example using functions in the otuSummary R package (https://cran.r-project.org/web/packages/otuSummary/index.html), which I did not try to the end),
or keep the two batches separate and try to combine them at the end using

  • scaled alpha diversity measures
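What I mean by scaled alpha diversity is something like z-scaling within each batch before pooling (toy numbers; note this removes the batch mean and spread, but any real biological difference confounded with batch is removed too):

```python
import numpy as np

# made-up observed-OTU counts per batch
alpha = {"batchA": np.array([210.0, 195.0, 230.0, 188.0]),
         "batchB": np.array([150.0, 162.0, 141.0, 170.0])}

# z-score within each batch, then pool
scaled = {b: (v - v.mean()) / v.std(ddof=1) for b, v in alpha.items()}
pooled = np.concatenate([scaled["batchA"], scaled["batchB"]])
print(np.round(pooled, 2))  # each batch is now centered at 0
```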

In the end, I do not know exactly what the batch effect measured on alpha diversity is telling me…
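The idea of restricting the analysis to OTU features measured in both batches (mentioned above) would amount to a filter like this (a toy pandas sketch, not otuSummary itself):

```python
import pandas as pd

# rows = OTUs, columns = samples; made-up counts
table = pd.DataFrame(
    {"A1": [10, 0, 5, 0], "A2": [8, 0, 7, 2],
     "B1": [12, 3, 0, 1], "B2": [9, 6, 0, 0]},
    index=["otu1", "otu2", "otu3", "otu4"],
)
sample_batch = {"A1": "A", "A2": "A", "B1": "B", "B2": "B"}

def seen_in(b):
    """True for OTUs with a nonzero count in at least one sample of batch b."""
    cols = [s for s, x in sample_batch.items() if x == b]
    return table[cols].sum(axis=1) > 0

shared = table[seen_in("A") & seen_in("B")]
print(shared.index.tolist())  # OTUs observed in both batches
```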

Regarding differential abundance statistics, at both the percentage and the normalized OTU level, I tried, respectively, linear models including batch as a covariate, and a fairly similar approach using edgeR for differential OTU-level calculation.
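The linear-model version of this is roughly the following per-OTU fit (toy simulated counts; a plain least-squares sketch on log abundances, whereas edgeR fits a negative-binomial GLM):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 12
batch = np.array([0]*6 + [1]*6)           # library preparation
group = np.tile([0, 0, 0, 1, 1, 1], 2)    # biological condition
# simulate one OTU with both a batch shift and a group shift
counts = np.exp(3.0 + 0.8*batch + 0.6*group + rng.normal(0, 0.1, n))

X = np.column_stack([np.ones(n), batch, group])
y = np.log(counts + 1)                    # pseudocount keeps log finite
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(f"batch coefficient = {beta[1]:.2f}, group coefficient = {beta[2]:.2f}")
```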
If you have additional comments, please reply; this discussion would be very important to me.

thanks,

Michela

Hi @MichelaRiba,
There are many forum posts about batch effects and the different strategies that have been proposed, so it would be worth searching the forum for similar advice. This discussion may also be relevant:

Based on what you are saying so far, it sounds like decontam (or similar for removing contaminants) might be the best approach for you. I recommend giving that a try.

Carefully reviewing your trimming/truncation/clustering parameters (and quality filtering) would be another good thing to try, though I am guessing from earlier discussion that you are using closed-reference OTU clustering?

A third possibility (if you are comparing to old data) is that there might be methodological variation that you might not be able to control based on this information. E.g., DNA extraction or primer differences?

Hi,
thanks for the reply.
I have to say that our negative controls show truly negative results, so I do not think a decontamination approach would solve this.
As you asked, I pointed you to a number of fairly recent ideas.
I processed the FASTQs all together, so there is no bias due to a change in the analysis protocol, and they were produced on the same machine… however,
I would also go through the library preparation, which, as I mentioned, may be the primary source of bias.
My point concerns how to treat alpha diversity in the case of an apparent and proven batch effect, and on this specific point I did not find an answer.
I mentioned approaches such as otuSummary in R, which separates abundant from non-abundant data, or the approach taken by the MMUPHin R package to subtract the batch effect while preserving the wanted “differences”.
Could you please comment on this line?

Thanks a lot,

Michela

Your ideas above make sense (e.g., accounting for batch in a linear model), and my follow-up comments were specific to that point (e.g., use of decontam, which of course, as you note, requires that some contaminants are detected).

I am not familiar with these packages so cannot comment on them. Others who are familiar may have more to add :man_shrugging:

Good luck!


Thanks a lot for your follow-up,
I will update you and the other discussants if I am able to “fix” my batch effect and have convincing evidence that I am controlling it, hopefully soon! I really need some good luck indeed!
Thanks for the suggestions, literature, and links.

Michela
