ancom: all features are significantly different with correction

Hi there,

My data contains 20 samples (10 samples per group, two groups) and 481 features.
I imported the ancom function from skbio; here is my code:

from skbio.stats.composition import ancom
from scipy.stats import ttest_ind
ancom_none, ancom_none_pct = ancom(
    table=table,
    grouping=grouping,
    significance_test=ttest_ind,
    multiple_comparisons_correction=None,
)
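For context, table is a samples-by-features pandas DataFrame of counts and grouping is a matching pandas Series with each sample's group label. A toy stand-in with the same shape as my data (made-up numbers, not my real counts) would look something like:

import numpy as np
import pandas as pd

# 20 samples x 481 features of strictly positive counts
# (ancom cannot handle zeros, so real data may need a pseudocount first)
rng = np.random.default_rng(0)
samples = [f"S{i}" for i in range(20)]
features = [f"F{j}" for j in range(481)]
table = pd.DataFrame(rng.integers(1, 500, size=(20, 481)),
                     index=samples, columns=features)
grouping = pd.Series(["A"] * 10 + ["B"] * 10, index=samples)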
===> 5 features were significantly different between the two groups. Here are the W values of some features:

ancom_none, ancom_none_pct = ancom(
    table=table,
    grouping=grouping,
    significance_test=ttest_ind,
    multiple_comparisons_correction='holm-bonferroni',
)
===> I preferred to do the correction, so I set multiple_comparisons_correction to 'holm-bonferroni' and changed nothing else. But now all features rejected the null hypothesis, so they were all significantly different between the two groups, while the W values were pretty low. Same features as in the picture above:

The results seem weird; could anyone give some suggestions?
Thanks in advance!

Yichen

Hi @maque4004,
The issue you are seeing looks similar to what is described here; have a look and let us know if you have any additional questions.


Thanks @Mehrbod_Estaki
Sorry I didn't explain clearly. My question is: when I ran the ancom function with multiple_comparisons_correction set to None, the output seemed reasonable (only 5 features were significant).
But I'd like to do the correction, so I set multiple_comparisons_correction to 'holm-bonferroni', and now all features were significant and the W values were pretty low.
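My (maybe naive) understanding is that a multiple-comparisons correction should only make it harder to reject, never easier. For example, a quick toy check with statsmodels (made-up p-values, just to illustrate):

from statsmodels.stats.multitest import multipletests

# made-up p-values, just for illustration
pvals = [0.001, 0.01, 0.03, 0.2, 0.6]
print([p < 0.05 for p in pvals])   # [True, True, True, False, False]  (uncorrected: 3 rejections)
reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method='holm')
print(reject.tolist())             # [True, True, False, False, False] (Holm-Bonferroni: 2 rejections)
# the adjusted p-values in p_adj are never smaller than the raw ones

So I don't understand how turning the correction on could make every feature significant.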


Hi @maque4004,
Unfortunately I don't have a good answer for your question here, as this is rather unexpected behavior (as explained by the ANCOM developers in that link). It is likely an error in the test under some tricky conditions. The results from your non-corrected run seem reasonable (i.e. the high-W features are typically the truly significant ones), but of course without accounting for multiple testing you significantly increase the rate of false positives.
The second scenario, where low W values are considered significant, is clearly unreliable and shouldn't be trusted (as explained in the other link). My recommendation is to try a different tool for your differential abundance testing, for example q2-aldex2, q2-songbird, or q2-corncob.
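For intuition, W is essentially a count: each feature is tested against every other feature through their log-ratio, and W is how many of those pairwise tests come out significant (my understanding is that the correction is applied within each feature's own set of pairwise p-values before counting). A very rough sketch of that idea, not the actual skbio code and with hypothetical names:

import numpy as np
from scipy.stats import ttest_ind
from statsmodels.stats.multitest import multipletests

def w_statistic_sketch(log_table, in_group1, alpha=0.05, correct=True):
    # log_table: (n_samples, n_features) array of log-transformed counts
    # in_group1: boolean array marking which samples belong to group 1
    g1, g2 = log_table[in_group1], log_table[~in_group1]
    n_feats = log_table.shape[1]
    W = np.zeros(n_feats, dtype=int)
    for i in range(n_feats):
        # compare feature i against every other feature via log-ratios
        pvals = np.array([
            ttest_ind(g1[:, i] - g1[:, j], g2[:, i] - g2[:, j]).pvalue
            for j in range(n_feats) if j != i
        ])
        if correct:
            # Holm-Bonferroni applied within feature i's own p-values
            rejected, _, _, _ = multipletests(pvals, alpha=alpha, method='holm')
            W[i] = rejected.sum()
        else:
            W[i] = (pvals < alpha).sum()
    return W

With the correction on, each feature's W can only stay the same or go down, so more features becoming "significant" after turning it on is the opposite of what this construction should produce.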

P.S. I've moved this thread to “Other Bioinformatics Tools” as it wasn't directly using any QIIME 2 tools.
