Alpha diversity significance: Kruskal-Wallis (pairwise) test results seem odd

I'm using QIIME 2 2018.11 and have noticed that the q values generated during pairwise comparison of alpha diversity are occasionally identical, despite having different H or p values. I have noticed the same thing in the beta diversity results.

When I take the values directly from the downloaded TSV and run the Kruskal-Wallis test in Prism, the H and p values for the all-groups Kruskal-Wallis test match what I see for my data in QIIME 2 View. However, when I run the pairwise test using the original FDR method of Benjamini and Hochberg to correct for multiple comparisons (which, to my understanding, is what the QIIME 2 View document uses), the p and q values do not match. I've attached a screenshot of the results in an Excel file.
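In case it's useful, here's roughly how I'm checking the all-groups values outside of Prism. This is just a minimal scipy sketch; the file name and column names below are placeholders for my actual data:

```python
# Minimal sketch: recompute the all-groups Kruskal-Wallis H and p from the
# TSV downloaded from the alpha-group-significance visualization.
# 'faith-pd-raw-data.tsv', 'group', and 'faith_pd' are placeholder names.
import pandas as pd
from scipy.stats import kruskal

df = pd.read_csv('faith-pd-raw-data.tsv', sep='\t')
samples = [vals['faith_pd'].values for _, vals in df.groupby('group')]
h, p = kruskal(*samples)
print(f'H = {h:.4f}, p = {p:.6f}')
```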

Any idea what's going on?

Thanks

Hi there @Paraqui - would you be able to provide the QZV with the Faith's PD alpha group significance results for these groupings? We would like to reproduce your analysis here. Thanks! :qiime2:

Sure, here's the file. Thanks!

faith-pd-group-significance.qzv (330.8 KB)

Thanks @Paraqui. You should double-check your spreadsheet; I found several inconsistencies where your definitions of A and D are not the same between runs. For example, in your QIIME 2 results table, in "A-B" you defined A as Ctrl+E, but in "A-C" and "A-D", A is defined as Ctrl+V. The same thing happens with D in "A-D" and "C-D".

I flipped some of the data output in QIIME 2 so that it matched the Prism output in the spreadsheet. I included an image of the actual groups instead of letters.

Thanks @Paraqui, makes sense.

I have no idea what Prism is doing specifically, since I don't have access to it. If you want to see exactly what q2-diversity is doing, the source is here: q2-diversity/_visualizer.py at master · qiime2/q2-diversity · GitHub
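In rough terms, the visualizer runs scipy.stats.kruskal on each pair of groups and then applies the Benjamini-Hochberg correction across those pairwise p-values. Here's a minimal sketch of that approach (not the literal source, and the helper name is ours):

```python
# Sketch of QIIME 2's pairwise approach: Kruskal-Wallis on each pair of
# groups, then Benjamini-Hochberg FDR correction across the pairwise p-values.
import itertools
from scipy.stats import kruskal
from statsmodels.stats.multitest import multipletests

def pairwise_kruskal(groups):
    """groups: dict mapping group name -> sequence of alpha diversity values."""
    pairs = list(itertools.combinations(sorted(groups), 2))
    stats = [kruskal(groups[a], groups[b]) for a, b in pairs]
    # multipletests returns (reject, corrected p-values, ...); index 1 is q
    qvals = multipletests([p for _, p in stats], method='fdr_bh')[1]
    return [(a, b, h, p, q)
            for (a, b), (h, p), q in zip(pairs, stats, qvals)]
```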

Let us know what you think. :qiime2:

Hey there @Paraqui! @ebolyen & I were curious about this, so we grabbed a demo of Prism and tried it out. We think we recreated your Prism analysis (screenshot attached).

If that's the case, one thing that jumped out at us is that the Prism test uses the "mean rank of each column", while QIIME 2 compares the columns outright. We are going to keep looking into this on our end, but it might provide some indication of why the results differ. It also looks like Prism is unable to perform pairwise Kruskal-Wallis, so recreating the QIIME 2 analysis in Prism might not be possible.
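To make the "mean rank" distinction concrete: Dunn-style comparisons reuse the ranks from the joint ranking of all groups, whereas a pairwise Kruskal-Wallis re-ranks each pair in isolation. A toy illustration of joint mean ranks (the numbers are made up):

```python
# Toy illustration: mean rank of each group computed from a single joint
# ranking across all observations (the quantity Dunn-style tests compare).
import numpy as np
from scipy.stats import rankdata

groups = {'Ctrl+V': [3.1, 4.2, 5.0],
          'Ctrl+E': [4.8, 5.5, 6.1],
          'DSS+V': [2.0, 2.9, 3.3]}

ranks = rankdata(np.concatenate(list(groups.values())))  # joint ranks
start = 0
for name, vals in groups.items():
    print(f'{name}: mean rank = {ranks[start:start + len(vals)].mean():.2f}')
    start += len(vals)
```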

Still looking into this… It looks like Prism uses an entirely different post-hoc test: Dunn's.

Dunn's test is normally selected as the default post-hoc test, but in the screenshot it seems you used the original FDR method of Benjamini and Hochberg to correct for multiple comparisons, as seen under the "Options" tab.
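If anyone wants to approximate the Prism numbers in Python, the third-party scikit-posthocs package has a Dunn's test implementation that can apply the BH adjustment. A rough sketch (the file and column names are placeholders):

```python
# Sketch: Dunn's post-hoc test with Benjamini-Hochberg adjustment, which
# should track Prism's pairwise output more closely than pairwise
# Kruskal-Wallis does. Assumes the scikit-posthocs package is installed.
import pandas as pd
import scikit_posthocs as sp

df = pd.read_csv('faith-pd-raw-data.tsv', sep='\t')  # placeholder name
adjusted = sp.posthoc_dunn(df, val_col='faith_pd', group_col='group',
                           p_adjust='fdr_bh')
print(adjusted)  # symmetric matrix of BH-adjusted pairwise p-values
```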

I notice that QIIME 2 gives the same p and q values for Ctrl+V vs Ctrl+E, and that Prism gives the same p and q values for Ctrl+V vs DSS+V. Is this something of note?

I think that is a different test entirely; check out the Prism docs:

https://www.graphpad.com/guides/prism/7/statistics/index.htm?how_the_kruskal-wallis_test_works.htm

It looks like Dunn's test is always used as the post-hoc test when the FDR correction is applied.

I don't think so. Identical q-values can fall out of the Benjamini-Hochberg procedure naturally: the adjusted values are cumulative minima over the sorted raw p-values, so distinct p-values often collapse to the same q.
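A quick demonstration with arbitrary p-values:

```python
# BH-adjusted q-values are step-up cumulative minima, so different raw
# p-values can legitimately end up with identical q-values.
from statsmodels.stats.multitest import multipletests

pvals = [0.010, 0.020, 0.024, 0.040]
qvals = multipletests(pvals, method='fdr_bh')[1]
for p, q in zip(pvals, qvals):
    print(f'p = {p:.3f} -> q = {q:.4f}')
# The first three p-values all adjust to q = 0.032; only the last differs.
```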
