beta group significance - permanova v2020.8

Good afternoon, dear all,
In the past few days, I've realized that something strange is happening with PERMANOVA results (for different datasets in my group).
To my knowledge, the first p-value presented in the visualization relates to the overall statistical test, regardless of which groups you are comparing, and then you have p-values for the pairwise comparisons.
In the past (e.g. v2018.6 and v2019.4), when you compared only 2 groups, the first p-value and the pairwise comparison p-value were the same, but this is not happening in this version, and I believe they should match.
Is anyone else having this problem, or does anyone have a clue what might be happening? Which p-value should I report when I am comparing only two groups? (I've uploaded one example.)
Thank you in advance,
Best regards,
Sara

weighted-unifrac-group-significance.qzv (579.4 KB)

Hi @smd!

I am not confident enough to comment on the validity of this statement (that the first p-value is for the overall test), but perhaps @jwdebelius can say something?

I don't think this is the case; please see the following examples from previous versions of the Moving Pictures tutorial.

I'm not sure that there is actually a problem here, but perhaps @jwdebelius can explain what is happening.

:qiime2:

Hi @smd,

I think the first issue that needs to be addressed is how p-values are calculated for PERMANOVA (this also applies to Mantel, PERMDISP, and adonis). First, you calculate a test statistic for your data. Then, the data gets shuffled $n$ times (by default, $n = 999$) and a test statistic is calculated for each shuffle. The p-value is calculated as $\frac{1 + \textrm{number more extreme}}{1 + \textrm{total number of permutations}}$.
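To make that formula concrete, here is a minimal, hypothetical sketch of a permutation test in Python. This is not QIIME 2's actual implementation; the function names, group labels, sample sizes, and the stand-in test statistic are all made up for illustration:

```python
import numpy as np

def permutation_p_value(values, labels, statistic, permutations=999, seed=None):
    """Permutation p-value: (1 + # as or more extreme) / (1 + permutations)."""
    rng = np.random.default_rng(seed)
    observed = statistic(values, labels)
    more_extreme = 0
    for _ in range(permutations):
        shuffled = rng.permutation(labels)  # shuffle group labels, keep data fixed
        if statistic(values, shuffled) >= observed:
            more_extreme += 1
    return (1 + more_extreme) / (1 + permutations)

# Toy statistic: absolute difference in group means (a stand-in for pseudo-F).
def mean_diff(values, labels):
    return abs(values[labels == "a"].mean() - values[labels == "b"].mean())

rng = np.random.default_rng(0)
values = np.concatenate([rng.normal(0, 1, 15), rng.normal(1, 1, 15)])
labels = np.array(["a"] * 15 + ["b"] * 15)
print(permutation_p_value(values, labels, mean_diff, seed=42))
```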

When your data is the most extreme, or one of the most extreme, configurations, you get a p-value that converges to $\frac{1}{1 + \textrm{total number of permutations}}$. In this case, the p-value is stable (often p=0.001 with 999 permutations). If your data is not that extreme, and say there are about 30-50 permutations (on average) that are more extreme than the original data, then you may get a set of p-values between 0.030 and 0.060 (999 permutations), depending on where the random numbers fall.
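You can see this jitter directly by re-running the same test under different random seeds. Below is a hypothetical sketch using scikit-bio's permanova (which QIIME 2 uses under the hood) on a simulated distance matrix; I believe scikit-bio draws its permutations from NumPy's global random state, so seeding np.random should control them, but treat that as an assumption:

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from skbio import DistanceMatrix
from skbio.stats.distance import permanova

# Simulate 2 groups of 10 samples with a modest (not extreme) separation.
rng = np.random.default_rng(0)
points = np.vstack([rng.normal(0.0, 1.0, (10, 3)),
                    rng.normal(0.7, 1.0, (10, 3))])
dm = DistanceMatrix(squareform(pdist(points)))
grouping = ["a"] * 10 + ["b"] * 10

for seed in (1, 2, 3):
    np.random.seed(seed)  # assumption: permanova permutes via NumPy's global state
    result = permanova(dm, grouping, permutations=999)
    print(f"seed={seed}: p = {result['p-value']:.3f}")  # values jitter run to run
```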

The reason we have to do this is that beta diversity is a distance. By their nature, distances are always between two points (my desk :computer: and my teapot :tea:); (the US :us: and Canada :canada:); (the castle :european_castle: and the dragon's lair :dragon:). Distances (and dissimilarities) are not independent, due in part to the triangle inequality. The problem is that conventional statistical tests assume that the values you're testing are independent. So, we can use a t-like value to calculate a statistic, but we can't use that to calculate our p-value, because we've broken one of the major mathematical assumptions. The solution is to re-shuffle the data so that we're not testing against a known distribution; we're testing against a distribution that has the same properties as our original data. This way, we get around the pesky interdependence problem.
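As a quick, hypothetical illustration of that interdependence: $n$ independent samples generate $\binom{n}{2} = \frac{n(n-1)}{2}$ pairwise distances, so the distances carry far fewer degrees of freedom than their count suggests:

```python
import numpy as np
from scipy.spatial.distance import pdist

rng = np.random.default_rng(0)
points = rng.normal(size=(10, 3))  # 10 independent samples
distances = pdist(points)          # all pairwise Euclidean distances
print(len(distances))              # 45 distances from only 10 points
# Move one point and 9 of these "observations" change at once;
# the distances are not independent of each other.
```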

So, to answer your questions.

This is correct: the first is the p-value for the overall test, and the second set comes from the post-hoc pairwise comparisons. (Although post-hoc testing is probably less useful for two groups.)

I did some digging in the 2020.11 release, and it looks like the pairwise calculations are done independently of the group calculation, even when only 2 groups are being compared. I'm having trouble finding the code from previous versions, but I suspect this is not new behavior.
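For what it's worth, this is roughly what the pairwise mode amounts to conceptually. The sketch below uses scikit-bio and is not the actual QIIME 2 code; the function name and metadata handling are my own illustration. The idea: filter the distance matrix down to each pair of groups and run a fresh PERMANOVA, with its own permutations, on the subset:

```python
from itertools import combinations

import pandas as pd
from skbio import DistanceMatrix
from skbio.stats.distance import permanova

def pairwise_permanova(dm: DistanceMatrix, metadata: pd.Series, permutations=999):
    """Run a separate PERMANOVA for each pair of groups (conceptual sketch)."""
    results = {}
    for g1, g2 in combinations(sorted(metadata.unique()), 2):
        ids = metadata[metadata.isin([g1, g2])].index
        sub_dm = dm.filter(ids)                     # keep only this pair's samples
        grouping = metadata.loc[list(sub_dm.ids)]   # labels in the sub-matrix's order
        results[(g1, g2)] = permanova(sub_dm, grouping, permutations=permutations)
    return results
```

Because each call draws its own random permutations, the two-group pairwise p-value won't necessarily match the overall p-value, even though the comparison is the same.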

It's not a problem when you're working with a distribution-based p-value: as long as your test statistic is consistent, it doesn't matter how many times you calculate a t-value; given the same degrees of freedom, it will give you the same p-value. When you do permutation, as I explained above, you may not get exactly the same value.
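To see the contrast, here is a toy comparison (my own example, not anyone's production code): a closed-form t-test returns the identical p-value every time, while a permutation p-value on the same data varies slightly with the shuffles you happen to draw:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
a, b = rng.normal(0.0, 1.0, 20), rng.normal(0.6, 1.0, 20)

# Distribution-based: same statistic + same degrees of freedom -> same p, always.
print(stats.ttest_ind(a, b).pvalue)
print(stats.ttest_ind(a, b).pvalue)  # identical to the line above

# Permutation-based: the p-value depends on which shuffles are drawn.
observed = abs(a.mean() - b.mean())
pooled = np.concatenate([a, b])
for seed in (1, 2):
    r = np.random.default_rng(seed)
    hits = 0
    for _ in range(999):
        perm = r.permutation(pooled)
        if abs(perm[:20].mean() - perm[20:].mean()) >= observed:
            hits += 1
    print((1 + hits) / (1 + 999))  # close across seeds, but not necessarily identical
```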

I'd go with the first, and turn to post-hoc testing when you have more than two groups, but honestly, I don't think there's always a good answer here. (I would recommend reporting it as (p=0.034, 999 permutations) so people know how many permutations you ran.)

Best,
Justine


Thank you @thermokarst and @jwdebelius, you were very helpful!

Based on your explanation, I believe the reason I reported that “In the past (e.g. v2018.6 and v2019.4), when you compared only 2 groups, the first p-value and the pairwise comparison p-value were the same, but this is not happening in this version” was the presence of extreme group configurations, where both p-values converge to the same minimum.


Hi @smd,

I think that's probably true, especially if both were equal to 0.001 (or the lowest value for your permutations).

Best,
Justine

