alpha correlation and adjusted p-value

Dear all,
I have a small (probably silly) doubt/question related to qiime diversity alpha-correlation. When I run it, I obtain the corresponding test statistic and p-value but not the q-value (the adjusted p-value). I am pretty sure that if the p-value is not adjusted there must be a good reason, but I cannot find it.
Kind regards

Hi!
The alpha-correlation visualizer calculates a test statistic and p-value separately for each numeric column in your metadata file (the alpha diversity values of all samples vs. the values in that column). Although several columns may be tested, each test is performed and interpreted independently, so no p-value correction is applied; there is no family of pairwise or repeated tests being evaluated together.
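To make that concrete, here is a minimal sketch of the idea in plain pandas/SciPy (not the plugin's own code): each numeric column is paired with the alpha diversity vector and tested on its own. The column names and values below are toy stand-ins. If you do want q-values for the set of columns you tested, you can always apply a Benjamini-Hochberg correction to the exported p-values yourself.

```python
# Minimal sketch (not the plugin's own code) of a per-column Spearman test:
# each numeric metadata column is paired with the alpha diversity vector and
# tested on its own, so no other column enters that calculation.
import pandas as pd
from scipy.stats import spearmanr
from statsmodels.stats.multitest import multipletests

# Toy stand-ins for one alpha diversity metric and two numeric metadata columns.
alpha = pd.Series([2.1, 3.4, 2.8, 4.0, 3.1], name="shannon")
metadata = pd.DataFrame({"ph": [5.5, 6.1, 5.9, 6.8, 6.0],
                         "depth_cm": [10, 20, 15, 30, 25]})

rows = []
for col in metadata.select_dtypes("number").columns:
    paired = pd.concat([alpha, metadata[col]], axis=1).dropna()
    rho, p = spearmanr(paired["shannon"], paired[col])
    rows.append({"column": col, "spearman": rho, "p-value": p, "n": len(paired)})

results = pd.DataFrame(rows).set_index("column")

# Optional: if you do want q-values for the columns you tested, apply
# Benjamini-Hochberg yourself on the exported p-values.
results["q-value"] = multipletests(results["p-value"], method="fdr_bh")[1]
print(results)
```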
I hope that I understood your question right.

Best,

Great! Now it is clear. Thank you for your answer. Best regards

If each column of metadata is computed independently, why does adding a metadata column affect the p-values for everything else? I noticed this when I averaged several technical replicates in my data and put them into a new column. When I re-calculated, all of the correlation statistics for the other columns changed very slightly. Not enough to affect my results, but it did make me wonder whether the app is calculating differences in rank across the whole spreadsheet rather than one column at a time.

That puzzles me as well.
Did you rerun alpha-group-significance only, or did you also rerun the core-metrics command?

I re-ran everything through core-metrics (q2 2023.7). Here is a table showing the before and after:

| Parameter | Spearman (before) | P-value (before) | n (before) | Spearman (after) | P-value (after) | n (after) |
| --- | --- | --- | --- | --- | --- | --- |
| A | -0.1378 | 0.217 | 82 | -0.1316 | 0.2386 | 82 |
| B | -0.2026 | 0.0229 | 126 | -0.2025 | 0.023 | 126 |
| C | -0.1097 | 0.2213 | 126 | -0.1072 | 0.2323 | 126 |
| D | 0.4312 | 0 | 126 | 0.4303 | 0 | 126 |
| E | -0.2337 | 0.0084 | 126 | -0.2347 | 0.0081 | 126 |
| F | -0.165 | 0.1891 | 65 | -0.165 | 0.1891 | 65 |
| G | 0.0753 | 0.4019 | 126 | 0.0771 | 0.3909 | 126 |
| H | 0.1255 | 0.1668 | 123 | 0.1268 | 0.1622 | 123 |
| I | 0.035 | 0.7005 | 123 | 0.038 | 0.6762 | 123 |
| J (new) | – | – | – | -0.007 | 0.9385 | 123 |
| K (new) | – | – | – | -0.0469 | 0.6065 | 123 |
| J.1 | 0.0852 | 0.3485 | 123 | 0.0813 | 0.3711 | 123 |
| J.2 | -0.0247 | 0.7867 | 123 | -0.0284 | 0.7551 | 123 |
| J.3 | -0.0419 | 0.6457 | 123 | -0.0464 | 0.6106 | 123 |
| K.1 | -0.0196 | 0.8298 | 123 | -0.0224 | 0.8056 | 123 |
| K.2 | -0.0403 | 0.6582 | 123 | -0.0465 | 0.6093 | 123 |
| K.3 | -0.0529 | 0.5613 | 123 | -0.0525 | 0.5645 | 123 |
| L | 0.0626 | 0.4916 | 123 | 0.0602 | 0.5081 | 123 |
| M | 0.0974 | 0.2841 | 123 | 0.1013 | 0.265 | 123 |
| N | 0.0707 | 0.4371 | 123 | 0.0654 | 0.4726 | 123 |
| O | 0.0609 | 0.5037 | 123 | 0.0609 | 0.5037 | 123 |
| P | 0.0534 | 0.5572 | 123 | 0.0564 | 0.5355 | 123 |
| Q | 0.1118 | 0.2183 | 123 | 0.1135 | 0.2114 | 123 |
| R | -0.105 | 0.2479 | 123 | -0.1078 | 0.2354 | 123 |
| S | 0.0356 | 0.6957 | 123 | 0.0371 | 0.6836 | 123 |
| T | -0.1955 | 0.0302 | 123 | -0.1975 | 0.0286 | 123 |
| U | 0.0609 | 0.5037 | 123 | 0.0609 | 0.5037 | 123 |
| V | -0.0742 | 0.4146 | 123 | -0.0742 | 0.4146 | 123 |
| W | 0.1126 | 0.2148 | 123 | 0.1112 | 0.2207 | 123 |
| X | 0.0876 | 0.3354 | 123 | 0.0861 | 0.3437 | 123 |
| Y | -0.0696 | 0.4446 | 123 | -0.0747 | 0.4116 | 123 |
| Z | -0.0404 | 0.657 | 123 | -0.0405 | 0.6566 | 123 |
| AA | -0.0677 | 0.4568 | 123 | -0.0686 | 0.451 | 123 |
| AB | -0.0534 | 0.5572 | 123 | -0.0534 | 0.5572 | 123 |
| AC | -0.1673 | 0.0643 | 123 | -0.1716 | 0.0577 | 123 |
| AD | -0.1351 | 0.1363 | 123 | -0.1336 | 0.1407 | 123 |
| AE | -0.0534 | 0.5572 | 123 | -0.0534 | 0.5572 | 123 |
| AF | -0.1626 | 0.0723 | 123 | -0.1626 | 0.0723 | 123 |
| AG | 0.0609 | 0.5037 | 123 | 0.0609 | 0.5037 | 123 |
| AH | -0.0867 | 0.379 | 105 | -0.0883 | 0.3704 | 105 |
| AI | -0.0383 | 0.6737 | 123 | -0.0363 | 0.6898 | 123 |
| AJ | -0.1327 | 0.1436 | 123 | -0.1344 | 0.1382 | 123 |
| AK | -0.1101 | 0.2255 | 123 | -0.1152 | 0.2045 | 123 |
| AL | -0.1608 | 0.0757 | 123 | -0.1639 | 0.0701 | 123 |
| AM | -0.0189 | 0.8355 | 123 | -0.0192 | 0.8332 | 123 |
| AN | -0.0796 | 0.3999 | 114 | -0.0807 | 0.3931 | 114 |
| AO | 0.025 | 0.7811 | 126 | 0.0251 | 0.7802 | 126 |

(BTW, pulling this out of the q2 viewer one at a time is excruciatingly slow. A .tsv download feature would help a lot)
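In the meantime, a .qzv is just a zip archive, so you can pull everything out in one go with `qiime tools extract` or a few lines of Python instead of copying values from the viewer. A rough sketch is below; the file names under data/ vary between QIIME 2 versions, and `alpha-correlation.qzv` is just a placeholder name, so list the contents first and grab whichever files hold the per-column results.

```python
# Workaround sketch: a .qzv is a zip archive, so the visualization's data
# files can be extracted in one go instead of read one column at a time
# in the viewer. File names under data/ vary between QIIME 2 versions.
import zipfile
from pathlib import Path

qzv = "alpha-correlation.qzv"            # placeholder file name
outdir = Path("alpha-correlation-data")  # where to put the extracted files

with zipfile.ZipFile(qzv) as zf:
    # Visualization contents live under <uuid>/data/ inside the archive.
    data_files = [n for n in zf.namelist() if "/data/" in n and not n.endswith("/")]
    for name in data_files:
        print(name)  # inspect what is actually in there first
        target = outdir / Path(name).relative_to(Path(name).parts[0])
        target.parent.mkdir(parents=True, exist_ok=True)
        target.write_bytes(zf.read(name))
```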

Hi @adeVries, I don't see this when I try to reproduce it with data from the Moving Pictures tutorial. Could you provide the two .qzv files that you're looking at as attachments to this post? I've attached two that you could load with QIIME 2 View: 2.qzv contains about 30 extra numeric metadata columns relative to 1.qzv, and I get identical test statistics, p-values, and sample sizes for the numeric columns shared between the two results.
1.qzv (343.0 KB)
2.qzv (411.0 KB)

So you can confirm that you reran all of the core-metrics steps?
In that case, the explanation is pretty simple: core-metrics randomly rarefies each sample to the chosen sampling depth, so some run-to-run variation in the diversity metrics is expected, and that variation carries through to the p-values. The changes you observed are because you reran core-metrics, not because you added a new column.
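A toy illustration of that effect (not QIIME 2's own code): subsampling the same sample to the same depth without a fixed seed gives slightly different counts each run, so a metric such as Shannon diversity, and with it the downstream Spearman p-values, shifts a little every time. The counts below are made up for the example.

```python
# Toy illustration (not QIIME 2 itself): rarefying the same sample twice
# without a fixed seed gives slightly different counts, hence slightly
# different diversity values from run to run.
import numpy as np

sample_counts = np.array([500, 300, 120, 50, 20, 8, 2])  # made-up feature counts for one sample
depth = 500                                               # rarefaction depth

def rarefy(counts, depth, rng):
    # Draw `depth` reads without replacement from the sample's reads.
    reads = np.repeat(np.arange(len(counts)), counts)
    picked = rng.choice(reads, size=depth, replace=False)
    return np.bincount(picked, minlength=len(counts))

def shannon(counts):
    # Shannon diversity (base 2) of a count vector.
    p = counts[counts > 0] / counts.sum()
    return -(p * np.log2(p)).sum()

for run in range(3):
    rng = np.random.default_rng()  # no fixed seed, like rerunning core-metrics
    print(run, shannon(rarefy(sample_counts, depth, rng)))
```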

Please correct me if I didn't understand you correctly.