I got some unexpected results from beta-group-significance tests on my data. Based on pairwise PERMANOVA and ANOSIM tests, I was expecting to see significant differences between some groups. However, the p-values from both tests were > 0.05, which indicates there is no significant difference between the paired groups. If I look at the distance boxplots, some groups clearly have very different means from others (e.g., DE-NW-23 vs. UK-DS-49: pseudo-F = 9.62, p = 0.102; R = 1, p = 0.092). How does this happen? Is it because I have too few samples per group (3 samples in each group)? Thanks in advance!
I have a question for you. I checked your plot. It seems you have 14 samples with three replicates each (n = 3), e.g., DE-NW-23 (n = 3), but why do other comparisons show n = 9?
What is the origin of n = 9 in your case?
I have this (n = 9) in my beta-group-significance output too, but I thought it was related to my replicates, except for the Control_Week1 sample (I have 12 samples, three replicates each). You can see that here. You seemingly have more samples and replicates than I do, yet you have the same n = 9. Honestly, I expected yours to be higher than mine, but it is equal!
I would appreciate it if you could clarify this for me.
My understanding is that n = 9 indicates the number of between-sample distances, not the number of samples. For example, for DE-NW-23 (3 replicates) vs. DE-NW-51 (3 replicates), 9 = 3 × 3.
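To make that counting concrete, here is a minimal sketch (the replicate IDs are hypothetical, just for illustration) of why two groups of 3 replicates always yield 9 between-group distances:

```python
from itertools import product

# hypothetical replicate IDs for two groups of three samples each
de_nw_23 = ["DE-NW-23_r1", "DE-NW-23_r2", "DE-NW-23_r3"]
de_nw_51 = ["DE-NW-51_r1", "DE-NW-51_r2", "DE-NW-51_r3"]

# every replicate in one group is paired with every replicate in the other,
# so the number of between-group distances is 3 x 3 = 9
between_pairs = list(product(de_nw_23, de_nw_51))
print(len(between_pairs))  # 9
```

So n in the boxplot labels grows with the product of the two group sizes, not their sum, which is why differently sized datasets can still show the same n = 9.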
Yes, I think you are right: you are underpowered with so few replicates, and I agree the boxplots suggest very clear differences.
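One way to see the power limit: with only 3 samples per group, a permutation test has very few distinct relabelings to draw from, which puts a hard floor under the smallest p-value it can ever report. A quick sketch of the arithmetic (my own back-of-the-envelope illustration, not QIIME 2 output):

```python
from math import comb

n = 3  # replicates per group in a pairwise comparison

# ways to choose which 3 of the 6 samples receive the first group label
labelings = comb(2 * n, n)            # 20

# the test statistic is unchanged if the two group labels are swapped,
# so only half of these labelings are distinct partitions of the data
distinct_partitions = labelings // 2  # 10

# the smallest p-value an exhaustive permutation test can report
min_p = 1 / distinct_partitions       # 0.1
print(labelings, distinct_partitions, min_p)
```

Notably, the observed p-values (0.102 and 0.092) sit right at this floor, which is consistent with the groups being as separated as a permutation test can possibly detect at n = 3 per group.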
It seems like you might be splitting your data too finely, though, and there are larger natural groupings: if it fits your hypothesis, you could compare DE vs. UK vs. US samples, or DE-NW vs. DE-SA.