So that I understand your pipeline: which step did you re-run from when you changed your metadata? (The feature table? Diversity? Testing?)
If you re-ran from core diversity metrics, then you re-rarefied the table (a stochastic process) and then re-ran a permutation test (also a random process). So you have two random processes that produced a slight difference in your results, but on the whole they give very similar numbers (167/1000 permutations were more extreme before, and 188/1000 permutations were more extreme in the new results).
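To illustrate that second point: this is not QIIME 2's implementation, just a minimal numpy sketch (function name and toy data invented for illustration) of why the same permutation test, on the same data, reports slightly different p-values when only the random permutations change.

```python
import numpy as np

def permutation_test(group_a, group_b, n_perm=1000, seed=None):
    """Two-sample permutation test on the difference in means.
    Returns the fraction of permuted differences at least as
    extreme as the observed one (the permutation p-value)."""
    rng = np.random.default_rng(seed)
    observed = abs(np.mean(group_a) - np.mean(group_b))
    pooled = np.concatenate([group_a, group_b])
    n_a = len(group_a)
    more_extreme = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)  # random relabelling of samples
        perm_diff = abs(np.mean(pooled[:n_a]) - np.mean(pooled[n_a:]))
        if perm_diff >= observed:
            more_extreme += 1
    return (more_extreme + 1) / (n_perm + 1)

# Toy data standing in for a per-sample diversity value
rng = np.random.default_rng(0)
a = rng.normal(2.0, 1.0, 20)
b = rng.normal(2.5, 1.0, 20)

# Same data, same test; only the permutation seed differs,
# so the p-values wobble a little between runs.
print(permutation_test(a, b, seed=1))
print(permutation_test(a, b, seed=2))
```

Rarefaction adds a second layer of the same kind of randomness, so small run-to-run differences like 167/1000 vs. 188/1000 are expected.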
However, you don’t need to re-run your analysis when you edit your metadata. (God knows, if you did, a lot of us would be in trouble!) There was a long discussion on the topic a while ago; it might be worth checking out to get a better sense of what’s going on under the hood and to help figure out which steps have to be re-done where.
Linked topic: Mistake in metadata file and re-running core metrics analysis