Qiime2-2023.2 Plugin error for diversity

I would like to kindly request your assistance on an error I am getting when running Qiime2-2023.2.

I am analyzing V4 paired-end sequences. Everything runs smoothly (demultiplexing, DADA2, phylogenetic tree reconstruction, rarefaction plots) up to the diversity analysis. I ran the following command:

qiime diversity core-metrics-phylogenetic \
  --i-phylogeny rooted-tree.qza \
  --i-table table.qza \
  --m-metadata-file mappingALS.txt \
  --p-sampling-depth 7000 \
  --output-dir alphabeta

The error I receive is indicated below:

I do not know if it is relevant or helpful to note at this point that carrying out taxonomic assignment with this data works perfectly well.

Any insights or advice would be greatly appreciated!

I thank you for your assistance,

Hello @Stavrosb, was there anything after that red text in the output from the command? If so, can you rerun the command and post that as well? Thank you.


Hi @Oddant1, thank you for your response.

I have run the command again and am attaching the entire error message (including an additional line at the end that was missing from the previous message):


Thank you @Stavrosb! Unfortunately, that is what I was afraid of. That error message is spectacularly unhelpful; it basically just says "something didn't work in this step" and gives no indication of what specifically failed or why.

@wasade do you have any idea what is going on here? @ebolyen is suggesting it might be an MPI issue? Given that it is just a "we returned 1 from main" in C++, there is very little information to go on.


Hi @Stavrosb,

Could you try running export UNIFRAC_USE_GPU=N prior to the diversity calculations? What I suspect is happening is that the code detects an available GPU and attempts to use it, but is unable to. I certainly agree the error message should be more informative, but before opening an issue, I want to make sure this resolves the problem.
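For anyone following along, a minimal sketch of what that workaround looks like: the variable must be exported in the same shell session that will launch QIIME 2, since qiime runs as a child process of that shell.

```shell
# Force the CPU code path in UniFrac by disabling GPU autodetection.
export UNIFRAC_USE_GPU=N

# Sanity check: the variable must be visible to child processes,
# since qiime runs as one.
env | grep '^UNIFRAC_USE_GPU='
```

After this, run the `qiime diversity core-metrics-phylogenetic` command as before in the same session.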



Hi @wasade and @Oddant1 ,
thank you both so much for taking the time to look into this!

@wasade, we ran the export UNIFRAC_USE_GPU=N command prior to the diversity calculations and unfortunately got the same error message. Please find details of the command that was run below:

Some further information that may be relevant to you:
We are running QIIME 2 in a Docker environment. We have tried the latest image (2023.2) and the previous image (2022.11) of QIIME 2 (Installing QIIME 2 using Docker — QIIME 2 2023.2.0 documentation). We have run “export UNIFRAC_USE_GPU=N” in both versions and get the same error message.
Furthermore, as I mentioned in my original message, all other commands work just fine.
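One way to rule out host-session differences when running in Docker is to pass the variable into the container explicitly with `docker run -e`, rather than relying on it being exported on the host. A sketch; the image tag and mount paths are assumptions based on the Docker install docs, and the command is only echoed here rather than executed:

```shell
# Build the docker invocation with the env var injected via -e, so it
# does not depend on the host session's shell profile. Image tag and
# /data mount paths are illustrative assumptions.
CMD='docker run -t -i -v "$(pwd):/data" -e UNIFRAC_USE_GPU=N \
  quay.io/qiime2/core:2023.2 \
  qiime diversity core-metrics-phylogenetic \
    --i-phylogeny /data/rooted-tree.qza \
    --i-table /data/table.qza \
    --m-metadata-file /data/mappingALS.txt \
    --p-sampling-depth 7000 \
    --output-dir /data/alphabeta'
echo "$CMD"
```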

Thank you for your efforts, your assistance is greatly appreciated.



This is extremely helpful, thank you @Stavrosb. I've opened up an internal discussion, and will follow up shortly.

Hi @Stavrosb,

We've previously seen a similar situation occur. In that case, use of export UNIFRAC_USE_GPU=Y was a viable workaround. We further introduced a bug fix in version 1.2.1 to fail over on autodetection, addressing the issue discovered there, although that version isn't yet part of a QIIME 2 environment.

What's unusual here is that the workaround does not work. Could you try adding the following as well? It will tell us a little more about what's going on.


One other possibility: is it feasible to request access to the GPU through your batch system?


Hi @wasade
thank you for your response.

After reading your latest message, we reviewed everything again on our side to make sure we weren't missing anything, and we identified an issue:
We had run “export UNIFRAC_USE_GPU=N” from another computer (a different session on our IT's computer) but under the same user. Unfortunately, the environment variable was not persisted across sessions, which resulted in the error we had been getting. We have now added it to .bashrc, so it is set every time. Following this, the problem has been resolved!!
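For anyone hitting the same thing: the variable has to be set in every session that launches QIIME 2, so the usual fix is to append the export to ~/.bashrc. A minimal sketch, writing to a demo file here instead of the real ~/.bashrc:

```shell
# Append the export once (idempotently) so new login shells pick it up.
# A demo file stands in for the real ~/.bashrc for illustration.
BASHRC=./bashrc.demo
grep -qs 'UNIFRAC_USE_GPU' "$BASHRC" || \
  echo 'export UNIFRAC_USE_GPU=N' >> "$BASHRC"

# New interactive shells source .bashrc automatically; source it
# explicitly here so the current session is covered as well.
. "$BASHRC"
echo "$UNIFRAC_USE_GPU"
```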

Thank you so much for taking the time to address our issue and for finding a solution!
Your time and efforts are greatly appreciated!



Ah, that would explain it!! I'm glad the issue is sorted out!


This topic was automatically closed 31 days after the last reply. New replies are no longer allowed.