QIIME is caching your current deployment... Illegal instruction (core dumped)

Hello All - new user here trying to install QIIME 2 on a native Linux installation using Linux Mint.
Anaconda3 is installed, with conda updated to 23.1.1, and I followed the QIIME 2 installation instructions:

wget https://data.qiime2.org/distro/core/qiime2-2023.2-py38-linux-conda.yml
conda env create -n qiime2-2023.2 --file qiime2-2023.2-py38-linux-conda.yml
rm qiime2-2023.2-py38-linux-conda.yml
conda info --envs
# conda environments:
#
base                  *  /home/hannah/anaconda3
qiime2-2023.2            /home/hannah/anaconda3/envs/qiime2-2023.2

When I try to activate QIIME 2 I receive the following error, repeatedly.

$ conda activate qiime2-2023.2
QIIME is caching your current deployment for improved performance. This may take a few moments and should only happen once per deployment.
Illegal instruction (core dumped)
$ qiime --help
QIIME is caching your current deployment for improved performance. This may take a few moments and should only happen once per deployment.
Illegal instruction (core dumped)

I managed to find a mention of this on the forum, but no resolution, as the OP didn't reply.

Could someone please help me understand what may be the problem?
Many thanks.

Hello @Cosmic,

Can you run uname -a in your terminal and share the output here?

Hello @colinvwood

Thank you for replying. The output is below

$ uname
Linux

That doesn't seem like an awful lot of information, does it...

If it helps, Linux was installed as the sole OS, overwriting a Win7 installation.

The PC has a Pentium G640 2.8 GHz processor with 8 GB (2 x 4 GB) of memory.

Is it just too old a PC to work? :frowning_face:

Hi @Cosmic,

Sorry you're seeing this issue. The CPU is definitely on the older end of things, but in principle it should work. I have this script for debugging this part of the loading cycle:

If you could download and run that in your environment:

python qiime_more_info.py

we should see it load each plugin and print each import performed in the process. I still expect a core dump, but the last thing printed is the likely culprit.


Hi @ebolyen

I've tried to run the script and I'm not sure it ran correctly, but I've listed the output below. Thanks for trying to help me sort this out.

hannah@hannah-HP-Compaq-Pro-4300-SFF-PC:~$ conda activate
(base) hannah@hannah-HP-Compaq-Pro-4300-SFF-PC:~$ conda info --envs
# conda environments:
#
base                  *  /home/hannah/anaconda3
python                   /home/hannah/anaconda3/envs/python
qiime2-2023.2            /home/hannah/anaconda3/envs/qiime2-2023.2

(base) hannah@hannah-HP-Compaq-Pro-4300-SFF-PC:~$ cd qiime2-2023.2
bash: cd: qiime2-2023.2: No such file or directory
(base) hannah@hannah-HP-Compaq-Pro-4300-SFF-PC:~$ activate qiime2-2023.2
/home/hannah/anaconda3/bin/activate: 5: /home/hannah/anaconda3/envs/qiime2-2023.2/etc/conda/activate.d/activate-r-base.sh: [[: not found
(base) hannah@hannah-HP-Compaq-Pro-4300-SFF-PC:~$ activate qiime2-2023.2
/home/hannah/anaconda3/bin/activate: 5: /home/hannah/anaconda3/envs/qiime2-2023.2/etc/conda/activate.d/activate-r-base.sh: [[: not found
(base) hannah@hannah-HP-Compaq-Pro-4300-SFF-PC:~$ conda activate qiime2-2023.2
QIIME is caching your current deployment for improved performance. This may take a few moments and should only happen once per deployment.
#!/usr/bin/env python
import sys
from qiime2.sdk import PluginManager

class SnoopImports:
    def find_spec(self, name, path=None, target=None):
        print(name, path)
        return None

def main():
    for entry in PluginManager.iter_entry_points():
        print(f"=> LOADING {entry.module_name}")
        entry.load()


sys.meta_path = [SnoopImports()] + sys.meta_path

if __name__ == '__main__':
    main()
Illegal instruction (core dumped)
(qiime2-2023.2) hannah@hannah-HP-Compaq-Pro-4300-SFF-PC:~$ #!/usr/bin/env python
(qiime2-2023.2) hannah@hannah-HP-Compaq-Pro-4300-SFF-PC:~$ import sys
Command 'import' not found, but can be installed with:
sudo apt install graphicsmagick-imagemagick-compat  # version 1.4+really1.3.38-1ubuntu0.1, or
sudo apt install imagemagick-6.q16                  # version 8:6.9.11.60+dfsg-1.3ubuntu0.22.04.3
sudo apt install imagemagick-6.q16hdri              # version 8:6.9.11.60+dfsg-1.3ubuntu0.22.04.3
(qiime2-2023.2) hannah@hannah-HP-Compaq-Pro-4300-SFF-PC:~$ from qiime2.sdk import PluginManager
Command 'from' not found, but can be installed with:
sudo apt install mailutils
(qiime2-2023.2) hannah@hannah-HP-Compaq-Pro-4300-SFF-PC:~$ 
(qiime2-2023.2) hannah@hannah-HP-Compaq-Pro-4300-SFF-PC:~$ class SnoopImports:
Command 'class' not found, did you mean:
  command 'clasp' from deb clasp (3.3.5-4ubuntu1)
  command 'iclass' from deb ivtools-bin (2.0.11d.a1-1build1)
Try: sudo apt install <deb name>
(qiime2-2023.2) hannah@hannah-HP-Compaq-Pro-4300-SFF-PC:~$     def find_spec(self, name, path=None, target=None):
bash: syntax error near unexpected token `('
(qiime2-2023.2) hannah@hannah-HP-Compaq-Pro-4300-SFF-PC:~$         print(name, path)
bash: syntax error near unexpected token `name,'
(qiime2-2023.2) hannah@hannah-HP-Compaq-Pro-4300-SFF-PC:~$         return None
bash: return: None: numeric argument required
bash: return: can only `return' from a function or sourced script
(qiime2-2023.2) hannah@hannah-HP-Compaq-Pro-4300-SFF-PC:~$ 
(qiime2-2023.2) hannah@hannah-HP-Compaq-Pro-4300-SFF-PC:~$ def main():
bash: syntax error near unexpected token `('
(qiime2-2023.2) hannah@hannah-HP-Compaq-Pro-4300-SFF-PC:~$     for entry in PluginManager.iter_entry_points():
bash: syntax error near unexpected token `('
(qiime2-2023.2) hannah@hannah-HP-Compaq-Pro-4300-SFF-PC:~$         print(f"=> LOADING {entry.module_name}")
bash: syntax error near unexpected token `f"=> LOADING {entry.module_name}"'
(qiime2-2023.2) hannah@hannah-HP-Compaq-Pro-4300-SFF-PC:~$         entry.load()
> 
> 
> sys.meta_path = [SnoopImports()] + sys.meta_path
bash: syntax error near unexpected token `sys.meta_path'
(qiime2-2023.2) hannah@hannah-HP-Compaq-Pro-4300-SFF-PC:~$ 
(qiime2-2023.2) hannah@hannah-HP-Compaq-Pro-4300-SFF-PC:~$ if __name__ == '__main__':
>     main()
> python qiime_more_info.py
bash: syntax error near unexpected token `python'
(qiime2-2023.2) hannah@hannah-HP-Compaq-Pro-4300-SFF-PC:~$ python qiime_more_info.py
python: can't open file 'qiime_more_info.py': [Errno 2] No such file or directory
(qiime2-2023.2) hannah@hannah-HP-Compaq-Pro-4300-SFF-PC:~$ conda activate python
(python) hannah@hannah-HP-Compaq-Pro-4300-SFF-PC:~$ python qiime_more_info.py
Command 'python' not found, did you mean:
  command 'python3' from deb python3
  command 'python' from deb python-is-python3
(python) hannah@hannah-HP-Compaq-Pro-4300-SFF-PC:~$ 

Sorry for not being clearer, @Cosmic!

You need to save the script as a file and run that file. I've attached it in that form here.
qiime_more_info.py (417 Bytes)

Hi @ebolyen - sorry for being dopey.
I have run the script and there is a lot of output - so much that it exceeds the character limit, so I have uploaded it as a .txt file attached to this message.

Thanks.

qiime_more_info_output.txt (233.3 KB)


Thanks @Cosmic!

Looks like the troublesome plugin to load is q2-diversity; more specifically, it's the chained import from q2-diversity-lib, which then imports unifrac:

=> LOADING q2_diversity.plugin_setup
q2_diversity None
q2_diversity._alpha ['/home/hannah/anaconda3/envs/qiime2-2023.2/lib/python3.8/site-packages/q2_diversity']
q2_diversity_lib None
q2_diversity_lib._version ['/home/hannah/anaconda3/envs/qiime2-2023.2/lib/python3.8/site-packages/q2_diversity_lib']
q2_diversity_lib.alpha ['/home/hannah/anaconda3/envs/qiime2-2023.2/lib/python3.8/site-packages/q2_diversity_lib']
q2_diversity_lib._util ['/home/hannah/anaconda3/envs/qiime2-2023.2/lib/python3.8/site-packages/q2_diversity_lib']
q2_diversity._alpha._pipeline ['/home/hannah/anaconda3/envs/qiime2-2023.2/lib/python3.8/site-packages/q2_diversity/_alpha']
q2_diversity._alpha._visualizer ['/home/hannah/anaconda3/envs/qiime2-2023.2/lib/python3.8/site-packages/q2_diversity/_alpha']
statsmodels.sandbox ['/home/hannah/anaconda3/envs/qiime2-2023.2/lib/python3.8/site-packages/statsmodels']
...
statsmodels.tools.sm_exceptions ['/home/hannah/anaconda3/envs/qiime2-2023.2/lib/python3.8/site-packages/statsmodels/tools']
q2_diversity._beta ['/home/hannah/anaconda3/envs/qiime2-2023.2/lib/python3.8/site-packages/q2_diversity']
q2_diversity_lib.beta ['/home/hannah/anaconda3/envs/qiime2-2023.2/lib/python3.8/site-packages/q2_diversity_lib']
unifrac None
unifrac._methods ['/home/hannah/anaconda3/envs/qiime2-2023.2/lib/python3.8/site-packages/unifrac']
bp None
bp._bp ['/home/hannah/anaconda3/envs/qiime2-2023.2/lib/python3.8/site-packages/bp']
...
bp.skbio ['/home/hannah/anaconda3/envs/qiime2-2023.2/lib/python3.8/site-packages/bp']
bp._version ['/home/hannah/anaconda3/envs/qiime2-2023.2/lib/python3.8/site-packages/bp']
unifrac._meta ['/home/hannah/anaconda3/envs/qiime2-2023.2/lib/python3.8/site-packages/unifrac']
unifrac._api ['/home/hannah/anaconda3/envs/qiime2-2023.2/lib/python3.8/site-packages/unifrac']
Illegal instruction (core dumped)

Could you please give us the results of this command (inside the same QIIME 2 environment), which will tell us the version and build of unifrac you have installed:

conda list unifrac

as well as this command which will tell us what CPU instructions you have available:

cat /proc/cpuinfo | grep flags | uniq
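If it's more convenient, here's a small Python sketch of the same check - it just parses the flags line from /proc/cpuinfo-style text and tests for specific instruction sets. Purely illustrative; the grep above is all we actually need.

```python
def parse_flags(cpuinfo_text):
    """Return the set of CPU flags from /proc/cpuinfo-style text."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            # "flags : fpu vme de ..." -> everything after the colon
            return set(line.split(":", 1)[1].split())
    return set()

# Usage on a real machine: flags = parse_flags(open("/proc/cpuinfo").read())
sample = "flags\t\t: fpu vme sse sse2 ssse3 sse4_1 sse4_2 popcnt"
flags = parse_flags(sample)
print("avx" in flags, "popcnt" in flags)  # -> False True
```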

Thanks so much!

Hi @ebolyen

Output below as requested - hope it helps.

Thanks

(qiime2-2023.2) hannah@hannah-HP-Compaq-Pro-4300-SFF-PC:~$ conda list unifrac
# packages in environment at /home/hannah/anaconda3/envs/qiime2-2023.2:
#
# Name                    Version                   Build  Channel
unifrac                   1.1.1            py38h17adfb0_1    bioconda
unifrac-binaries          1.1.1                h15a0faf_4    bioconda

and

(qiime2-2023.2) hannah@hannah-HP-Compaq-Pro-4300-SFF-PC:~$ cat /proc/cpuinfo | grep flags | uniq
flags		: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ht tm pbe syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl est tm2 ssse3 cx16 xtpr pdcm pcid sse4_1 sse4_2 popcnt tsc_deadline_timer xsave lahf_lm epb pti ssbd ibrs ibpb stibp xsaveopt dtherm arat pln pts md_clear flush_l1d


Thanks @Cosmic,

I'd like to try something I haven't really done before: we're going to capture the core dump so that I can inspect it in a debugger and hopefully find the offending instruction. I should be able to work backwards from that to figure out why exactly unifrac(-binaries) is causing this issue.

There are apparently a million ways to do this, but since you are on Linux Mint, it's probably in one of two places:

  1. It happens to be right in the directory where you ran the command. If so, you'll see some file or directory named core, or something suggestive of that.

  2. systemd is in charge of it via coredumpctl. I've never used this tool, but it should be as easy as just running coredumpctl (worst case, add --help to learn more). If it isn't installed, then systemd isn't managing these and hopefully it's case 1.
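If it does turn out to be coredumpctl, pulling a specific dump out for inspection should look roughly like this (a sketch; the PID comes from whatever the list shows):

```shell
# List recorded crashes (most recent at the bottom)
coredumpctl list

# Extract a specific dump to a file, selecting by the PID shown in the list
coredumpctl dump <PID> --output=qiime.core

# Or open it directly in gdb against the crashing binary
coredumpctl debug <PID>
```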

Let me know if either of those seems to be the case for you. I appreciate you taking the time to work this out.

Ultimately, we're not really going to be able to fix this for the current release, since the compiled binaries are just not compatible with your CPU, but we may be able to fix future builds to avoid this, as you aren't the only one who's run across this issue. It's just been really difficult to track down exactly what's wrong.


Actually, it may be possible to compile these libraries yourself, which may be a good approach for you and others with very old hardware, but I still need more information on the actual cause before we go that route.

Right now, the best I can do is guess that it's one of the popcount-style operations, since those tend to vary between CPU generations and there are a dozen or so of them. For example, I see

249dcd:	f3 48 0f bd d0       	lzcnt  %rax,%rdx

in the libssu.so disassembly. LZCNT theoretically postdates the Sandy Bridge architecture, but apparently it shares its opcode with the older, mostly backwards-compatible BSR instruction (the prefix is simply ignored on CPUs that don't support LZCNT, so it executes silently rather than faulting), so I'm kind of out of good ways to look at this.

That said, you do seem to have SSE 4.2 and the popcnt flag, so you probably do have those instructions.
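For anyone wanting to poke at this themselves, a crude way to see whether a shared library contains AVX (VEX-encoded) instructions is to grep its disassembly for the v-prefixed mnemonics - a rough heuristic only, and the path to libssu.so here is assumed:

```shell
# AVX/AVX2 mnemonics start with 'v' (vmovaps, vaddpd, vpbroadcastd, ...).
# A non-zero count suggests the library expects AVX-capable hardware.
objdump -d "$CONDA_PREFIX/lib/libssu.so" | grep -cE '\bv(mov|add|mul|xor|broadcast)[a-z0-9]*\s'
```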

Hi @ebolyen - thanks for spending the time on this. I'm guessing the easiest solution is going to be for me to get my hands on a newer PC and turn that over to the Linux OS?

That said, I've tried to see what I can find and ran coredumpctl, which came up with the following output, which I guess shows my multiple attempts to run this.

Sun 2023-04-02 23:44:23 BST 64120 1000 1000 SIGILL present  /home/hannah/anaconda3/envs/qiime2-2023.2/bin/python3.8 32.2M
Sun 2023-04-02 23:45:02 BST 64433 1000 1000 SIGILL present  /home/hannah/anaconda3/envs/qiime2-2023.2/bin/python3.8 30.1M
Sun 2023-04-02 23:50:24 BST 64673 1000 1000 SIGILL present  /home/hannah/anaconda3/envs/qiime2-2023.2/bin/python3.8 30.1M
Sun 2023-04-02 23:50:36 BST 64903 1000 1000 SIGILL present  /home/hannah/anaconda3/envs/qiime2-2023.2/bin/python3.8 30.1M
Sun 2023-04-02 23:51:51 BST 65134 1000 1000 SIGILL present  /home/hannah/anaconda3/envs/qiime2-2023.2/bin/python3.8 30.1M
Sun 2023-04-02 23:51:58 BST 65146 1000 1000 SIGILL present  /home/hannah/anaconda3/envs/qiime2-2023.2/bin/python3.8 30.1M
Sun 2023-04-02 23:53:28 BST 65163 1000 1000 SIGILL present  /home/hannah/anaconda3/envs/qiime2-2023.2/bin/python3.8 30.1M
Tue 2023-04-04 22:41:52 BST  3453 1000 1000 SIGILL present  /home/hannah/anaconda3/envs/qiime2-2023.2/bin/python3.8 30.1M
Tue 2023-04-04 22:42:08 BST  3712 1000 1000 SIGILL present  /home/hannah/anaconda3/envs/qiime2-2023.2/bin/python3.8 30.1M
Tue 2023-04-04 22:42:29 BST  3723 1000 1000 SIGILL present  /home/hannah/anaconda3/envs/qiime2-2023.2/bin/python3.8 30.1M
Tue 2023-04-04 22:48:03 BST  4133 1000 1000 SIGILL present  /home/hannah/anaconda3/envs/qiime2-2023.2/bin/python3.8 30.1M
Tue 2023-04-04 22:48:17 BST  4360 1000 1000 SIGILL present  /home/hannah/anaconda3/envs/qiime2-2023.2/bin/python3.8 30.1M
Tue 2023-04-04 22:50:31 BST  4414 1000 1000 SIGILL present  /home/hannah/anaconda3/envs/qiime2-2023.2/bin/python3.8 30.1M
Wed 2023-04-05 00:07:35 BST 20302 1000 1000 SIGILL present  /home/hannah/anaconda3/envs/qiime2-2023.2/bin/python3.8 32.2M
Wed 2023-04-05 00:08:16 BST 20637 1000 1000 SIGILL present  /home/hannah/anaconda3/envs/qiime2-2023.2/bin/python3.8 30.1M
Wed 2023-04-05 00:13:08 BST 20980 1000 1000 SIGILL present  /home/hannah/anaconda3/envs/qiime2-2023.2/bin/python3.8 30.1M
Wed 2023-04-05 00:13:25 BST 21225 1000 1000 SIGILL present  /home/hannah/anaconda3/envs/qiime2-2023.2/bin/python3.8 30.1M
Wed 2023-04-05 10:55:34 BST  2241 1000 1000 SIGILL present  /home/hannah/anaconda3/envs/qiime2-2023.2/bin/python3.8 30.1M
Wed 2023-04-05 11:12:43 BST  3156 1000 1000 SIGILL present  /home/hannah/anaconda3/envs/qiime2-2023.2/bin/python3.8 30.1M
Thu 2023-04-06 09:36:11 BST  2620 1000 1000 SIGILL present  /home/hannah/anaconda3/envs/qiime2-2023.2/bin/python3.8 30.1M
Thu 2023-04-06 09:38:41 BST  2900 1000 1000 SIGILL present  /home/hannah/anaconda3/envs/qiime2-2023.2/bin/python3.8 30.1M
Thu 2023-04-06 09:40:43 BST  2994 1000 1000 SIGILL present  /home/hannah/anaconda3/envs/qiime2-2023.2/bin/python3.8 30.1M
Thu 2023-04-06 20:48:06 BST  3548 1000 1000 SIGILL present  /home/hannah/anaconda3/envs/qiime2-2023.2/bin/python3.8 30.1M
Thu 2023-04-06 23:22:16 BST  3432 1000 1000 SIGILL present  /home/hannah/anaconda3/envs/qiime2-2023.2/bin/python3.8 30.1M
Thu 2023-04-06 23:22:48 BST  3721 1000 1000 SIGILL present  /home/hannah/anaconda3/envs/qiime2-2023.2/bin/python3.8 30.1M
Thu 2023-04-06 23:23:12 BST  3733 1000 1000 SIGILL present  /home/hannah/anaconda3/envs/qiime2-2023.2/bin/python3.8 29.8M
Fri 2023-04-07 22:04:50 BST  2439 1000 1000 SIGILL present  /home/hannah/anaconda3/envs/qiime2-2023.2/bin/python3.8 30.1M
Sat 2023-04-08 11:08:17 BST  2324 1000 1000 SIGILL present  /home/hannah/anaconda3/envs/qiime2-2023.2/bin/python3.8 30.1M
~

Going to that path, I looked for anything coredump-related and could only identify dumpsexp and corelist.

I did a bit of searching, though, and found that the core dumps are stored in /var/lib/systemd/coredump; I've listed below what's in there. If any of it is of use and you want it, please let me know.

/var/lib/systemd/coredump/core.python.1000.1fc005fc1bcc4147929a87a305b8810c.3733.1680819789000000.zst
/var/lib/systemd/coredump/core.qiime.1000.1fc005fc1bcc4147929a87a305b8810c.3432.1680819734000000.zst
/var/lib/systemd/coredump/core.qiime.1000.1fc005fc1bcc4147929a87a305b8810c.3721.1680819766000000.zst
/var/lib/systemd/coredump/core.qiime.1000.40f3a9c71333454f9d852871c78cf6fe.2324.1680948493000000.zst
/var/lib/systemd/coredump/core.qiime.1000.49431ae84ebf4f6b8a2d572f1e578907.2620.1680770168000000.zst
/var/lib/systemd/coredump/core.qiime.1000.49431ae84ebf4f6b8a2d572f1e578907.2900.1680770318000000.zst
/var/lib/systemd/coredump/core.qiime.1000.49431ae84ebf4f6b8a2d572f1e578907.2994.1680770441000000.zst
/var/lib/systemd/coredump/core.qiime.1000.b6ca3d6b152643308dfd256224580f92.2439.1680901488000000.zst
/var/lib/systemd/coredump/core.qiime.1000.f62615b899514a3ba445287efaf42386.3548.1680810484000000.zst
/var/lib/systemd/coredump/core.qiime.1000.fb233e359189466d804039c1a5c0329a.20302.1680649648000000.zst
/var/lib/systemd/coredump/core.qiime.1000.fb233e359189466d804039c1a5c0329a.20637.1680649692000000.zst
/var/lib/systemd/coredump/core.qiime.1000.fb233e359189466d804039c1a5c0329a.20980.1680649986000000.zst
/var/lib/systemd/coredump/core.qiime.1000.fb233e359189466d804039c1a5c0329a.21225.1680650002000000.zst
/var/lib/systemd/coredump/core.qiime.1000.ffe53462ac5c4bee8a97600251c180f8.2241.1680688531000000.zst
/var/lib/systemd/coredump/core.qiime.1000.ffe53462ac5c4bee8a97600251c180f8.3156.1680689560000000.zst

Hey @Cosmic!

Great job sleuthing! I think any of those should be fine, although the core.python.1000 one is probably our script, which should be a little bit smaller.

You should be able to just attach one of those, but if the forum gives you any fuss about the extension, just throw it in a .zip file and I can go from there.

Hello

I appear to be having a similar issue, although I don't see the "(core dumped)" part. I am using Ubuntu 22.04 LTS; I installed Miniconda and then installed QIIME 2 natively. When using any qiime command I get this illegal instruction. Interestingly, it does look like the environment activates. Below is what happens when I activate the environment:

(base) dominic@LAPTOP-D67E3C7N:~$ conda activate qiime2-2023.2
QIIME is caching your current deployment for improved performance. This may take a few moments and should only happen once per deployment.
Illegal instruction
(qiime2-2023.2) dominic@LAPTOP-D67E3C7N:~$

Running the Python script included above also leads to the illegal instruction at unifrac._api, as above:

unifrac._meta ['/home/dominic/miniconda3/envs/qiime2-2023.2/lib/python3.8/site-packages/unifrac']
unifrac._api ['/home/dominic/miniconda3/envs/qiime2-2023.2/lib/python3.8/site-packages/unifrac']
Illegal instruction

The commands included earlier in this thread then output the following:

(qiime2-2023.2) dominic@LAPTOP-D67E3C7N:~$ conda list unifrac

# packages in environment at /home/dominic/miniconda3/envs/qiime2-2023.2:
#
# Name                    Version                   Build  Channel
unifrac                   1.1.1            py38h17adfb0_1    bioconda
unifrac-binaries          1.1.1                h15a0faf_4    bioconda

(qiime2-2023.2) dominic@LAPTOP-D67E3C7N:~$ cat /proc/cpuinfo | grep flags | uniq
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology cpuid pni pclmulqdq vmx ssse3 cx16 pcid sse4_1 sse4_2 movbe popcnt aes xsave rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single pti ssbd ibrs ibpb stibp tpr_shadow vnmi ept vpid ept_ad fsgsbase smep erms invpcid rdseed smap clflushopt xsaveopt xsavec xgetbv1 xsaves flush_l1d arch_capabilities
vmx flags : vnmi invvpid ept_x_only ept_ad ept_1gb tsc_offset vtpr ept vpid unrestricted_guest ept_mode_based_exec
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology cpuid pni pclmulqdq vmx ssse3 cx16 pcid sse4_1 sse4_2 movbe popcnt aes xsave rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single pti ssbd ibrs ibpb stibp tpr_shadow vnmi ept vpid ept_ad fsgsbase smep erms invpcid rdseed smap clflushopt xsaveopt xsavec xgetbv1 xsaves flush_l1d arch_capabilities
vmx flags : vnmi invvpid ept_x_only ept_ad ept_1gb tsc_offset vtpr ept vpid unrestricted_guest ept_mode_based_exec
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology cpuid pni pclmulqdq vmx ssse3 cx16 pcid sse4_1 sse4_2 movbe popcnt aes xsave rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single pti ssbd ibrs ibpb stibp tpr_shadow vnmi ept vpid ept_ad fsgsbase smep erms invpcid rdseed smap clflushopt xsaveopt xsavec xgetbv1 xsaves flush_l1d arch_capabilities
vmx flags : vnmi invvpid ept_x_only ept_ad ept_1gb tsc_offset vtpr ept vpid unrestricted_guest ept_mode_based_exec
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology cpuid pni pclmulqdq vmx ssse3 cx16 pcid sse4_1 sse4_2 movbe popcnt aes xsave rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single pti ssbd ibrs ibpb stibp tpr_shadow vnmi ept vpid ept_ad fsgsbase smep erms invpcid rdseed smap clflushopt xsaveopt xsavec xgetbv1 xsaves flush_l1d arch_capabilities
vmx flags : vnmi invvpid ept_x_only ept_ad ept_1gb tsc_offset vtpr ept vpid unrestricted_guest ept_mode_based_exec

As you can see, "(core dumped)" doesn't appear. Just thought I would put this in here, as I haven't seen any resolution of this problem!

Thanks!


Thanks @dnfarsi, your CPU flag list will probably come in handy, as you seem to have more flags available.

I also notice that neither you nor @Cosmic have AVX instructions, which have become pretty common at this point, so it's probably going to be related to that. Once I have a core-dump I will report back with more info on the illegal instruction in @Cosmic's case.

As for what to do in the meanwhile: it may be that Intel's MKL library is causing the issue, as it almost certainly expects AVX instructions. There is a conda package which will cause an environment to "resolve" with OpenBLAS instead of MKL. Let me know if installing it happens to force a bunch of packages to change (I wouldn't mind a copy-paste of that output as well):

 conda install -c conda-forge -c bioconda -c qiime2 nomkl

It seems that nomkl doesn't do anything in one of my environments, and going by these instructions on switching BLAS implementations, the q2 env already isn't using MKL, generally speaking:

https://conda-forge.org/docs/maintainer/knowledge_base.html#switching-blas-implementation
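For reference, the build-string pin those docs describe looks like this (untested in the QIIME 2 environment):

```shell
# Ask conda to resolve BLAS/LAPACK against OpenBLAS instead of MKL
conda install -c conda-forge "libblas=*=*openblas"
```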

So... any kind of good answer is TBD I guess

Hello @ebolyen, thanks for the help. Below is the output from the provided code. Let me know if it is of any use!

(qiime2-2023.2) dominic@LAPTOP-D67E3C7N:~$ conda install -c conda-forge -c bioconda -c qiime2 nomkl
Collecting package metadata (current_repodata.json): done
Solving environment: done

## Package Plan ##

  environment location: /home/dominic/miniconda3/envs/qiime2-2023.2

  added / updated specs:
    - nomkl


The following packages will be downloaded:

    package                    |            build
    ---------------------------|-----------------
    nomkl-1.0                  |       h5ca1d4c_0           4 KB  conda-forge
    ------------------------------------------------------------
                                           Total:           4 KB

The following NEW packages will be INSTALLED:

  nomkl              conda-forge/noarch::nomkl-1.0-h5ca1d4c_0


Proceed ([y]/n)? y


Downloading and Extracting Packages

Preparing transaction: done
Verifying transaction: done
Executing transaction: done
QIIME is caching your current deployment for improved performance. This may take a few moments and should only happen once per deployment.
Illegal instruction

HI @ebolyen

Uploading: core.qiime.1000.e5713b8353054346b35f4b32ac97b3c4.8835.1681163558000000.tar.gz...

When I went back to the coredump folder, it had cleared out the core.python.1000 one, so I have compressed one of the qiime dumps and attached it to this message.

I ran conda install -c conda-forge -c bioconda -c qiime2 nomkl and said yes to the packages, but then stupidly shut the terminal by mistake without grabbing the output.

I ran it again and the following output came up, which probably isn't any use, but at least we know it didn't change anything about the core dump, I guess... sorry.

hannah@hannah-HP-Compaq-Pro-4300-SFF-PC:~$ conda activate
(base) hannah@hannah-HP-Compaq-Pro-4300-SFF-PC:~$ conda activate qiime2-2023.2
QIIME is caching your current deployment for improved performance. This may take a few moments and should only happen once per deployment.
Illegal instruction (core dumped)
(qiime2-2023.2) hannah@hannah-HP-Compaq-Pro-4300-SFF-PC:~$ conda install -c conda-forge -c bioconda -c qiime2 nomkl
Collecting package metadata (current_repodata.json): done
Solving environment: done

# All requested packages already installed.

QIIME is caching your current deployment for improved performance. This may take a few moments and should only happen once per deployment.
Illegal instruction (core dumped)
(qiime2-2023.2) hannah@hannah-HP-Compaq-Pro-4300-SFF-PC:~$ 


Hope that coredump output is helpful.
I'm out in the field this week so won't have Linux access but will pick up again next weekend when I'm back.
Thanks for your efforts!!


Hey @Cosmic, sounds good, and thanks for continuing to assist!

It looks like that upload didn't quite work. I suspect the file is on the larger side, so you may need to wait for it to finish uploading before posting.