Docker image using singularity & conda installation not working (core dumped)


I downloaded the docker image as per documentation:

singularity pull docker://

I then ran the image:

singularity run docker:// /bin/bash
Singularity> qiime --help
Illegal instruction (core dumped)

It seems like the image is not compatible with my OS?


NAME="Rocky Linux"
VERSION="9.3 (Blue Onyx)"
ID_LIKE="rhel centos fedora"
PRETTY_NAME="Rocky Linux 9.3 (Blue Onyx)"

Singularity version:

singularity version 3.8.7-3.fc37

I tried the conda installation as well:

conda activate qiime2-amplicon-2023.9                                                                                                                 
QIIME is caching your current deployment for improved performance. This may take a few moments and should only happen once per deployment.                                                
Illegal instruction (core dumped)

I don't have any other issues running singularity or conda on this server; so far this has only happened with the QIIME 2 installation.

Any suggestions?

Hello @makrez. I'm not familiar with Rocky Linux, but after googling it, it appears to be nearly identical to Red Hat ("bug-for-bug compatible with Red Hat Enterprise Linux"). We use Red Hat on our HPC cluster and can run QIIME 2 natively on it through conda, so I'm not sure the OS is the issue here. Historically, we most often see this error when people are using a very old CPU. Do you know what CPU this server uses? It's doubtful that it's old enough to be a problem, but it's worth checking.


Hi @Oddant1

Thanks for your input. The server is less than a year old; however, it uses virtual CPUs.

Architecture:            x86_64
  CPU op-mode(s):        32-bit, 64-bit
  Address sizes:         40 bits physical, 48 bits virtual
  Byte Order:            Little Endian
CPU(s):                  64
  On-line CPU(s) list:   0-63
Vendor ID:               AuthenticAMD
  Model name:            QEMU Virtual CPU version 2.5+
    CPU family:          15
    Model:               107
    Thread(s) per core:  1
    Core(s) per socket:  32
    Socket(s):           2
    Stepping:            1
    BogoMIPS:            4499.99
    Flags:               fpu de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx lm rep_good nopl cpuid extd_apicid tsc_known_freq pni
                          ssse3 cx16 sse4_1 sse4_2 x2apic popcnt aes hypervisor lahf_lm cmp_legacy 3dnowprefetch vmmcall                                                                 
Virtualization features:
  Hypervisor vendor:     KVM
  Virtualization type:   full
Caches (sum of all):
  L1d:                   4 MiB (64 instances)
  L1i:                   4 MiB (64 instances)
  L2:                    32 MiB (64 instances)
  L3:                    1 GiB (64 instances)
  NUMA node(s):          1
  NUMA node0 CPU(s):     0-63
  Gather data sampling:  Not affected
  Itlb multihit:         Not affected
  L1tf:                  Not affected
  Mds:                   Not affected
  Meltdown:              Not affected
  Mmio stale data:       Not affected
  Retbleed:              Not affected
  Spec rstack overflow:  Not affected
  Spec store bypass:     Not affected
  Spectre v1:            Mitigation; usercopy/swapgs barriers and __user pointer sanitization                                                                                            
  Spectre v2:            Mitigation; Retpolines, STIBP disabled, RSB filling, PBRSB-eIBRS Not affected                                                                                   
  Srbds:                 Not affected
  Tsx async abort:       Not affected

I read through a few posts, and I agree that this error usually points to older hardware. But I am skeptical that this is the case here.

Can you please download this script, run it inside the docker image, and post the results here? You should still get the core dump, but the output will show exactly what was being imported when the core dump occurred.

I agree that it likely isn't that the CPU is too old, but we do use compiled binaries in some places, and it is possible you have the wrong binaries for your CPU. I'm not sure exactly how or why that would happen here, but we do see it from time to time, and singularity adds some layers of indirection, so who knows.
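(For readers following along: the linked script isn't reproduced here, but an import tracer along these lines produces that kind of per-module log. This is my own sketch using a standard meta-path hook, not the actual forum script.)

```python
import sys

class ImportLogger:
    """Meta-path hook that prints each module name (and its search path)
    just before it is imported. The last line printed before
    'Illegal instruction' names the module whose compiled code crashed."""
    def find_spec(self, fullname, path, target=None):
        print(fullname, path, flush=True)
        return None  # defer to the normal import machinery

sys.meta_path.insert(0, ImportLogger())

# In the real session you would now `import qiime2` (or run the CLI);
# a stdlib import stands in here for demonstration.
import colorsys
```

Because the hook returns None, it never interferes with the actual import; it only logs, which is why the crash still happens after the last printed line.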

unifrac None
unifrac._methods ['/opt/conda/envs/qiime2-amplicon-2023.9/lib/python3.8/site-packages/unifrac']                                                                                          
bp None
bp._bp ['/opt/conda/envs/qiime2-amplicon-2023.9/lib/python3.8/site-packages/bp']
bp.time ['/opt/conda/envs/qiime2-amplicon-2023.9/lib/python3.8/site-packages/bp']
bp.numpy ['/opt/conda/envs/qiime2-amplicon-2023.9/lib/python3.8/site-packages/bp']
bp._io ['/opt/conda/envs/qiime2-amplicon-2023.9/lib/python3.8/site-packages/bp']
bp.time ['/opt/conda/envs/qiime2-amplicon-2023.9/lib/python3.8/site-packages/bp']
bp.numpy ['/opt/conda/envs/qiime2-amplicon-2023.9/lib/python3.8/site-packages/bp']
bp.pandas ['/opt/conda/envs/qiime2-amplicon-2023.9/lib/python3.8/site-packages/bp']
bp.json ['/opt/conda/envs/qiime2-amplicon-2023.9/lib/python3.8/site-packages/bp']
bp._conv ['/opt/conda/envs/qiime2-amplicon-2023.9/lib/python3.8/site-packages/bp']
bp.skbio ['/opt/conda/envs/qiime2-amplicon-2023.9/lib/python3.8/site-packages/bp']
bp.numpy ['/opt/conda/envs/qiime2-amplicon-2023.9/lib/python3.8/site-packages/bp']
bp._insert ['/opt/conda/envs/qiime2-amplicon-2023.9/lib/python3.8/site-packages/bp']
bp.pandas ['/opt/conda/envs/qiime2-amplicon-2023.9/lib/python3.8/site-packages/bp']
bp.json ['/opt/conda/envs/qiime2-amplicon-2023.9/lib/python3.8/site-packages/bp']
bp.skbio ['/opt/conda/envs/qiime2-amplicon-2023.9/lib/python3.8/site-packages/bp']
bp._version ['/opt/conda/envs/qiime2-amplicon-2023.9/lib/python3.8/site-packages/bp']
unifrac._meta ['/opt/conda/envs/qiime2-amplicon-2023.9/lib/python3.8/site-packages/unifrac']
unifrac._api ['/opt/conda/envs/qiime2-amplicon-2023.9/lib/python3.8/site-packages/unifrac']
Illegal instruction (core dumped)

I pasted the last lines from the output. It seems to be a unifrac problem?

That's what I suspected. Looking back at your CPU flags, I don't see avx in there, which has caused some serious headaches for us with the unifrac binaries in the past. Let me ask someone who was involved in that for help.


Thanks for pointing out that the avx flag is missing. The issue was that we use VMs on our server, and for portability reasons we had chosen a generic CPU model. We have now changed it to 'host', and AVX instructions are now available. It works now!

Thank you so much for your help and for pointing me in the right direction.
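For anyone on libvirt/KVM hitting the same thing: the change amounts to passing the host CPU model through to the guest instead of the generic QEMU model. Roughly (exact placement in the domain XML depends on your setup):

```xml
<!-- virsh edit <domain>: replace the generic CPU model so guests see
     the host's real feature flags, including avx -->
<cpu mode="host-passthrough"/>
```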

If any users want to reproduce what I have done, here is the summary:

# Download the script above and run it inside the image
singularity exec /path/to/image.sif

# Check your CPU flags (on Linux):
grep -m1 '^flags' /proc/cpuinfo | grep -o -w avx

If the avx flag is not present, talk to your sysadmin.
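The same check can also be scripted; here is a minimal sketch in Python (the `has_avx` helper name is mine, and it assumes a Linux-style /proc/cpuinfo):

```python
def has_avx(cpuinfo_path="/proc/cpuinfo"):
    # On Linux, each CPU's feature flags appear on "flags" lines;
    # AVX support shows up as the token "avx".
    try:
        with open(cpuinfo_path) as f:
            for line in f:
                if line.startswith("flags"):
                    return "avx" in line.split()
    except OSError:
        pass  # not Linux, or /proc not mounted
    return False

print("AVX available:", has_avx())
```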


Awesome! I'm glad you were able to get it resolved.

This topic was automatically closed 31 days after the last reply. New replies are no longer allowed.