classify-sklearn segmentation fault {SIGKILL(-9)}

Hello! I am a beginner with QIIME 2. I am using qiime2-2020.2 and ran the following command:

nohup bash -c "qiime feature-classifier classify-sklearn \
  --i-classifier silva-132-99-nb-classifier.qza \
  --i-reads UMrep-seqs-dada2O.qza \
  --o-classification taxonomy-v4.qza \
  --p-read-orientation 'same' \
  --p-n-jobs 5" &

I have this problem:
A worker process managed by the executor was unexpectedly terminated. This could be caused by a segmentation fault while calling the function or by an excessive memory usage causing the Operating System to kill the worker. The exit codes of the workers are {SIGKILL(-9)}

I found information about threads (I tried 5 and 12; I have 6 CPUs; same result) and about full memory. I cleaned it up, but the situation is the same. Here is my memory now:

df -ha
Filesystem Size Used Avail Use% Mounted on
sysfs 0 0 0 - /sys
proc 0 0 0 - /proc
udev 18G 0 18G 0% /dev
devpts 0 0 0 - /dev/pts
tmpfs 3,6G 12M 3,6G 1% /run
/dev/vda1 1011G 783G 227G 78% /
securityfs 0 0 0 - /sys/kernel/security
tmpfs 18G 0 18G 0% /dev/shm
tmpfs 5,0M 0 5,0M 0% /run/lock
tmpfs 18G 0 18G 0% /sys/fs/cgroup
cgroup 0 0 0 - /sys/fs/cgroup/systemd
pstore 0 0 0 - /sys/fs/pstore
cgroup 0 0 0 - /sys/fs/cgroup/cpu,cpuacct
cgroup 0 0 0 - /sys/fs/cgroup/cpuset
cgroup 0 0 0 - /sys/fs/cgroup/perf_event
cgroup 0 0 0 - /sys/fs/cgroup/blkio
cgroup 0 0 0 - /sys/fs/cgroup/devices
cgroup 0 0 0 - /sys/fs/cgroup/hugetlb
cgroup 0 0 0 - /sys/fs/cgroup/memory
cgroup 0 0 0 - /sys/fs/cgroup/net_cls,net_prio
cgroup 0 0 0 - /sys/fs/cgroup/freezer
cgroup 0 0 0 - /sys/fs/cgroup/pids
systemd-1 - - - - /proc/sys/fs/binfmt_misc
mqueue 0 0 0 - /dev/mqueue
debugfs 0 0 0 - /sys/kernel/debug
fusectl 0 0 0 - /sys/fs/fuse/connections
hugetlbfs 0 0 0 - /dev/hugepages
binfmt_misc 0 0 0 - /proc/sys/fs/binfmt_misc
tmpfs 3,6G 0 3,6G 0% /run/user/1000

Help me, please


Hello @Butterfly, can you rerun your command with --verbose and post the results of that here? Additionally, what sort of environment are you running QIIME 2 in (local install, virtual machine, HPC, etc.)?
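For reference, that would just be your original command with --verbose appended, something like the sketch below (same file names as in your post; since you are using nohup, the verbose log should end up in nohup.out):

nohup bash -c "qiime feature-classifier classify-sklearn \
  --i-classifier silva-132-99-nb-classifier.qza \
  --i-reads UMrep-seqs-dada2O.qza \
  --o-classification taxonomy-v4.qza \
  --p-read-orientation 'same' \
  --p-n-jobs 5 \
  --verbose" &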

Also, as a potential quick fix, you could try removing --p-n-jobs from the command entirely. The command may take some time to run on a single thread, but that may allow it to finish.

EDIT: Also, df -ha shows the state of your filesystem (long-term storage, like your hard drive), not your memory (RAM).
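To check RAM instead, free -h is the usual tool (this assumes a typical Linux install with procps, which nearly all distributions have):

free -h    # human-readable RAM usage; the "available" column is what new processes can actually use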


Hi, Oddant1!
Thank you very much! I ran this command, and it completed successfully:
qiime feature-classifier classify-sklearn \
  --i-classifier silva-132-99-nb-classifier.qza \
  --i-reads UMrep-seqs-dada2O.qza \
  --o-classification taxonomy-v4.qza \
  --p-read-orientation 'same'
About the environment: I have a local install on Linux, and there was one quirk when starting QIIME 2: the command source activate qiime2-2020.2 didn't work, only conda activate qiime2-2020.2, and I had to press Enter two (or three) times. At first I thought that was precisely the reason.

Glad it helped. The --p-n-jobs parameter can be a bit dangerous. If you have the resources for it and you know how to set it, then it can speed things up, but otherwise it can cause a lot of problems (like the one you just encountered). In particular, it can significantly increase the RAM needed to run an already RAM-intensive command, since each parallel worker may end up holding its own copy of the classifier in memory. Generally speaking, you're going to want to leave it alone unless you're running on a system with large amounts of memory and processing power (probably an HPC).
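If you do want to use it, one rough way to budget for it (assuming GNU time is installed at /usr/bin/time, which is standard on most Linux distributions) is to measure the peak RAM of a single-threaded run first:

/usr/bin/time -v qiime feature-classifier classify-sklearn \
  --i-classifier silva-132-99-nb-classifier.qza \
  --i-reads UMrep-seqs-dada2O.qza \
  --o-classification taxonomy-v4.qza \
  --p-read-orientation 'same' \
  2>&1 | grep "Maximum resident set size"

If that peak, multiplied by the number of jobs, comes anywhere near your total RAM (check with free -h), the kernel's OOM killer is likely to step in, which is exactly the SIGKILL(-9) you saw.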

As for activating QIIME 2 there: yes, the expected command is conda activate qiime2-2020.2, not source activate qiime2-2020.2. You should only have to press Enter once, though. It may take a while for the conda environment to activate, depending on the speed of your computer. I suggest you enter conda activate qiime2-2020.2 and just wait a minute after pressing Enter. If you NEED to press Enter multiple times for it to work, something is probably misconfigured somewhere, and I have no idea what that would even be.
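If the multiple-Enter behavior persists, one thing that is sometimes worth trying (just a guess, and it assumes your login shell is bash) is re-initializing conda's shell hook and starting a fresh shell:

conda init bash                  # rewrites the conda hook in your ~/.bashrc
exec bash                        # replace the current shell so the change takes effect
conda activate qiime2-2020.2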


This topic was automatically closed 31 days after the last reply. New replies are no longer allowed.