Plugin error from feature-classifier [Errno 11] Resource temporarily unavailable

I’m running the feature-classifier with the following command:

qiime feature-classifier classify-sklearn \
  --i-classifier silva-132-99-nb-classifier.qza \
  --i-reads uchime-dn-out/rep-seqs-nonchimeric-wo-borderline.qza \
  --p-n-jobs 60 \
  --p-reads-per-batch 10000 \
  --o-classification taxonomy.qza

and the error is:

Plugin error from feature-classifier:

[Errno 11] Resource temporarily unavailable

Debug info has been saved to /mnt/lustre/user/wubin/tmp/qiime2-q2cli-err-jgis2flp.log

and in the file "/mnt/lustre/user/wubin/tmp/qiime2-q2cli-err-jgis2flp.log", there is:

=======================================================================
OpenBLAS blas_thread_init: pthread_create: Resource temporarily unavailable
OpenBLAS blas_thread_init: pthread_create: Resource temporarily unavailable
OpenBLAS blas_thread_init: RLIMIT_NPROC 1024 current, 8272992 max
OpenBLAS blas_thread_init: RLIMIT_NPROC 1024 current, 8272992 max
OpenBLAS blas_thread_init: pthread_create: Resource temporarily unavailable
OpenBLAS blas_thread_init: pthread_create: Resource temporarily unavailable
OpenBLAS blas_thread_init: RLIMIT_NPROC 1024 current, 8272992 max
OpenBLAS blas_thread_init: RLIMIT_NPROC 1024 current, 8272992 max
OpenBLAS blas_thread_init: pthread_create: Resource temporarily unavailable
OpenBLAS blas_thread_init: pthread_create: Resource temporarily unavailable
OpenBLAS blas_thread_init: RLIMIT_NPROC 1024 current, 8272992 max
OpenBLAS blas_thread_init: RLIMIT_NPROC 1024 current, 8272992 max
OpenBLAS blas_thread_init: pthread_create: Resource temporarily unavailable
OpenBLAS blas_thread_init: pthread_create: Resource temporarily unavailable



Traceback (most recent call last):
File "/home/wubin/01.Program/02.software/miniconda3/envs/qiime2-2019.4/lib/python3.6/site-packages/numpy/core/__init__.py", line 40, in <module>
from . import multiarray
File "/home/wubin/01.Program/02.software/miniconda3/envs/qiime2-2019.4/lib/python3.6/site-packages/numpy/core/multiarray.py", line 12, in <module>
from . import overrides
File "/home/wubin/01.Program/02.software/miniconda3/envs/qiime2-2019.4/lib/python3.6/site-packages/numpy/core/overrides.py", line 6, in <module>
from numpy.core._multiarray_umath import (
ImportError: PyCapsule_Import could not import module "datetime"

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/home/wubin/01.Program/02.software/miniconda3/envs/qiime2-2019.4/lib/python3.6/runpy.py", line 183, in _run_module_as_main
mod_name, mod_spec, code = _get_module_details(mod_name, _Error)
File "/home/wubin/01.Program/02.software/miniconda3/envs/qiime2-2019.4/lib/python3.6/runpy.py", line 109, in _get_module_details
__import__(pkg_name)
File "/home/wubin/01.Program/02.software/miniconda3/envs/qiime2-2019.4/lib/python3.6/site-packages/sklearn/__init__.py", line 64, in <module>
from .base import clone
File "/home/wubin/01.Program/02.software/miniconda3/envs/qiime2-2019.4/lib/python3.6/site-packages/sklearn/base.py", line 10, in <module>
import numpy as np
File "/home/wubin/01.Program/02.software/miniconda3/envs/qiime2-2019.4/lib/python3.6/site-packages/numpy/__init__.py", line 142, in <module>
from . import core
File "/home/wubin/01.Program/02.software/miniconda3/envs/qiime2-2019.4/lib/python3.6/site-packages/numpy/core/__init__.py", line 71, in <module>
raise ImportError(msg)
ImportError:

IMPORTANT: PLEASE READ THIS FOR ADVICE ON HOW TO SOLVE THIS ISSUE!

Importing the multiarray numpy extension module failed. Most
likely you are trying to import a failed build of numpy.
Here is how to proceed:

  • If you’re working with a numpy git repository, try git clean -xdf
    (removes all files not under version control) and rebuild numpy.
  • If you are simply trying to use the numpy version that you have installed:
    your installation is broken - please reinstall numpy.
  • If you have already reinstalled and that did not fix the problem, then:
    1. Check that you are using the Python you expect (you’re using /home/wubin/01.Program/02.software/miniconda3/envs/qiime2-2019.4/bin/
      and that you have no directories in your PATH or PYTHONPATH that can
      interfere with the Python and numpy versions you’re trying to use.

    2. If (1) looks fine, you can open a new issue at
      https://github.com/numpy/numpy/issues. Please include details on:

      • how you installed Python
      • how you installed numpy
      • your operating system
      • whether or not you have multiple versions of Python installed
      • if you built from source, your compiler versions and ideally a build log

      Note: this error has many possible causes, so please don’t comment on
      an existing issue about this - open a new one instead.

Original error was: PyCapsule_Import could not import module "datetime"

=======================================================================

***1
The "python" I used was exactly "/home/wubin/01.Program/02.software/miniconda3/envs/qiime2-2019.4/bin/python", and I didn't set any PYTHONPATH.
I didn't install this Python separately; it (and the numpy package) came with the qiime2-2019.4 environment, which I created like this:

wget https://data.qiime2.org/distro/core/qiime2-2019.4-py36-linux-conda.yml
conda env create -n qiime2-2019.7 --file qiime2-2019.4-py36-linux-conda.yml
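(For reference, this is one way to confirm which Python and numpy an activated environment resolves to; these are standard conda/Python commands, nothing QIIME-specific:)

# Activate the environment, then check the interpreter and numpy it picks up
source activate qiime2-2019.4
which python
python -c "import numpy; print(numpy.__file__, numpy.__version__)"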
***2
My operating system information is as follows:

$uname -a
Linux fat01.local 2.6.32-642.el6.x86_64 #1 SMP Tue May 10 17:27:01 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
$head -n 1 /etc/issue
CentOS release 6.8 (Final)

***3
I do have another Python at:
/home/wubin/01.Program/02.software/miniconda3/bin/python

So, how can I fix this?

Hi @WindTalker

Note you are installing the 2019.4 release, but calling it 2019.7 — not the cause of your issue here, but just wanted to bring that to your attention first.

It sounds like this installation is broken. I recommend that you remove it, run conda clean --all, and then install 2019.7.
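For reference, that cleanup and reinstall might look something like the sketch below (I'm assuming the 2019.7 environment file follows the same naming pattern as the 2019.4 one you downloaded above):

# Remove the broken environment (substitute whatever name you actually used)
conda env remove -n qiime2-2019.7
conda clean --all
# Download and install the 2019.7 release into a fresh environment
wget https://data.qiime2.org/distro/core/qiime2-2019.7-py36-linux-conda.yml
conda env create -n qiime2-2019.7 --file qiime2-2019.7-py36-linux-conda.yml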

Have you installed other releases of QIIME 2 successfully on this system?

Sorry, I actually ran "conda env create -n qiime2-2019.4 --file qiime2-2019.4-py36-linux-conda.yml", not qiime2-2019.7.

I think this may be related to this issue. What kind of machine is running this command? Is it physical hardware, a Type 1 hypervisor, or a regular virtual machine like VirtualBox or Docker?

This is certainly an interesting issue, thanks for sharing!


Physical hardware, on a computing cluster.


I installed qiime2-2019.7, and then got:

====================================================================================================================================
Plugin error from feature-classifier:

A worker process managed by the executor was unexpectedly terminated. This could be caused by a segmentation fault while calling the function or by an excessive memory usage causing the Operating System to kill the worker. The exit codes of the workers are {SIGSEGV(-11)}

Debug info has been saved to /mnt/lustre/user/wubin/tmp/qiime2-q2cli-err-m7y852lp.log
Exception in thread QueueFeederThread:
Traceback (most recent call last):
File "/home/wubin/01.Program/02.software/miniconda3/envs/qiime2-2019.7/lib/python3.6/site-packages/joblib/externals/loky/backend/queues.py", line 150, in _feed
obj_ = dumps(obj, reducers=reducers)
File “/home/wubin/01.Program/02.software/miniconda3/envs/qiime2-2019.7/lib/python3.6/site-packages/joblib/externals/loky/backend/reduction.py”, line 243, in dumps
dump(obj, buf, reducers=reducers, protocol=protocol)
File “/home/wubin/01.Program/02.software/miniconda3/envs/qiime2-2019.7/lib/python3.6/site-packages/joblib/externals/loky/backend/reduction.py”, line 236, in dump
_LokyPickler(file, reducers=reducers, protocol=protocol).dump(obj)
File “/home/wubin/01.Program/02.software/miniconda3/envs/qiime2-2019.7/lib/python3.6/site-packages/joblib/externals/cloudpickle/cloudpickle.py”, line 267, in dump
return Pickler.dump(self, obj)
File “/home/wubin/01.Program/02.software/miniconda3/envs/qiime2-2019.7/lib/python3.6/pickle.py”, line 409, in dump
self.save(obj)
File “/home/wubin/01.Program/02.software/miniconda3/envs/qiime2-2019.7/lib/python3.6/pickle.py”, line 521, in save
self.save_reduce(obj=obj, *rv)
File “/home/wubin/01.Program/02.software/miniconda3/envs/qiime2-2019.7/lib/python3.6/pickle.py”, line 634, in save_reduce
save(state)
File “/home/wubin/01.Program/02.software/miniconda3/envs/qiime2-2019.7/lib/python3.6/pickle.py”, line 476, in save
f(self, obj) # Call unbound method with explicit self
File “/home/wubin/01.Program/02.software/miniconda3/envs/qiime2-2019.7/lib/python3.6/pickle.py”, line 821, in save_dict
self._batch_setitems(obj.items())
File “/home/wubin/01.Program/02.software/miniconda3/envs/qiime2-2019.7/lib/python3.6/pickle.py”, line 847, in _batch_setitems
save(v)
File “/home/wubin/01.Program/02.software/miniconda3/envs/qiime2-2019.7/lib/python3.6/pickle.py”, line 521, in save
self.save_reduce(obj=obj, *rv)
File “/home/wubin/01.Program/02.software/miniconda3/envs/qiime2-2019.7/lib/python3.6/pickle.py”, line 634, in save_reduce
save(state)
File “/home/wubin/01.Program/02.software/miniconda3/envs/qiime2-2019.7/lib/python3.6/pickle.py”, line 476, in save
f(self, obj) # Call unbound method with explicit self
File “/home/wubin/01.Program/02.software/miniconda3/envs/qiime2-2019.7/lib/python3.6/pickle.py”, line 821, in save_dict
self._batch_setitems(obj.items())
File “/home/wubin/01.Program/02.software/miniconda3/envs/qiime2-2019.7/lib/python3.6/pickle.py”, line 852, in _batch_setitems
save(v)
File “/home/wubin/01.Program/02.software/miniconda3/envs/qiime2-2019.7/lib/python3.6/pickle.py”, line 521, in save
self.save_reduce(obj=obj, *rv)
File “/home/wubin/01.Program/02.software/miniconda3/envs/qiime2-2019.7/lib/python3.6/pickle.py”, line 634, in save_reduce
save(state)
File “/home/wubin/01.Program/02.software/miniconda3/envs/qiime2-2019.7/lib/python3.6/pickle.py”, line 476, in save
f(self, obj) # Call unbound method with explicit self
File “/home/wubin/01.Program/02.software/miniconda3/envs/qiime2-2019.7/lib/python3.6/pickle.py”, line 821, in save_dict
self._batch_setitems(obj.items())
File “/home/wubin/01.Program/02.software/miniconda3/envs/qiime2-2019.7/lib/python3.6/pickle.py”, line 847, in _batch_setitems
save(v)
File “/home/wubin/01.Program/02.software/miniconda3/envs/qiime2-2019.7/lib/python3.6/pickle.py”, line 476, in save
f(self, obj) # Call unbound method with explicit self
File “/home/wubin/01.Program/02.software/miniconda3/envs/qiime2-2019.7/lib/python3.6/pickle.py”, line 781, in save_list
self._batch_appends(obj)
File “/home/wubin/01.Program/02.software/miniconda3/envs/qiime2-2019.7/lib/python3.6/pickle.py”, line 808, in _batch_appends
save(tmp[0])
File “/home/wubin/01.Program/02.software/miniconda3/envs/qiime2-2019.7/lib/python3.6/pickle.py”, line 476, in save
f(self, obj) # Call unbound method with explicit self
File “/home/wubin/01.Program/02.software/miniconda3/envs/qiime2-2019.7/lib/python3.6/pickle.py”, line 736, in save_tuple
save(element)
File “/home/wubin/01.Program/02.software/miniconda3/envs/qiime2-2019.7/lib/python3.6/pickle.py”, line 476, in save
f(self, obj) # Call unbound method with explicit self
File “/home/wubin/01.Program/02.software/miniconda3/envs/qiime2-2019.7/lib/python3.6/pickle.py”, line 751, in save_tuple
save(element)
File “/home/wubin/01.Program/02.software/miniconda3/envs/qiime2-2019.7/lib/python3.6/pickle.py”, line 521, in save
self.save_reduce(obj=obj, *rv)
File “/home/wubin/01.Program/02.software/miniconda3/envs/qiime2-2019.7/lib/python3.6/pickle.py”, line 634, in save_reduce
save(state)
File “/home/wubin/01.Program/02.software/miniconda3/envs/qiime2-2019.7/lib/python3.6/pickle.py”, line 476, in save
f(self, obj) # Call unbound method with explicit self
File “/home/wubin/01.Program/02.software/miniconda3/envs/qiime2-2019.7/lib/python3.6/pickle.py”, line 821, in save_dict
self._batch_setitems(obj.items())
File “/home/wubin/01.Program/02.software/miniconda3/envs/qiime2-2019.7/lib/python3.6/pickle.py”, line 847, in _batch_setitems
save(v)
File “/home/wubin/01.Program/02.software/miniconda3/envs/qiime2-2019.7/lib/python3.6/pickle.py”, line 476, in save
f(self, obj) # Call unbound method with explicit self
File “/home/wubin/01.Program/02.software/miniconda3/envs/qiime2-2019.7/lib/python3.6/pickle.py”, line 781, in save_list
self._batch_appends(obj)
File “/home/wubin/01.Program/02.software/miniconda3/envs/qiime2-2019.7/lib/python3.6/pickle.py”, line 805, in _batch_appends
save(x)
File “/home/wubin/01.Program/02.software/miniconda3/envs/qiime2-2019.7/lib/python3.6/pickle.py”, line 476, in save
f(self, obj) # Call unbound method with explicit self
File “/home/wubin/01.Program/02.software/miniconda3/envs/qiime2-2019.7/lib/python3.6/pickle.py”, line 781, in save_list
self._batch_appends(obj)
File “/home/wubin/01.Program/02.software/miniconda3/envs/qiime2-2019.7/lib/python3.6/pickle.py”, line 805, in _batch_appends
save(x)
File “/home/wubin/01.Program/02.software/miniconda3/envs/qiime2-2019.7/lib/python3.6/pickle.py”, line 521, in save
self.save_reduce(obj=obj, *rv)
File “/home/wubin/01.Program/02.software/miniconda3/envs/qiime2-2019.7/lib/python3.6/pickle.py”, line 634, in save_reduce
save(state)
File “/home/wubin/01.Program/02.software/miniconda3/envs/qiime2-2019.7/lib/python3.6/pickle.py”, line 476, in save
f(self, obj) # Call unbound method with explicit self
File “/home/wubin/01.Program/02.software/miniconda3/envs/qiime2-2019.7/lib/python3.6/pickle.py”, line 821, in save_dict
self._batch_setitems(obj.items())
File “/home/wubin/01.Program/02.software/miniconda3/envs/qiime2-2019.7/lib/python3.6/pickle.py”, line 847, in _batch_setitems
save(v)
File “/home/wubin/01.Program/02.software/miniconda3/envs/qiime2-2019.7/lib/python3.6/pickle.py”, line 482, in save
rv = reduce(obj)
File "/home/wubin/01.Program/02.software/miniconda3/envs/qiime2-2019.7/lib/python3.6/site-packages/joblib/_memmapping_reducer.py", line 340, in __call__
os.chmod(dumped_filename, FILE_PERMISSIONS)
FileNotFoundError: [Errno 2] No such file or directory: '/dev/shm/joblib_memmapping_folder_26641_5826356891/26641-139629688723216-a42756b53492454a809a805592c65597.pkl'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File “/home/wubin/01.Program/02.software/miniconda3/envs/qiime2-2019.7/lib/python3.6/threading.py”, line 916, in _bootstrap_inner
self.run()
File “/home/wubin/01.Program/02.software/miniconda3/envs/qiime2-2019.7/lib/python3.6/threading.py”, line 864, in run
self._target(*self._args, **self._kwargs)
File “/home/wubin/01.Program/02.software/miniconda3/envs/qiime2-2019.7/lib/python3.6/site-packages/joblib/externals/loky/backend/queues.py”, line 175, in _feed
onerror(e, obj)
File “/home/wubin/01.Program/02.software/miniconda3/envs/qiime2-2019.7/lib/python3.6/site-packages/joblib/externals/loky/process_executor.py”, line 310, in _on_queue_feeder_error
self.thread_wakeup.wakeup()
File “/home/wubin/01.Program/02.software/miniconda3/envs/qiime2-2019.7/lib/python3.6/site-packages/joblib/externals/loky/process_executor.py”, line 155, in wakeup
self._writer.send_bytes(b"")
File “/home/wubin/01.Program/02.software/miniconda3/envs/qiime2-2019.7/lib/python3.6/multiprocessing/connection.py”, line 183, in send_bytes
self._check_closed()
File “/home/wubin/01.Program/02.software/miniconda3/envs/qiime2-2019.7/lib/python3.6/multiprocessing/connection.py”, line 136, in _check_closed
raise OSError("handle is closed")
OSError: handle is closed

In the file "/mnt/lustre/user/wubin/tmp/qiime2-q2cli-err-m7y852lp.log", there is:

OpenBLAS blas_thread_init: pthread_create failed for thread 5 of 64: Resource temporarily unavailable
OpenBLAS blas_thread_init: RLIMIT_NPROC 1024 current, 8272992 max
OpenBLAS blas_thread_init: pthread_create failed for thread 6 of 64: Resource temporarily unavailable
OpenBLAS blas_thread_init: RLIMIT_NPROC 1024 current, 8272992 max
OpenBLAS blas_thread_init: pthread_create failed for thread 7 of 64: Resource temporarily unavailable
OpenBLAS blas_thread_init: RLIMIT_NPROC 1024 current, 8272992 max



Traceback (most recent call last):
File "/home/wubin/01.Program/02.software/miniconda3/envs/qiime2-2019.7/lib/python3.6/runpy.py", line 183, in _run_module_as_main
mod_name, mod_spec, code = _get_module_details(mod_name, _Error)
File "/home/wubin/01.Program/02.software/miniconda3/envs/qiime2-2019.7/lib/python3.6/runpy.py", line 109, in _get_module_details
__import__(pkg_name)
File "/home/wubin/01.Program/02.software/miniconda3/envs/qiime2-2019.7/lib/python3.6/site-packages/joblib/__init__.py", line 112, in <module>
from .memory import Memory, MemorizedResult, register_store_backend
File "/home/wubin/01.Program/02.software/miniconda3/envs/qiime2-2019.7/lib/python3.6/site-packages/joblib/memory.py", line 32, in <module>
from ._store_backends import StoreBackendBase, FileSystemStoreBackend
File "/home/wubin/01.Program/02.software/miniconda3/envs/qiime2-2019.7/lib/python3.6/site-packages/joblib/_store_backends.py", line 16, in <module>
from .backports import concurrency_safe_rename
File "/home/wubin/01.Program/02.software/miniconda3/envs/qiime2-2019.7/lib/python3.6/site-packages/joblib/backports.py", line 12, in <module>
import numpy as np
File "/home/wubin/01.Program/02.software/miniconda3/envs/qiime2-2019.7/lib/python3.6/site-packages/numpy/__init__.py", line 142, in <module>
from . import core
File "/home/wubin/01.Program/02.software/miniconda3/envs/qiime2-2019.7/lib/python3.6/site-packages/numpy/core/__init__.py", line 40, in <module>
from . import multiarray
File "/home/wubin/01.Program/02.software/miniconda3/envs/qiime2-2019.7/lib/python3.6/site-packages/numpy/core/multiarray.py", line 12, in <module>
from . import overrides
File "/home/wubin/01.Program/02.software/miniconda3/envs/qiime2-2019.7/lib/python3.6/site-packages/numpy/core/overrides.py", line 6, in <module>
from numpy.core._multiarray_umath import (
KeyboardInterrupt

Hey @WindTalker,

Hope you had a good weekend.

Given these two facts (this is running on physical hardware in a computing cluster, and OpenBLAS is reporting "RLIMIT_NPROC 1024 current"):

We may be in a situation where OpenBLAS is trampling over the cluster's resource manager, spawning threads it isn't allowed to. I can't say I've seen this before, but it's my best guess at the moment.

Could you describe how you submit the command to the cluster? I see that you have --p-n-jobs set to 60, which in principle should probably work, but it's a large enough number that there are a few ways I could see this going wrong if your queuing system disagrees.
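As a rough illustration of where the numbers might collide (the 64 comes from the "thread N of 64" messages in your log; the rest is just arithmetic, not something I can verify from here):

# Per-user process/thread limit on the node; should match the
# "RLIMIT_NPROC 1024 current" line in the log
ulimit -u

# With --p-n-jobs 60, each worker's OpenBLAS may try to start one thread per
# core it sees (64 in your log), so the run could ask for roughly
# 60 * 64 = 3840 threads, far more than the 1024 the limit allows.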

I ran this fine before on the same cluster, with exactly "--p-n-jobs 60".

I just updated my conda, and then this error happened.

Finally, I solved this by setting "export OMP_NUM_THREADS=1",

but I don't know what it means.

Good sleuthing! I was about to recommend that variable next. However, 1 may be too small a number: it forces OpenBLAS to use only a single thread, which is the opposite of what you are probably going for.
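If it helps, here is a middle-ground sketch (OMP_NUM_THREADS and OPENBLAS_NUM_THREADS are the standard OpenMP/OpenBLAS variables; the value 4 is only an example, chosen so that 60 jobs times 4 threads stays well under the 1024-process limit from your log):

# Cap BLAS threading per worker instead of disabling it entirely
export OMP_NUM_THREADS=4
export OPENBLAS_NUM_THREADS=4

qiime feature-classifier classify-sklearn \
  --i-classifier silva-132-99-nb-classifier.qza \
  --i-reads uchime-dn-out/rep-seqs-nonchimeric-wo-borderline.qza \
  --p-n-jobs 60 \
  --p-reads-per-batch 10000 \
  --o-classification taxonomy.qza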

If you could provide details about your queuing system and submission command, we might be able to diagnose this further.