Error when using Silva 132 classifier

Hi, first post here (and not very familiar with bioinformatics stuff in general) so I apologize if I overlook some things… I’m running qiime2-2019.4 on Ubuntu.

I’m encountering the following error message when I use the Silva 132 classifier:

Plugin error from feature-classifier:
Debug info has been saved to /tmp/qiime2-q2cli-err-6gckvp1z.log

Upon checking the debug info, I get this:

Traceback (most recent call last):
  File "/home/silkwormproject/miniconda3/envs/qiime2-2019.4/lib/python3.6/site-packages/q2cli/commands.py", line 311, in __call__
    results = action(**arguments)
  File "</home/silkwormproject/miniconda3/envs/qiime2-2019.4/lib/python3.6/site-packages/decorator.py:decorator-gen-347>", line 2, in classify_sklearn
  File "/home/silkwormproject/miniconda3/envs/qiime2-2019.4/lib/python3.6/site-packages/qiime2/sdk/action.py", line 231, in bound_callable
    output_types, provenance)
  File "/home/silkwormproject/miniconda3/envs/qiime2-2019.4/lib/python3.6/site-packages/qiime2/sdk/action.py", line 365, in _callable_executor_
    output_views = self._callable(**view_args)
  File "/home/silkwormproject/miniconda3/envs/qiime2-2019.4/lib/python3.6/site-packages/q2_feature_classifier/classifier.py", line 214, in classify_sklearn
    reads, classifier, read_orientation=read_orientation)
  File "/home/silkwormproject/miniconda3/envs/qiime2-2019.4/lib/python3.6/site-packages/q2_feature_classifier/classifier.py", line 169, in _autodetect_orientation
    result = list(zip(*predict(first_n_reads, classifier, confidence=0.)))
  File "/home/silkwormproject/miniconda3/envs/qiime2-2019.4/lib/python3.6/site-packages/q2_feature_classifier/_skl.py", line 45, in predict
    for chunk in _chunks(reads, chunk_size)) for m in c)
  File "/home/silkwormproject/miniconda3/envs/qiime2-2019.4/lib/python3.6/site-packages/sklearn/externals/joblib/parallel.py", line 917, in __call__
    if self.dispatch_one_batch(iterator):
  File "/home/silkwormproject/miniconda3/envs/qiime2-2019.4/lib/python3.6/site-packages/sklearn/externals/joblib/parallel.py", line 759, in dispatch_one_batch
    self._dispatch(tasks)
  File "/home/silkwormproject/miniconda3/envs/qiime2-2019.4/lib/python3.6/site-packages/sklearn/externals/joblib/parallel.py", line 716, in _dispatch
    job = self._backend.apply_async(batch, callback=cb)
  File "/home/silkwormproject/miniconda3/envs/qiime2-2019.4/lib/python3.6/site-packages/sklearn/externals/joblib/_parallel_backends.py", line 182, in apply_async
    result = ImmediateResult(func)
  File "/home/silkwormproject/miniconda3/envs/qiime2-2019.4/lib/python3.6/site-packages/sklearn/externals/joblib/_parallel_backends.py", line 549, in __init__
    self.results = batch()
  File "/home/silkwormproject/miniconda3/envs/qiime2-2019.4/lib/python3.6/site-packages/sklearn/externals/joblib/parallel.py", line 225, in __call__
    for func, args, kwargs in self.items]
  File "/home/silkwormproject/miniconda3/envs/qiime2-2019.4/lib/python3.6/site-packages/sklearn/externals/joblib/parallel.py", line 225, in <listcomp>
    for func, args, kwargs in self.items]
  File "/home/silkwormproject/miniconda3/envs/qiime2-2019.4/lib/python3.6/site-packages/q2_feature_classifier/_skl.py", line 52, in _predict_chunk
    return _predict_chunk_with_conf(pipeline, separator, confidence, chunk)
  File "/home/silkwormproject/miniconda3/envs/qiime2-2019.4/lib/python3.6/site-packages/q2_feature_classifier/_skl.py", line 66, in _predict_chunk_with_conf
    prob_pos = pipeline.predict_proba(X)
  File "/home/silkwormproject/miniconda3/envs/qiime2-2019.4/lib/python3.6/site-packages/sklearn/utils/metaestimators.py", line 118, in <lambda>
    out = lambda *args, **kwargs: self.fn(obj, *args, **kwargs)
  File "/home/silkwormproject/miniconda3/envs/qiime2-2019.4/lib/python3.6/site-packages/sklearn/pipeline.py", line 382, in predict_proba
    return self.steps[-1][-1].predict_proba(Xt)
  File "/home/silkwormproject/miniconda3/envs/qiime2-2019.4/lib/python3.6/site-packages/sklearn/naive_bayes.py", line 104, in predict_proba
    return np.exp(self.predict_log_proba(X))
  File "/home/silkwormproject/miniconda3/envs/qiime2-2019.4/lib/python3.6/site-packages/sklearn/naive_bayes.py", line 84, in predict_log_proba
    jll = self._joint_log_likelihood(X)
  File "/home/silkwormproject/miniconda3/envs/qiime2-2019.4/lib/python3.6/site-packages/sklearn/naive_bayes.py", line 731, in _joint_log_likelihood
    return (safe_sparse_dot(X, self.feature_log_prob_.T) +
  File "/home/silkwormproject/miniconda3/envs/qiime2-2019.4/lib/python3.6/site-packages/sklearn/utils/extmath.py", line 168, in safe_sparse_dot
    ret = a * b
  File "/home/silkwormproject/miniconda3/envs/qiime2-2019.4/lib/python3.6/site-packages/scipy/sparse/base.py", line 473, in __mul__
    return self._mul_multivector(other)
  File "/home/silkwormproject/miniconda3/envs/qiime2-2019.4/lib/python3.6/site-packages/scipy/sparse/compressed.py", line 482, in _mul_multivector
    other.ravel(), result.ravel())
MemoryError

Any idea how to resolve this error?

Thanks in advance for the help! 🙂

Hi!
I think you are running out of RAM.
If you are using multi-threaded mode, try disabling it.
Also, in my case, the problem was solved by decreasing
--p-reads-per-batch
A value of 10000 worked for me.
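For reference, a sketch of what the full command might look like with that change (the .qza file names here are placeholders; substitute your own classifier and representative-sequence artifacts):

```shell
# Hypothetical file names for illustration only.
# --p-reads-per-batch 10000 classifies the reads in smaller chunks,
# lowering peak memory use. Omitting --p-n-jobs keeps the run
# single-threaded (the default), so only one copy of the classifier's
# large weight matrix is held in memory at a time.
qiime feature-classifier classify-sklearn \
  --i-classifier silva-132-99-nb-classifier.qza \
  --i-reads rep-seqs.qza \
  --p-reads-per-batch 10000 \
  --o-classification taxonomy.qza
```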


Noted. Will try this, thanks!

If I may ask though, how would I go about disabling multi-thread mode?

If you used the
--p-n-jobs
parameter in your run, skip it this time.
If not, you are fine.
