Plugin error from feature-classifier: Debug info has been saved to /tmp/qiime2-q2cli-err-ys63sum_.log

Dear supporters,
I don't know how to solve the error I get when running the following command:

qiime feature-classifier classify-sklearn \
  --i-classifier /classifier.qza \
  --i-reads /rep-seqs_16S_All1617.qza \
  --o-classification taxonomy.qza

Plugin error from feature-classifier:
Debug info has been saved to /tmp/qiime2-q2cli-err-ys63sum_.log

The error shows:

Traceback (most recent call last):
File "/home/laufer/miniconda3/envs/qiime2-2018.4/lib/python3.5/site-packages/q2cli/commands.py", line 274, in __call__
results = action(**arguments)
File "", line 2, in classify_sklearn
File "/home/laufer/miniconda3/envs/qiime2-2018.4/lib/python3.5/site-packages/qiime2/sdk/action.py", line 231, in bound_callable
output_types, provenance)
File "/home/laufer/miniconda3/envs/qiime2-2018.4/lib/python3.5/site-packages/qiime2/sdk/action.py", line 366, in callable_executor
output_views = self._callable(**view_args)
File "/home/laufer/miniconda3/envs/qiime2-2018.4/lib/python3.5/site-packages/q2_feature_classifier/classifier.py", line 215, in classify_sklearn
confidence=confidence)
File "/home/laufer/miniconda3/envs/qiime2-2018.4/lib/python3.5/site-packages/q2_feature_classifier/_skl.py", line 45, in predict
for chunk in _chunks(reads, chunk_size)) for m in c)
File "/home/laufer/miniconda3/envs/qiime2-2018.4/lib/python3.5/site-packages/sklearn/externals/joblib/parallel.py", line 779, in __call__
while self.dispatch_one_batch(iterator):
File "/home/laufer/miniconda3/envs/qiime2-2018.4/lib/python3.5/site-packages/sklearn/externals/joblib/parallel.py", line 625, in dispatch_one_batch
self._dispatch(tasks)
File "/home/laufer/miniconda3/envs/qiime2-2018.4/lib/python3.5/site-packages/sklearn/externals/joblib/parallel.py", line 588, in _dispatch
job = self._backend.apply_async(batch, callback=cb)
File "/home/laufer/miniconda3/envs/qiime2-2018.4/lib/python3.5/site-packages/sklearn/externals/joblib/_parallel_backends.py", line 111, in apply_async
result = ImmediateResult(func)
File "/home/laufer/miniconda3/envs/qiime2-2018.4/lib/python3.5/site-packages/sklearn/externals/joblib/_parallel_backends.py", line 332, in __init__
self.results = batch()
File "/home/laufer/miniconda3/envs/qiime2-2018.4/lib/python3.5/site-packages/sklearn/externals/joblib/parallel.py", line 131, in __call__
return [func(*args, **kwargs) for func, args, kwargs in self.items]
File "/home/laufer/miniconda3/envs/qiime2-2018.4/lib/python3.5/site-packages/sklearn/externals/joblib/parallel.py", line 131, in <listcomp>
return [func(*args, **kwargs) for func, args, kwargs in self.items]
File "/home/laufer/miniconda3/envs/qiime2-2018.4/lib/python3.5/site-packages/q2_feature_classifier/_skl.py", line 52, in _predict_chunk
return _predict_chunk_with_conf(pipeline, separator, confidence, chunk)
File "/home/laufer/miniconda3/envs/qiime2-2018.4/lib/python3.5/site-packages/q2_feature_classifier/_skl.py", line 66, in _predict_chunk_with_conf
prob_pos = pipeline.predict_proba(X)
File "/home/laufer/miniconda3/envs/qiime2-2018.4/lib/python3.5/site-packages/sklearn/utils/metaestimators.py", line 115, in <lambda>
out = lambda *args, **kwargs: self.fn(obj, *args, **kwargs)
File "/home/laufer/miniconda3/envs/qiime2-2018.4/lib/python3.5/site-packages/sklearn/pipeline.py", line 357, in predict_proba
return self.steps[-1][-1].predict_proba(Xt)
File "/home/laufer/miniconda3/envs/qiime2-2018.4/lib/python3.5/site-packages/sklearn/naive_bayes.py", line 104, in predict_proba
return np.exp(self.predict_log_proba(X))
File "/home/laufer/miniconda3/envs/qiime2-2018.4/lib/python3.5/site-packages/sklearn/naive_bayes.py", line 84, in predict_log_proba
jll = self._joint_log_likelihood(X)
File "/home/laufer/miniconda3/envs/qiime2-2018.4/lib/python3.5/site-packages/sklearn/naive_bayes.py", line 726, in _joint_log_likelihood
self.class_log_prior_
)
MemoryError

Since I am running it on a big server, there should not be a problem with disk space or RAM.

Sorry, it was of course a problem with the server's RAM after all, not a QIIME 2 problem. The topic can be closed/deleted. Sorry again.
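For future readers with the same symptom: the frame that fails (`_joint_log_likelihood` in sklearn's naive_bayes.py) builds a dense matrix with one float64 per read per reference taxon, so peak memory is easy to estimate before running. A rough sketch with purely illustrative numbers (substitute your own read count and the number of taxa in your classifier):

```shell
# Back-of-envelope: bytes = reads * taxa * 8 (size of one float64).
# The counts below are made-up examples, not taken from this thread.
n_reads=100000   # sequences in the rep-seqs artifact (example value)
n_taxa=50000     # taxa in the trained classifier (example value)
bytes=$((n_reads * n_taxa * 8))
echo "~$((bytes / 1024 / 1024 / 1024)) GiB needed for one batch"
```

If that estimate exceeds the server's free RAM, lowering `--p-reads-per-batch` on `classify-sklearn` shrinks the per-batch matrix proportionally, and keeping `--p-n-jobs` at 1 avoids multiplying the footprint across parallel workers.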

