Memory ERROR: qiime feature-classifier classify-sklearn

Hi all,
I'm at the next stage of my analysis:

$ qiime feature-classifier classify-sklearn --i-classifier silva-132-99-nb-classifier.qza --i-reads rep-seqs_97.qza --p-confidence 0.8 --p-read-orientation reverse-complement --o-classification rep-seqs_97_SILVAtaxonomy.qza

and I get the following error:

Plugin error from feature-classifier:
Debug info has been saved to /tmp/qiime2-q2cli-err-9ir5_cmn.log

Traceback (most recent call last):
  File "/home/patricia/miniconda2/envs/qiime2-2019.1/lib/python3.6/site-packages/q2cli/commands.py", line 274, in __call__
    results = action(**arguments)
  File "</home/patricia/miniconda2/envs/qiime2-2019.1/lib/python3.6/site-packages/decorator.py:decorator-gen-338>", line 2, in classify_sklearn
  File "/home/patricia/miniconda2/envs/qiime2-2019.1/lib/python3.6/site-packages/qiime2/sdk/action.py", line 231, in bound_callable
    output_types, provenance)
  File "/home/patricia/miniconda2/envs/qiime2-2019.1/lib/python3.6/site-packages/qiime2/sdk/action.py", line 365, in _callable_executor_
    output_views = self._callable(**view_args)
  File "/home/patricia/miniconda2/envs/qiime2-2019.1/lib/python3.6/site-packages/q2_feature_classifier/classifier.py", line 215, in classify_sklearn
    confidence=confidence)
  File "/home/patricia/miniconda2/envs/qiime2-2019.1/lib/python3.6/site-packages/q2_feature_classifier/_skl.py", line 45, in predict
    for chunk in _chunks(reads, chunk_size)) for m in c)
  File "/home/patricia/miniconda2/envs/qiime2-2019.1/lib/python3.6/site-packages/sklearn/externals/joblib/parallel.py", line 917, in __call__
    if self.dispatch_one_batch(iterator):
  File "/home/patricia/miniconda2/envs/qiime2-2019.1/lib/python3.6/site-packages/sklearn/externals/joblib/parallel.py", line 759, in dispatch_one_batch
    self._dispatch(tasks)
  File "/home/patricia/miniconda2/envs/qiime2-2019.1/lib/python3.6/site-packages/sklearn/externals/joblib/parallel.py", line 716, in _dispatch
    job = self._backend.apply_async(batch, callback=cb)
  File "/home/patricia/miniconda2/envs/qiime2-2019.1/lib/python3.6/site-packages/sklearn/externals/joblib/_parallel_backends.py", line 182, in apply_async
    result = ImmediateResult(func)
  File "/home/patricia/miniconda2/envs/qiime2-2019.1/lib/python3.6/site-packages/sklearn/externals/joblib/_parallel_backends.py", line 549, in __init__
    self.results = batch()
  File "/home/patricia/miniconda2/envs/qiime2-2019.1/lib/python3.6/site-packages/q2_feature_classifier/_skl.py", line 52, in _predict_chunk
    return _predict_chunk_with_conf(pipeline, separator, confidence, chunk)
  File "/home/patricia/miniconda2/envs/qiime2-2019.1/lib/python3.6/site-packages/q2_feature_classifier/_skl.py", line 66, in _predict_chunk_with_conf
    prob_pos = pipeline.predict_proba(X)
  File "/home/patricia/miniconda2/envs/qiime2-2019.1/lib/python3.6/site-packages/sklearn/utils/metaestimators.py", line 118, in <lambda>
    out = lambda *args, **kwargs: self.fn(obj, *args, **kwargs)
  File "/home/patricia/miniconda2/envs/qiime2-2019.1/lib/python3.6/site-packages/sklearn/pipeline.py", line 382, in predict_proba
    return self.steps[-1][-1].predict_proba(Xt)
  File "/home/patricia/miniconda2/envs/qiime2-2019.1/lib/python3.6/site-packages/sklearn/naive_bayes.py", line 104, in predict_proba
    return np.exp(self.predict_log_proba(X))
  File "/home/patricia/miniconda2/envs/qiime2-2019.1/lib/python3.6/site-packages/sklearn/naive_bayes.py", line 84, in predict_log_proba
    jll = self._joint_log_likelihood(X)
  File "/home/patricia/miniconda2/envs/qiime2-2019.1/lib/python3.6/site-packages/sklearn/naive_bayes.py", line 731, in _joint_log_likelihood
    return (safe_sparse_dot(X, self.feature_log_prob_.T) +
  File "/home/patricia/miniconda2/envs/qiime2-2019.1/lib/python3.6/site-packages/sklearn/utils/extmath.py", line 168, in safe_sparse_dot
    ret = a * b
  File "/home/patricia/miniconda2/envs/qiime2-2019.1/lib/python3.6/site-packages/scipy/sparse/base.py", line 473, in __mul__
    return self._mul_multivector(other)
  File "/home/patricia/miniconda2/envs/qiime2-2019.1/lib/python3.6/site-packages/scipy/sparse/compressed.py", line 477, in _mul_multivector
    dtype=upcast_char(self.dtype.char, other.dtype.char))
MemoryError

From what I've read on the forums, I understand that my computer doesn't have enough memory for this step.
What would be the best way to do the taxonomic assignment? Is there another method that would let me finish the analysis on my computer?

Computer specs: Core i7, 16 GB RAM

Thank you very much in advance

This is a fairly common issue; please see these posts for troubleshooting advice: https://forum.qiime2.org/search?q=feature%20classifier%20memory%20error

Hint: try a different, smaller classifier (e.g., one trained on Greengenes instead of SILVA) or use the --p-reads-per-batch parameter to limit how many reads are classified at once.
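For example, your original command with a smaller batch size and a single worker, so only one batch's dense probability matrix is held in memory at a time (the value 1000 is just an illustration; tune it for your machine):

```
qiime feature-classifier classify-sklearn \
  --i-classifier silva-132-99-nb-classifier.qza \
  --i-reads rep-seqs_97.qza \
  --p-confidence 0.8 \
  --p-read-orientation reverse-complement \
  --p-reads-per-batch 1000 \
  --p-n-jobs 1 \
  --o-classification rep-seqs_97_SILVAtaxonomy.qza
```

Smaller batches trade speed for memory: the classifier still sees every read, but the peak memory footprint of the per-batch probability matrix shrinks proportionally.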

An off-topic reply has been split into a new topic: The scikit-learn version (0.19.1) used to generate this artifact does not match the current version of scikit-learn installed (0.20.2). Please retrain your classifier for your current deployment to prevent data-corruption errors

Please keep replies on-topic in the future.

This topic was automatically closed 31 days after the last reply. New replies are no longer allowed.