Error about training the classifier

Hey, everyone. I ran into a problem when training the classifier:
qiime feature-classifier fit-classifier-naive-bayes --i-reference-reads ref-seqs.qza --i-reference-taxonomy taxonomy_all_levels.qza --o-classifier silva_132_99_16S_classifier.qza

Plugin error from feature-classifier:

Debug info has been saved to /tmp/qiime2-q2cli-err-0j3v09sv.log

Here’s the log.
/home/qiime2/miniconda/envs/qiime2-2018.8/lib/python3.5/site-packages/q2_feature_classifier/classifier.py:101: UserWarning: The TaxonomicClassifier artifact that results from this method was trained using scikit-learn version 0.19.1. It cannot be used with other versions of scikit-learn. (While the classifier may complete successfully, the results will be unreliable.)
  warnings.warn(warning, UserWarning)
Traceback (most recent call last):
  File "/home/qiime2/miniconda/envs/qiime2-2018.8/lib/python3.5/site-packages/q2cli/commands.py", line 274, in __call__
    results = action(**arguments)
  File "<string>", line 2, in fit_classifier_naive_bayes
  File "/home/qiime2/miniconda/envs/qiime2-2018.8/lib/python3.5/site-packages/qiime2/sdk/action.py", line 231, in bound_callable
    output_types, provenance)
  File "/home/qiime2/miniconda/envs/qiime2-2018.8/lib/python3.5/site-packages/qiime2/sdk/action.py", line 362, in callable_executor
    output_views = self._callable(**view_args)
  File "/home/qiime2/miniconda/envs/qiime2-2018.8/lib/python3.5/site-packages/q2_feature_classifier/classifier.py", line 316, in generic_fitter
    pipeline)
  File "/home/qiime2/miniconda/envs/qiime2-2018.8/lib/python3.5/site-packages/q2_feature_classifier/_skl.py", line 32, in fit_pipeline
    pipeline.fit(X, y)
  File "/home/qiime2/miniconda/envs/qiime2-2018.8/lib/python3.5/site-packages/sklearn/pipeline.py", line 250, in fit
    self._final_estimator.fit(Xt, y, **fit_params)
  File "/home/qiime2/miniconda/envs/qiime2-2018.8/lib/python3.5/site-packages/q2_feature_classifier/custom.py", line 41, in fit
    classes=classes)
  File "/home/qiime2/miniconda/envs/qiime2-2018.8/lib/python3.5/site-packages/sklearn/naive_bayes.py", line 527, in partial_fit
    Y = label_binarize(y, classes=self.classes_)
  File "/home/qiime2/miniconda/envs/qiime2-2018.8/lib/python3.5/site-packages/sklearn/preprocessing/label.py", line 522, in label_binarize
    Y = Y.toarray()
  File "/home/qiime2/miniconda/envs/qiime2-2018.8/lib/python3.5/site-packages/scipy/sparse/compressed.py", line 964, in toarray
    return self.tocoo(copy=False).toarray(order=order, out=out)
  File "/home/qiime2/miniconda/envs/qiime2-2018.8/lib/python3.5/site-packages/scipy/sparse/coo.py", line 252, in toarray
    B = self._process_toarray_args(order, out)
  File "/home/qiime2/miniconda/envs/qiime2-2018.8/lib/python3.5/site-packages/scipy/sparse/base.py", line 1039, in _process_toarray_args
    return np.zeros(self.shape, dtype=self.dtype, order=order)
MemoryError

Hi @laoshiren,
A MemoryError is common when large databases (such as SILVA) are used without sufficient memory dedicated to the task. Are you able to increase the memory available here? If you're using VirtualBox, you'll want to manually increase the memory in the launch settings; a more powerful machine, if you have access to one, would also do. If neither is an option, you may be able to use a smaller database such as Greengenes, or perhaps even one of the pre-trained classifiers available on the Data Resources page.
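If the VM runs under VirtualBox, the memory can also be raised from the host's command line rather than the GUI. A minimal sketch, assuming the VM is named "qiime2" (substitute your own VM name, and the VM must be powered off first):

```shell
# List registered VMs to find the exact name (assumed "qiime2" below)
VBoxManage list vms

# With the VM powered off, raise its RAM allocation to 16 GB
VBoxManage modifyvm "qiime2" --memory 16384
```

Note that the VM can never be given more memory than the host physically has to spare.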

Actually, I used the 'Silva 132 99% OTUs full-length sequences' pre-trained classifier from the Data Resources page, but I got a similar error… (My primers are 515F and 907R, the V4-V5 region.) My VirtualBox only allows a maximum of 11 GB of memory. Does that mean I must use a better computer with more RAM?

qiime feature-classifier classify-sklearn --i-classifier silva-132-99-nb-classifier.qza --i-reads rep-seqs.qza --o-classification taxonomy.qza

Traceback (most recent call last):
  File "/home/qiime2/miniconda/envs/qiime2-2018.8/lib/python3.5/site-packages/q2cli/commands.py", line 274, in __call__
    results = action(**arguments)
  File "<string>", line 2, in classify_sklearn
  File "/home/qiime2/miniconda/envs/qiime2-2018.8/lib/python3.5/site-packages/qiime2/sdk/action.py", line 225, in bound_callable
    spec.view_type, recorder)
  File "/home/qiime2/miniconda/envs/qiime2-2018.8/lib/python3.5/site-packages/qiime2/sdk/result.py", line 266, in _view
    result = transformation(self._archiver.data_dir)
  File "/home/qiime2/miniconda/envs/qiime2-2018.8/lib/python3.5/site-packages/qiime2/core/transform.py", line 70, in transformation
    new_view = transformer(view)
  File "/home/qiime2/miniconda/envs/qiime2-2018.8/lib/python3.5/site-packages/q2_feature_classifier/_taxonomic_classifier.py", line 72, in _1
    pipeline = joblib.load(os.path.join(dirname, 'sklearn_pipeline.pkl'))
  File "/home/qiime2/miniconda/envs/qiime2-2018.8/lib/python3.5/site-packages/sklearn/externals/joblib/numpy_pickle.py", line 578, in load
    obj = _unpickle(fobj, filename, mmap_mode)
  File "/home/qiime2/miniconda/envs/qiime2-2018.8/lib/python3.5/site-packages/sklearn/externals/joblib/numpy_pickle.py", line 508, in _unpickle
    obj = unpickler.load()
  File "/home/qiime2/miniconda/envs/qiime2-2018.8/lib/python3.5/pickle.py", line 1043, in load
    dispatch[key[0]](self)
  File "/home/qiime2/miniconda/envs/qiime2-2018.8/lib/python3.5/site-packages/sklearn/externals/joblib/numpy_pickle.py", line 341, in load_build
    self.stack.append(array_wrapper.read(self))
  File "/home/qiime2/miniconda/envs/qiime2-2018.8/lib/python3.5/site-packages/sklearn/externals/joblib/numpy_pickle.py", line 184, in read
    array = self.read_array(unpickler)
  File "/home/qiime2/miniconda/envs/qiime2-2018.8/lib/python3.5/site-packages/sklearn/externals/joblib/numpy_pickle.py", line 130, in read_array
    array = unpickler.np.empty(count, dtype=self.dtype)
MemoryError

Hi @laoshiren,

That's a bit odd, since you're not training a classifier here. If you have actually allocated 11 GB of RAM to your VirtualBox (not just what the maximum could be in the launch settings), I would have thought that would be enough for just assigning taxonomy. But I suppose it's not unheard of either; see here for a similar post. The apparent solution there was:

Use Greengenes instead of SILVA; that should help a lot. You can also adjust the reads-per-batch parameter (e.g. try 2000) to run a longer but lower-memory job.

You can also use a more powerful machine if you have access to one.
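Putting the batch suggestion together with the command from your earlier post, the lower-memory run would look something like this (a sketch; the file names follow your earlier posts, and 2000 reads per batch is just a starting point to tune):

```shell
# Smaller batches trade run time for a lower peak memory footprint
qiime feature-classifier classify-sklearn \
  --i-classifier silva-132-99-nb-classifier.qza \
  --i-reads rep-seqs.qza \
  --p-reads-per-batch 2000 \
  --o-classification taxonomy.qza
```

If it still runs out of memory, try halving the batch size again.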


Thank you very much! :grin:


This topic was automatically closed 31 days after the last reply. New replies are no longer allowed.