Adjusting chunk size isn't enough to fix classifier RAM issue

I'm trying to train a classifier using the following command:

qiime feature-classifier fit-classifier-naive-bayes \
  --i-reference-reads silva-138-trunc178-ref-seqs.qza \
  --i-reference-taxonomy ../silva-138-99-tax.qza \
  --p-classify--chunk-size 250 \
  --o-classifier trunc178-classifier.qza

I have a computer with 16 GB of RAM, 12 of which are allocated to my virtual machine. Initially I was having an issue with insufficient RAM available for the array, so I poked around the forum and found the tip about changing the chunk size. I tried a few different sizes. From 20000 down to 5000, the error message said there still wasn't enough RAM. From 2500 to 1000, it simply spat out the word "Killed." For 500 and 250, it's back to saying it can't allocate 4.47 GiB. So now I'm confused, because there should be 12 GB available. Any suggestions? Here is the full output from the final run with chunk size 250, as shown above:

/home/qiime2/miniconda/envs/qiime2-2021.11/lib/python3.8/site-packages/q2_feature_classifier/ UserWarning: The TaxonomicClassifier artifact that results from this method was trained using scikit-learn version 0.24.1. It cannot be used with other versions of scikit-learn. (While the classifier may complete successfully, the results will be unreliable.)
warnings.warn(warning, UserWarning)
Traceback (most recent call last):
  File "/home/qiime2/miniconda/envs/qiime2-2021.11/lib/python3.8/site-packages/q2cli/", line 339, in call
    results = action(**arguments)
  File "", line 2, in fit_classifier_naive_bayes
  File "/home/qiime2/miniconda/envs/qiime2-2021.11/lib/python3.8/site-packages/qiime2/sdk/", line 245, in bound_callable
    outputs = self.callable_executor(scope, callable_args,
  File "/home/qiime2/miniconda/envs/qiime2-2021.11/lib/python3.8/site-packages/qiime2/sdk/", line 391, in callable_executor
    output_views = self._callable(**view_args)
  File "/home/qiime2/miniconda/envs/qiime2-2021.11/lib/python3.8/site-packages/q2_feature_classifier/", line 330, in generic_fitter
    pipeline = fit_pipeline(reference_reads, reference_taxonomy,
  File "/home/qiime2/miniconda/envs/qiime2-2021.11/lib/python3.8/site-packages/q2_feature_classifier/", line 32, in fit_pipeline, y)
  File "/home/qiime2/miniconda/envs/qiime2-2021.11/lib/python3.8/site-packages/sklearn/", line 346, in fit, y, **fit_params_last_step)
  File "/home/qiime2/miniconda/envs/qiime2-2021.11/lib/python3.8/site-packages/q2_feature_classifier/", line 40, in fit
    self.partial_fit(cX, cy, sample_weight=csample_weight,
  File "/home/qiime2/miniconda/envs/qiime2-2021.11/lib/python3.8/site-packages/sklearn/", line 589, in partial_fit
  File "/home/qiime2/miniconda/envs/qiime2-2021.11/lib/python3.8/site-packages/sklearn/", line 777, in update_feature_log_prob
    smoothed_fc = self.feature_count + alpha
numpy.core._exceptions._ArrayMemoryError: Unable to allocate 4.47 GiB for an array with shape (73259, 8192) and data type float64

Plugin error from feature-classifier:

Unable to allocate 4.47 GiB for an array with shape (73259, 8192) and data type float64
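As a side note, the 4.47 GiB figure follows directly from the array shape in the error: 73259 rows × 8192 columns × 8 bytes per float64 element. A quick sanity check, using only the numbers from the error message above:

```shell
# Reproduce the 4.47 GiB figure from the error message:
# shape (73259, 8192), dtype float64 = 8 bytes per element.
bytes=$((73259 * 8192 * 8))
mib=$((bytes / 1024 / 1024))
echo "$bytes bytes = $mib MiB"   # 4801101824 bytes = 4578 MiB (~4.47 GiB)
```

So this single allocation alone needs roughly 4.5 GiB on top of whatever the process is already holding.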

A bare "Killed" message means that something canceled the job from outside Python (on Linux, this is usually the kernel's out-of-memory killer).

That is strange!

Yes! In your Ubuntu VM, open up the System Monitor and see how much memory your VM thinks it has. (I'm wondering if you set 12 GB of RAM, but it has not taken effect yet...)

System Monitor says there is 11.4 GiB total available, and with nothing else running, 1.6 GiB is in use. What would have killed it at those middle chunk sizes, but not the bigger or smaller ones? I didn't do anything when that happened.

Very strange! I wonder if the plugin requests > 11.4 GiB for a moment, then it's canceled / killed, then it throws that error.

Can you run it again with the System Monitor open and see how memory usage changes over time? That error is still related to memory, so I want to watch memory while it's running to find more clues.
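If the System Monitor is awkward to keep open, a terminal loop works too. This is just a generic Linux sketch (not QIIME-specific): it samples MemAvailable from /proc/meminfo once a second while the qiime command runs in another terminal.

```shell
# Sample available memory once a second while the qiime job runs
# in another terminal. Linux-only: reads /proc/meminfo.
for i in 1 2 3 4 5; do          # increase the count, or use `while true`
    avail_kb=$(awk '/MemAvailable/ {print $2}' /proc/meminfo)
    echo "$(date +%T) available: $((avail_kb / 1024)) MiB"
    sleep 1
done
```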

With a chunk size of 250, memory usage crept up until it hit the 11.4 GiB max after a minute or two and then errored out. With a size of 2500, memory crept up again, and it spat out "Killed" instead of an error when it maxed out, although I think it took a little longer to hit that max.


Hey Emily,

Thank you for your patience, and thank you for trying those other settings.

Because reducing the chunk size is not helping, this indicates that the size of the reference database itself is causing the issue.

What database are you running? Silva 138 is pretty large, and may be too large for 12 GB. Do you have another computer or server with more RAM you could use for running this script?

Hmmm, not immediately. I'll have to ask around. I could have sworn I used a version of SILVA from a few years ago that ran fine on 12-ish GB, but that was also with QIIME 1, so I guess it may have been a less intensive command. Do you have any other suggestions in case I don't end up finding access to a better computer?

The releases of SILVA have grown steadily in size over the past few years (and thus also in memory requirements with various bioinformatics tools). But indeed, differences between classifiers could also explain this: QIIME 1 used uclust for classification by default, a very different method.

It looks like you are using a classifier trained on trimmed sequences:

You could use the RESCRIPt plugin to further dereplicate these (see the tutorial on this forum for examples). That will decrease database size and memory usage quite a bit.
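For illustration only, a dereplication call with RESCRIPt might look like the sketch below. The output filenames are placeholders I made up for this thread, and you should check `qiime rescript dereplicate --help` for the exact parameters available in your installed version:

```shell
# Hypothetical sketch: dereplicate the trimmed SILVA reference with RESCRIPt.
# Output filenames are placeholders; verify parameters with
# `qiime rescript dereplicate --help` before running.
qiime rescript dereplicate \
  --i-sequences silva-138-trunc178-ref-seqs.qza \
  --i-taxa ../silva-138-99-tax.qza \
  --p-mode 'uniq' \
  --o-dereplicated-sequences silva-138-trunc178-derep-seqs.qza \
  --o-dereplicated-taxa silva-138-trunc178-derep-tax.qza
```

The dereplicated sequences and taxonomy would then replace the originals in your fit-classifier-naive-bayes command.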

Good luck!


This topic was automatically closed 31 days after the last reply. New replies are no longer allowed.