Running the following script and getting the error below... is this just a memory issue?
$ qiime feature-classifier classify-sklearn --i-classifier gg-13-8-99-515-806-nb-classifier.qza --i-reads SPN_S3_R1_R2.join.filteredseqs.qza --o-classification taxonomy.qza
Log entry below:
I was running these data sets through QIIME 1 without any huge memory issues...
Any help would be appreciated!
Try adding the flag --p-pre-dispatch n_jobs (or 1 instead of n_jobs). The default is to pre-dispatch 2*n_jobs, so out of the box you use twice the amount of memory required to run a single job.
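For example, applied to your command above (a sketch; --p-pre-dispatch 1 is the most conservative setting, and any value smaller than the 2*n_jobs default should reduce peak memory):
$ qiime feature-classifier classify-sklearn \
    --i-classifier gg-13-8-99-515-806-nb-classifier.qza \
    --i-reads SPN_S3_R1_R2.join.filteredseqs.qza \
    --p-pre-dispatch 1 \
    --o-classification taxonomy.qza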
Thanks for the suggestion; unfortunately the script failed with the same error as before. When I was originally writing the post, the forum paired me up with another post similar to mine, and the replies there mentioned the --p-chunk-size option as a route that could help?
Thanks for the assist
--p-chunk-size should be the parameter you are looking for, but hopefully @BenKaehler can provide some additional clarity. Thanks!
Hi @Sausage_Mahoney, thanks @thermokarst and @ezke.
Reducing --p-chunk-size (down from the default of 262144) may help, but we've never had problems with classify-sklearn running out of memory before. How much memory does your machine have? If you're willing to share SPN_S3_R1_R2.join.filteredseqs.qza, I would be happy to debug.
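For instance, a sketch of the original command with a much smaller chunk size (20000 here is an arbitrary illustration; any value well below the 262144 default trades more passes over the data for a smaller per-chunk memory footprint):
$ qiime feature-classifier classify-sklearn \
    --i-classifier gg-13-8-99-515-806-nb-classifier.qza \
    --i-reads SPN_S3_R1_R2.join.filteredseqs.qza \
    --p-chunk-size 20000 \
    --o-classification taxonomy.qza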
Thanks for looking into this... I figured out what happened. I was trying to run the classifier on a file in which each sequence collected was identified as its own unique feature (I was pulling some QIIME 1 files into QIIME 2). At any rate, it was trying to classify some 60,000-odd features and the RAM was imploding. I have it all sorted out now.
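For anyone who hits the same wall, one quick sanity check is to export the sequence artifact and count how many features the classifier will actually see (the --input-path/--output-path syntax assumes a recent QIIME 2 release; older versions took the input path positionally, and dna-sequences.fasta is the standard export name for FeatureData[Sequence]):
$ qiime tools export \
    --input-path SPN_S3_R1_R2.join.filteredseqs.qza \
    --output-path exported-seqs
$ grep -c '^>' exported-seqs/dna-sequences.fasta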