Try adding the flag --p-pre-dispatch n_jobs (or, for even less memory, pass 1 instead of n_jobs). The default is to pre-dispatch 2*n_jobs batches, so by default you use twice the memory required to run a single job.
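Something like the following should work (a sketch only; the classifier, reads, and output file names are placeholders for your own, and --p-n-jobs should match your machine):

```
qiime feature-classifier classify-sklearn \
  --i-classifier classifier.qza \
  --i-reads rep-seqs.qza \
  --p-n-jobs 4 \
  --p-pre-dispatch 1 \
  --o-classification taxonomy.qza
```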
Hi Zach-
Thanks for the suggestion; unfortunately the script failed and reported the same error as before. When I was originally writing this post, the forum paired it with a similar post whose replies mentioned the '--p-chunk-size' option; could that be a route that would help?
Hi @Sausage_Mahoney! --p-chunk-size should be the parameter you are looking for, but hopefully @BenKaehler can provide some additional clarity. Thanks!
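In case it helps, a sketch of how that might look (same placeholder file names as above; 5000 is an illustrative value, and smaller chunks generally trade runtime for lower peak memory):

```
qiime feature-classifier classify-sklearn \
  --i-classifier classifier.qza \
  --i-reads rep-seqs.qza \
  --p-chunk-size 5000 \
  --o-classification taxonomy.qza
```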
Hi Ben-
Thanks for looking into this... I figured out what happened. I was trying to run the classifier on a file in which each collected sequence was identified as its own unique feature (I was pulling some QIIME 1 files into QIIME 2). At any rate, it was trying to classify some 60,000 features and the RAM was imploding. I have it all sorted out now.