Feature-Classifier error? Missing taxonomy.qza even though my code states it as output?

Hi all,

Weird error: I'm running my code on an HPC and getting errors from this script:


module add metaseq
source activate qiime2-2018.4

echo "Loaded QIIME2-2018.4"
echo "Ready to use"

qiime feature-classifier classify-sklearn \
  --i-classifier ~/gg-13-8-99-515-806-nb-classifier.qza \
  --i-reads ~/Mur.VC1.VC2.Smoke.Cluster/QIIME2_5_merge_filter/Lung.Cancer.Mouse/id-filtered-seqs.qza \
  --o-classification ~/Mur.VC1.VC2.Smoke.Cluster/QIIME2_8_taxonomy/Lung.Cancer.Mouse/taxonomy.qza

I get this error:

Traceback (most recent call last):
  File "/local/apps/metaseq/", line 11, in <module>
  File "/local/apps/metaseq/", line 722, in __call__
    return self.main(*args, **kwargs)
  File "/local/apps/metaseq/", line 697, in main
    rv = self.invoke(ctx)
  File "/local/apps/metaseq/", line 1066, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/local/apps/metaseq/", line 1066, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/local/apps/metaseq/", line 895, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/local/apps/metaseq/", line 535, in invoke
    return callback(*args, **kwargs)
  File "/local/apps/metaseq/packages/q2cli/commands.py", line 294, in __call__
    path = result.save(output)
  File "/local/apps/metaseq/packages/qiime2/sdk/result.py", line 147, in save
  File "/local/apps/metaseq/packages/qiime2/core/archive/archiver.py", line 347, in save
    self.CURRENT_ARCHIVE.save(self.path, filepath)
  File "/local/apps/metaseq/packages/qiime2/core/archive/archiver.py", line 162, in save
    allowZip64=True) as zf:
  File "/local/apps/metaseq/", line 1009, in __init__
    self.fp = io.open(file, filemode)
FileNotFoundError: [Errno 2] No such file or directory: '/ifs/home/wub02/Projects/Mur.VC1.VC2.Smoke.Cluster/QIIME2_8_taxonomy/Lung.Cancer.Mouse/taxonomy.qza'

I think there's probably a missing plug-in, but I'm not entirely sure. I'm running it locally now; if it succeeds there, it's likely something the HPC folks need to update.
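For what it's worth, the traceback shows the failure at the final save step, so this can also happen when the output directory simply isn't there (or isn't mounted) on the node doing the writing. A defensive guard you could add before the classify step, using the same paths as in my script (just a sketch, not something a healthy node should need):

```shell
#!/bin/bash
# Make sure the output directory exists and is writable before running
# the classifier; QIIME 2 writes taxonomy.qza there at the very end.
OUT_DIR=~/Mur.VC1.VC2.Smoke.Cluster/QIIME2_8_taxonomy/Lung.Cancer.Mouse

mkdir -p "$OUT_DIR"   # create it if missing; no-op if it already exists
if [ -d "$OUT_DIR" ] && [ -w "$OUT_DIR" ]; then
    echo "output directory ready: $OUT_DIR"
else
    echo "cannot write to $OUT_DIR -- is the filesystem mounted on this node?" >&2
    exit 1
fi
```

If the guard fails on some submissions but not others, that points at a node-level mount problem rather than anything in QIIME 2.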


The HPC is exhibiting some weird behavior.

Instead of submitting a qsub job through the scheduler, I ran the script directly with source, and it completed without issue. A misbehaving node was probably the cause.


Hey @ben!

Shot in the dark, is it possible one of the nodes isn’t connected to the shared filesystem where your artifact is?

I've never seen this before, but it looks like the file just didn't exist outright. Does this happen if you do something silly like:

qsub cat example_file.txt
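Or, slightly more targeted: submit a tiny probe script a few times and see whether every node the scheduler picks can actually see the shared filesystem. This is a hypothetical sketch (script name and target directory are placeholders; it assumes an SGE-style qsub that captures stdout/stderr per job):

```shell
#!/bin/bash
# probe.sh -- submit with `qsub probe.sh` several times; if one job's
# output reports MISSING, that node likely isn't mounting the shared
# filesystem where your .qza artifacts live.
TARGET=~/Mur.VC1.VC2.Smoke.Cluster   # directory holding the artifacts

if [ -d "$TARGET" ] && ls "$TARGET" > /dev/null 2>&1; then
    echo "$(hostname): $TARGET is reachable"
else
    echo "$(hostname): $TARGET is MISSING" >&2
fi
```

Comparing hostnames across the job outputs would tell you exactly which node is the bad one, which is useful information to hand the HPC admins.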

I’ll try it, but I couldn’t reproduce the error.

I bet you're correct though. I actually ran this a few more times: once with source on the node I was logged into (which they recommend against, but I did it anyway so I could watch for the error), and once submitted as a qsub job.

Both of those runs completed without error. As far as my expertise goes (it's not much), I can't dictate which node I sign on to or which node my jobs land on. I reported it earlier, and the HPC admins may have fixed it. I'm marking this as solved.


This topic was automatically closed 31 days after the last reply. New replies are no longer allowed.