Silva-132-99-515-806-nb-classifier.qza OSError: [Errno 28] No space left on device

I’m using QIIME2 through Docker and ran into this problem.
The feature-classifier isn't working when using the provided SILVA database.
However, it works fine with the Greengenes one. Any suggestions?

The error is down below:

#####################################################################

SILVA

#####################################################################
docker run -t -i -v $(pwd):/data qiime2/core:2018.6 qiime feature-classifier classify-sklearn \
  --i-classifier silva-132-99-515-806-nb-classifier.qza \
  --i-reads rep-seqs.qza \
  --o-classification taxonomy.qza
Traceback (most recent call last):
  File "/opt/conda/envs/qiime2-2018.6/bin/qiime", line 11, in <module>
    sys.exit(qiime())
  File "/opt/conda/envs/qiime2-2018.6/lib/python3.5/site-packages/click/core.py", line 722, in __call__
    return self.main(*args, **kwargs)
  File "/opt/conda/envs/qiime2-2018.6/lib/python3.5/site-packages/click/core.py", line 697, in main
    rv = self.invoke(ctx)
  File "/opt/conda/envs/qiime2-2018.6/lib/python3.5/site-packages/click/core.py", line 1066, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/opt/conda/envs/qiime2-2018.6/lib/python3.5/site-packages/click/core.py", line 1066, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/opt/conda/envs/qiime2-2018.6/lib/python3.5/site-packages/click/core.py", line 895, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/opt/conda/envs/qiime2-2018.6/lib/python3.5/site-packages/click/core.py", line 535, in invoke
    return callback(*args, **kwargs)
  File "/opt/conda/envs/qiime2-2018.6/lib/python3.5/site-packages/q2cli/commands.py", line 244, in __call__
    arguments, missing_in, verbose, quiet = self.handle_in_params(kwargs)
  File "/opt/conda/envs/qiime2-2018.6/lib/python3.5/site-packages/q2cli/commands.py", line 326, in handle_in_params
    kwargs, fallback=cmd_fallback)
  File "/opt/conda/envs/qiime2-2018.6/lib/python3.5/site-packages/q2cli/handlers.py", line 375, in get_value
    artifact = qiime2.sdk.Result.load(path)
  File "/opt/conda/envs/qiime2-2018.6/lib/python3.5/site-packages/qiime2/sdk/result.py", line 65, in load
    archiver = archive.Archiver.load(filepath)
  File "/opt/conda/envs/qiime2-2018.6/lib/python3.5/site-packages/qiime2/core/archive/archiver.py", line 299, in load
    rec = archive.mount(path)
  File "/opt/conda/envs/qiime2-2018.6/lib/python3.5/site-packages/qiime2/core/archive/archiver.py", line 199, in mount
    root = self.extract(filepath)
  File "/opt/conda/envs/qiime2-2018.6/lib/python3.5/site-packages/qiime2/core/archive/archiver.py", line 210, in extract
    zf.extract(name, path=str(filepath))
  File "/opt/conda/envs/qiime2-2018.6/lib/python3.5/zipfile.py", line 1335, in extract
    return self._extract_member(member, path, pwd)
  File "/opt/conda/envs/qiime2-2018.6/lib/python3.5/zipfile.py", line 1399, in _extract_member
    shutil.copyfileobj(source, target)
  File "/opt/conda/envs/qiime2-2018.6/lib/python3.5/shutil.py", line 82, in copyfileobj
    fdst.write(buf)
OSError: [Errno 28] No space left on device

#####################################################################

Greengenes

#####################################################################
docker run -t -i -v $(pwd):/data qiime2/core:2018.6 qiime feature-classifier classify-sklearn \
  --i-classifier gg-13-8-99-515-806-nb-classifier.qza \
  --i-reads rep-seqs.qza \
  --o-classification taxonomy.qza
Saved FeatureData[Taxonomy] to: taxonomy.qza

Hi @lumluk!

Thanks for posting, and thank you for posting the full error trace. This is a problem with your system, not with the classifier itself. The key line is at the very bottom:

OSError: [Errno 28] No space left on device

So you are running out of disk space! This makes sense: the SILVA database is a good bit larger than Greengenes, takes at least twice as much memory to run, and some of the temporary outputs can also take up more disk space. So you are probably filling up your temp directory.

See here for details on diagnosing and fixing this issue (most likely, you'll want to change your temp dir).
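If the filesystem backing `/tmp` is the one filling up, a quick sketch of how to check it and point temp files somewhere roomier (the `$HOME/qiime2-tmp` path is just an example location, not anything QIIME 2 requires):

```shell
# Check how much space is free on the filesystem backing /tmp
# (run inside the container, or via `docker exec` on a running container)
df -h /tmp

# Create a scratch directory on a filesystem with more room, and point
# tempfile-using tools (including QIIME 2) at it for this shell session
mkdir -p "$HOME/qiime2-tmp"
export TMPDIR="$HOME/qiime2-tmp"
```

With Docker specifically, an alternative is to bind-mount a host directory over `/tmp` with `-v /path/on/host:/tmp`, which is the approach tried later in this thread.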


Hi @Nicholas_Bokulich, thank you for the reply.

I relocated Docker's tmp folder to my local machine, which has plenty of space, but it's still not working.
The error is gone now, but it didn't produce the result (taxonomy) file either.
Somehow the program terminated without printing the "Saved FeatureData[Taxonomy] to: taxonomy.qza" message after the command.
Any other suggestions?

$> docker run -t -i -v $(pwd):/data -v /Users/tmp:/tmp qiime2/core:2018.6 qiime feature-classifier classify-sklearn \
  --i-classifier silva-132-99-515-806-nb-classifier.qza \
  --i-reads rep-seqs.qza \
  --o-classification taxonomy.qza
$>

Can you please re-run this variant of that command and provide any output? Thanks!

docker run -t -i \
  -v $(pwd):/data \
  -v /Users/tmp:/tmp \
  qiime2/core:2018.6 \
  qiime feature-classifier classify-sklearn \
    --i-classifier silva-132-99-515-806-nb-classifier.qza \
    --i-reads rep-seqs.qza \
    --o-classification taxonomy.qza \
    --verbose

Also, can you provide a little more info about your host environment? Windows 10? Mac? Docker Toolbox? Something else? Thanks!


I've tried it; no results either.
However, when I install QIIME 2 natively, the command works fine.
Could it be a problem with Docker?

Ah, bummer.

Either that, or it is related to something on your host system. If you can answer the questions I asked above, that would help steer us in the right direction.

Thanks! :qiime2: :t_rex:

I’m running docker on MacOS HighSierra

Ah ha! Okay, so let's circle back to your original problem.

Setting a new tmp dir in your Docker commands isn't going to fix that issue, and it might actually cause other problems. I think you need to actually configure the Docker disk size, since out of the box Docker assumes a pretty tiny virtual disk:

[screenshot of the Docker preferences Disk pane]

You should be able to boost that slider up to something more reasonable, assuming your host system has more available disk space to give!

So, make that tweak, then re-run your original command, without changing the temp dir. Let us know how it goes! :qiime2: :t_rex:
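One way to sanity-check the change before a long classification run (assuming `docker` is on your PATH, and reusing the image tag from the commands above):

```shell
# Report the root filesystem size as seen from inside a container; after
# raising the Disk slider, the "Size" column should reflect the new allocation.
if command -v docker >/dev/null 2>&1; then
  docker run --rm qiime2/core:2018.6 df -h /
else
  echo "docker not found on PATH"
fi
```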


I already gave it 208 GB.

[screenshot of the Docker Disk settings]

Well, it sounds like 208 GB still isn't enough, right? QIIME 2 isn't the only thing using this disk space (for example, your screenshot shows that almost 70 GB are already in use by other Docker containers and images).
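To see what is actually consuming the Docker disk image, and to reclaim some of it, the `docker system` subcommands are worth a try (sketched below with a guard so it's a no-op where Docker isn't installed; `prune` deletes stopped containers and dangling images, so read its confirmation prompt carefully):

```shell
if command -v docker >/dev/null 2>&1; then
  # Break down disk usage by images, containers, and local volumes
  docker system df
  # To reclaim space from stopped containers, dangling images, and unused
  # networks (prompts for confirmation before deleting), uncomment:
  # docker system prune
fi
```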


This topic was automatically closed 31 days after the last reply. New replies are no longer allowed.