Feature-classifier "no space left on device" error in Docker

Thank you for the suggestion, and apologies in advance for the long note below. I have updated to the latest version of Docker as well as the newest 2019 version of QIIME 2. My original problem is solved, but I am now encountering a new problem with memory usage. When I run the command:

qiime feature-classifier classify-sklearn \
--i-classifier /data/Silva_DB/silva-132-99-nb-classifier.qza \
--i-reads rep-seqs-trunc-Ian_Pat1.qza \
--o-classification taxonomy-Ian_Pat1.qza \
--p-reads-per-batch 10000 \
--verbose

I get the error below. (Note: I added --p-reads-per-batch to the command to try to reduce memory usage; I'm not sure whether that helps.)

 Plugin error from feature-classifier:  
     [Errno 28] No space left on device

I looked on the forum and saw these two posts:
1. Docker QIIME2 / No space left on device
2. Not enough space error when using feature classifier

Both posts say that the error occurs because the default JOBLIB_TEMP_FOLDER for sklearn is /dev/shm, which Docker limits to 64 MB, and that this limit causes the error above. The solution appears to be to add the following configuration to my Dockerfile:

ENV JOBLIB_TEMP_FOLDER=/tmp
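From what I can tell, if I wanted to bake that variable into the image itself I would need a small derived Dockerfile, something like the sketch below. I have not actually tried this, and the file name and image tag are just my guesses:

FROM qiime2/core:2019.1
# point joblib's temporary files at /tmp instead of the 64 MB /dev/shm
ENV JOBLIB_TEMP_FOLDER=/tmp

(I would then build it with something like "docker build -t qiime2-core-tmpfix ." and use qiime2-core-tmpfix in place of qiime2/core:2019.1.) I also noticed that docker run has a --shm-size flag (e.g. --shm-size=2g) that looks like it could raise the 64 MB /dev/shm limit directly, but I have not tried that either.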

I tried searching for information on how to add this to the configuration file (I guess this may be obvious to others but I'm not experienced using docker) and everything I found seemed to say to create a daemon.json file here:

'C:\ProgramData\Docker\config\daemon.json'

I don't see that I have a ProgramData\Docker\config directory (I see the ProgramData directory but not a Docker or config directory below it). Should I create these directories and the daemon.json file? As a workaround, I tried setting the variable with this command:
> set ENV JOBLIB_TEMP_FOLDER=/tmp

I thought this worked because when I checked its value

echo $ENV JOBLIB_TEMP_FOLDER=/tmp

it showed that the value was "/tmp". But when I tried running the classifier again, I got the same error.
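Looking back at this, I wonder if I should instead have set the variable inside the container, in the bash session that Docker starts, rather than on the Windows side. This is just a guess on my part, something like:

export JOBLIB_TEMP_FOLDER=/tmp    # set for this container session only
echo $JOBLIB_TEMP_FOLDER          # check that it prints /tmp

I haven't tested whether that actually carries through to the classifier run.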

Can you please offer some advice on how to get around this problem?

Finally, I wanted to mention that the page describing Docker installation (Installing QIIME 2 using Docker — QIIME 2 2019.1.0 documentation) says to check the Docker installation as follows:

3. Confirm the installation

Run the following to confirm that the image was successfully fetched.
docker run -t -i -v $(pwd):/data qiime2/core:2019.1 qiime

This doesn't work for me unless I change the "( )" brackets to "{ }" brackets, so the final command is: docker run -t -i -v ${pwd}:/data qiime2/core:2019.1 qiime

Is this something odd about my system?

To actually run Docker, I am doing this (replacing "qiime" with "bash"):
docker run -t -i -v ${pwd}:/data qiime2/core:2019.1 bash

Is that the proper way to run Docker, or should I be doing something else? There is nothing in the QIIME 2 docs saying to do this, so I'm wondering if this isn't what you had in mind.

Hi @Skeet! Sorry you ran into this problem. The fix you referenced above is more for us to do than for you, although it is a helpful hint. As far as editing the daemon.json config goes, I have no clue; that is way outside of my experience. Here is what I would recommend doing:

docker run -t -i --env JOBLIB_TEMP_FOLDER=/tmp -v ${pwd}:/data qiime2/core:2019.1 bash
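That --env flag just sets the variable inside the container for you. If you want to double-check that it made it through, once you are at the container's prompt you can run something like:

echo $JOBLIB_TEMP_FOLDER    # should print /tmp

and then run your classify-sklearn command as usual.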

BTW, regarding the $(pwd) vs ${pwd} question: this is probably due to your shell/OS/environment. Either way it is all good!
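For reference, my best guess at the shell-specific forms of that volume flag (I have only tested the bash form myself):

# bash (Linux, macOS, WSL)
docker run -t -i -v $(pwd):/data qiime2/core:2019.1 qiime
# PowerShell on Windows
docker run -t -i -v ${PWD}:/data qiime2/core:2019.1 qiime
# cmd.exe on Windows
docker run -t -i -v %cd%:/data qiime2/core:2019.1 qiime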

Keep us posted! :qiime2: :ghost:

Thank you for the help. I used the command you suggested to start QIIME 2 in Docker. This did seem to help (it ran longer before encountering the problem), but I still got an error message about running out of space (a different error message than before).

I first used this to start qiime:
docker run -t -i --env JOBLIB_TEMP_FOLDER=/tmp -v ${pwd}:/data qiime2/core:2019.1 bash

Then I used this command to run the classifier (adding --p-reads-per-batch in hopes of reducing the memory requirement):
qiime feature-classifier classify-sklearn \
--i-classifier /data/Silva_DB/silva-132-99-nb-classifier.qza \
--i-reads rep-seqs-trunc-Ian_Pat1.qza \
--o-classification taxonomy-Ian_Pat1.qza \
--p-reads-per-batch 10000 \
--verbose

The result was this:
Traceback (most recent call last):
File "/opt/conda/envs/qiime2-2019.1/lib/python3.6/site-packages/q2cli/commands.py", line 274, in call
results = action(**arguments)
File "</opt/conda/envs/qiime2-2019.1/lib/python3.6/site-packages/decorator.py:decorator-gen-338>", line 2, in classify_sklearn
File "/opt/conda/envs/qiime2-2019.1/lib/python3.6/site-packages/qiime2/sdk/action.py", line 225, in bound_callable
spec.view_type, recorder)
File "/opt/conda/envs/qiime2-2019.1/lib/python3.6/site-packages/qiime2/sdk/result.py", line 287, in _view
result = transformation(self._archiver.data_dir)
File "/opt/conda/envs/qiime2-2019.1/lib/python3.6/site-packages/qiime2/core/transform.py", line 70, in transformation
new_view = transformer(view)
File "/opt/conda/envs/qiime2-2019.1/lib/python3.6/site-packages/q2_feature_classifier/_taxonomic_classifier.py", line 71, in _1
tar.extractall(dirname)
File "/opt/conda/envs/qiime2-2019.1/lib/python3.6/tarfile.py", line 2010, in extractall
numeric_owner=numeric_owner)
File "/opt/conda/envs/qiime2-2019.1/lib/python3.6/tarfile.py", line 2052, in extract
numeric_owner=numeric_owner)
File "/opt/conda/envs/qiime2-2019.1/lib/python3.6/tarfile.py", line 2122, in _extract_member
self.makefile(tarinfo, targetpath)
File "/opt/conda/envs/qiime2-2019.1/lib/python3.6/tarfile.py", line 2171, in makefile
copyfileobj(source, target, tarinfo.size, ReadError, bufsize)
File "/opt/conda/envs/qiime2-2019.1/lib/python3.6/tarfile.py", line 252, in copyfileobj
dst.write(buf)
OSError: [Errno 28] No space left on device

Plugin error from feature-classifier:

[Errno 28] No space left on device

See above for debug info.

Here is an image of what Task Manager was showing on my machine (in case this might be helpful):

Any suggestions would be appreciated. Thank you again for the help.

Hey there @Skeet!

Okay, it looks like something else is running out of disk space. I suspect you will need to allocate more disk to your Docker instance, but I am not 100% sure (or, really, even sure how to do that). One last check, though: after running your classify-sklearn command and letting it fail, can you run df -h in your container and post the results here? Thanks! :qiime2:


Here is what I get when I run df -h

(qiime2-2019.1) root@4e355844b2e7:/data/ZymoData_test# df -h
Filesystem      Size  Used Avail Use% Mounted on
overlay          59G   42G   15G  75% /
tmpfs            64M     0   64M   0% /dev
tmpfs           991M     0  991M   0% /sys/fs/cgroup
//10.0.75.1/C   931G  612G  320G  66% /data
/dev/sda1        59G   42G   15G  75% /etc/hosts
shm              64M     0   64M   0% /dev/shm
tmpfs           991M     0  991M   0% /proc/acpi
tmpfs           991M     0  991M   0% /sys/firmware

How about setting the temp dir to a location in your data dir? We know that mount has plenty of space…

docker run -t -i \
  --env JOBLIB_TEMP_FOLDER=/data/tmp \
  --env TMPDIR=/data/tmp \
  -v ${pwd}:/data \
  qiime2/core:2019.1 bash
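One small caveat (an assumption on my part, not something I have tested on Windows): the /data/tmp directory has to exist before joblib can write to it, so you may need to create it first, either by adding a tmp folder to the directory you are mounting or from inside the container:

mkdir -p /data/tmp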

Hi Matthew,
I appreciate your continued support in troubleshooting this problem.

  1. I tried your solution and, at first, got an error like this:
    docker: Error response from daemon: error while creating mount source path '/host_mnt/c/git/somerepo/somefolder': mkdir /host_mnt/c: file exists.

I did some troubleshooting and found a page which discusses the error: Errror mkdir /host_mnt/c: file exists when restarting docker container with mount · Issue #1560 · docker/for-win · GitHub

Based on reading this, it sounded like this might actually be a bug in Docker that loses users' credentials after a time. So I rebooted my machine and the problem went away. (Just sharing this info in case other Docker users encounter the same problem.)

  2. Then I ran the classify-sklearn command shown below and got the error [Errno 28] No space left on device again. (I've included all the output below, just for completeness.)

  3. I did some research into space issues with Docker and found a site which suggested the following clean-up steps (see also the prune commands just after this list):
    1. Clean up exited containers:
    docker rm $(docker ps -q -f status=exited)
    2. Clean up dangling volumes:
    docker volume rm $(docker volume ls -qf dangling=true)
    3. Clean up dangling images (I couldn't get this one to run; it gave an error):
    docker rmi $(docker images --filter "dangling=true" -q --no-trunc)
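I also read that newer Docker versions have a pair of commands that report and reclaim space in one go. I have not tried these myself, so treat this as a guess rather than something from that site:

docker system df        # show how much space images, containers, and volumes are using
docker system prune     # remove stopped containers, dangling images, and unused networks (add --volumes to also remove unused local volumes)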

After running these (which did seem to clean some things up), I ran my qiime command again. This time it ran longer before giving me the message "Killed". From what I have seen on the forum, that usually means the process ran out of memory.
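Side note: I have since read that you can watch a container's live memory and CPU use from a second terminal with docker stats. I have not tried it yet, so I'm just noting it here in case it helps:

docker stats    # live CPU and memory usage for each running container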

So, I'm feeling pretty stumped. I'm wondering whether other Docker users are running into space issues like this, or whether there is something odd about my system (though I don't think there should be). I'm getting close to abandoning the Docker approach; I'm surprised to keep running into so many problems, since I keep hearing about Docker as a great solution. I ran QIIME 1 using VirtualBox before, but I found it quite challenging to install properly and it seemed very touchy regarding memory (as I'm discovering with Docker), so I don't really want to go back to that. I'm thinking of trying AWS next. Do you know if people are having success with AWS?

Everything below is the output from my attempt to run the classify-sklearn command that generated the message [Errno 28] No space left on device:
(qiime2-2019.1) root@9a6093490b08:/data/ZymoData_test# qiime feature-classifier classify-sklearn \

--i-classifier /data/Silva_DB/silva-132-99-nb-classifier.qza \
--i-reads rep-seqs-trunc-Ian_Pat1.qza \
--o-classification taxonomy-Ian_Pat1.qza \
--p-reads-per-batch 10000 \
--verbose
Traceback (most recent call last):
File "/opt/conda/envs/qiime2-2019.1/lib/python3.6/tarfile.py", line 2171, in makefile
copyfileobj(source, target, tarinfo.size, ReadError, bufsize)
File "/opt/conda/envs/qiime2-2019.1/lib/python3.6/tarfile.py", line 252, in copyfileobj
dst.write(buf)
OSError: [Errno 28] No space left on device

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/opt/conda/envs/qiime2-2019.1/lib/python3.6/site-packages/q2cli/commands.py", line 274, in call
results = action(**arguments)
File "</opt/conda/envs/qiime2-2019.1/lib/python3.6/site-packages/decorator.py:decorator-gen-338>", line 2, in classify_sklearn
File "/opt/conda/envs/qiime2-2019.1/lib/python3.6/site-packages/qiime2/sdk/action.py", line 225, in bound_callable
spec.view_type, recorder)
File "/opt/conda/envs/qiime2-2019.1/lib/python3.6/site-packages/qiime2/sdk/result.py", line 287, in _view
result = transformation(self._archiver.data_dir)
File "/opt/conda/envs/qiime2-2019.1/lib/python3.6/site-packages/qiime2/core/transform.py", line 70, in transformation
new_view = transformer(view)
File "/opt/conda/envs/qiime2-2019.1/lib/python3.6/site-packages/q2_feature_classifier/_taxonomic_classifier.py", line 71, in _1
tar.extractall(dirname)
File "/opt/conda/envs/qiime2-2019.1/lib/python3.6/tarfile.py", line 2010, in extractall
numeric_owner=numeric_owner)
File "/opt/conda/envs/qiime2-2019.1/lib/python3.6/tarfile.py", line 2052, in extract
numeric_owner=numeric_owner)
File "/opt/conda/envs/qiime2-2019.1/lib/python3.6/tarfile.py", line 2122, in _extract_member
self.makefile(tarinfo, targetpath)
File "/opt/conda/envs/qiime2-2019.1/lib/python3.6/tarfile.py", line 2171, in makefile
copyfileobj(source, target, tarinfo.size, ReadError, bufsize)
OSError: [Errno 28] No space left on device

Plugin error from feature-classifier:

[Errno 28] No space left on device

See above for debug info.

Really? Where did the bit about "somerepo/somefolder" come from? Can you please copy and paste the exact command (or commands) you ran, and the complete error message?

I suspect there is just something subtly wrong here, but we will get to the bottom of it.