Plugin error from feature-classifier: [Errno 28] No space left on device. How can I resolve it?

Hello everyone,

I'm trying to use a pre-trained Naive Bayes taxonomic classifier to assign taxonomy to my ASV sequences, but the run fails with: 'Plugin error from feature-classifier: [Errno 28] No space left on device'.
Can you help me?
Thanks,

Matteo

Hello Matteo,

Welcome to the forums! :qiime2:

There's no space left on your hard drive (or wherever you were saving your output file).

Can you post the command you ran?

Can you also post some disk usage stats from your computer? On Linux you can run df -h; Windows and macOS have summaries in Settings / System Preferences. :window: :apple:

1 Like

Hi @colinbrislawn,
thank you for your answer!

The command is:

qiime feature-classifier classify-sklearn \
  --i-classifier silva-138-99-nb-classifier.qza \
  --i-reads output_data_qiime2/FeatureDataDADA.qza \
  --o-classification output_data_qiime2/taxonomy.qza

Can you also post some disk usage stats from your computer?

Filesystem Size Used Avail Use% Mounted on
udev 48G 4,0K 48G 1% /dev
tmpfs 9,5G 1,3M 9,5G 1% /run
/dev/dm-1 39G 25G 13G 67% /
none 4,0K 0 4,0K 0% /sys/fs/cgroup
none 5,0M 0 5,0M 0% /run/lock
none 48G 160K 48G 1% /run/shm
none 100M 32K 100M 1% /run/user
/dev/sde2 237M 50M 175M 23% /boot
/dev/mapper/data--vg-data--lv 426G 71M 404G 1% /data
/dev/sde1 511M 3,4M 508M 1% /boot/efi

I'm working remotely over SSH.

1 Like

Thanks!

So, POSIX / Linux! :penguin:

Your filesystem looks good to me. I made a table:

| Filesystem                    | Size | Used | Avail | Use% | Mounted on     |
| ----------------------------- | ---- | ---- | ----- | ---- | -------------- |
| udev                          | 48G  | 4,0K | 48G   | 1%   | /dev           |
| tmpfs                         | 9,5G | 1,3M | 9,5G  | 1%   | /run           |
| /dev/dm-1                     | 39G  | 25G  | 13G   | 67%  | /              |
| none                          | 4,0K | 0    | 4,0K  | 0%   | /sys/fs/cgroup |
| none                          | 5,0M | 0    | 5,0M  | 0%   | /run/lock      |
| none                          | 48G  | 160K | 48G   | 1%   | /run/shm       |
| none                          | 100M | 32K  | 100M  | 1%   | /run/user      |
| /dev/mapper/data--vg-data--lv | 426G | 71M  | 404G  | 1%   | /data          |
| /dev/sde1                     | 511M | 3,4M | 508M  | 1%   | /boot/efi      |

Can you also run pwd so I can see which of these drives you are working in?
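If it helps, you can do both in one go (a small sketch; run it from the directory where the job writes its output):

```shell
# Print the working directory, then show which filesystem it sits on;
# `df -h .` resolves the current directory to its mount point and free space.
pwd
df -h .
```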


1 Like

/home/hpc/Desktop/WILLIAMS

@colinbrislawn

Ah! Are you running this script on the head node of your HPC, or submitting this into a SLURM queue to run on a worker node?

EDIT: Also, /home/hpc/Desktop/WILLIAMS is under /home/, and therefore under /, which df lists as /dev/dm-1 with 13 GB available. Strange! :thinking:

This is why I was wondering whether /home/ is mounted differently on the worker nodes, or perhaps not mounted there at all, which would explain getting "no space left" on a drive that effectively doesn't exist.
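One way to check that theory (a sketch, assuming a SLURM cluster; substitute the path your job actually writes to for the stand-in below):

```shell
# Compare how a path is mounted on the login node vs. a compute node.
# If /home is absent or full on the worker, writes there fail with
# ENOSPC even though the login node shows plenty of free space.
df -h /                      # view from the current node ("/" as a stand-in path)
# srun --pty df -h /home    # same check dispatched to a worker node (SLURM only)
```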

1 Like

This topic was automatically closed 31 days after the last reply. New replies are no longer allowed.