Missing plugin(s) necessary to load Artifact error

I am receiving the error message:

"It looks like you have an Artifact but are missing the plugin(s) necessary to load it. Artifact has type 'FeatureData[Sequence]' and format 'DNASequencesDirectoryFormat'".

When I enter this command:

$ qiime feature-table summarize --i-table RP4_representative-sequences.qza --o-visualization RP4_test-vis.qza

This Artifact type and format are what the feature-table summarize command expects, though, and I get essentially the same error with any other command I try. I have been able to run these commands successfully on the same files on another computer, but I recently switched to a virtual machine, so I expect it has something to do with that. I am running QIIME 2 version 2024.2.0 through conda.

Here is the full text I receive after entering the command:

Traceback (most recent call last):
  File "/home/zwl1/miniconda3/envs/qiime2-amplicon-2024.2/lib/python3.8/site-packages/q2cli/util.py", line 492, in _load_input_file
    artifact = qiime2.sdk.Result.load(fp)
  File "/home/zwl1/miniconda3/envs/qiime2-amplicon-2024.2/lib/python3.8/site-packages/qiime2/sdk/result.py", line 74, in load
    cache = get_cache()
  File "/home/zwl1/miniconda3/envs/qiime2-amplicon-2024.2/lib/python3.8/site-packages/qiime2/core/cache.py", line 113, in get_cache
    _CACHE.temp_cache = Cache()
  File "/home/zwl1/miniconda3/envs/qiime2-amplicon-2024.2/lib/python3.8/site-packages/qiime2/core/cache.py", line 379, in new
    path = _get_temp_path()
  File "/home/zwl1/miniconda3/envs/qiime2-amplicon-2024.2/lib/python3.8/site-packages/qiime2/core/cache.py", line 158, in _get_temp_path
    raise ValueError(f"Directory '{cache_dir}' already exists without "
ValueError: Directory '/home/zwl1/mymount/ls58/Zach/data/qiime2' already exists without proper permissions '0o41777' set. Current permissions are '0o40755.' This most likely means something other than QIIME 2 created the directory '/home/zwl1/mymount/ls58/Zach/data/qiime2' or QIIME 2 failed between creating '/home/zwl1/mymount/ls58/Zach/data/qiime2' and setting permissions on it.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/home/zwl1/miniconda3/envs/qiime2-amplicon-2024.2/lib/python3.8/site-packages/q2cli/click/type.py", line 116, in _convert_input
    result, error = q2cli.util._load_input(value)
  File "/home/zwl1/miniconda3/envs/qiime2-amplicon-2024.2/lib/python3.8/site-packages/q2cli/util.py", line 397, in _load_input
    artifact, error = _load_input_file(fp)
  File "/home/zwl1/miniconda3/envs/qiime2-amplicon-2024.2/lib/python3.8/site-packages/q2cli/util.py", line 498, in _load_input_file
    raise ValueError(
ValueError: It looks like you have an Artifact but are missing the plugin(s) necessary to load it. Artifact has type 'FeatureData[Sequence]' and format 'DNASequencesDirectoryFormat'

There was a problem loading 'RP4_representative-sequences.qza' as an artifact:

It looks like you have an Artifact but are missing the plugin(s) necessary to load it. Artifact has type 'FeatureData[Sequence]' and format 'DNASequencesDirectoryFormat'

See above for debug info.

Thanks,
Zach

Hello @Zach_LaTurner, do you share this computer with other people? If so, can you run ls -al /home/zwl1/mymount/ls58/Zach/data/qiime2 and post the contents here? If not, can you please remove the /home/zwl1/mymount/ls58/Zach/data/qiime2 directory and try to rerun the command? Thank you.

I do not share it with other people. There are two ways I can access the virtual machine, though: one is through a GUI, and the other is by logging in via the command line.

If I remove that directory, I still get the same error. Possibly related: I was having memory issues on the virtual machine when I was running the feature-classifier command. The solution I found online was to redirect the temporary files to a mounted server. The errors I am describing now started directly after that change, but the memory issues stopped once I made it. Sorry if my terminology doesn't make sense; I am not a programmer by training.
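In case it helps, the redirection I set up was roughly along these lines (just a sketch, since I am reconstructing the exact line from memory and may have put it in a different file):

$ # Point temporary files at the mounted server instead of the local disk
$ export TMPDIR=/home/zwl1/mymount/ls58/Zach/data

which would explain why QIIME 2 is now trying to use /home/zwl1/mymount/ls58/Zach/data/qiime2 as its temp cache.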


Yes, that probably has something to do with it. Can you try running chmod 1777 /home/zwl1/mymount/ls58/Zach/data/qiime2, then stat /home/zwl1/mymount/ls58/Zach/data/qiime2, and post the results here?

I suspect something about the way your temp location is mounted is preventing us from setting the permissions we want.

Ok here are the results:

File: /home/zwl1/mymount/ls58/Zach/data/qiime2
Size: 0 Blocks: 0 IO Block: 1048576 directory
Device: 2fh/47d Inode: 13200470720 Links: 2
Access: (0755/drwxr-xr-x) Uid: ( 1000/ zwl1) Gid: ( 1000/ zwl1)
Context: system_u:object_r:cifs_t:s0
Access: 2024-04-01 14:38:51.272362000 -0500
Modify: 2024-04-01 14:38:51.272362000 -0500
Change: 2024-04-01 14:38:51.272362000 -0500
Birth: 2024-04-01 14:38:51.272362000 -0500

That should have set the permissions on the directory to 1777, but you can see under "Access" they are still 0755. I suspect there is some kind of permissions mask on the mount that is preventing the permissions from being set. I don't have much experience with this, but can you look back at how you created that mount and see if there is a way to adjust the permissions?
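For reference, the cifs_t context in your stat output suggests this is a CIFS/SMB mount, and those usually take fixed permissions from options given at mount time rather than honoring chmod afterwards. Just as a sketch (the server path and option values here are placeholders, so your real mount command or fstab entry will look different), the relevant options would be something like:

$ # Hypothetical example: remount with explicit ownership and open directory
$ # permissions so the permission bits QIIME 2 needs can actually be applied
$ sudo mount -t cifs //fileserver/ls58 /home/zwl1/mymount/ls58 \
      -o uid=1000,gid=1000,file_mode=0777,dir_mode=0777

Your IT department should be able to tell you whether something like that is possible on their end.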

It's not clear to me how to change the permissions. I think I did specify read and write capabilities when I set up the mount. I will contact our IT department to see what they say.


Would you happen to know how much temporary storage space the feature-classifier classify-sklearn command might need? IT is going to increase my storage space to try to accommodate it so I don't have to reroute the temporary storage.

Unfortunately, I do not have a good estimate of that. It's going to depend on how large your data is, but I couldn't tell you exactly how your input size will map to the temp storage used.
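If you want a rough number for your own data, one thing you could do (just a sketch; it assumes QIIME 2's cache ends up at $TMPDIR/qiime2, or /tmp/qiime2 if TMPDIR is not set) is watch how big that directory gets while classify-sklearn is running:

$ # Report the size of the QIIME 2 temp cache every 60 seconds while the
$ # classifier runs in another terminal
$ watch -n 60 du -sh "${TMPDIR:-/tmp}/qiime2"

The largest number you see during the run should give IT a ballpark for how much space to provision.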
