Problem analyzing a demultiplexed file

I encountered an error while trying to analyze a demultiplexed file. Attached are my command and the error message. My QIIME 2 is installed via conda.

Hi @oast!
I haven’t seen this error before, but it looks like maybe you don’t have enough memory to handle the size of your data. Can you tell us a little more about your computing environment? E.g. are you on a personal machine or a cluster? Are you using a virtual machine? How much memory does your system have available?

Best,
CK

I am using a virtual machine. It has up to 1 TB of memory available.

How much RAM have you allocated to the VM, @oast? I suspect memory, not storage, is the culprit here, and our VM images are set up to run with a relatively low memory allocation by default so they don’t break things for users with older machines.

The VM has 64 GB of RAM.

Thanks, @oast! Sorry if this seems pedantic, but I still suspect this is an out-of-memory error. Would you mind sharing a screen capture of your memory-allocation settings for the VM?

Best,
Chris


Thanks for the assistance.
Let me clarify some things: I am working from a personal Linux machine, but I connect to a remote machine virtually to run the data analysis.
That computer has 64 GB of RAM.

Cool! Let’s keep clarifying.

  1. What OS is running on the remote machine you are connecting to? (Not your local Linux machine - the machine that’s actually running QIIME 2.)
  2. Is QIIME 2 installed natively on that computer, or is it installed in a VM (e.g. VirtualBox)?
  3. Is the remote computer a “normal” computer with 64 GB RAM? (e.g. you are just connecting to a desktop in your office from home) OR is it something else - e.g. one node of an HPC cluster? A cloud compute system?
  4. Are you an admin of the remote machine, or are you connecting to a machine administered by someone else?

I’m asking all of these questions because resource-access permissions and device/process provisioning can restrict the amount of RAM available to a given user or command. E.g. when I run a command on our HPC cluster, the cluster has an enormous amount of RAM, but my command may only have access to the 16 GB I requested for it, and the 10 GB of storage in the directory where I’m storing my data.
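To make that concrete, here is a quick way to compare what the system actually has with what your session is allowed to use (a minimal sketch using standard Linux utilities; the numbers will of course differ on your machine):

```shell
# Total vs. currently available RAM, as the kernel reports it in /proc/meminfo
awk '/^MemTotal|^MemAvailable/ {printf "%s %.1f GB\n", $1, $2/1048576}' /proc/meminfo

# Resource limits for the current shell session; a capped value here can
# cause out-of-memory errors even on a machine with plenty of RAM
ulimit -v   # max virtual memory in KB ("unlimited" means no cap)
```

If `MemAvailable` is much lower than `MemTotal`, or `ulimit -v` reports a cap, that would explain a failure at ~3 GB despite 64 GB of installed RAM.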

  1. Ubuntu Linux operating system
  2. It was installed natively.
  3. It is a normal computer.
  4. I am an admin of the remote machine.

Alright, @oast, you’ve got me stumped. That error message is quite clear - the system on which you are trying to run that QIIME 2 command does not have access to the 3.x GB of memory it needs to do so.

Here are some suggestions:

  1. Double-check the remote system you’re running commands on. Make sure it does, in fact, have the resources you believe it does, and make sure that you are logging into it in a way that doesn’t somehow prevent you from using them.
  2. Try reducing the number and size of processes running on the remote machine before running your command. 64 GB is 4x the memory I have on my machine, and I’ve never had this issue. If everything checks out with system resources (above), it’s possible that other heavy processes are using up the system RAM. Try shutting down everything you safely can, and then re-run.
  3. Talk to someone on site with the machine. A sysadmin would be ideal, but anyone with access to it may be able to help you figure out what’s going on.
  4. Consider renting a cloud compute instance with adequate RAM.
  5. Consider whether disabling Golay error correction is a good choice for you. This will save you some RAM, but you may lose more reads, and it won’t fix the underlying memory shortage, so you might hit the same issue later when calculating diversity, assigning taxonomy, etc.
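For suggestions 1 and 2 above, here is one way to see what is already eating RAM on the remote machine (a sketch that reads `/proc` directly, so it works even on minimal systems without `ps` installed; the processes listed will be whatever happens to be running on your system):

```shell
# List the ten processes using the most resident memory (RSS, in MB),
# largest first, by reading each process's /proc/<pid>/status entry
for d in /proc/[0-9]*; do
  rss=$(awk '/^VmRSS/ {print $2}' "$d/status" 2>/dev/null)
  [ -n "$rss" ] && printf '%8d MB  %s\n' $((rss / 1024)) \
    "$(tr '\0' ' ' < "$d/cmdline" | cut -c1-60)"
done | sort -rn | head -n 10
```

If anything large and non-essential shows up near the top, stopping it before re-running your QIIME 2 command may be enough to get past the error.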

Good luck, and let us know how this turns out!
Chris
