Yes, I moved forward, but not too far: just one step beyond. I tried to run DADA2 with the options below, since I work with the 515F/806R primer combination and I found in the forum that these options should be fine. However, I got an error message (please see the attached files, which are screenshots of my computer). I also attach the quality plots.
qiime dada2 denoise-paired
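(The exact options were not quoted in the post; for 515F/806R paired-end reads, a typical invocation might look something like the sketch below. All file names are placeholders, and the truncation lengths are assumptions that should be chosen from your own quality plots.)

```shell
# Hypothetical sketch -- file names and truncation lengths are placeholders.
qiime dada2 denoise-paired \
  --i-demultiplexed-seqs demux.qza \
  --p-trim-left-f 0 \
  --p-trim-left-r 0 \
  --p-trunc-len-f 240 \
  --p-trunc-len-r 200 \
  --o-representative-sequences rep-seqs.qza \
  --o-table table.qza \
  --o-denoising-stats denoising-stats.qza
```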
Could you help me figure out what is going on? Many thanks in advance!
I’m glad you got your data imported. Looks like you are running out of memory when running the dada2 denoising step.
How much RAM / memory does your computer have?
How much RAM / memory does your VM have?
We might be able to solve this problem just by increasing the amount of memory given to your VM.
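If you are using the QIIME 2 VirtualBox image, the VM's memory can be raised from the host command line while the VM is powered off. (The VM name below is an assumption; check what yours is actually called first.)

```shell
# Assumes VirtualBox; "QIIME 2 Core" is a placeholder VM name.
VBoxManage list vms                                # find your VM's actual name
VBoxManage modifyvm "QIIME 2 Core" --memory 4096   # give the VM 4 GB of RAM (value is in MB)
```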
Another option is to pass a smaller value for --p-n-reads-learn. The default is 1000000, so passing --p-n-reads-learn 200000 might let you run with the memory you have.
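For example, the flag just slots into the existing command (input/output file names and truncation lengths here are placeholders):

```shell
# Sketch only -- substitute your own files and truncation lengths.
qiime dada2 denoise-paired \
  --i-demultiplexed-seqs demux.qza \
  --p-trunc-len-f 240 \
  --p-trunc-len-r 200 \
  --p-n-reads-learn 200000 \
  --o-representative-sequences rep-seqs.qza \
  --o-table table.qza \
  --o-denoising-stats denoising-stats.qza
```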
I hope this helps. Keep up the good work,
Shot in the dark: I encountered this type of error using our Linux system because I was not letting my job have enough time to complete the task.
I have 8 GB of RAM in my computer and I use 2 GB for the VM. I tried to run the DADA2 command after increasing the RAM of the VM (trying 4, 5, and 6 GB), but I got an error message saying that Windows did not have enough memory left, and QIIME 2 stopped. Finally I decided to go back to 2 GB of RAM and I used the --p-n-reads-learn option as you suggested. QIIME 2 was running for almost 24 hours, and then I got the error message and the log file that I attach here: qiime2-q2cli-err-5te7x05_.log.txt. Please, any idea of what is going wrong? Many thanks in advance.
Many thanks for your input. I let QIIME 2 run overnight. All the other programs were shut down to avoid interference with the use of RAM.
I like all the methods you are trying and think you are on the right track. Both using more RAM for your VM and closing down other programs to reduce memory usage will be needed. Combine this with a smaller training data set, say --p-n-reads-learn 100000, and this could work for you.
I wish I had an exact number for you, but a bit of trial and error is needed here. Keep the VM RAM high, and try smaller values of --p-n-reads-learn until it works.
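That trial-and-error loop can be sketched in the shell like this (all file names are placeholders; a failed qiime run exits non-zero, so the loop stops at the first value that succeeds):

```shell
# Try progressively smaller training-set sizes until one fits in memory.
for n in 500000 250000 100000 50000; do
  qiime dada2 denoise-paired \
    --i-demultiplexed-seqs demux.qza \
    --p-trunc-len-f 240 \
    --p-trunc-len-r 200 \
    --p-n-reads-learn "$n" \
    --o-representative-sequences rep-seqs.qza \
    --o-table table.qza \
    --o-denoising-stats denoising-stats.qza \
    && echo "succeeded with --p-n-reads-learn $n" && break
done
```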
Let me know what you find. Having a reference would be helpful for future QIIME 2 users.
Many thanks for your reply. When you say "a smaller data set", you mean just reducing --p-n-reads-learn to 100000, right? Not reducing the amount of total sequenced samples (considering these as my full data set).
Yep, I mean reducing the training data set using --p-n-reads-learn.
To speed this process up, you might consider renting a super-computer from Amazon and running qiime on it. This process is a little technical the first time you set it up, but can remove a major bottleneck in your analysis.
Seriously, have you considered renting a super-computer? I’ve highlighted three good ones on this list, and they cost under $2 an hour. For the price of running PCR on one sample, you could do all this analysis and get some results. https://www.ec2instances.info/?min_storage=20&selected=c3.8xlarge,h1.4xlarge,c3.4xlarge
I think @colinbrislawn has some great advice; I just wanted to jump in and emphasize @MartinLubell’s point that a -9 exit code can also come from killing the job prematurely and may have nothing to do with memory.
DADA2 can take a while on very large datasets. (One user’s dataset I was debugging with took almost a week, but that was before some speed improvements we’ve since made).
If you aren’t careful and your computer/VM goes to sleep, it may kill the processes that are running (causing the same -9 exit code).
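If you launch the job from a terminal, one way to protect it from a closed session is to run it detached with nohup (available in the Linux shell of the QIIME 2 virtual machine; file names below are placeholders). Note this does not stop the computer itself from sleeping, so also disable sleep in the OS power settings.

```shell
# Run detached so a closed terminal doesn't kill the job;
# progress goes to dada2.log instead of the screen.
nohup qiime dada2 denoise-paired \
  --i-demultiplexed-seqs demux.qza \
  --p-trunc-len-f 240 \
  --p-trunc-len-r 200 \
  --o-representative-sequences rep-seqs.qza \
  --o-table table.qza \
  --o-denoising-stats denoising-stats.qza \
  > dada2.log 2>&1 &
```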
I will first try to solve the problem on my own computer or using other available computers in our research group. But it is good to know that this option exists. I will discuss it within our group.
Today I noticed this issue. QIIME 2 was running with --p-n-reads-learn 100000 and 3 GB of memory for 24 hours, and then the computer restarted automatically. I will try decreasing --p-n-reads-learn and increasing RAM. Many thanks.
This topic was automatically closed 31 days after the last reply. New replies are no longer allowed.