dada2: An error was encountered while running DADA2 in R (return code 1), please inspect stdout and stderr to learn more.

Hello All,
I have read the similar topic that was posted before, but the suggested solutions didn't work for me, so I am creating this topic again.
The data is paired-end. The samples amplified at the expected size (390 bp). The QIIME 2 version is 2019.10.
First, I trimmed adapters from the demultiplexed reads:
qiime cutadapt trim-paired \
  --i-demultiplexed-sequences demultiplexed-seqs.qza \
  --p-front-f GTGTGCCAGCMGCCGCGGTAA \
  --p-error-rate 0 \
  --o-trimmed-sequences trimmed-seqs.qza \
  --verbose

qiime demux summarize \
  --i-data trimmed-seqs.qza \
  --o-visualization trimmed-seqs.qzv

Output visualizations are as follows.
[screenshot of the interactive quality plots from trimmed-seqs.qzv]

Then, I used DADA2 to denoise the data:
qiime dada2 denoise-paired \
  --i-demultiplexed-seqs trimmed-seqs.qza \
  --p-trunc-len-f 240 \
  --p-trunc-len-r 185 \
  --p-n-threads 10 \
  --o-table table-dada2.qza \
  --o-representative-sequences rep-seqs-dada2.qza \
  --o-denoising-stats denoising-stats-data2.qza \
  --verbose

Loading required package: Rcpp
Error in names(answer) <- names1 :
'names' attribute [75] must be the same length as the vector [68]
Execution halted
Traceback (most recent call last):
File "/share/apps/bio3user/miniconda3/envs/qiime2-2019.10/lib/python3.6/site-packages/q2_dada2/_denoise.py", line 257, in denoise_paired
run_commands([cmd])
File "/share/apps/bio3user/miniconda3/envs/qiime2-2019.10/lib/python3.6/site-packages/q2_dada2/_denoise.py", line 36, in run_commands
subprocess.run(cmd, check=True)
File "/share/apps/bio3user/miniconda3/envs/qiime2-2019.10/lib/python3.6/subprocess.py", line 418, in run
output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command '['run_dada_paired.R', '/tmp/tmp8aveqrj0/forward', '/tmp/tmp8aveqrj0/reverse', '/tmp/tmp8aveqrj0/output.tsv.biom', '/tmp/tmp8aveqrj0/track.tsv', '/tmp/tmp8aveqrj0/filt_f', '/tmp/tmp8aveqrj0/filt_r', '240', '185', '0', '0', '2.0', '2.0', '2', 'consensus', '1.0', '10', '1000000']' returned non-zero exit status 1.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/share/apps/bio3user/miniconda3/envs/qiime2-2019.10/lib/python3.6/site-packages/q2cli/commands.py", line 328, in call
results = action(**arguments)
File "</share/apps/bio3user/miniconda3/envs/qiime2-2019.10/lib/python3.6/site-packages/decorator.py:decorator-gen-459>", line 2, in denoise_paired
File "/share/apps/bio3user/miniconda3/envs/qiime2-2019.10/lib/python3.6/site-packages/qiime2/sdk/action.py", line 240, in bound_callable
output_types, provenance)
File "/share/apps/bio3user/miniconda3/envs/qiime2-2019.10/lib/python3.6/site-packages/qiime2/sdk/action.py", line 383, in callable_executor
output_views = self._callable(**view_args)
File "/share/apps/bio3user/miniconda3/envs/qiime2-2019.10/lib/python3.6/site-packages/q2_dada2/_denoise.py", line 272, in denoise_paired
" and stderr to learn more." % e.returncode)
Exception: An error was encountered while running DADA2 in R (return code 1), please inspect stdout and stderr to learn more.

Plugin error from dada2:

An error was encountered while running DADA2 in R (return code 1), please inspect stdout and stderr to learn more.

See above for debug info.
slurmstepd: error: Detected 1 oom-kill event(s) in step 19554325.batch cgroup. Some of your processes may have been killed by the cgroup out-of-memory handler.

I really appreciate your help.

Good afternoon Terren,

I think this is our most important clue, the slurmstepd oom-kill message at the end of your log:

So, it looks like you used up all the memory, and the Slurm job manager killed/canceled your DADA2 job.

One way to use less memory is to set --p-n-reads-learn to a lower value. It's 1 million by default; setting it to 10,000 should use less memory and still work well.

Try --p-n-reads-learn 10000 and see if the job can finish.
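With that flag added, the full command might look like this (same input and output names as in your post above):

```shell
qiime dada2 denoise-paired \
  --i-demultiplexed-seqs trimmed-seqs.qza \
  --p-trunc-len-f 240 \
  --p-trunc-len-r 185 \
  --p-n-threads 10 \
  --p-n-reads-learn 10000 \
  --o-table table-dada2.qza \
  --o-representative-sequences rep-seqs-dada2.qza \
  --o-denoising-stats denoising-stats-dada2.qza \
  --verbose
```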

Colin

Thanks. I added --p-n-reads-learn 10000, but it still failed. The .err file is as follows.

Loading required package: Rcpp
Duplicate sequences in merged output.
Duplicate sequences in merged output.
Duplicate sequences in merged output.
Traceback (most recent call last):
File "/share/apps/bio3user/miniconda3/envs/qiime2-2019.10/lib/python3.6/site-packages/q2_dada2/_denoise.py", line 257, in denoise_paired
run_commands([cmd])
File "/share/apps/bio3user/miniconda3/envs/qiime2-2019.10/lib/python3.6/site-packages/q2_dada2/_denoise.py", line 36, in run_commands
subprocess.run(cmd, check=True)
File "/share/apps/bio3user/miniconda3/envs/qiime2-2019.10/lib/python3.6/subprocess.py", line 418, in run
output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command '['run_dada_paired.R', '/tmp/tmpeaj4qqtc/forward', '/tmp/tmpeaj4qqtc/reverse', '/tmp/tmpeaj4qqtc/output.tsv.biom', '/tmp/tmpeaj4qqtc/track.tsv', '/tmp/tmpeaj4qqtc/filt_f', '/tmp/tmpeaj4qqtc/filt_r', '240', '185', '0', '0', '2.0', '2.0', '2', 'consensus', '1.0', '1', '10000']' died with <Signals.SIGKILL: 9>.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/share/apps/bio3user/miniconda3/envs/qiime2-2019.10/lib/python3.6/site-packages/q2cli/commands.py", line 328, in call
results = action(**arguments)
File "</share/apps/bio3user/miniconda3/envs/qiime2-2019.10/lib/python3.6/site-packages/decorator.py:decorator-gen-459>", line 2, in denoise_paired
File "/share/apps/bio3user/miniconda3/envs/qiime2-2019.10/lib/python3.6/site-packages/qiime2/sdk/action.py", line 240, in bound_callable
output_types, provenance)
File "/share/apps/bio3user/miniconda3/envs/qiime2-2019.10/lib/python3.6/site-packages/qiime2/sdk/action.py", line 383, in callable_executor
output_views = self._callable(**view_args)
File "/share/apps/bio3user/miniconda3/envs/qiime2-2019.10/lib/python3.6/site-packages/q2_dada2/_denoise.py", line 272, in denoise_paired
" and stderr to learn more." % e.returncode)
Exception: An error was encountered while running DADA2 in R (return code -9), please inspect stdout and stderr to learn more.

Plugin error from dada2:

An error was encountered while running DADA2 in R (return code -9), please inspect stdout and stderr to learn more.

See above for debug info.
slurmstepd: error: Detected 1 oom-kill event(s) in step 19572649.batch cgroup. Some of your processes may have been killed by the cgroup out-of-memory handler.


Interesting...

I wonder if this is a clue: the R process died with SIGKILL, and there is another oom-kill event in the Slurm log.

Are you submitting this to a Slurm queue, or running it directly on a laptop or desktop? Do you know how much memory the computer or worker node has?

Colin

Yes, I submitted it to a Slurm queue on a server. I do not know the memory of the worker node, but I have successfully run another script with --mem=8G. In this case, I used --mem=8000M but it failed.

Ah, OK! You should find out the maximum memory of the worker node and request an amount close to that. (8 GB is not going to be enough.)

Can you ssh into the worker nodes directly and use top to see total node memory? I’m sure the HPC people would also know, so maybe you can ask them.
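If you have ssh access, something like the following should show the memory; the exact Slurm setup varies by site, so treat these as a sketch:

```shell
# On the worker node itself: total and available RAM
free -h

# Or, from a login node, ask Slurm for each node's configured memory (in MB)
sinfo -N -o "%N %m"
```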

Once you know the total, this script should run just fine. We are so close!
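For reference, the memory request lives in the #SBATCH header of your submission script. A sketch might look like this (job name, memory, and time limit are placeholders to adjust for your node):

```shell
#!/bin/bash
#SBATCH --job-name=dada2
#SBATCH --cpus-per-task=10   # matches --p-n-threads
#SBATCH --mem=60G            # set this near the node's total memory
#SBATCH --time=48:00:00

# activate your QIIME 2 environment here, then run the same
# qiime dada2 denoise-paired command as before
```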

Colin

Thanks. I changed it to --mem=60000M, and the script has been running. But it has been a long time and it has not completed.

I’m glad you fixed the memory request. 60 GB is plenty.

DADA2 can take a while. How many threads and what --p-n-reads-learn value are you using?

--p-n-reads-learn 10000
It has been running for more than 24 hours.


Can you ssh into the node and use top to see what’s running?

24 hours is long… but not impossible for a large data set. How large are your input fastq files?
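A quick way to check is du; here reads_dir is a placeholder for wherever your demultiplexed fastq.gz files live (the mkdir is only so this sketch runs as-is):

```shell
READS_DIR=reads_dir
mkdir -p "$READS_DIR"   # use your real reads directory instead

# Total size of all input reads
du -sh "$READS_DIR"

# Largest files first
du -h "$READS_DIR"/*.fastq.gz 2>/dev/null | sort -rh | head
```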

Colin

This topic was automatically closed 31 days after the last reply. New replies are no longer allowed.