DADA2 return code 1 error

Hello everyone, I’ve been having some trouble filtering my sequences using the dada2 command. On the first run, I got back an error message that read:

"Plugin error from dada2:

An error was encountered while running DADA2 in R (return code 1), please inspect stdout and stderr to learn more.

Debug info has been saved to /tmp/qiime2-q2cli-err-mth5g2nm.log"

We used the following command:

qiime dada2 denoise-paired --i-demultiplexed-seqs demux-paired-end.qza --p-trim-left-f 0 --p-trim-left-r 0 --p-trunc-len-f 149 --p-trunc-len-r 149 --p-n-threads 0 --p-max-ee 2.0 --p-trunc-q 2 --p-chimera-method 'consensus' --o-table table.qza --o-denoising-stats stats.qza --o-representative-sequences rep-seqs.qza

When running with the --verbose tag, we got back the following message:

"Command: run_dada_paired.R /tmp/tmpiyukm72i/forward /tmp/tmpiyukm72i/reverse /tmp/tmpiyukm72i/output.tsv.biom /tmp/tmpiyukm72i/track.tsv /tmp/tmpiyukm72i/filt_f /tmp/tmpiyukm72i/filt_r 149 149 0 0 2.0 2 consensus 1.0 0 1000000

R version 3.5.1 (2018-07-02)
Loading required package: Rcpp
DADA2: 1.10.0 / Rcpp: 1.0.1 / RcppParallel: 4.4.2

  1. Filtering …
  2. Learning Error Rates
    150386892 total bases in 1009308 reads from 50 samples will be used for learning the error rates.
    150386892 total bases in 1009308 reads from 50 samples will be used for learning the error rates.
  3. Denoise remaining samples …
  4. Remove chimeras (method = consensus)
    Error in isBimeraDenovoTable(unqs[[i]], …, verbose = verbose) :
    Input must be a valid sequence table.
    Calls: removeBimeraDenovo -> isBimeraDenovoTable
    Execution halted
    Traceback (most recent call last):
    File "/home/sjhustad/miniconda3/envs/qiime2-2019.4/lib/python3.6/site-packages/q2_dada2/", line 231, in denoise_paired
    File "/home/sjhustad/miniconda3/envs/qiime2-2019.4/lib/python3.6/site-packages/q2_dada2/", line 36, in run_commands, check=True)
    File "/home/sjhustad/miniconda3/envs/qiime2-2019.4/lib/python3.6/", line 418, in run
    output=stdout, stderr=stderr)
    subprocess.CalledProcessError: Command '['run_dada_paired.R', '/tmp/tmpiyukm72i/forward', '/tmp/tmpiyukm72i/reverse', '/tmp/tmpiyukm72i/output.tsv.biom', '/tmp/tmpiyukm72i/track.tsv', '/tmp/tmpiyukm72i/filt_f', '/tmp/tmpiyukm72i/filt_r', '149', '149', '0', '0', '2.0', '2', 'consensus', '1.0', '0', '1000000']' returned non-zero exit status 1.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/home/sjhustad/miniconda3/envs/qiime2-2019.4/lib/python3.6/site-packages/q2cli/", line 311, in call
results = action(**arguments)
File "</home/sjhustad/miniconda3/envs/qiime2-2019.4/lib/python3.6/site-packages/>", line 2, in denoise_paired
File "/home/sjhustad/miniconda3/envs/qiime2-2019.4/lib/python3.6/site-packages/qiime2/sdk/", line 231, in bound_callable
output_types, provenance)
File "/home/sjhustad/miniconda3/envs/qiime2-2019.4/lib/python3.6/site-packages/qiime2/sdk/", line 365, in callable_executor
output_views = self._callable(**view_args)
File "/home/sjhustad/miniconda3/envs/qiime2-2019.4/lib/python3.6/site-packages/q2_dada2/", line 246, in denoise_paired
" and stderr to learn more." % e.returncode)
Exception: An error was encountered while running DADA2 in R (return code 1), please inspect stdout and stderr to learn more.

Plugin error from dada2:

An error was encountered while running DADA2 in R (return code 1), please inspect stdout and stderr to learn more.

See above for debug info."

We are sure that our samples are demultiplexed and that the forward and reverse reads are not identical, as some others reported in similar threads. We have 279 samples with forward and reverse barcodes that were imported using the Casava format. I have been using qiime2-2019.4. If anyone has any insight I would greatly appreciate it!

The more samples you are running, the more memory is consumed. Maybe you don’t have enough RAM. Did you try running it without specifying the threads parameter at all?

I did not, but the command that we had there is actually just the default command, so I can’t imagine that it would take much more memory than normal. I actually misspoke and added up the forward and reverse files, so there are actually half that amount (with one extra file that was taken out before import). We’ve also used these same parameters in the past with the same number of samples, albeit with fewer reads per sample.

Hi! In your command you are providing --p-n-threads 0.

In this case you are using all available threads; it’s faster, but it consumes more memory and may cause the error you observed. Providing 1 (or another small number) can reduce RAM usage but increases the overall runtime.
Even processing 100 samples may take a lot of RAM.
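As an illustration, here is the same command from the original post with only the threads parameter changed to a single thread (all other parameters untouched); this is a sketch of the suggestion above, not a guaranteed fix:

```shell
# Same denoise command as above, but with --p-n-threads 1
# (single-threaded: slower, but lighter on RAM).
qiime dada2 denoise-paired \
  --i-demultiplexed-seqs demux-paired-end.qza \
  --p-trim-left-f 0 --p-trim-left-r 0 \
  --p-trunc-len-f 149 --p-trunc-len-r 149 \
  --p-n-threads 1 \
  --p-max-ee 2.0 --p-trunc-q 2 \
  --p-chimera-method 'consensus' \
  --o-table table.qza \
  --o-denoising-stats stats.qza \
  --o-representative-sequences rep-seqs.qza
```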

That’s also an important difference

The issue is that your sequences are all being filtered out prior to chimera checking, probably because they are all failing to merge.

What gene/primers are you targeting, and what is the expected amplicon length? You need at least 12 nt of overlap for dada2 to allow merging (20 nt in earlier releases)… 149 + 149 - 12 = 286 might be insufficient?
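The arithmetic above can be checked in the shell (12 nt is the minimum overlap mentioned above; compare the result against your expected amplicon length):

```shell
# Longest amplicon that can still merge from two 149 nt truncated
# reads, given DADA2's 12 nt minimum overlap requirement.
fwd=149
rev=149
min_overlap=12
max_amplicon=$(( fwd + rev - min_overlap ))
echo "$max_amplicon"   # prints 286
```

Any amplicon longer than this value leaves the read pair with too little overlap to merge.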

If adjusting your trimming parameters does not fix this issue, I recommend installing the latest version of QIIME 2 (a new release is expected later this month).

I am not aware of cases where the isBimeraDenovoTable error is due to memory — if you know of one, please share the link @timanix! Memory errors in dada2 usually return different error codes from what we are seeing here. Thanks!

We are trying to amplify a region of the 16S rRNA gene (V4) using Illumina MiniSeq primers. I believe the specific region we are after is around 250 bp. We were worried that the quality of our sequences was lower than normal; do you think that this could be contributing to their inability to merge? I will try to rerun the command with the suggested alterations to the trimming parameters, but as I’m working with 15.9 GB of RAM, I’m not sure how effective that will be. Will post any results!

V4 is usually longer (~290 nt) but it depends on the specific primers you are using. 149 + 149 - 12 = 286 is going to be just a little too short… perhaps try truncating just a little less to see if you can squeeze out some more reads.

Certainly could contribute! But I think truncation is the main source of trouble — the error you are encountering seems to suggest that all reads are being filtered out at the merging step, whereas low quality would cause reads to drop at the pre-filtering step (prior to denoising).
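Once a run completes far enough to write stats.qza, one way to see exactly where reads are being lost (filtering vs. merging vs. chimera removal) is to tabulate the denoising stats and inspect the per-sample read counts:

```shell
# Turn the denoising stats artifact into a viewable table; the
# columns show how many reads survive each DADA2 stage per sample.
qiime metadata tabulate \
  --m-input-file stats.qza \
  --o-visualization stats.qzv
```

The resulting stats.qzv can be opened at view.qiime2.org; a sharp drop in the merged column relative to the denoised counts would confirm the overlap problem described above.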

Thank you for your feedback! The problem is that our reads are only 149 base pairs long in the forward or reverse direction, so there is actually no truncation occurring with those parameters. From what you have said, it seems like this is the root of the problem.

As a side note, running the command with fewer parameters returned the same error message, so it seems that the possible problem of not having enough memory can be ruled out.

Oh indeed yes — the problem is that you do not have enough read length to overlap and merge the reads into a longer sequence. I recommend working with only the forward reads.
