Error encountered while running DADA2

Hi, I am trying to do sequence quality control on a few of my sequences using DADA2. I have no prior experience with this.
The command I used is as follows:
qiime dada2 denoise-paired --i-demultiplexed-seqs demux.qza --p-trim-left-f 20 --p-trim-left-r 20 --p-trunc-len-f 100 --p-trunc-len-r 100 --p-n-threads 12 --o-representative-sequences rep-seqs.qza --o-table table.qza

But I am getting an error that says:
Plugin error from dada2:

An error was encountered while running DADA2 in R (return code 1), please inspect stdout and stderr to learn more.

Debug info has been saved to /tmp/qiime2-q2cli-err-uzeztcgz.log

On checking the log, it shows the following:
Running external command line application(s). This may print messages to stdout and/or stderr.
The command(s) being run are below. These commands cannot be manually re-run as they will depend on temporary files that no longer exist.

Command: run_dada_paired.R /tmp/tmp0fmgeqwf/forward /tmp/tmp0fmgeqwf/reverse /tmp/tmp0fmgeqwf/output.tsv.biom /tmp/tmp0fmgeqwf/filt_f /tmp/tmp0fmgeqwf/filt_r 100 100 20 20 2.0 2 consensus 1.0 12 1000000

R version 3.4.1 (2017-06-30)
Loading required package: Rcpp
DADA2 R package version: 1.6.0

1) Filtering
Error in filterAndTrim(unfiltsF, filtsF, unfiltsR, filtsR, truncLen = c(truncLenF, :
  These are the errors (up to 5) encountered in individual cores...
  Error : cannot allocate vector of size 95.4 Mb
  Error : cannot allocate vector of size 95.4 Mb
Execution halted
Traceback (most recent call last):
  File "/home/qiime2/miniconda/envs/qiime2-2018.2/lib/python3.5/site-packages/q2_dada2/", line 179, in denoise_paired
  File "/home/qiime2/miniconda/envs/qiime2-2018.2/lib/python3.5/site-packages/q2_dada2/", line 35, in run_commands, check=True)
  File "/home/qiime2/miniconda/envs/qiime2-2018.2/lib/python3.5/", line 398, in run
    output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command '['run_dada_paired.R', '/tmp/tmp0fmgeqwf/forward', '/tmp/tmp0fmgeqwf/reverse', '/tmp/tmp0fmgeqwf/output.tsv.biom', '/tmp/tmp0fmgeqwf/filt_f', '/tmp/tmp0fmgeqwf/filt_r', '100', '100', '20', '20', '2.0', '2', 'consensus', '1.0', '12', '1000000']' returned non-zero exit status 1

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/qiime2/miniconda/envs/qiime2-2018.2/lib/python3.5/site-packages/q2cli/", line 246, in call
    results = action(**arguments)
  File "", line 2, in denoise_paired
  File "/home/qiime2/miniconda/envs/qiime2-2018.2/lib/python3.5/site-packages/qiime2/sdk/", line 228, in bound_callable
    output_types, provenance)
  File "/home/qiime2/miniconda/envs/qiime2-2018.2/lib/python3.5/site-packages/qiime2/sdk/", line 363, in callable_executor
    output_views = self._callable(**view_args)
  File "/home/qiime2/miniconda/envs/qiime2-2018.2/lib/python3.5/site-packages/q2_dada2/", line 194, in denoise_paired
    " and stderr to learn more." % e.returncode)
Exception: An error was encountered while running DADA2 in R (return code 1), please inspect stdout and stderr to learn more.

Before this, I had imported the sequence data using the command:
qiime tools import \
  --type 'SampleData[PairedEndSequencesWithQuality]' \
  --input-path pe-64-manifest \
  --output-path paired-end-demux.qza \
  --source-format PairedEndFastqManifestPhred33
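For context, the manifest file passed to --input-path is a CSV in the PairedEndFastqManifestPhred33 format: one row per read file, with the direction marked forward or reverse. The sketch below uses placeholder sample IDs and paths, not my real files:

```shell
# Write a minimal example manifest (placeholder data, not the real files)
# in the PairedEndFastqManifestPhred33 format: a CSV header followed by
# one row per fastq file, each tagged 'forward' or 'reverse'.
cat > pe-example-manifest <<'EOF'
sample-id,absolute-filepath,direction
sample-1,/path/to/sample-1_R1.fastq.gz,forward
sample-1,/path/to/sample-1_R2.fastq.gz,reverse
EOF
```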

Hello Kiran,

Welcome to the forums!

I'm glad you posted your full error text, including that log file. Inside the log file, I think I found the main error:

> Error : cannot allocate vector of size 95.4 Mb

Looks like you might be running out of RAM. How much RAM / memory does your computer or Virtual Machine have?
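If you're not sure how much memory the VM itself can see, a quick check from inside the VM (this is a standard Linux utility, so it should be available in the QIIME VM):

```shell
# Report total, used, and available memory in human-readable units.
# The 'Mem:' row shows how much RAM the guest OS actually has.
free -h
```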



Thanks for your reply @colinbrislawn
The RAM of my virtual machine was 2 GB, and the RAM of the laptop on which the QIIME VirtualBox is installed is 8 GB. I guess I shall have to increase the storage space. Also, other than storage space, what are the potential places where I could have gone wrong (from the error message in the log file that I posted)? Would it be possible for you to tell?

Hello Kiran,

Increasing the RAM allocated to the VM to, say, 6 GB would be a good start. If that produces the same problem, you could try decreasing --p-n-reads-learn from its default of 1,000,000 to something like --p-n-reads-learn 100000.
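As a sketch, that could look like the commands below. The VM name "qiime2" is a placeholder (check yours with `VBoxManage list vms`), the VM must be powered off before resizing, and the denoise command just repeats your original parameters with the smaller training set added:

```shell
# On the host, with the VM powered off: allocate 6 GB (6144 MB) of RAM.
# "qiime2" is a placeholder VM name -- list yours with: VBoxManage list vms
VBoxManage modifyvm "qiime2" --memory 6144

# Inside the VM: the same command as before, plus a smaller --p-n-reads-learn
qiime dada2 denoise-paired \
  --i-demultiplexed-seqs demux.qza \
  --p-trim-left-f 20 --p-trim-left-r 20 \
  --p-trunc-len-f 100 --p-trunc-len-r 100 \
  --p-n-threads 12 \
  --p-n-reads-learn 100000 \
  --o-representative-sequences rep-seqs.qza \
  --o-table table.qza
```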

Just a note here, I think your storage space / disk space is fine! It’s the RAM / memory that’s running low, and may need to be increased.

Nope! We just get to solve one problem at a time, and keep making progress. :slight_smile:

Keep up the good work. Let me know what you find.


This topic was automatically closed 31 days after the last reply. New replies are no longer allowed.