Error when using DADA2 (return code 1)

Hi there,

I'm trying to perform an alpha diversity analysis following the "Moving Pictures" tutorial, which says to use DADA2 to build a feature table. When I do, I receive the error message shown below.

(qiime2-2022.11) fa00899@ssh02:~/Documents/Test/raw_read_files/7HB$ qiime dada2 denoise-single --i-demultiplexed-seqs 7HBDemux.qza --p-trunc-len 0 --o-table 7HBTable.qza --o-representative-sequences 7HBRepresentativeSequences.qza --o-denoising-stats 7HBDenoiseStats.qza
Plugin error from dada2:

An error was encountered while running DADA2 in R (return code 1), please inspect stdout and stderr to learn more.

Debug info has been saved to /tmp/qiime2-q2cli-err-n27cy5g5.log

And the error message is:

(qiime2-2022.11) fa00899@ssh02:~/Documents/Test/raw_read_files/7HB$ cat /tmp/qiime2-q2cli-err-w618n7xs.log
Running external command line application(s). This may print messages to stdout and/or stderr.
The command(s) being run are below. These commands cannot be manually re-run as they will depend on temporary files that no longer exist.

Command: run_dada.R --input_directory /tmp/qiime2/fa00899/data/eb869357-315f-4370-85c0-4d3d9f2e5123/data --output_path /tmp/tmpbr4gobd6/output.tsv.biom --output_track /tmp/tmpbr4gobd6/track.tsv --filtered_directory /tmp/tmpbr4gobd6 --truncation_length 0 --trim_left 0 --max_expected_errors 2.0 --truncation_quality_score 2 --max_length Inf --pooling_method independent --chimera_method consensus --min_parental_fold 1.0 --allow_one_off False --num_threads 1 --learn_min_reads 1000000 --homopolymer_gap_penalty NULL --band_size 16

R version 4.2.2 (2022-10-31)
Loading required package: Rcpp
DADA2: 1.26.0 / Rcpp: 1.0.9 / RcppParallel: 5.1.6
2) Filtering .
3) Learning Error Rates
17782172 total bases in 54338 reads from 1 samples will be used for learning the error rates.
Error rates could not be estimated (this is usually because of very few reads).
Error in getErrors(err, enforce = TRUE) : Error matrix is NULL.
6: stop("Error matrix is NULL.")
5: getErrors(err, enforce = TRUE)
4: dada(drps, err = NULL, errorEstimationFunction = errorEstimationFunction,
selfConsist = TRUE, multithread = multithread, verbose = verbose,
MAX_CONSIST = MAX_CONSIST, OMEGA_C = OMEGA_C, ...)
3: learnErrors(filts, nreads = nreads.learn, multithread = multithread,
HOMOPOLYMER_GAP_PENALTY = HOMOPOLYMER_GAP_PENALTY, BAND_SIZE = BAND_SIZE)
2: withCallingHandlers(expr, warning = function(w) if (inherits(w,
classes)) tryInvokeRestart("muffleWarning"))
1: suppressWarnings(learnErrors(filts, nreads = nreads.learn, multithread = multithread,
HOMOPOLYMER_GAP_PENALTY = HOMOPOLYMER_GAP_PENALTY, BAND_SIZE = BAND_SIZE))
Traceback (most recent call last):
File "/user/HS501/fa00899/miniconda3/envs/qiime2-2022.11/lib/python3.8/site-packages/q2_dada2/_denoise.py", line 220, in _denoise_single
run_commands([cmd])
File "/user/HS501/fa00899/miniconda3/envs/qiime2-2022.11/lib/python3.8/site-packages/q2_dada2/_denoise.py", line 36, in run_commands
subprocess.run(cmd, check=True)
File "/user/HS501/fa00899/miniconda3/envs/qiime2-2022.11/lib/python3.8/subprocess.py", line 516, in run
raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command '['run_dada.R', '--input_directory', '/tmp/qiime2/fa00899/data/eb869357-315f-4370-85c0-4d3d9f2e5123/data', '--output_path', '/tmp/tmpbr4gobd6/output.tsv.biom', '--output_track', '/tmp/tmpbr4gobd6/track.tsv', '--filtered_directory', '/tmp/tmpbr4gobd6', '--truncation_length', '0', '--trim_left', '0', '--max_expected_errors', '2.0', '--truncation_quality_score', '2', '--max_length', 'Inf', '--pooling_method', 'independent', '--chimera_method', 'consensus', '--min_parental_fold', '1.0', '--allow_one_off', 'False', '--num_threads', '1', '--learn_min_reads', '1000000', '--homopolymer_gap_penalty', 'NULL', '--band_size', '16']' returned non-zero exit status 1.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/user/HS501/fa00899/miniconda3/envs/qiime2-2022.11/lib/python3.8/site-packages/q2cli/commands.py", line 352, in call
results = action(**arguments)
File "", line 2, in denoise_single
File "/user/HS501/fa00899/miniconda3/envs/qiime2-2022.11/lib/python3.8/site-packages/qiime2/sdk/action.py", line 234, in bound_callable
outputs = self.callable_executor(scope, callable_args,
File "/user/HS501/fa00899/miniconda3/envs/qiime2-2022.11/lib/python3.8/site-packages/qiime2/sdk/action.py", line 381, in callable_executor
output_views = self._callable(**view_args)
File "/user/HS501/fa00899/miniconda3/envs/qiime2-2022.11/lib/python3.8/site-packages/q2_dada2/_denoise.py", line 244, in denoise_single
return _denoise_single(
File "/user/HS501/fa00899/miniconda3/envs/qiime2-2022.11/lib/python3.8/site-packages/q2_dada2/_denoise.py", line 229, in _denoise_single
raise Exception("An error was encountered while running DADA2"
Exception: An error was encountered while running DADA2 in R (return code 1), please inspect stdout and stderr to learn more.

What can I do to fix this? I assume my reads are too short? But I haven't truncated them, because I set --p-trunc-len to 0.

Thanks

@Fazaar1889,

Have you tried running it but leaving --p-trunc-len off? Truncating will cut all reads to no longer than the specified truncation length, and if they end up with a length of zero, then no output can be produced.
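For reference, here is a sketch of the same call with an explicit (non-zero) truncation length; 150 is only an illustrative value, and the right number depends on your read length and quality profile:

qiime dada2 denoise-single --i-demultiplexed-seqs 7HBDemux.qza --p-trunc-len 150 --o-table 7HBTable.qza --o-representative-sequences 7HBRepresentativeSequences.qza --o-denoising-stats 7HBDenoiseStats.qza

Keep in mind that reads shorter than the truncation length are discarded entirely, so a value longer than most of your reads will leave very little data for DADA2 to work with.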


I believe that --p-trunc-len is a required parameter. Setting it to 0 won't truncate anything. I may be wrong! I can try next time I have access. If it is required, what length do you suggest I put in? I know it's dependent on my sample, but how should I choose a value based on my sample?

@Fazaar1889, you are right: setting it to 0 won't truncate any reads; not sure how I forgot that :crazy_face: How long are your reads, and what primers are you using? It sounds like the reads may not be long enough to get enough overlap to merge. If this turns out not to be the issue, I will have to chat with some of the other developers about what is going on and get back to you.
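One way to check read lengths and per-base quality (and so pick a sensible truncation length) is the demux summary visualization; a sketch assuming the demultiplexed artifact from your original command (the output name here is just an example):

qiime demux summarize --i-data 7HBDemux.qza --o-visualization 7HBDemux.qzv

The resulting .qzv can be opened at https://view.qiime2.org and shows interactive quality plots along with a summary of read lengths.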

Hi Keegan, thanks for getting back to me. You're probably right that my reads are too short. It turns out I didn't actually need to do this specific analysis. Thanks for the help though! Really appreciate it!
