Recurring error while running DADA2: return code -11

Hi, I'm trying to run DADA2 with `denoise-paired`, but I'm getting an error. My data are Illumina 16S V3-V4 amplicon reads. I've tried changing several parameters, since the forum suggested that might be the solution, but I haven't had any luck. Here's the command I'm running and the error I get:

qiime dada2 denoise-paired \
  --i-demultiplexed-seqs paired_end_demux_16S.qza \
  --p-trim-left-f 0 \
  --p-trim-left-r 0 \
  --p-trunc-len-f 220 \
  --p-trunc-len-r 220 \
  --p-max-ee-f 2 \
  --p-max-ee-r 2 \
  --p-trunc-q 2 \
  --p-n-threads 1 \
  --p-n-reads-learn 100000 \
  --o-table table_paired_end_16S.qza \
  --o-representative-sequences rep-seqs_paired_end_16S.qza \
  --o-denoising-stats denoising-stats_paired_end_16S.qza \
  --verbose
Running external command line application(s). This may print messages to stdout and/or stderr.
The command(s) being run are below. These commands cannot be manually re-run as they will depend on temporary files that no longer exist.

Command: run_dada.R --input_directory /tmp/tmp_7hsf77d/forward --input_directory_reverse /tmp/tmp_7hsf77d/reverse --output_path /tmp/tmp_7hsf77d/output.tsv.biom --output_track /tmp/tmp_7hsf77d/track.tsv --filtered_directory /tmp/tmp_7hsf77d/filt_f --filtered_directory_reverse /tmp/tmp_7hsf77d/filt_r --truncation_length 220 --truncation_length_reverse 220 --trim_left 0 --trim_left_reverse 0 --max_expected_errors 2 --max_expected_errors_reverse 2 --truncation_quality_score 2 --min_overlap 12 --pooling_method independent --chimera_method consensus --min_parental_fold 1.0 --allow_one_off False --num_threads 1 --learn_min_reads 100000

Warning message:
package ‘optparse’ was built under R version 4.2.3
R version 4.2.2 (2022-10-31)
Loading required package: Rcpp
DADA2: 1.26.0 / Rcpp: 1.0.11 / RcppParallel: 5.1.6
2) Filtering ....................................................................................................................................
3) Learning Error Rates
55306460 total bases in 251393 reads from 1 samples will be used for learning the error rates.
55306460 total bases in 251393 reads from 1 samples will be used for learning the error rates.
3) Denoise samples ....................................................................................................................................
..............................................
*** caught segfault ***
address 0x10, cause 'memory not mapped'

Traceback:
1: asMethod(object)
2: as(quality(srt), "matrix")
3: qtables2(fq)
4: derepFastq(filts[[j]])
An irrecoverable exception occurred. R is aborting now ...
Traceback (most recent call last):
File "/home/qiime/.miniconda/envs/qiime2-2023.7/lib/python3.8/site-packages/q2_dada2/_denoise.py", line 326, in denoise_paired
run_commands([cmd])
File "/home/qiime/.miniconda/envs/qiime2-2023.7/lib/python3.8/site-packages/q2_dada2/_denoise.py", line 36, in run_commands
subprocess.run(cmd, check=True)
File "/home/qiime/.miniconda/envs/qiime2-2023.7/lib/python3.8/subprocess.py", line 516, in run
raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command '['run_dada.R', '--input_directory', '/tmp/tmp_7hsf77d/forward', '--input_directory_reverse', '/tmp/tmp_7hsf77d/reverse', '--output_path', '/tmp/tmp_7hsf77d/output.tsv.biom', '--output_track', '/tmp/tmp_7hsf77d/track.tsv', '--filtered_directory', '/tmp/tmp_7hsf77d/filt_f', '--filtered_directory_reverse', '/tmp/tmp_7hsf77d/filt_r', '--truncation_length', '220', '--truncation_length_reverse', '220', '--trim_left', '0', '--trim_left_reverse', '0', '--max_expected_errors', '2', '--max_expected_errors_reverse', '2', '--truncation_quality_score', '2', '--min_overlap', '12', '--pooling_method', 'independent', '--chimera_method', 'consensus', '--min_parental_fold', '1.0', '--allow_one_off', 'False', '--num_threads', '1', '--learn_min_reads', '100000']' died with <Signals.SIGSEGV: 11>.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/home/qiime/.miniconda/envs/qiime2-2023.7/lib/python3.8/site-packages/q2cli/commands.py", line 478, in call
results = self._execute_action(
File "/home/qiime/.miniconda/envs/qiime2-2023.7/lib/python3.8/site-packages/q2cli/commands.py", line 539, in _execute_action
results = action(**arguments)
File "", line 2, in denoise_paired
File "/home/qiime/.miniconda/envs/qiime2-2023.7/lib/python3.8/site-packages/qiime2/sdk/action.py", line 342, in bound_callable
outputs = self.callable_executor(
File "/home/qiime/.miniconda/envs/qiime2-2023.7/lib/python3.8/site-packages/qiime2/sdk/action.py", line 566, in callable_executor
output_views = self._callable(**view_args)
File "/home/qiime/.miniconda/envs/qiime2-2023.7/lib/python3.8/site-packages/q2_dada2/_denoise.py", line 339, in denoise_paired
raise Exception("An error was encountered while running DADA2"
Exception: An error was encountered while running DADA2 in R (return code -11), please inspect stdout and stderr to learn more.

Plugin error from dada2:

An error was encountered while running DADA2 in R (return code -11), please inspect stdout and stderr to learn more.
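For context on the error itself: when Python's `subprocess` reports a negative return code, the absolute value is the POSIX signal number that killed the child process. Signal 11 is SIGSEGV, the segmentation fault seen in the R traceback above. A minimal check from a POSIX shell:

```shell
# Map signal number 11 to its name; "return code -11" from subprocess
# means the child process was killed by this signal.
kill -l 11    # prints: SEGV
```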

Some people on the forum suggest it's a memory problem, but I don't think that's my case, because I have 125 GB of RAM and 24 CPUs.

I would appreciate it if someone could help me!

Hello @francisca_baroffio,

What may be happening is that the R process (or its child processes) expected certain versions of certain libraries that were not actually available. For example, if the derepFastq function relies on a dynamically loaded C++ library and only the wrong version is present, you can get exactly this kind of segmentation fault.

This is fairly common on compute clusters (which I assume applies in your case, given the resources available). You can try to fix it by reinstalling the QIIME 2 environment (or, even better, installing a newer release of the environment).
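A minimal sketch of a clean reinstall, assuming Miniconda is already set up; the exact environment file name varies by release, so check the QIIME 2 installation docs for your target version:

```shell
# Remove the possibly broken environment.
conda env remove -n qiime2-2023.7

# Download the environment file for a newer release (file name is an
# assumption; verify against the QIIME 2 installation instructions).
wget https://data.qiime2.org/distro/core/qiime2-2023.9-py38-linux-conda.yml

# Create and activate the fresh environment.
conda env create -n qiime2-2023.9 --file qiime2-2023.9-py38-linux-conda.yml
conda activate qiime2-2023.9
```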

Looks like you ran out of memory, and the job was killed.

That should be enough RAM! Yet, the job was killed...

Maybe something is misconfigured; for example, the VM may only have 8 GB, or the HPC scheduler may not be giving your job access to all 125 GB.
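A few quick checks that the job actually sees the RAM you expect (assumptions: a Linux node; the SLURM line only applies if your cluster uses SLURM, and `<jobid>` is a placeholder):

```shell
# Total vs. available memory on the node where the job runs.
free -h

# Per-process virtual-memory limit ("unlimited" is what you want).
ulimit -v

# On a SLURM cluster, the memory actually granted to your job:
# scontrol show job <jobid> | grep -i mem
```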

Thanks, I'm going to try installing the newer release of the environment and see if that fixes my problem.