Q2-DADA2: Anticipated runtime?

Hi there!

I am experiencing the same issue.
I am currently using QIIME 2 v2017.6 and running DADA2 on PE output from a HiSeq run (218 samples, approximately 138 million total reads) with the recommended DADA2 parameters, on two separate high-memory nodes: one with 32 threads (running for 2+ days now) and one with 56 threads (1.5+ days). Approximately how much longer will it take for the process to complete on each node?

I have also started a run with relaxed parameters (maxEE 0.5 and error learning on 30,000 reads) to see how long it takes to process my entire dataset (I am not planning to use the outputs); that process has been running for 1+ day on a 56-thread node.

Is it reasonable to continue to wait? If so, how much longer? Are there other ways of speeding up the process without compromising data quality? Our computing facility has been kind enough to let me hog a few of our limited high-memory nodes for a couple of days, but that won't last much longer.

Any advice on how best to proceed would be greatly appreciated.
N

Hi @nerdynella,
A runtime of around one day seems reasonable so far. See this discussion in the DADA2 documentation for information on how to estimate the expected runtime.

Best,
Greg


Thank you for your quick response, Greg. I read Benjamin's post earlier and optimized my current DADA2 runs (above) based on his recommendations. I was wondering whether there is anything I can do on the QIIME 2 end to speed things up. I guess I'll just have to wait for a week and see if it completes successfully.

Is the QIIME 2 artifact file produced by DADA2 about the same size as the demux artifact used as input? If not, approximately how much larger or smaller should I expect it to be? I might be able to track the progress of this step based on used core memory.
Cheers
Nsa

Hi @nerdynella,
QIIME 2 calls DADA2 directly - unfortunately, we don't have any tricks to make it go quicker. The artifacts that result from this step should be quite a bit smaller, but their size will be a function of the number of samples and the number of features in your data set post-quality-filtering, so the size of those files wouldn't really be useful as a progress report. Additionally, the .qza files are created in the final stage of the analysis, so once they show up the job is effectively done.
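
If you want a very rough progress signal anyway, one option is to count the filtered fastq files as DADA2's initial filtering stage writes them - a sketch, assuming the job's temporary working directory lives under /tmp (as the run_dada_paired.R invocations later in this thread suggest); note this only tracks the filtering step, not denoising:

$ # tmpXXXXXXXX directory names vary per run; adjust the glob and the
$ # extension if the filtered reads are named differently on your system
$ watch -n 60 "ls /tmp/tmp*/filt_f/*.fastq.gz 2>/dev/null | wc -l"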


Thank you for getting back to me, @gregcaporaso - my 'relaxed quality' Q2-DADA2 run has just completed successfully! Below are the stats/parameters for anyone with similar data or questions:

HiSeq run: 218 samples with ~138 million PE reads (31G)

trunc_len (f & r): 244
trim_left (f & r): 0
trunc_q: 2
chimera: consensus
min_fold_parent_over_abundance: 1
n_threads: 0 (used all 56 threads on the node; see below)
n_reads_learn: 30000 (this was just a test run)
platform: linux-x86_64
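
For reference, these settings correspond roughly to the invocation below (a sketch: the input/output filenames are hypothetical, and max_ee isn't listed above so it is omitted here):

$ qiime dada2 denoise-paired \
  --i-demultiplexed-seqs paired-end-demux.qza \
  --o-table table \
  --o-representative-sequences rep-seqs \
  --p-trunc-len-f 244 \
  --p-trunc-len-r 244 \
  --p-trim-left-f 0 \
  --p-trim-left-r 0 \
  --p-trunc-q 2 \
  --p-chimera-method consensus \
  --p-min-fold-parent-over-abundance 1 \
  --p-n-reads-learn 30000 \
  --p-n-threads 0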

Host                = high mem node (28 cores/56 threads; 377.5G total mem)
Start Time       = 08/02/2017 12:04:47.352
End Time         = 08/04/2017 09:39:22.559
User Time        = 73:13:10:37
System Time      = 02:39:24
Wallclock Time   = 1:21:34:35
CPU              = 73:15:50:02
Max vmem         = 10.385G

I am hopeful that my other analyses (on 16, 32, and 56 threads) will complete soon.
I’ll post updates once they are complete.
Thanks,
Nsa


Updates: unfortunately, my Q2-DADA2 run did not complete successfully :frowning_face: I am unsure what went wrong. Below are the parameters used:

HiSeq run: 218 samples with ~138 million PE reads (31G)

dada2 parameters used:
trunc_len (f & r): 244
trim_left (f & r): 0
n_threads: 0
max_ee: 2 (default)
trunc_q: 2 (default)
min_fold_parent_over_abundance: 1 (default)
n_reads_learn: 1000000 (default)
hashed_feature_ids: true (default)
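
Since everything other than the truncation length was left at its default, these runs reduce to roughly the following command (filenames hypothetical; default-valued flags omitted):

$ qiime dada2 denoise-paired \
  --i-demultiplexed-seqs paired-end-demux.qza \
  --o-table table \
  --o-representative-sequences rep-seqs \
  --p-trunc-len-f 244 \
  --p-trunc-len-r 244 \
  --p-n-threads 0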

I love the provenance feature in QIIME 2! OK, back to the issue.

Compute resources
Host = high mem node (28 cores/56 threads; 377.5G total mem)
Start Time = 08/02/2017 07:39:25.265
End Time = 08/05/2017 10:04:23.484
User Time = 118:15:49:20
System Time = 03:48:58
Wallclock Time = 3:02:24:58
CPU = 118:19:38:18
Max vmem = 14.533G
Max rss = NA
Exit Status = 1

The only output was table.qza.

What did I do wrong? Please help.

The same analysis is still running on two separate 8- and 16-core nodes; I'll post updates on how long each takes to complete.

thanks,
Nsa

Runtime using 16 cores/32 threads; 503.7G total mem; same parameters as above:

Start Time = 08/01/2017 12:30:56.569
End Time = 08/07/2017 05:54:58.067
User Time = 141:17:05:42
System Time = 05:20:20
Wallclock Time = 5:17:24:01
CPU = 141:22:26:02
Max vmem = 10.141G
Max rss = NA
Exit Status = 1

This run also failed, with no repseq.qza output.

Updates on runtime using 16 threads; 251.9G mem; same parameters:

Start Time = 08/01/2017 12:25:41.883
End Time = 08/07/2017 20:12:05.372
User Time = 85:02:35:21
System Time = 02:27:49
Wallclock Time = 6:07:46:23
CPU = 85:05:03:10
Max vmem = 8.913G
Max rss = NA
Exit Status = 1

There must be something wrong with the parameters I used, as this run also failed. Maybe n_reads_learn, max_ee, or trunc_q is too high? The runs complete without problems when these parameters are relaxed. But how low is too low?
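
One way to probe "how low is too low" without burning days per attempt might be to import just a handful of samples via a fastq manifest and sweep the parameters on that subset first. A sketch, assuming 2017-era CLI option names (manifest.csv and small-demux.qza are hypothetical names; check qiime tools import --help for the exact option spelling on your version):

$ # manifest.csv lists a few representative samples, one row per
$ # fastq file: sample-id,absolute-filepath,direction
$ qiime tools import \
  --type 'SampleData[PairedEndSequencesWithQuality]' \
  --input-path manifest.csv \
  --source-format PairedEndFastqManifestPhred33 \
  --output-path small-demux.qza

Running denoise-paired on small-demux.qza with each candidate parameter set should then fail (or not) in minutes rather than days.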

At least I now have an estimate of how long it takes to run DADA2 on a PE HiSeq dataset.
Nsa

Hi @nerdynella, can you please provide the error messages and/or error logs that are emitted from QIIME 2 on these failures? We need that to help you work through your problem. Thanks!

@thermokarst unfortunately I didn't use the --verbose flag for these runs, so there aren't any error logs. I am running the analysis again with --verbose; so far it seems to have completed step 2 (learning error rates; convergence after 4 rounds) and is currently denoising the remaining samples. I'll post updates with any errors as soon as it completes.

Thanks,
Nsa

Hi @nerdynella: We implemented an auto-logging feature in QIIME 2 2017.5:

When a command fails and is not in --verbose mode, the command's output is saved to a temporary log file. This is useful when errors take a while to trigger; users no longer have to rerun the command with --verbose (and wait) to see the full error message.

An example when run without --verbose:

$ qiime dada2 denoise-paired \
  --i-demultiplexed-seqs paired-end-demux.qza \
  --o-table table \
  --o-representative-sequences rep-seqs \
  --p-trim-left-f 0 \
  --p-trim-left-r 0 \
  --p-trunc-len-f 300 \
  --p-trunc-len-r 300 \
  --p-n-threads 0

Plugin error from dada2:

Command '['run_dada_paired.R',
'/var/folders/5y/k_d_lfy57pxbg3l7j_3r9ysh0000gn/T/tmp6rkj5go1/forward',
'/var/folders/5y/k_d_lfy57pxbg3l7j_3r9ysh0000gn/T/tmp6rkj5go1/reverse',
'/var/folders/5y/k_d_lfy57pxbg3l7j_3r9ysh0000gn/T/tmp6rkj5go1/output.tsv.biom',
'/var/folders/5y/k_d_lfy57pxbg3l7j_3r9ysh0000gn/T/tmp6rkj5go1/filt_f',
'/var/folders/5y/k_d_lfy57pxbg3l7j_3r9ysh0000gn/T/tmp6rkj5go1/filt_r',
'300', '300', '0', '0', '2.0', '2', 'consensus', '1.0', '0',
'1000000']' returned non-zero exit status 1

Debug info has been saved to /var/folders/5y/k_d_lfy57pxbg3l7j_3r9ysh0000gn/T/qiime2-q2cli-err-4vorneiu.log.

The log file at /var/folders/5y/k_d_lfy57pxbg3l7j_3r9ysh0000gn/T/qiime2-q2cli-err-4vorneiu.log contains the same information you would see if you had run the command with --verbose (the file name is different for each run, and the location differs between computers)! It might be worth checking your command-line history or your temp directory to see if your log files are still hanging around.
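
If anyone else needs to hunt for these, the auto-saved logs follow a predictable naming pattern, so a search along these lines (a sketch; add $TMPDIR or your scheduler's scratch path if it differs) may turn them up:

$ find /tmp /var/folders -name 'qiime2-q2cli-err-*.log' 2>/dev/null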

@thermokarst we couldn't find the files, unfortunately - we looked in the /tmp/ and /var/ folders… they're probably buried somewhere, but we don't know where to look.

So here goes:

1. Denoise remaining samples …
   The sequences being tabled vary in length.
2. Remove chimeras (method = consensus)

Traceback (most recent call last):
  File "/opt/conda/lib/python3.5/site-packages/q2cli/commands.py", line 222, in __call__
    results = action(**arguments)
  File "", line 2, in denoise_paired
  File "/opt/conda/lib/python3.5/site-packages/qiime2/sdk/action.py", line 203, in callable_wrapper
    output_types, provenance)
  File "/opt/conda/lib/python3.5/site-packages/qiime2/sdk/action.py", line 305, in callable_executor
    output_views = callable(**view_args)
  File "/opt/conda/lib/python3.5/site-packages/q2_dada2/_denoise.py", line 177, in denoise_paired
    run_commands([cmd])
  File "/opt/conda/lib/python3.5/site-packages/q2_dada2/_denoise.py", line 35, in run_commands
    subprocess.run(cmd, check=True)
  File "/opt/conda/lib/python3.5/subprocess.py", line 398, in run
    output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command '['run_dada_paired.R', '/tmp/tmpxnsi7fxb/forward', '/tmp/tmpxnsi7fxb/reverse', '/tmp/tmpxnsi7fxb/output.tsv.biom', '/tmp/tmpxnsi7fxb/filt_f', '/tmp/tmpxnsi7fxb/filt_r', '244', '244', '0', '0', '2.0', '2', 'consensus', '1.0', '224', '1000000']' returned non-zero exit status -15

Just a quick follow-up here:

@nerdynella & I had an out-of-band chat about this earlier today — it sounds like there is some interest in running DADA2 independently of QIIME 2 for these analyses. Hopefully @nerdynella will get a chance to follow up here and report back on any interesting findings (not of the data, but of the process itself — is the error occurring in DADA2, or does it have to do with the QIIME 2 plugin?). Ultimately we want to make sure that the q2-dada2 experience is as smooth as possible. Thanks @nerdynella!
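
For anyone who wants to try the standalone route, the core paired-end DADA2 workflow in R looks roughly like the sketch below - the file paths, nreads value, and output name are placeholders, not a tested recipe (see the official DADA2 tutorial for the authoritative version). It picks up from already-filtered per-sample fastqs, e.g. the filt_f/filt_r directories that q2-dada2 produces:

$ R --no-save <<'EOF'
library(dada2)

# hypothetical paths to per-sample filtered fastqs
filtFs <- sort(list.files("filt_f", full.names = TRUE))
filtRs <- sort(list.files("filt_r", full.names = TRUE))

# learn error rates; nreads mirrors the relaxed test above - raise it for real runs
errF <- learnErrors(filtFs, nreads = 30000, multithread = TRUE)
errR <- learnErrors(filtRs, nreads = 30000, multithread = TRUE)

# dereplicate and denoise each direction
# (for very large runs, loop sample-by-sample as in the DADA2 big-data
# workflow to keep memory bounded)
drpF <- derepFastq(filtFs)
drpR <- derepFastq(filtRs)
ddF <- dada(drpF, err = errF, multithread = TRUE)
ddR <- dada(drpR, err = errR, multithread = TRUE)

# merge read pairs, tabulate features, and remove chimeras by consensus
merged <- mergePairs(ddF, drpF, ddR, drpR)
seqtab <- makeSequenceTable(merged)
seqtab.nochim <- removeBimeraDenovo(seqtab, method = "consensus", multithread = TRUE)
saveRDS(seqtab.nochim, "seqtab_nochim.rds")
EOF

Watching where this stalls or fails should help pin down whether the problem is in DADA2 itself or in the QIIME 2 wrapper, per the question above.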

