How much RAM is needed for feature classification?

Is there a way to figure out how much RAM the Naive Bayes classifier would need to classify a 20 MB OTU/rep set fasta file? Similarly, how much RAM would be necessary for deblur or DADA2 to process a 25-30 GB fastq file? Just wondering if I can make this work on my laptop while trying to get my lab to emerge from the MacOS dark ages.


Hi @ctekellogg,

No, there is not really a straightforward way to estimate, but I can offer a few tips to lower memory consumption.

  1. The main factor driving memory use is the size of the reference sequence database. So using smaller reference sequence databases (e.g., greengenes rather than SILVA) and shorter sequences will reduce memory load.
  2. See this post for some other tips (note that chunk-size has since been renamed reads-per-batch); an example command is sketched below.
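To make that concrete, a classify-sklearn call with a small batch size and a single job might look like the sketch below (file names are placeholders, and 1000 is just an example value for reads-per-batch; the best setting depends on your data and available memory):

```bash
# Hypothetical file names; adjust paths to your own artifacts.
qiime feature-classifier classify-sklearn \
  --i-classifier gg-13-8-99-nb-classifier.qza \
  --i-reads rep-seqs.qza \
  --p-reads-per-batch 1000 \
  --p-n-jobs 1 \
  --o-classification taxonomy.qza
```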

@benjjneb and @wasade may be able to offer some advice on memory consumption with these methods.

I hope that helps!

Just to add to @Nicholas_Bokulich’s comments: if you’re worried about memory, don’t set n-jobs to anything other than 1. Most of the memory usage comes from loading the classifier object, and it is loaded once per job, so total memory scales roughly with n-jobs.

For what it’s worth, it runs just fine using greengenes on my MacOS laptop 🙂
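If you want a concrete number for your own machine, one option (a sketch; file names are hypothetical) is to classify a small subset of your rep seqs and record peak memory with /usr/bin/time. Since most of the memory goes into loading the classifier, a subset run should give a reasonable estimate for the full run:

```bash
# Measure peak memory of a small test classification.
# macOS: "-l" prints "maximum resident set size" (in bytes);
# Linux: use "/usr/bin/time -v" and look for "Maximum resident set size".
# File names below are hypothetical placeholders.
/usr/bin/time -l qiime feature-classifier classify-sklearn \
  --i-classifier gg-13-8-99-nb-classifier.qza \
  --i-reads rep-seqs-subset.qza \
  --p-n-jobs 1 \
  --o-classification taxonomy-test.qza
```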


The DADA2 plugin processes samples individually, so memory requirements should stay nearly flat as sample number increases and will instead be driven by the "largest" sample you have (in terms of unique sequences, not raw reads). That is pretty dataset-specific, so it's hard to give a one-size-fits-all answer, but I've generally found 16 GB sufficient for just about anything. It never hurts to have more than enough memory, though!
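If you want a rough sense of which sample will set that peak on your own data, one quick check (a sketch, assuming demultiplexed, gzipped per-sample FASTQ files; DADA2 dereplicates after filtering and trimming, so this is only a proxy) is to count unique sequences per file:

```bash
# Count unique sequences in each per-sample fastq.gz (hypothetical layout);
# the sample with the most unique sequences is roughly the one that sets
# DADA2's memory peak.
for f in *.fastq.gz; do
  n=$(gunzip -c "$f" | awk 'NR % 4 == 2' | sort -u | wc -l)
  printf '%s\t%s\n' "$n" "$f"
done | sort -rn | head
```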


Similar to @benjjneb’s comment on DADA2, Deblur’s memory profile will remain pretty flat as sample count grows and will peak with the largest sample. I normally allocate 8 GB per thread when executing (as that’s effectively the default on our compute resource), and that was sufficient for deep studies such as Yatsunenko et al. 2012, which averaged over 1M reads per sample.
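As an illustration (a sketch; file names and the trim length are placeholders), keeping the job count at 1 keeps the memory budget to roughly one thread's worth under the 8 GB-per-thread rule of thumb above:

```bash
# Hypothetical file names; --p-jobs-to-start controls the number of threads,
# and memory scales roughly per thread (~8 GB each in the setup described above).
qiime deblur denoise-16S \
  --i-demultiplexed-seqs demux-filtered.qza \
  --p-trim-length 150 \
  --p-jobs-to-start 1 \
  --o-representative-sequences rep-seqs-deblur.qza \
  --o-table table-deblur.qza \
  --o-stats deblur-stats.qza
```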

Best,
Daniel

