That’s a solid machine, @elolimy. With a large number of reads, MAFFT can require a lot of memory. Unfortunately, MAFFT doesn’t seem to have a native option for memory chunking, so if this is a memory issue, here are some other things you could try.
- The `--p-parttree` argument might be useful if you are running MAFFT on a large number of sequences (e.g., over 100k).
- If parttree isn't a good option for you, a compute cluster at your institution or a rented cloud machine could be a good backup option.
It is possible that something else is killing your process. Some other variables to consider:
- Make sure your machine doesn't suspend/sleep/hibernate while you're running the command; that could cause a SIGKILL.
- Shut down any extra processes when you run `denoise-paired`. Firefox is great, but if you have as many tabs open as I do, it will eat up memory you may need.
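If you suspect the SIGKILL came from the kernel's out-of-memory killer rather than from a sleep/wake cycle, the kernel log is the place to check. A quick sketch, assuming a Linux system (on some distributions `dmesg` needs `sudo`, and `journalctl -k` is an alternative source of the same messages):

```shell
# Look for OOM-killer messages in the kernel log. A hit usually looks like
# "Out of memory: Killed process <pid> (<name>)".
dmesg 2>/dev/null | grep -iE 'out of memory|killed process' || echo "no OOM-killer messages found"
```

If a line naming your process shows up, it confirms the memory hypothesis, and the suggestions above (parttree, fewer background processes, or a bigger machine) are the right direction.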
Let me know if this helps. I'm new to this issue myself and am interested in seeing how it works out.