Core-dumped error when training Silva V3V4 classifier

Thanks for your advice!

But… I've run into another problem.

I'm running the SILVA 132 Naive Bayes classifier with my data, but a few minutes in it stops and says:

“Segmentation Fault (Core dumped)”

I read on the forum that this may be due to disk space, but I have 600 GB free. While it's running, the process grows to around 30 GB and then dumps core. Is that normal?

EDIT: As I saw in another post, I think it's a problem related to WSL.
EDIT 2: It seems to be a problem with SILVA; Greengenes (GG) managed to classify, and it was fast. Maybe it's SILVA's size?

Hi @Francisco,
I moved this to its own thread as it was no longer related to the original topic you had raised in the other thread.
Glad to hear you were able to resolve the issue, and thanks for the updates. The SILVA database is quite large, and if your tmp folder is not big enough you will get the error you saw. In the future you can increase the space allocated to your tmp folder (you might have to ask your admin if this is being run on a cluster). The GG database is much smaller than SILVA, so it makes sense that it would complete without issues.
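If it helps, here is a minimal sketch of how that can look on the command line; the path below is just a placeholder for any location with plenty of free space:

```bash
# Create a temp directory on a disk with plenty of room
# (the path is only an example -- use whatever suits your setup)
mkdir -p /mnt/d/qiime2-tmp

# Point TMPDIR at it so QIIME 2 writes its temporary files there
export TMPDIR=/mnt/d/qiime2-tmp

# Then re-run your classification command in the same shell session
```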
Good luck!


Hi!

I used SILVA on another computer running Linux with a lot more RAM and it worked without problems, so I believe it's something related to either:

1. WSL's tmp folder capacity, or
2. the amount of RAM on my computer not being enough.
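For reference, a quick way to check both of these from inside WSL (assuming an Ubuntu-style setup) is something like:

```bash
# Free space on the filesystem holding /tmp
df -h /tmp

# Total and available RAM visible to WSL
free -h
```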

Hi @Francisco,
To be honest, I'm not familiar with WSL at all, so I don't know how it integrates with your computer's resources, tmp folders, etc. But RAM is certainly a common enough issue when working with SILVA, too.
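One thing that sometimes helps on memory-limited machines is lowering the batch size for classify-sklearn; a rough sketch (the .qza filenames here are placeholders for your own files) might look like:

```bash
# Filenames below are placeholders -- substitute your own artifacts.
# Smaller batches reduce peak memory at the cost of a longer runtime;
# keeping --p-n-jobs at 1 avoids holding extra copies of the classifier
# in memory across parallel workers.
qiime feature-classifier classify-sklearn \
  --i-classifier silva-132-99-nb-classifier.qza \
  --i-reads rep-seqs.qza \
  --p-reads-per-batch 1000 \
  --p-n-jobs 1 \
  --o-classification taxonomy.qza
```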