This splits the query sequences into smaller batches so that only a few are read into memory at a time.
A smaller batch size uses less RAM, at the cost of a little more time.
The number of jobs to run in parallel. More jobs means more RAM but less time.
There is no single "best" setting, since it depends on your system specs, datasets, etc. But if you are getting memory errors, use 1 job and a smaller batch size (say, 2000), and just be prepared to wait a bit longer for the job to complete.
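To make the trade-off concrete, here is a minimal Python sketch of the idea (not the tool's actual implementation; `classify_batch` is a hypothetical stand-in for whatever work is done per batch): the batch size bounds how many sequences each worker holds in memory, and the number of jobs controls how many such batches are in memory at once.

```python
from multiprocessing import Pool


def classify_batch(batch):
    # Hypothetical placeholder for the real per-batch work;
    # here it just returns how many sequences it received.
    return len(batch)


def run(queries, batch_size=2000, n_jobs=1):
    # Split the queries into batches so each worker only ever
    # holds batch_size sequences in memory at a time.
    batches = [queries[i:i + batch_size]
               for i in range(0, len(queries), batch_size)]
    # n_jobs workers process batches in parallel: more jobs use
    # more RAM (more batches resident at once) but finish sooner.
    with Pool(n_jobs) as pool:
        results = pool.map(classify_batch, batches)
    return sum(results)


if __name__ == "__main__":
    total = run(list(range(5000)), batch_size=2000, n_jobs=2)
    print(total)
```

With 5000 queries and a batch size of 2000 this creates three batches (2000, 2000, 1000), so peak memory per worker is capped by the batch size rather than the full input.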
Good luck!