classification taxonomy.qza Killed

hello,

Since my computer has only 8 GB of memory, I cannot train the full SILVA classifier, so I used a V3-V4 classifier trained with scikit-learn 0.21.2. I am using QIIME 2 2019.7, which is compatible with scikit-learn 0.21.2.
When I entered the following command to create taxonomy.qza:
qiime feature-classifier classify-sklearn --i-classifier silva_132_99_v3v4_q2_2019-7.qza --i-reads rep_seqs.qza --o-classification taxonomy.qza

I get the “Killed” message.

(qiime2-2019.7) [email protected]:~/metagenome$ qiime feature-classifier classify-sklearn --i-classifier silva_132_99_v3v4_q2_2019-7.qza --i-reads rep_seqs.qza --o-classification taxonomy.qza
Killed

Can you help me with this issue?
best wishes
arne

Hi @arne! Is there any more to that error message? Or maybe a logfile you can share?
I haven’t run into this myself, but a quick forum search tells me this could still be a memory issue.

How much memory have you allocated to your Virtualbox? You might try increasing the memory available to it.
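If it helps, the memory allocation can also be changed from the host machine's command line. A sketch, assuming the VM is powered off; "QIIME 2" here is a placeholder for whatever your VM is actually named:

```shell
# Run on the host machine, with the VM shut down
VBoxManage list vms                        # find your VM's exact name
VBoxManage modifyvm "QIIME 2" --memory 6144   # allocate 6 GB of RAM to the VM
```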

Another alternative, if that doesn’t work and you don’t have access to a computer with more memory is:
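running classify-sklearn with a smaller --p-reads-per-batch value, which trades extra runtime for a smaller memory footprint. A sketch based on the command from your post (1000 is just a starting guess; experiment from there):

```shell
qiime feature-classifier classify-sklearn \
  --i-classifier silva_132_99_v3v4_q2_2019-7.qza \
  --i-reads rep_seqs.qza \
  --p-reads-per-batch 1000 \
  --o-classification taxonomy.qza
```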

Keep us posted!
Chris :horse:

Hello Chris,
I tried it, but it says "(1/1?) no such option: --p-reads-per-batch".
No log file was created.
Here is the whole terminal output:
(qiime2-2019.7) [email protected]:~/metagenome$ qiime feature-classifier classify-sklearn --i-classifier silva_132_99_v3v4_q2_2019-7.qza --i-reads rep_seqs.qza --o-classification taxonomy.qza
Killed
(qiime2-2019.7) [email protected]:~/metagenome$ ls
aligned_rep_seqs.qza P2_2.fq
alpha_rarefaction.qzv P3_1.fq
q-manifest.txt P3_2.fq
c-metadata1.csv P4_1.fq
a-metadata.csv P4_2.fq
'a-metadata-metagenomic.csv' P5_1.fq
check_id_output P5_2.fq
core_metrics_results P6_1.fq
demux.qvz.qzv P6_2.fq
denoising_stats.qza P7_1.fq
denoising_stats.qzv P7_2.fq
gen_ma_me.bash P8_1.fq
manifest_toy.tsv P8_2.fq
masked_aligned_rep_seqs.qza P9_1.fq
metadata-a.csv P9_2.fq
metadata-a.txt paired-end-demux.qza
metadata.csv paired-end-demux_toy.qza
metadata.tsv pe-64-manifest
out.extendedFrags.fastq ref_seqs.qza
out.hist rep_seqs.qza
out.histogram rooted_tree.qza
out.notCombined_1.fastq silva132_99.qza
out.notCombined_2.fastq silva_132_99_v3v4_q2_2019-7.qza
P10_1.fq table.qza
P10_2.fq table.qzv
P11_1.fq tabulated-a-metadata.qzv
P11_2.fq tabulated-metadata.qzv
P1_1.fq tail_name.txt
P1_1.fq.gz unrooted_tree.qza
P1_2.fq validate_mapping_file_output
P2_1.fq
(qiime2-2019.7) [email protected]:~/metagenome$ qiime feature-classifier fit-classifier-naive-bayes --i-reference-reads ref_seqs.qza --p-reads-per-batch 1000 --i-reference-taxonomy silva132_99_ref_taxonomy.qza --o-classifier classifier.qza

Usage: qiime feature-classifier fit-classifier-naive-bayes [OPTIONS]

Create a scikit-learn naive_bayes classifier for reads

Inputs:
  --i-reference-reads ARTIFACT FeatureData[Sequence]              [required]
  --i-reference-taxonomy ARTIFACT FeatureData[Taxonomy]           [required]
  --i-class-weight ARTIFACT FeatureTable[RelativeFrequency]       [optional]
Parameters:
  --p-classify--alpha NUMBER                                      [default: 0.001]
  --p-classify--chunk-size INTEGER                                [default: 20000]
  --p-classify--class-prior TEXT                                  [default: 'null']
  --p-classify--fit-prior / --p-no-classify--fit-prior            [default: False]
  --p-feat-ext--alternate-sign / --p-no-feat-ext--alternate-sign  [default: False]
  --p-feat-ext--analyzer TEXT                                     [default: 'char_wb']
  --p-feat-ext--binary / --p-no-feat-ext--binary                  [default: False]
  --p-feat-ext--decode-error TEXT                                 [default: 'strict']
  --p-feat-ext--encoding TEXT                                     [default: 'utf-8']
  --p-feat-ext--input TEXT                                        [default: 'content']
  --p-feat-ext--lowercase / --p-no-feat-ext--lowercase            [default: True]
  --p-feat-ext--n-features INTEGER                                [default: 8192]
  --p-feat-ext--ngram-range TEXT                                  [default: '[7, 7]']
  --p-feat-ext--norm TEXT                                         [default: 'l2']
  --p-feat-ext--preprocessor TEXT                                 [default: 'null']
  --p-feat-ext--stop-words TEXT                                   [default: 'null']
  --p-feat-ext--strip-accents TEXT                                [default: 'null']
  --p-feat-ext--token-pattern TEXT                                [default: '(?u)\b\w\w+\b']
  --p-feat-ext--tokenizer TEXT                                    [default: 'null']
  --p-verbose / --p-no-verbose                                    [default: False]
Outputs:
  --o-classifier ARTIFACT TaxonomicClassifier                     [required]
Miscellaneous:
  --output-dir PATH     Output unspecified results to a directory
  --verbose / --quiet   Display verbose output to stdout and/or stderr during
                        execution of this action. Or silence output if
                        execution is successful (silence is golden).
  --citations           Show citations and exit.
  --help                Show this message and exit.

                There was a problem with the command:                     

(1/1?) no such option: --p-reads-per-batch
(qiime2-2019.7) [email protected]:~/metagenome$

@arne, The error message is just telling you that --p-reads-per-batch isn’t a parameter for fit-classifier-naive-bayes. --p-reads-per-batch is an option for the classify-sklearn command you were trying to run in your original post.

The same command is still being killed in the output you just posted, likely for the same reason as before. fit-classifier-naive-bayes has a similar parameter, --p-classify--chunk-size, that you can adjust to trade more time for lower memory use. Pass an integer smaller than the default of 20,000 if you decide to use it. Maybe start with 1000 and experiment from there?
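For reference, retraining with a smaller chunk size might look something like this. A sketch only, reusing the filenames from the command in your post; 1000 is just a starting point:

```shell
qiime feature-classifier fit-classifier-naive-bayes \
  --i-reference-reads ref_seqs.qza \
  --i-reference-taxonomy silva132_99_ref_taxonomy.qza \
  --p-classify--chunk-size 1000 \
  --o-classifier classifier.qza
```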

Chris,
I used following command
(qiime2-2019.7) [email protected]:~/metagenome$ qiime feature-classifier classify-sklearn --i-classifier silva_132_99_v3v4_q2_2019-7.qza --i-reads rep_seqs.qza --p-reads-per-batch 500 --p-n-jobs 1 --o-classification taxonomy.qza

Plugin error from feature-classifier:

[Errno 28] No space left on device

Debug info has been saved to /tmp/qiime2-q2cli-err-i5grpwex.log

The virtual disk was 70 GB, and this run filled all the space. Can you tell me how I can clean up the junk files?
best regards
arne

Nice! A new error message means progress!
I’m not sure what you mean by “clean the junk files”, and I haven’t spent any time with VirtualBox yet, so I don’t know if it offers any special features for cleaning up a system.

If I were you, I’d probably just remove any files I don’t need for this analysis, or move them off the virtualbox image. There’s no special command to delete QIIME 2 Artifacts/Visualizations - they’re just zip archives, and can be deleted like anything else (after backing them up elsewhere if you’ll need them in future).

If you’re still short on space, you could add some extra space to your VirtualBox disk, or look up how to free up disk space in VirtualBox and in whatever Linux distro you’re running on it (looks like Ubuntu if you’re using our image). Temp files are usually deleted by the system when you reboot, but if this is a heavily-used system, there could be some clutter in your apt cache or elsewhere.
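As a quick sanity check, you could also see how much space is actually left and clear out old QIIME 2 debug logs; the log path pattern below comes from the error message in your post:

```shell
# See how much space is left on the VM's root filesystem
df -h /

# QIIME 2 saves debug logs under /tmp (e.g. /tmp/qiime2-q2cli-err-*.log);
# once you've read them, they are safe to delete
rm -f /tmp/qiime2-q2cli-err-*.log
```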

Finally, if you’re just not sure where all your space went, this bash command will give you a sorted list of the largest directories on your system, which might help you find candidates for removal: du -S | sort -n -r | more

Good luck!
Chris

p.s. If you have multiple cores available to your virtualbox image, setting classify-sklearn's --p-n-jobs parameter to the number of available cores will save you some time.

This topic was automatically closed 31 days after the last reply. New replies are no longer allowed.