Taxonomy confidence thresholds for benchmarking

I need to set thresholds for the taxonomy classifiers for benchmarking. How do you recommend I should do this?


Hi @Robert_Edgar,
Thanks for getting in touch! You can check out the thresholds that we use in this benchmarking study.

The easiest way to do this (e.g., if you want to benchmark the QIIME2 classifiers against another method) is to use the datasets and precomputed results in our evaluation framework. That repo contains pre-existing simulated and mock community data, as described in the pre-print above, as well as pre-computed taxonomy assignments from the QIIME2 classifiers, RDP, and a handful of classifiers available in QIIME1. So all you need to do to compare a new method is assign taxonomy to the query sequences for each community of interest and compare against the precomputed results. The repo also includes a number of Jupyter notebooks with examples, including notebooks for evaluating results as described in the pre-print.
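To give a rough idea of what a threshold comparison looks like, here is a minimal, self-contained sketch. It is not the evaluation framework's actual API: the toy data and the `score_at_threshold` helper are made up for illustration. It just shows the underlying idea of sweeping a confidence threshold, treating low-confidence assignments as unassigned, and scoring precision and recall against the known (expected) taxonomies at each threshold:

```python
# Hypothetical per-sequence results: (expected taxon, predicted taxon,
# classifier confidence). Real benchmarking would use your classifier's
# output for the mock/simulated communities, not this toy data.
results = [
    ("g__Bacillus",      "g__Bacillus",      0.99),
    ("g__Lactobacillus", "g__Lactobacillus", 0.85),
    ("g__Escherichia",   "g__Shigella",      0.60),
    ("g__Clostridium",   "g__Clostridium",   0.40),
    ("g__Pseudomonas",   "g__Acinetobacter", 0.30),
]

def score_at_threshold(results, threshold):
    """Treat assignments below the threshold as 'Unassigned'.

    Precision is computed over sequences that remain assigned;
    recall is computed over all query sequences.
    """
    assigned = [(e, p) for e, p, c in results if c >= threshold]
    correct = sum(1 for e, p in assigned if e == p)
    precision = correct / len(assigned) if assigned else 0.0
    recall = correct / len(results)
    return precision, recall

# Sweep a few candidate thresholds and report the trade-off:
# higher thresholds tend to raise precision but lower recall.
for t in (0.0, 0.5, 0.7, 0.9):
    p, r = score_at_threshold(results, t)
    print(f"threshold={t:.1f}  precision={p:.2f}  recall={r:.2f}")
```

Plotting precision/recall (or F-measure) across the sweep is one way to pick the threshold you report for each classifier.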

I hope that helps.
