Building a COI database from BOLD references

For the time being I’m using ANML as per your tutorial, since I’ve got some data generated with both ANML and ZBJ that I’d like to use this on, though ultimately there could be a few other primer pairs that I end up using. Right now my issue is with the initial alignment (creating the reference_MSA file): it keeps producing a weirdly high number of gaps, so I’ll spend today tweaking the input file.

Yeah, that makes sense about the trimming of target/non-target taxa. I guess a potential issue is that the amplicon from a non-target taxon may not actually come from the COI region in question, though I assume that’s unlikely; and in that case there’s no guarantee the amplicon is even COI rather than another gene entirely.

One thing I forgot to add in my previous post: my Mac had issues generating bold_rawMetadata_forQiime.tsv because of a non-standard character in one of the metadata CSV files that sort wouldn’t accept. Changing the command from sort -uk1,1 to LC_ALL=C sort -uk1,1 fixed it.
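For anyone else who hits this, the LC_ALL=C prefix just forces byte-wise collation so the locale-aware sort doesn’t choke on the odd character; the fix looks something like this (the input/output names here are placeholders for whatever your pipeline step uses):

# byte-wise collation avoids locale errors on non-standard characters
LC_ALL=C sort -uk1,1 bold_rawMetadata_combined.tsv > bold_rawMetadata_forQiime.tsv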

I’ve now got to the final step, and have a slightly more basic question as a result. Can I ask what sort of computing system and parameters you’re using to make the classifier? When I run it on my laptop (FWIW a fairly solid MacBook Pro) or on our cluster, it quickly runs out of RAM.

On the cluster I get:

Plugin error from rescript: Unable to allocate 25.7 GiB for an array with shape (20000, 172343) and data type float64

Admittedly it was probably optimistic trying to do this on a laptop! I’m still learning my way around my new university cluster’s SLURM scheduler, but it seems I’m hitting a wall in allocating enough memory for the job to execute. I’ve tried increasing the number of nodes I’m using, increasing --p-n-jobs, and decreasing --p-reads-per-batch, which I believed would split the rescript task over a larger number of nodes, but every time the whole dataset seems to have to be processed somehow by every node first, as the numbers in the error message don’t change. The seqs.qza file is 27 MB and the taxa.qza file is 9.4 MB, so the fact that rescript is dealing with such a large object within the first 16 minutes of operation suggests there’s something going on that I could optimise; I was wondering if you had any thoughts as to what :slight_smile:


If I remember right, it’s somewhere between 40 and 80 GB of RAM for these classifiers with 1+ million sequences. It’s a lot.

Reducing the number of reads per batch helps, but I’ve found that increasing the number of jobs increases the memory footprint. You can lower the n-jobs parameter to 1 or 2 if you’re stuck without the ability to request more RAM.
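For what it’s worth, here’s a rough sketch of a single-node SLURM submission for this kind of job (the resource numbers are illustrative and the file names hypothetical; as far as I know the parallelism here is joblib-based and stays within a single node, so requesting more nodes won’t help):

#!/bin/bash
#SBATCH --job-name=coi_classifier
#SBATCH --nodes=1                 # multi-node requests won't be used
#SBATCH --cpus-per-task=2
#SBATCH --mem=80G                 # RAM, not cores, is the bottleneck
#SBATCH --time=7-00:00:00

qiime rescript evaluate-fit-classifier \
  --i-sequences final_anml_seqs.qza \
  --i-taxonomy final_anml_taxa.qza \
  --p-n-jobs 1 \
  --p-reads-per-batch 400 \
  --output-dir fitClassifier_bold_anml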

Glad to hear you made it all the way to the end - congrats! Now you’re officially responsible for generating all the new COI classifiers :wink:


Oh god, wish me luck! :wink:

Things seem to be working for generating my ANML classifier now (well, it’s proceeded for 90 minutes without crashing, when normally it dies in under 20). I think the important thing was tinkering with some cluster settings, though. Hopefully in a few days we’ll find that it worked! Now to begin working on a ZBJ classifier…

Sadly I’m back with another error message! Our cluster was being temperamental so I ran it with reduced demands locally on my MBP. It just crashed after 20 days of running seemingly happily (:cry:)

I was using the following command, with ${primer} corresponding to the primer I want to use and the relevant files in an appropriately named directory. (It’s a modification so that someday I can run multiple primers at once on the cluster as an array job.)

qiime rescript evaluate-fit-classifier \
  --i-sequences ${primer}/final_${primer}_seqs.qza \
  --i-taxonomy ${primer}/final_${primer}_taxa.qza \
  --p-reads-per-batch 400 \
  --p-n-jobs 1 \
  --output-dir reference_libraries/fitClassifier_bold_${primer}

I got the following error in my terminal from rescript:

Plugin error from rescript:

Missing one or more files for TSVTaxonomyDirectoryFormat: 'taxonomy.tsv'

This seems odd: rescript was running fine for the previous 20 days using the taxa.qza object. The log file gives the following:

Validation: 111.59s
/Users/davehemprichbennett/opt/anaconda3/envs/crag_conda/lib/python3.6/site-packages/q2_feature_classifier/classifier.py:102: UserWarning: The TaxonomicClassifier artifact that results from this method was trained using scikit-learn version 0.21.2. It cannot be used with other versions of scikit-learn. (While the classifier may complete successfully, the results will be unreliable.)
  warnings.warn(warning, UserWarning)
Training: 35045.83s
Classification: 1683053.34s
/Users/davehemprichbennett/opt/anaconda3/envs/crag_conda/lib/python3.6/site-packages/rescript/evaluate.py:76: UserWarning: The lists of input taxonomies and labels are different lengths. Additional taxonomies will be labeled numerically by their order in the inputs. Note that if these numbers match existing labels, those data will be grouped in the visualization.
  warnings.warn(msg, UserWarning)
Traceback (most recent call last):
  File "/Users/davehemprichbennett/opt/anaconda3/envs/crag_conda/lib/python3.6/site-packages/q2cli/commands.py", line 328, in __call__
    results = action(**arguments)
  File "<decorator-gen-138>", line 2, in evaluate_fit_classifier
  File "/Users/davehemprichbennett/opt/anaconda3/envs/crag_conda/lib/python3.6/site-packages/qiime2/sdk/action.py", line 240, in bound_callable
    output_types, provenance)
  File "/Users/davehemprichbennett/opt/anaconda3/envs/crag_conda/lib/python3.6/site-packages/qiime2/sdk/action.py", line 477, in _callable_executor_
    outputs = self._callable(scope.ctx, **view_args)
  File "/Users/davehemprichbennett/opt/anaconda3/envs/crag_conda/lib/python3.6/site-packages/rescript/cross_validate.py", line 55, in evaluate_fit_classifier
    evaluation, = _eval([taxonomy], [observed_taxonomy])
  File "<decorator-gen-334>", line 2, in evaluate_classifications
  File "/Users/davehemprichbennett/opt/anaconda3/envs/crag_conda/lib/python3.6/site-packages/qiime2/sdk/action.py", line 240, in bound_callable
    output_types, provenance)
  File "/Users/davehemprichbennett/opt/anaconda3/envs/crag_conda/lib/python3.6/site-packages/qiime2/sdk/action.py", line 477, in _callable_executor_
    outputs = self._callable(scope.ctx, **view_args)
  File "/Users/davehemprichbennett/opt/anaconda3/envs/crag_conda/lib/python3.6/site-packages/rescript/cross_validate.py", line 246, in evaluate_classifications
    expected_taxonomies = [t.view(pd.Series) for t in expected_taxonomies]
  File "/Users/davehemprichbennett/opt/anaconda3/envs/crag_conda/lib/python3.6/site-packages/rescript/cross_validate.py", line 246, in <listcomp>
    expected_taxonomies = [t.view(pd.Series) for t in expected_taxonomies]
  File "/Users/davehemprichbennett/opt/anaconda3/envs/crag_conda/lib/python3.6/site-packages/qiime2/sdk/result.py", line 277, in view
    return self._view(view_type)
  File "/Users/davehemprichbennett/opt/anaconda3/envs/crag_conda/lib/python3.6/site-packages/qiime2/sdk/result.py", line 289, in _view
    result = transformation(self._archiver.data_dir)
  File "/Users/davehemprichbennett/opt/anaconda3/envs/crag_conda/lib/python3.6/site-packages/qiime2/core/transform.py", line 68, in transformation
    self.validate(view, validate_level)
  File "/Users/davehemprichbennett/opt/anaconda3/envs/crag_conda/lib/python3.6/site-packages/qiime2/core/transform.py", line 143, in validate
    view.validate(level)
  File "/Users/davehemprichbennett/opt/anaconda3/envs/crag_conda/lib/python3.6/site-packages/qiime2/plugin/model/directory_format.py", line 171, in validate
    getattr(self, field)._validate_members(collected_paths, level)
  File "/Users/davehemprichbennett/opt/anaconda3/envs/crag_conda/lib/python3.6/site-packages/qiime2/plugin/model/directory_format.py", line 105, in _validate_members
    % (self._directory_format.__class__.__name__, self.pathspec))
qiime2.core.exceptions.ValidationError: Missing one or more files for TSVTaxonomyDirectoryFormat: 'taxonomy.tsv'

I don’t know much about the inner workings of rescript, but it looks like it expected the qza object to contain a file specifically named taxonomy.tsv. My file ${primer}/final_${primer}_taxa.qza was working fine until that point. Is there a step I may have missed when creating the qza file? Or would it have worked if the file I originally loaded into QIIME had been named ‘taxonomy.tsv’?

Hey @hemprichbennett,

The first thing that jumped out at me (and it's a message I've received multiple times in various projects myself) is the scikit-learn version warning in your log.

I believe the conclusion we reached in the post addressing this issue was that the versions of scikit-learn used to train and to test the classifier need to be the same (though the QIIME 2 version does not necessarily, as there can be multiple QIIME releases with the same scikit-learn version...).

From the error message above, it's evident that the classifier was trained using scikit-learn v0.21.2. Can you confirm whether your QIIME environment is using the same version of scikit-learn?

Ooh, interesting. I’ll have a look into identifying the versions that were used for each; I’m not sure where that could have been set, actually. As per the tutorial, I was using evaluate-fit-classifier to train and then test the classifier in a single command, so I would assume the version of scikit-learn was identical for both, as there wasn’t any point where user input could have caused a different version to be loaded?
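For the record, here’s how I’m checking which scikit-learn version an environment has (run inside the activated QIIME 2 conda environment):

# either of these should report the installed scikit-learn version
python -c "import sklearn; print(sklearn.__version__)"
conda list scikit-learn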

Thanks!

You are right @hemprichbennett, but good sleuthing @devonorourke!

That warning is issued whenever you fit a classifier, telling you not to use the result with other scikit-learn releases; it does not impact anything here. We can just ignore it for now.

@hemprichbennett this error is basically saying that after running for 20 days you reached the final stage (evaluation) only to find that there was an error with the expected taxonomy file.

No... the filenames of inputs at import do not matter, as those files are renamed internally when they are saved to an artifact file.

I am not 100% sure what happened, but from the sound of it this file failed to save internally; that said, the message itself is a bit of a red herring, so I suspect this job was simply too big and too long, and something unintentional happened along the way. Basically, evaluate-fit-classifier creates a new reference taxonomy file by subsetting the taxonomy to match the sequence IDs (the IDs of the two must match, but sometimes the taxonomy is a superset). That file must have saved at some point, because the subset taxonomy is already being used (and validated) at the classifier-fitting step.

You could run "qiime tools validate" on the input sequences and taxonomy to see if that dredges up any details, but it probably won't... I do not think the inputs are to blame, and I do not think this is a bug (no evidence of that yet); I am inclined to blame system failure (which may or may not recur if you re-run the job! I'm guessing you don't want to wait 20 days to find out :expressionless:).
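For reference, that validation check would just be:

qiime tools validate ${primer}/final_${primer}_seqs.qza
qiime tools validate ${primer}/final_${primer}_taxa.qza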

For now, I think the best thing may be to see if you can run this with a smaller set of sequences; either grabbing a random subset or clustering into OTUs at a high identity (e.g., 90%) would accomplish this. Basically, what I am hoping to do here is rule out the possibility of a bug (again, I don't see evidence of one, but this is the best first step).
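If you go the clustering route, one option outside of QIIME 2 would be something like the following (a sketch with hypothetical file names, assuming you first export the sequences to FASTA):

# export the sequence artifact to a plain FASTA file
qiime tools export --input-path final_anml_seqs.qza --output-path exported_seqs
# cluster the references at 90% identity, keeping one centroid per cluster
vsearch --cluster_fast exported_seqs/dna-sequences.fasta --id 0.90 --centroids anml_clustered90.fasta

The centroids could then be re-imported, and the taxonomy filtered down to the surviving IDs.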


Thanks @Nicholas_Bokulich for providing all the great details!

@hemprichbennett - you might be able to randomly select sequences with this tip, but it might be easier to do outside of QIIME. Alternatively, there are a few RESCRIPt tricks you could use if you have a list of IDs you want to keep (and filter from the larger dataset)… something like qiime rescript filter-taxa - see here (the section on **Creating a classifier from RefSeqs data**).
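As a sketch of what that might look like (parameter names from memory, so do check the --help text; ids_to_keep.txt is a hypothetical one-column QIIME 2 metadata file, i.e., a header row plus one ID per line):

# keep only the taxonomy records whose IDs are listed
qiime rescript filter-taxa \
  --i-taxonomy final_anml_taxa.qza \
  --m-ids-to-keep-file ids_to_keep.txt \
  --o-filtered-taxonomy subset_taxa.qza

# keep the matching sequences
qiime feature-table filter-seqs \
  --i-data final_anml_seqs.qza \
  --m-metadata-file ids_to_keep.txt \
  --o-filtered-data subset_seqs.qza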

Hi @devonorourke and @hemprichbennett,

The great @misialq has recently added subsample-fasta to RESCRIPt, which might make things a little easier. :slight_smile:
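Something along these lines should do it (a sketch; check qiime rescript subsample-fasta --help for the exact parameter names in your release, and the file names here are hypothetical):

# randomly keep ~1% of the reference sequences
qiime rescript subsample-fasta \
  --i-sequences final_anml_seqs.qza \
  --p-subsample-size 0.01 \
  --o-sample-sequences subsampled_anml_seqs.qza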

-Mike


Aha, subsample-fasta worked well, thanks. I’ve successfully run evaluate-fit-classifier on a proportion of 0.01 of the reads without any errors, and am now trying it again on 0.1.

Interestingly, I managed to run fit-classifier-naive-bayes on all of the reads overnight without issue, so presumably my previous error occurred during the classifier evaluation stage. I guess at some point I’ll have to rerun the command on the full file and see whether there’s an inherent issue in using evaluate-fit-classifier on these files, or whether, as I suspect, @Nicholas_Bokulich was correct and it was an unfortunate system error. In future I expect I’ll keep the classifier fitting and evaluation steps separate, just in case a system error occurs during the evaluation and causes me to lose everything!


Great! You are part of the way there; you can just re-run the individual steps outside of this pipeline (see the sketch after this list). Next steps:

  1. Run "classify-sklearn" with your trained classifier to classify the same sequences you used for training (the purpose here is to see best-case performance, when you know the correct classifications).
  2. Run "rescript evaluate-classifications" to compare the true taxonomy vs. the predicted taxonomy for those reference reads.

All you are missing is the different subsampling/validation steps that evaluate-fit-classifier runs, but those are just safety checks and not strictly needed.
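Sketched out, with hypothetical file names matching whatever you used for training:

# classify the training sequences with the classifier trained on them
qiime feature-classifier classify-sklearn \
  --i-classifier bold_anml_classifier.qza \
  --i-reads final_anml_seqs.qza \
  --o-classification predicted_taxa.qza

# compare the predicted vs. true taxonomies
qiime rescript evaluate-classifications \
  --i-expected-taxonomies final_anml_taxa.qza \
  --i-observed-taxonomies predicted_taxa.qza \
  --o-evaluation classifier_evaluation.qzv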

With a database of this size that is probably wise!


@devonorourke I see there’s a caveat not to use the bold_anml artifacts with other primers. I’m using ZBJ, which is completely contained within the ANML region. I don’t have the computing power to do the whole trimming/filtering process, but I was thinking that using bold_anml_classifier.qza would be better than using a naive Bayes classifier trained on untrimmed sequences; at least with sequences trimmed to the ANML site, most of the excess is removed.

Does that logic check out, or am I missing something important in the process?

Hi @smayne11, thanks for your question.

I think you should give the ANML classifier a shot. I’d be curious to see how many of your sequence variants are classified (or unclassified), and among those with some taxonomic labels, what fraction are being assigned Family, Genus, or Species-level information.

If the classifier isn’t working for you and you want to generate your own primer-specific classifier, you can start from the broader BOLD sequence and taxonomy files now hosted by QIIME 2 here:

Even if you lack the local computing power, you might dive into renting a machine through something like Google Cloud or AWS - that’s the route I ended up taking for some of these compute-heavy tasks.

Good luck, and do please let me know how you make out with the ANML classifier.


Thanks so much! I will definitely let you know how it goes with the ANML classifier.

Looks like those are newer versions of the files from the tutorial. Is there somewhere I can go for the most up-to-date versions in the future? I can download them from here, but I can’t for the life of me find them anywhere on the QIIME 2 website or forum.

I’m also planning on trying a classifier trained on bold_anml_seqs.qza and bold_anml_taxa.qza filtered to just records from the US & Canada using the raw metadata file. I assume that would be in the same place, but if not, do you have an updated version of it as well?

I believe the S3 buckets contain the same files as those linked at the top of this post; at least, I didn’t change them on purpose, so if the files are different for some reason please let me know. The metadata file is also linked at the top of the post, or you can click here to download it directly from the OSF repo. I’ve been encouraged to submit a new forum post clarifying just where these files will live and how to properly cite the resources - something I hope to do soon but haven’t had time to get done properly.

A long time ago I built a barplot of the various arthropod orders by their country of origin according to the BOLD records. My advice is to be wary of using geographic information as a requirement, as it may leave out many good-quality references that lack any country name. I’d be interested to know how different your outcomes are between data classified with US & Canada references only versus data classified with all BOLD references regardless of geographic information.

Good luck


Got it, sorry for the confusion. I was just noticing the “2021.04” in the address and thinking that meant they were from this month.

In case you weren’t aware, the classifier linked above no longer works on the most recent QIIME version, specifically because of the new scikit-learn version. It’s easy to train a new one on the seqs and taxa you provide, though, so it’s not a big issue. Edit: looks like the server I have access to can’t handle this step for the full database. It only has 64 GB of RAM, so that’s no surprise looking back at the tutorial. It seems I won’t be able to do a good comparison without getting access to more computing power, which may or may not be possible for me. I’ll still let you know how the US & Canada version works, which was possible with 64 GB of RAM.

Thanks for the warning on geographic filtering. I’ll let you know how that turns out too. I am mostly worried about adding a bunch of sequences that shouldn’t be in my study system (Western Mass.) at all and potentially decreasing the confidence of the classifier.

Feel free to send me a direct message if you want to have a longer discussion on how to (potentially) set up a cloud compute option to get the job done. I’m no longer actively involved in COI research, but am happy to chat if you think there are certain tasks you need more info about - especially if it’s for something you’re tackling out in the Berkshires (unless you’re calling Worcester “western” Mass…) :slight_smile:


Alright, here are the results. I did end up figuring out how to train a new version of bold_anml_classifier.qza. It’s in the linked folder at the bottom, named bold_anml_classifier2.qza to avoid confusion, though it should be exactly the same as the original. It should be usable with QIIME 2 v2021.2.0 and feature-classifier v2021.2.0; I can’t make any promises past that. More details on how I trained it using less RAM are below (tl;dr: use feature-classifier instead of rescript).

I got a little carried away and compared classifiers trained on four reference datasets:

  * anml: the full ANML dataset (~740,000 records)
  * anmlUSCA: the ANML dataset filtered to records listing “United States” or “Canada” as their location (~300,000 records)
  * USCA: an untrimmed dataset of all records from a BOLD search for “United States” & “Canada” (~200,000 records)
  * EastUSCAOnt: an untrimmed dataset of all records from a BOLD search for a list of eastern states and provinces, which is actually larger (~280,000 records)

The untrimmed datasets aren’t a perfect comparison, since they were downloaded and filtered differently: they were downloaded directly from the BOLD online database in March 2021 and then filtered with a Python script I’ll link at the bottom (the main filters: remove duplicate records, remove records with unallowed characters, and remove records that aren’t COI-5P). Exact search terms for each are linked below, and I had to download some in chunks, which possibly affects which records were included. Still, I think they’re at least a little informative.

All comparisons are based on a ~1000-sample library of bird fecal samples (plus blanks from both DNA extraction and PCR) denoised with dada2 (~3500 ASVs); COI was amplified using ZBJ primers. I ran each naive Bayes classifier at confidence thresholds of 0.7, 0.5, and 0.3, and also used BLAST to classify the data against each reference dataset at a few percent-identity thresholds as a kind of reference.

Comparing anml to anmlUSCA:
In general, anmlUSCA identified features to a more specific taxonomic level, but it depended a bit on the confidence parameter. At 0.7 they were almost exactly the same (anml = 59% to species, anmlUSCA = 60%); at 0.5 it was 71% vs 76%; and at 0.3 it was 81% vs 91% to species for anml and anmlUSCA respectively. There’s a comparison qzv linked at the bottom if you want to look at more details. The same general trend holds for BLAST. It looks to me like adding training data from outside the US and Canada reduces the confidence of the classifier by adding a bunch of sequences that don’t actually occur in the study area, though potentially that’s real uncertainty which the filtered dataset fails to capture.

Comparing to untrimmed data (also filtered differently):
At the species level, the naive Bayes classifiers trained on trimmed data classify more features, but at coarser taxonomic ranks it’s more mixed. BLAST is the other way around, classifying more to species with the untrimmed data. My guess is that this mostly reflects incorrect BLAST classifications (matches to sequence outside the target region), but it’s hard to know.

Training bold_anml_classifier2.qza:
Using feature-classifier I was able to train a naive Bayes classifier with less computing power. I don’t know the exact stats, but it took about 8.5 hours and maxed out at <65 GB of RAM (the total memory of the server I was on).

qiime feature-classifier fit-classifier-naive-bayes \
  --i-reference-reads bold_anml_seqs.qza \
  --i-reference-taxonomy bold_anml_taxa.qza \
  --p-classify--chunk-size 2000 \
  --o-classifier bold_anml_classifier2.qza

Though I used a chunk size of 2000, a chunk size of 4000 also looked like it was going to peak below 64 GB of RAM before I stopped it for unrelated reasons. A chunk size of 5000 got killed when I hit the ~64 GB RAM limit.

Last, here’s a link to all the files, code, etc. One note first: this probably goes without saying, but the data I’m using to compare classifiers is my thesis data, so feel free to use it for tinkering with classifiers, but nothing else. If you’re interested in it beyond that, feel free to reach out. The link: OSF | COI Database Cont.


Hi!
Thank you very much for this complete tutorial. I really appreciate it!
I have just started following it because we are about to analyze some COI data.

I have an issue with step 4.
I just ran seqkit for the min/max length filtering, but I retained only 89,555 sequences.
I used the bold_drep1_*.qza provided. I checked the input, and it contains 1,718,762 sequences.

The commands I have used are:

seqkit seq --min-len 660 --max-len 1000 -w 0 ./tmpdir_boldFullSeqs/dna-sequences.fasta > boldFull_lengthFiltd_seqs.fasta

grep -c '^>' boldFull_lengthFiltd_seqs.fasta
The seqkit version is v0.16.1.
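In case it helps to diagnose, I can also summarize the length distribution of the input with seqkit itself and report back:

# -a (--all) reports min/max lengths and quartiles
seqkit stats -a ./tmpdir_boldFullSeqs/dna-sequences.fasta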

Do you have any idea of what can be the problem?
Thank you very much!
Laura