Diversity plug-in error

Continuing the discussion from Plugin error from diversity:

Hi
I’m having a similar issue to the above discussion. I also made my biom table in Qiime1 and then imported it into Qiime 2 without error. However, when I go to perform diversity analyses, I get the same error as Mircea. I’ve checked my Feature Table, and I think I should now try to update my sequences? Which ones though? Any ideas as to how to update all of the names?

Thanks!

Laura

Hi @LauraMason,
Could you please share the precise command that you are using, and the complete error output?

It looks like that earlier discussion may or may not have been resolved…

In my opinion (having run into similar issues myself when migrating data from QIIME1 to QIIME2), the easiest fix may be to just import the representative sequences into QIIME2 and align/build a tree from scratch. But let’s see the error that you are contending with now and strategize from there.
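
For reference, rebuilding the tree would look roughly like this (a sketch only: the file names are placeholders for your own rep set fasta, and if your QIIME2 release lacks the align-to-tree-mafft-fasttree pipeline, the individual alignment mafft, alignment mask, phylogeny fasttree, and phylogeny midpoint-root steps accomplish the same thing):

qiime tools import \
  --type 'FeatureData[Sequence]' \
  --input-path rep_set.fna \
  --output-path rep-seqs.qza

qiime phylogeny align-to-tree-mafft-fasttree \
  --i-sequences rep-seqs.qza \
  --o-alignment aligned-rep-seqs.qza \
  --o-masked-alignment masked-aligned-rep-seqs.qza \
  --o-tree unrooted-tree.qza \
  --o-rooted-tree rooted-tree.qza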

Thanks!

Hi Nicholas,
Here’s the code I was running & the error:
qiime diversity core-metrics-phylogenetic --i-phylogeny rooted-tree.qza --i-table funseqs_featuretable_gt3.qza --p-sampling-depth 1000 --m-metadata-file map_file.tsv --output-dir core-metrics-results

error " All feature_ids must be present as tip names in phylogeny. feature_ids not corresponding to tip names (n=1326)
And then it gives me a list of all 1326 (I’m assuming) otus

I’m a little stuck
Thanks

Laura

Wow, it sounds like all features (or close to it) are missing. Where did your rooted-tree.qza come from? Did you make this in qiime1 and then import it, or do you have a file of representative sequences that you could use to re-align and re-build your phylogeny in QIIME2? That really may be the easiest fix here (unless that's what you have done already).

Also, please follow the steps described in the post that you linked to above. Could you please provide the outputs of the commands that @thermokarst recommends in that thread? It would be useful to understand whether all feature IDs are missing from the tree, and how the names differ between the tree and the feature table.
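If it helps, one rough way to eyeball the mismatch is to export both artifacts and compare the IDs by hand (a sketch only; depending on your QIIME2 release, export may instead take the artifact as a positional argument with --output-dir):

qiime tools export \
  --input-path funseqs_featuretable_gt3.qza \
  --output-path exported-table

qiime tools export \
  --input-path rooted-tree.qza \
  --output-path exported-tree

biom summarize-table --observations \
  -i exported-table/feature-table.biom \
  -o table-feature-ids.txt

The tip names are in exported-tree/tree.nwk and the feature IDs are listed in table-feature-ids.txt, so you can see whether the IDs are entirely different or just formatted differently (e.g., an added prefix or quoting).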

Hi Nicholas
Sorry for the slow response. I made my rooted tree in Qiime2 after importing the biom table from Qiime1. I’m going to repeat the process this weekend to see if I can figure out what I did and try to follow the steps above again. Thanks for your help, and I’m sure I’ll be posting again soon
Cheers
Laura

Hello
Attached are the commands I ran.

I tried changing the sampling depth to 500, and that seemed to help a little (fewer missing OTUs). However, when I tabulate the feature table, the OTUs reported as missing are present there. Also, I'm getting an error when I try to summarize my feature table, and I'm having an issue running FigTree, so I have not looked at the tree yet.

What do you think? Any input is appreciated!

QIim.txt (3.7 KB)

Hi @LauraMason,
Thanks for sharing the list of commands. In general this workflow looks fine, but a few things stick out. Let’s just work through that file:

I have a hunch that using make_otu_table.py could be an issue here, for example if the OTU map you gave it is not the final OTU map, or for some other reason does not match up with the representative sequences that are output from OTU picking. Most (or all?) qiime1 OTU picking workflows output a final OTU table, so you don’t need to run this command yourself. What OTU picking command are you using in qiime1?
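Just to illustrate what I mean (hypothetical file names), the classic qiime1 sequence is something like:

pick_otus.py -i seqs.fna -o seqs_otus -m usearch61 -s 0.97
pick_rep_set.py -i seqs_otus/seqs_otus.txt -f seqs.fna -o rep_set.fna
make_otu_table.py -i seqs_otus/seqs_otus.txt -o otu_table.biom

If the OTU map passed to make_otu_table.py is not the same one that produced the rep set (for example, if one of them was generated before a chimera or abundance filtering step), the table and the representative sequences end up with non-matching IDs, and anything built from the rep set (like your tree) will be missing features that are still in the table.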

It is also suspicious that you have zero-frequency OTUs in your OTU table before anything has been done to it (or so I understand from the notes in your command log). This could very well be the source of your issues: if you built a biom table from the wrong OTU map (e.g., one generated prior to a filtering step), those features would be in the OTU table but not in the reference sequences. You could use filter_otus_from_otu_table.py to restrict your OTU table to only the OTUs found in the fasta file prior to importing to QIIME2, or use feature-table filter-features after importing.
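In QIIME2, that second option looks roughly like this (a sketch, assuming you have imported your representative sequences as a FeatureData[Sequence] artifact named rep-seqs.qza; filter-features keeps only the features whose IDs appear in the metadata file or artifact you pass it):

qiime feature-table filter-features \
  --i-table funseqs_featuretable_gt3.qza \
  --m-metadata-file rep-seqs.qza \
  --o-filtered-table filtered-table.qza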

All these convoluted steps leave me wondering: why don’t you just import your raw sequences or qiime1-demultiplexed data into QIIME2 and do denoising/OTU picking in QIIME2? This might be the easiest way to ensure that everything runs smoothly in downstream steps.

Your feature-table summarize command is failing because you are providing the wrong input. Your command is:

qiime feature-table summarize --i-table split_library_output/rep_seqs_gt3.qza --o-visualization table.qzv --m-sample-metadata-file map_file.tsv

But you should be using the feature table (funseqs_featuretable_gt3.qza) as input instead.
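That is, something along these lines, plus the matching tabulate-seqs command for your representative sequences (assuming rep_seqs_gt3.qza is a FeatureData[Sequence] artifact):

qiime feature-table summarize \
  --i-table funseqs_featuretable_gt3.qza \
  --m-sample-metadata-file map_file.tsv \
  --o-visualization table.qzv

qiime feature-table tabulate-seqs \
  --i-data split_library_output/rep_seqs_gt3.qza \
  --o-visualization rep-seqs.qzv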

Could you please run those commands and then look at (and share with us) the outputs of summarize and tabulate-seqs?

Thanks!

Hello,

I repeated filtering my OTU table and my fasta file, and imported the correct files into Qiime2. The feature table summary:
Number of samples: 36
Number of features: 2068
Total frequency: 500,122

However, when I look at the OTU table summary in Qiime1, it looks like I have only 7 samples. I’m really not sure what is going on with this.

As far as importing raw sequences into Qiime2: I am working with 454 data, and I thought the OTU (ASV?) picking algorithm was Illumina-specific? Or is this where I would use vsearch as my OTU picking method? I tried that, ran into an error, and figured it was just easiest to use Qiime1 up to the biom table step and then import it. What do you recommend?

OTU picking method (qiime1):
pick_otus.py -i seqs.fna -o seqs_otus -m usearch61 -s 0.97

Thank you for your help

Laura

Hi @LauraMason,

This is all sounding really fishy. It sounds like something may have gone wrong in qiime1, which is unsupported at this point, so diagnosing this issue is a bit convoluted and difficult. Whenever I find myself in a position like this, I find it easier to wipe the slate clean and start over rather than retrace my steps!

You are correct: dada2 is specific to Illumina data (454 support is forthcoming). You should use vsearch for OTU picking.

Helping you debug a q2-vsearch error would probably be a lot easier than trying to debug your current error, now that it sounds like the issues might stretch all the way back to qiime1. (If you go that route, please open a new forum thread so it's easier for other users to search. :slightly_smiling_face:)

Were you able to import your 454 reads into QIIME2? I recall that you had issues with importing and I just want to make sure the vsearch error was not related to that.

Oh wow, that takes me back. :older_man: You're doing things the old-school way: one of the OTU picking workflows in qiime1, like pick_open_reference_otus.py, would save you some of the legwork (all of q2-vsearch's methods are similar workflows, so you don't need to manually convert OTU maps to feature tables). It might also be the cause of this issue, as I noted above, if you are building your OTU table off of the wrong OTU map.
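(For reference, not that I recommend staying in qiime1: that workflow is a single command roughly along these lines, with the reference path pointing at whatever Greengenes rep set you have locally, and it writes a final OTU table, rep set, and tree for you.)

pick_open_reference_otus.py \
  -i seqs.fna \
  -r /path/to/gg_13_8_otus/rep_set/97_otus.fasta \
  -m usearch61 \
  -o open_ref_otus/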

So my personal preference is to do it all in QIIME2, where support would be easier, but I suppose that depends on the vsearch error you are having. If you cannot import your 454 data into QIIME2 to begin with, that changes matters...

OK, so just to recap: it would be best at this point to filter my reads for chimeras in Qiime1 (this was included in my original workflow), and then import them into Qiime2 for demultiplexing, OTU picking via vsearch, etc.? (You’re right, this is sounding suspicious.)

As far as importing raw 454 data into Qiime2: my data are saved as .fna files. The Imports tutorial seems to cover this, but I wasn’t sure it was the right step for my data the first time I tried it. The vsearch error I got was most likely operator error, but I’ll post again if I run into problems.

Thanks so much!!

No need to use qiime1 for chimera filtering. QIIME2 can do this for you after OTU picking — see this tutorial.
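The gist of the de novo approach from that tutorial is roughly the following (placeholder file names; your clustered table and rep seqs go in as the inputs):

qiime vsearch uchime-denovo \
  --i-table table.qza \
  --i-sequences rep-seqs.qza \
  --o-chimeras chimeras.qza \
  --o-nonchimeras nonchimeras.qza \
  --o-stats uchime-stats.qza

qiime feature-table filter-features \
  --i-table table.qza \
  --m-metadata-file nonchimeras.qza \
  --o-filtered-table table-nonchimeric.qza

qiime feature-table filter-seqs \
  --i-data rep-seqs.qza \
  --m-metadata-file nonchimeras.qza \
  --o-filtered-data rep-seqs-nonchimeric.qza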

The earlier you get your data into QIIME2, the better (since qiime1 is no longer officially supported, it will be more difficult for us to diagnose problems originating there).

If you only have .fna data, it looks like you will need to demultiplex in QIIME1 and import qiime1-demultiplexed data. This is covered in the vsearch tutorial.

So just to recap:

  1. You have 454 data as .fna (no quality scores).
  2. Demultiplex in QIIME1.
  3. Import to QIIME2 as SampleData[Sequences], as covered in the OTU picking tutorial.
  4. OTU pick with q2-vsearch, as described in the OTU picking tutorial (see the sketch below).
  5. Chimera filter using q2-vsearch, as described in the chimera tutorial.
  6. QIIME on. :sun_with_face:
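
To make steps 3 and 4 concrete, here is a rough sketch (placeholder file names; seqs.fna is the post-demultiplexing output of split_libraries.py from qiime1):

qiime tools import \
  --type 'SampleData[Sequences]' \
  --input-path seqs.fna \
  --output-path demux-seqs.qza

qiime vsearch dereplicate-sequences \
  --i-sequences demux-seqs.qza \
  --o-dereplicated-table derep-table.qza \
  --o-dereplicated-sequences derep-seqs.qza

qiime vsearch cluster-features-de-novo \
  --i-table derep-table.qza \
  --i-sequences derep-seqs.qza \
  --p-perc-identity 0.97 \
  --o-clustered-table table-dn-97.qza \
  --o-clustered-sequences rep-seqs-dn-97.qza

Step 5 is then the uchime-denovo/filter pattern sketched a couple of posts up, applied to table-dn-97.qza and rep-seqs-dn-97.qza.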

I hope that helps!
