How to tolerate small barcode errors when using the "qiime cutadapt demux-single" command

Sorry, I'm new to QIIME 2, but this problem has been troubling me for a while and I couldn't find a solution in the tutorials.

In short, I don't have a barcode .fq.gz file, only a TSV file. I demultiplexed my fq.gz file with the following command:

qiime cutadapt demux-single \
  --i-seqs multiplexed-seqs.qza \
  --m-barcodes-file barcode.tsv \
  --m-barcodes-column Barcode \
  --o-per-sample-sequences demultiplexed-seqs.qza \
  --o-untrimmed-sequences untrimmed.qza

The command ran without any problems, and I got both the sequences that were demultiplexed by barcode.tsv and those that were not. But after I carefully checked the untrimmed.qza file of reads that weren't demultiplexed, I realized that their "incorrect" barcodes were actually sequencing mistakes, such as an extra base or a missing base.

How do I relax the demultiplexing process so that sequences with very similar, but not identical, barcodes are still grouped together?

Similarly, in the subsequent step of removing non-biological sequences, will I face the same problem of overly strict parameters making me lose sequences with minor errors? The command is as follows:

qiime cutadapt trim-single \
  --i-demultiplexed-sequences demultiplexed-seqs.qza \
  --o-trimmed-sequences trimmed-seqs.qza

Have you tried playing with

  --p-error-rate PROPORTION Range(0, 1, inclusive_end=True)
                        Maximum allowed error rate.             [default: 0.1]


You can calculate the error-rate proportion that is suitable for your barcodes and pass it to the plugin.
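For example, assuming an 8-nt barcode (adjust to the actual length in your barcode.tsv), tolerating one mismatch or indel corresponds to an error rate of 1/8 = 0.125. A minimal sketch of the calculation, re-using the file names from your demux command (note that cutadapt counts indels toward the error rate by default, which matters here since your errors are extra or missing bases):

```shell
# Assumed barcode length of 8 nt -- replace with the actual length
# from your barcode.tsv.
barcode_length=8
allowed_errors=1   # tolerate one mismatch or indel per barcode

# --p-error-rate is allowed errors divided by barcode length.
error_rate=$(awk -v e="$allowed_errors" -v l="$barcode_length" \
  'BEGIN { printf "%.3f", e / l }')
echo "error rate: $error_rate"   # 0.125 for 1 error in 8 nt

# Re-run the demux step from the question with the relaxed rate
# (only if qiime is available in this environment):
command -v qiime >/dev/null && qiime cutadapt demux-single \
  --i-seqs multiplexed-seqs.qza \
  --m-barcodes-file barcode.tsv \
  --m-barcodes-column Barcode \
  --p-error-rate "$error_rate" \
  --o-per-sample-sequences demultiplexed-seqs.qza \
  --o-untrimmed-sequences untrimmed.qza \
  || echo "qiime not on PATH; run inside your QIIME 2 environment"
```

Be careful not to set the rate so high that one sample's reads can match another sample's barcode; keep the allowed errors below half the minimum edit distance between your barcodes.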



An off-topic reply has been split into a new topic: Dada2 without denoising

Please keep replies on-topic in the future.

It is always a good idea to create a new topic when the main issue changes, so I moved your second question to a new topic. This way it will get more attention and will be answered by the most competent and available person.



This topic was automatically closed 31 days after the last reply. New replies are no longer allowed.