Thanks for the kind responses.
I found that my 30 samples were contained in distinct directories,
so I gathered them into one directory and ran the import command with
an extended manifest file:
sample-id,absolute-filepath,direction
91,$PWD/91/159-78_S76_L001_R1_001.fastq,forward
91,$PWD/91/159-78_S76_L001_R2_001.fastq,reverse
92,$PWD/91/159-79_S77_L001_R1_001.fastq,forward
92,$PWD/91/159-79_S77_L001_R2_001.fastq,reverse
....
119,$PWD/91/160-11_S10_L001_R1_001.fastq,forward
119,$PWD/91/160-11_S10_L001_R2_001.fastq,reverse
120,$PWD/91/160-12_S11_L001_R1_001.fastq,forward
120,$PWD/91/160-12_S11_L001_R2_001.fastq,reverse
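For completeness, the import command I used was along these lines (the manifest filename here is just a placeholder for whatever I actually named the file):

```shell
# Import paired-end reads listed in the manifest into a QIIME 2 artifact.
# PairedEndFastqManifestPhred33 matches the sample-id,absolute-filepath,direction
# manifest layout above.
qiime tools import \
  --type 'SampleData[PairedEndSequencesWithQuality]' \
  --input-path manifest_A4.csv \
  --input-format PairedEndFastqManifestPhred33 \
  --output-path demux_A4.qza
```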
Then, I summarized it into a .qzv file as follows:
demux_A4.qzv (287.7 KB)
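(The summary above was generated with the standard summarize command:)

```shell
# Produce an interactive quality/summary visualization of the demultiplexed reads.
qiime demux summarize \
  --i-data demux_A4.qza \
  --o-visualization demux_A4.qzv
```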
After this, I wanted to denoise the sequences to harvest OTUs (or a feature table?). So, I followed the DADA2 procedure:
qiime dada2 denoise-paired \
  --i-demultiplexed-seqs demux_A4.qza \
  --p-trim-left-f 13 \
  --p-trim-left-r 13 \
  --p-trunc-len-f 250 \
  --p-trunc-len-r 250 \
  --o-table table_A4.qza \
  --o-representative-sequences rep-seqs_A4.qza \
  --o-denoising-stats denoising-stats_A4.qza
denoising-stats_A4.qzv (1.2 MB)
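(In case it is relevant, the stats visualization attached above came from tabulating the denoising-stats artifact:)

```shell
# Render the DADA2 per-sample denoising statistics as a viewable table.
qiime metadata tabulate \
  --m-input-file denoising-stats_A4.qza \
  --o-visualization denoising-stats_A4.qzv
```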
It took a long time, but I think I made a mistake in choosing the parameters or in some other part,
because when I ran the command for clustering:
qiime vsearch cluster-features-open-reference \
  --i-table table_A4.qza \
  --i-sequences rep-seqs_A4.qza \
  --i-reference-sequences 85_otus.qza \
  --p-perc-identity 0.85 \
  --o-clustered-table table-or-85.qza \
  --o-clustered-sequences rep-seqs-or-85.qza \
  --o-new-reference-sequences new-ref-seqs-or-85.qza
it finished within 3 minutes. But it is supposed to take much longer, according to the tutorial.
So, these are the things I have wrestled with so far.
I am still a dabbler in this area, so I might be using awkward wording here and there.
But if you could give me a small clue toward the right direction, I would really appreciate it!