merging a new filtered dataset with my old dataset

For context, this is the initial code I used to import my reads and create my data tables.

Import (processed) demultiplexed reads into QIIME 2 using a manifest file:

qiime tools import \
  --type 'SampleData[PairedEndSequencesWithQuality]' \
  --input-path Import/ManifestITS.txt \
  --output-path Import/ITS-paired-end-demux.qza \
  --input-format PairedEndFastqManifestPhred33V2

#Added single quotation marks around SampleData[PairedEndSequencesWithQuality]

Generate visualization for imported reads

qiime demux summarize \
  --i-data Import/ITS-paired-end-demux.qza \
  --o-visualization Import/ITS-paired-end-demux.qzv
qiime tools view Import/ITS-paired-end-demux.qzv

mkdir DADA2_out

Denoise, dereplicate, and merge reads into ASVs using DADA2. R1 and R2 were already trimmed extensively with Cutadapt (which is why most reads are no longer 250 bp). The ITS1 amplicon is small, so the reads will overlap extensively. The advantage of trimming this much is that we remove errors associated with the 3' end of the reads.

qiime dada2 denoise-paired \
  --i-demultiplexed-seqs Import/ITS-paired-end-demux.qza \
  --p-trunc-len-f 220 \
  --p-trunc-len-r 220 \
  --verbose \
  --o-representative-sequences DADA2_out/ITS-rep-seqs-dada2.qza \
  --o-table DADA2_out/ITS-table-dada2.qza \
  --o-denoising-stats DADA2_out/ITS-stats-dada2.qza

Make a visualization of DADA2 stats artifact

qiime metadata tabulate \
  --m-input-file DADA2_out/ITS-stats-dada2.qza \
  --o-visualization DADA2_out/ITS-stats-dada2.qzv
qiime tools view DADA2_out/ITS-stats-dada2.qzv

Make a visualization of DADA2 sequences artifact

qiime feature-table tabulate-seqs \
  --i-data DADA2_out/ITS-rep-seqs-dada2.qza \
  --o-visualization DADA2_out/ITS-rep-seqs-dada2.qzv
qiime tools view DADA2_out/ITS-rep-seqs-dada2.qzv

Make a new folder called BIOM

mkdir BIOM

Export the table to make a BIOM ASV count file

qiime tools export --input-path DADA2_out/ITS-table-dada2.qza --output-path BIOM

Convert the BIOM ASV count file to tab-delimited text, which can be viewed in Excel. This is really just to show nicely how many times each feature is found in each sample (you can sort, etc.).

biom convert -i BIOM/feature-table.biom -o BIOM/feature-table.tsv --to-tsv

Following this, I exported my representative sequences as a FASTA file with their assigned ASV names, used ITSx to extract the ITS1 region, and then used VSEARCH to keep only the sequences that aligned sufficiently well to the UNITE fungal database. The result is a FASTA file.
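Roughly, those steps looked something like this (the UNITE reference path, the identity threshold, and the output names below are placeholders rather than the exact values I used):

qiime tools export \
  --input-path DADA2_out/ITS-rep-seqs-dada2.qza \
  --output-path exported-rep-seqs

#Exports the ASVs as exported-rep-seqs/dna-sequences.fasta

ITSx -i exported-rep-seqs/dna-sequences.fasta -o ITS-rep-seqs -t F --preserve T --save_regions ITS1

#--preserve T keeps the original ASV names in the fasta headers; the ITS1 output is ITS-rep-seqs.ITS1.fasta

vsearch --usearch_global ITS-rep-seqs.ITS1.fasta \
  --db unite_reference.fasta \
  --id 0.7 \
  --strand both \
  --matched ITS1-fungal-filtered.fasta

#Keeps only the sequences with a sufficient alignment to the UNITE reference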

My problem is that I now want to bring these sequences back into my original QIIME 2 workflow, replacing their untrimmed counterparts and getting rid of anything that wasn't a fungal sequence.

I have been struggling with how to do this, and ultimately I just want to be able to export an OTU table showing which sequence came from which sample. I am pretty new to bioinformatics, so if anyone has done this before I would greatly appreciate any help :slight_smile:

Hello Emma,

Welcome to the forums! :qiime2:

You can do that! First, import your filtered fasta file back into QIIME 2 as per-feature unaligned sequence data (FeatureData[Sequence]).
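A minimal sketch of that import, assuming your filtered file is called ITS1-fungal-filtered.fasta (use whatever name your VSEARCH step actually produced); the fasta headers need to be the original feature/ASV IDs so they match your table:

qiime tools import \
  --type 'FeatureData[Sequence]' \
  --input-path ITS1-fungal-filtered.fasta \
  --output-path ITS-rep-seqs-filtered.qza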

Then you could filter your DADA2 feature table to include only the features that are present in that imported fasta file:
https://docs.qiime2.org/2023.5/plugins/available/feature-table/filter-features/
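For example, something like this should work, since a FeatureData[Sequence] artifact can be passed directly as feature metadata (the output name here is just a suggestion):

qiime feature-table filter-features \
  --i-table DADA2_out/ITS-table-dada2.qza \
  --m-metadata-file ITS-rep-seqs-filtered.qza \
  --o-filtered-table DADA2_out/ITS-table-filtered.qza

From there you can export the filtered table and run biom convert on it, just like you did above, to get a per-sample count table containing only your trimmed, fungal ASVs.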