Thanks @Nicholas_Bokulich. That's exactly where I got stuck, but I finally managed to work it out by doing the following:
Demultiplexed the data in QIIME 1:
split_libraries.py \
  -m region1_map.txt \
  -f 454/1.TCA.454Reads.fna \
  -q 454/1.TCA.454Reads.qual \
  -b 10 \
  -L 500 \
  -l 380 \
  -d --record_qual_scores \
  -n 1000000 \
  -o region1_w_Q_split_library
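In case the short flags are not obvious, this is how I understand them from the QIIME 1 split_libraries.py help (please correct me if I have misread any of them):
# -b 10                       barcodes are 10 nt long
# -L 500 / -l 380             maximum / minimum sequence length to keep
# -d (--record_qual_scores)   also write seqs_filtered.qual so the quality scores are kept
# -n 1000000                  start numbering the sequence IDs at 1000000
# -o                          output directory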
Merged the resulting .fna and .qual files into a fastq:
convert_fastaqual_fastq.py \
  -f 454/region1_w_Q_split_library/seqs.fna \
  -q 454/region1_w_Q_split_library/seqs_filtered.qual \
  -o region1_fastq
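As an optional sanity check (assuming the merged file ends up at 454/region1_fastq/seqs.fastq, as it did for me), the read count can be checked, since a fastq stores four lines per record:
echo $(( $(wc -l < 454/region1_fastq/seqs.fastq) / 4 ))   # four fastq lines per read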
Created a fastq file for each sample:
split_sequence_file_on_sample_ids.py \
  -i 454/region1_fastq/seqs.fastq --file_type fastq \
  -o region1_fastq_files
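Before compressing, I quickly confirmed that one fastq had been written per sample (or at least per sample that has reads):
ls region1_fastq_files/*.fastq | wc -l   # one file per sample with reads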
Compressed the fastq files to fastq.gz using gzip:
gzip -r *
Finally imported to QIIME 2 using the “Fastq manifest” format:
qiime tools import \
  --type 'SampleData[SequencesWithQuality]' \
  --input-path se-33-manifest \
  --output-path region1-demux.qza \
  --input-format SingleEndFastqManifestPhred33
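In case it helps anyone else, se-33-manifest is just a CSV in the single-end Phred 33 manifest format (sample-id, absolute-filepath, and direction columns). The sample IDs and paths below are placeholders, so they should be replaced with real absolute paths:
sample-id,absolute-filepath,direction
sample-1,/full/path/to/region1_fastq_files/sample-1.fastq.gz,forward
sample-2,/full/path/to/region1_fastq_files/sample-2.fastq.gz,forward
A loop along these lines can also generate it, assuming the per-sample files are named <sample-id>.fastq.gz:
echo "sample-id,absolute-filepath,direction" > se-33-manifest
for f in region1_fastq_files/*.fastq.gz; do
  echo "$(basename "$f" .fastq.gz),$PWD/$f,forward" >> se-33-manifest
done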
After going through these steps I was able to run DADA2 denoise-pyro. I am not sure whether this is ideal, but it worked. I would appreciate it if you could point out a better way of doing this.
Best
Mehmet