MetONTIIME help

Hi everyone, does anyone have any step-by-step tutorials on how to use MetONTIIME? I will have nanopore data in the future and would like to use this pipeline, but I don't know where to start.

Best wishes

Hello Martyn,

The MetONTIIME pipeline runs in a single step, so I'm not sure what a tutorial would cover...

Have you discovered the config file? Most settings live in this one file, which you edit before running the pipeline. There are many choices to make.
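For example, here's a minimal sketch of the usual workflow (treat parameter names like --workDir and --resultsDir as placeholders; check metontiime2.conf for the names your version actually uses):

```bash
# Open the config and adjust the settings for your run
# (input/output paths, database, classifier, threads, ...).
nano metontiime2.conf

# Launch the pipeline. Individual settings can also be overridden
# on the command line with a double-dash prefix; whether you need
# a -profile (docker, singularity, ...) depends on your setup.
nextflow -c metontiime2.conf run metontiime2.nf \
  --workDir /home/user/run1/work \
  --resultsDir /home/user/run1/results
```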

Hi Colin, thank you for this. So once the data has been imported, it can be treated the same as if you were using QIIME 2?

Best wishes

I think so! This pipeline just calls QIIME 2, so the output artifacts should be normal QIIME 2 files.
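For example, assuming a working QIIME 2 installation, you can inspect or unpack any of the outputs with the standard tools:

```bash
# Show the UUID, type, and format of an artifact produced by the pipeline.
qiime tools peek taxonomy.qza

# Unpack an artifact's data files into a plain directory
# for use outside QIIME 2.
qiime tools export \
  --input-path taxonomy.qza \
  --output-path exported-taxonomy
```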

I try not to bother folks, but in this case let's reach out to @MaestSi for advice on using this pipeline!

Dear @colinbrislawn and @Mantella86,
yes, the pipeline just runs some QIIME 2 commands and produces standard qza artifacts. I would recommend working through the QIIME 2 tutorials in the docs, and maybe the Nextflow tutorials if you want to understand how the code is organized into "processes" and "channels". You should be able to find most of the information you need in the pipeline's README file.
Best,
Simone


Hi!
Just letting you (and others) know what I did, in case it helps. I'm not very tech-savvy at all, but it is quite straightforward to run, unless there are errors... :sweat_smile:

I downloaded Nextflow to run MetONTIIME, following the instructions given, on my Linux Ubuntu computer.
I made sure the working directory was the MetONTIIME folder (cd MetONTIIME), as the pipeline needs the metontiime2.conf and metontiime2.nf files.
I downloaded the Greengenes2 QIIME 2 artifacts provided and therefore set --importDb=FALSE.
I have taxonomy.qza and seqs.qza in a folder called importDb inside the resultsDir, so I don't need to provide the database in fasta and tsv formats for the pipeline. A rough sketch of this setup is below.
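Here's roughly what that looked like (the /home/user/... paths are examples, and I'm writing --workDir and --resultsDir from memory, so double-check the parameter names against metontiime2.conf; --importDb and --concatenateFastq are the ones I actually set):

```bash
# Put the pre-imported Greengenes2 artifacts in an importDb folder
# inside the results directory, so the import step can be skipped.
mkdir -p /home/user/metontiime_results/importDb
cp taxonomy.qza seqs.qza /home/user/metontiime_results/importDb/

# Run from the MetONTIIME folder, which holds the .conf and .nf files.
cd /home/user/MetONTIIME
nextflow -c metontiime2.conf run metontiime2.nf \
  --workDir /home/user/metontiime_work \
  --resultsDir /home/user/metontiime_results \
  --importDb=FALSE \
  --concatenateFastq=FALSE
```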

My first try ended with an error because the pipeline couldn't find the directories. I fixed it by providing full paths starting with "/home/". There might be other ways to fix it, but I'm definitely not savvy enough to figure that out.
On my second try, I found out it needed a metadata file, even though the code says it will create one at runtime. I just made a TSV file listing which barcode is which sample (sample-id); an example is below. Perhaps it needed the metadata file because I had already concatenated my ONT reads into barcode01.fastq.gz and so on, and set --concatenateFastq=FALSE.
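Mine looked something like this (sample names are made up; I'm assuming the standard QIIME 2 metadata format, where the first column header is sample-id):

```bash
# Build a minimal tab-separated metadata file mapping each
# barcode to a sample name (the names here are invented).
printf 'sample-id\tdescription\n' >  sample-metadata.tsv
printf 'barcode01\tsoil_site_A\n' >> sample-metadata.tsv
printf 'barcode02\tsoil_site_B\n' >> sample-metadata.tsv
```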
On my third try, unfortunately the dereplication step needed 100 GB of memory, which I don't have (I only have 60 GB). I was using the VSEARCH and then BLAST options for the classifier, to run it on full-length 16S reads, but I think I have too many reads even after filtering... Reading the comments on GitHub, I see that it is a very memory-intensive pipeline.

Anyways, hope you get it working for your data! :slight_smile: :crossed_fingers:

Also, an awesome thing is that if you get an error further down in the analysis, you can add -resume to your command and, once you've fixed the error(s), it picks up where it left off. :partying_face:
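In practice that just means re-running the same command with the flag added:

```bash
# Re-run after fixing the problem; Nextflow's -resume flag
# (single dash: it's a Nextflow option, not a pipeline parameter)
# reuses the cached results of the steps that already finished.
nextflow -c metontiime2.conf run metontiime2.nf -resume
```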


Hi, sorry for the late reply. If you still need assistance with the pipeline, please open an issue on the GitHub page.
Best,
Simone

