Hi @bsteve1120,
I hope I'm interpreting this correctly: your reviewer is concerned you have 6,000 reads for 60 samples, as in ~100 reads/sample? I don't blame them; I would be annoyed, too.
Let's start with the fact that there is an approximately fixed number of reads in a sequencing run. So, imagine we have a sequencing run with 1,000,000 reads. The average number of reads per sample (if my pools are at equal concentrations) is going to be that 1,000,000 reads divided by the number of samples. So, I could spend all 1M on a single sample. (This feels like a waste of money to me, but YMMV.) I could put on 10 samples and get 100,000 reads each. I could do 100, and get 10,000 reads, etc. I'm assuming that you didn't multiplex into oblivion, but even if you did, there's still a maximum number of reads available on a run.
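If it helps to see the arithmetic, here's a toy sketch (the 1,000,000-read run is just a made-up number):

```python
# Toy arithmetic: a run produces a roughly fixed number of reads, and (with
# equal pooling) each sample gets its share of that fixed total.
total_reads = 1_000_000  # hypothetical run output

for n_samples in (1, 10, 100, 1000):
    print(f"{n_samples} samples -> ~{total_reads // n_samples:,} reads/sample")
```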
In terms of checking depth, I recommend following the reads across steps.
In q2-demux, there's a summarize function that will tell you how many reads there are. Start with your demultiplexed data and look at every processing step. How many reads do you have at the beginning? We can't rescue data that isn't there. If you trimmed primers, how many do you lose? When you denoise, what do your denoising stats look like? Are you seeing big drops in numbers in any of those steps?
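For example, here's a minimal sketch with the QIIME 2 Artifact API (the file name `demux.qza` is a placeholder for your demultiplexed reads; `qiime demux summarize` on the command line does the same thing):

```python
import qiime2
from qiime2.plugins import demux

# Load the demultiplexed reads and build the summarize visualization,
# which reports per-sample read counts.
seqs = qiime2.Artifact.load('demux.qza')
summary = demux.visualizers.summarize(seqs)
summary.visualization.save('demux-summary.qzv')
```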
IIRC, you should be able to concatenate the demux summaries and maybe the DADA2 summary into a single tabular file. I'm relatively visual, so sometimes I will plot the average proportion of reads lost at each step, so I can see which step, if any, is giving me a big drop-off.
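Roughly what I mean, as a sketch: this assumes you've exported the DADA2 denoising stats to a TSV (the column names below match denoise-paired output, so adjust them to whatever your file actually has), and the same idea extends to read counts pulled from any other step:

```python
import pandas as pd
import matplotlib.pyplot as plt

# Exported DADA2 denoising stats; 'dada2-stats.tsv' is a placeholder name.
# comment='#' drops the second '#q2:types' header row that QIIME 2 writes.
stats = pd.read_csv('dada2-stats.tsv', sep='\t', index_col='sample-id', comment='#')

# Steps tracked in the denoise-paired stats; rename to match your file.
steps = ['input', 'filtered', 'denoised', 'merged', 'non-chimeric']

# Fraction of each sample's starting reads that survives each step.
retained = stats[steps].div(stats['input'], axis=0)

# One grey line per sample, plus the mean in black, so the step with the
# big drop-off stands out.
for _, row in retained.iterrows():
    plt.plot(steps, row.values, color='grey', alpha=0.3)
plt.plot(steps, retained.mean().values, color='black', marker='o', linewidth=2)
plt.ylabel('Fraction of input reads retained')
plt.savefig('read-retention.png', dpi=150)
```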
I would predict that you're losing reads either at primer trimming, in quality filtering, or at read joining. The loss could also come from annotation, if that's a filter you use. (For example, dropping any read that doesn't have at least a phylum-level annotation.) Depending on where your reads are getting lost, there are different solutions to look at.
If I misinterpreted, I'm happy to talk about the other case.
Best,
Justine