Songbird Loss function plot looks like (a bad) EKG

Hello,

I'm trying to use Songbird to find differential taxa in a dataset comparing two groups. My cross-validation plots have looked good, but the loss function plots look like a very bad EKG. I was reading this post; her loss function behaves normally, with an exponential decline followed by a plateau, while mine declines at first and then turns into a series of peaks and valleys. Can anyone help me figure out what I'm doing wrong, since the loss graphs don't look right?

Here is the command I ran that gave me the best plots (in my opinion):
qiime songbird multinomial \
  --i-table filtered_t1_cases_obstrl.qza \
  --m-metadata-file NAFLD_Mapping_JZ_SNA_V3.txt \
  --p-formula "Group" \
  --p-epochs 10000 \
  --p-differential-prior 0.5 \
  --p-summary-interval 1 \
  --p-num-random-test-examples 37 \
  --output-dir songbird_nafld_test_ex_37/

I have a total of 247 samples, distributed almost half and half (146 cases and 101 controls). I added the random-test-examples flag after reading in the tutorial that it can help when you have many samples. I don't think creating a test/training column would benefit me since the samples are pretty evenly distributed (but I will if people in the know think it would help).

I also tried increasing the epochs to 20000, since the tutorial said that can help when the loss doesn't plateau, but that just seemed to make the plot worse.

Attached are the outputs from the command above and from the run with epochs increased to 20000.
Any help/tips would be appreciated.

Thanks,
Samantha

Test examples = 37 & epochs = 10000

Test examples = 37 & epochs = 20000


@saatkinson what does your R2 / Q2 look like? That can help decide whether your model is overfitting or not.

If you don't want a noisy loss, the best approach is to lower the learning rate (--p-learning-rate). The default is now 1e-3; you can try 1e-4 or 1e-5 to see if that gives you a cleaner loss (although you will need to train for more epochs).
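To build intuition for why a lower learning rate smooths the loss, here's a toy stochastic-gradient-descent sketch in plain NumPy. This is not Songbird's actual optimizer, just a minimal illustration with made-up numbers: a large step size keeps the loss bouncing around the minimum (the "EKG" look), while a small one settles smoothly at the cost of slower convergence.

```python
import numpy as np

def noisy_descent(lr, steps=2000, seed=0):
    """Minimize f(x) = x^2 with a noisy gradient estimate; return the loss trace."""
    rng = np.random.default_rng(seed)
    x = 5.0
    losses = []
    for _ in range(steps):
        grad = 2 * x + rng.normal(scale=2.0)  # true gradient plus sampling noise
        x -= lr * grad
        losses.append(x ** 2)
    return np.array(losses)

# Compare the spread of the last 500 loss values: the large learning rate
# keeps jumping around the minimum, the small one gives a much calmer tail.
print("loss tail std, lr=1e-1:", noisy_descent(1e-1)[-500:].std())
print("loss tail std, lr=1e-3:", noisy_descent(1e-3)[-500:].std())
```

The same trade-off is why the advice above pairs a lower learning rate with more epochs.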


Hi @mortonjt,

I ran the commands to look at Q2, and in both instances it is extremely low (0.000245 and 0.000458 for 10000 and 20000 epochs, respectively). Below are the commands I ran to generate the null model and the paired summaries.

qiime songbird multinomial \
  --i-table filtered_t1_cases_obstrl.qza \
  --m-metadata-file NAFLD_Mapping_JZ_SNA_V3.txt \
  --p-formula 1 \
  --p-epochs 10000 \
  --p-differential-prior 0.5 \
  --p-num-random-test-examples 37 \
  --p-summary-interval 1 \
  --output-dir songbird_nafld_test_ex_37_null

qiime songbird summarize-paired \
  --i-regression-stats songbird_nafld_test_ex_37/regression_stats.qza \
  --i-baseline-stats songbird_nafld_test_ex_37/null_stats.qza \
  --o-visualization songbird_nafld_test_ex_37/vis_paired_summary.qzv

I also ran the null model without the num-random-test-examples flag, because I wasn't sure whether it should be there for the null model, but that made the Q2 drop to -0.064678.

Am I correct in interpreting this as my model severely overfitting the data? The FAQ has a section about lowering the differential prior; do you think that would help with what I'm seeing? Do you have other suggestions for correcting the fit?

Thanks,
Samantha


I can't fully comment on your results without seeing the loss plots. Do both your loss and CV curves decline?

If you see Q2 < 0, that is not a good sign (it means your predictive power is poor). The differential prior is worth making smaller (i.e. below 0.5).
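For intuition on reading those Q2 numbers: as I understand it, the Q2 from summarize-paired is a relative-error score, 1 minus the model's held-out error divided by the null (baseline) model's held-out error. So a Q2 near 0 means the formula barely improves on the null, and a negative Q2 means it predicts held-out samples worse than the null. A minimal sketch, with hypothetical error values:

```python
def pseudo_q2(model_cv_error, baseline_cv_error):
    """Q2 = 1 - (model held-out error / null-model held-out error).
    Near 1: strong predictive gain over the null.
    Near 0: essentially no gain.  Below 0: worse than the null (overfitting)."""
    return 1.0 - model_cv_error / baseline_cv_error

# Hypothetical mean held-out errors:
print(round(pseudo_q2(20.0, 100.0), 3))   # 0.8   -> strong predictive gain
print(round(pseudo_q2(99.9, 100.0), 3))   # 0.001 -> essentially no gain
print(round(pseudo_q2(110.0, 100.0), 3))  # -0.1  -> worse than the null
```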

Attached are my plots; the CV curve declines nicely, but the loss curve still looks like an EKG. I have not yet tried decreasing the learning rate. Do you recommend lowering that in conjunction with the differential prior? I ran the commands with a differential prior of 0.5; would 0.3 be an acceptable decrease?

Epochs 10000
convergence-plot-paired-10000

Epochs 20000
convergence-plot-paired-20000

Thanks for your help, I'm new to regression modeling.
Samantha

Hi @saatkinson, given how noisy the loss is, there would be a benefit to running with a lower learning rate (try 1e-4 and 1e-5). I'd try adjusting the differential prior only after getting the loss function smoothed out (I'd stick with just tuning those parameters for now).

Hi @mortonjt,

I tried both 1e-4 and 1e-5; both gave shape to the loss function, but it still goes up and down.

Here are my commands:
1e-4 and 30000 epochs
qiime songbird multinomial \
  --i-table filtered_t1_cases_obstrl.qza \
  --m-metadata-file NAFLD_Mapping_JZ_SNA_V3.txt \
  --p-formula "Group" \
  --p-epochs 30000 \
  --p-differential-prior 0.5 \
  --p-summary-interval 1 \
  --p-num-random-test-examples 37 \
  --p-learning-rate 0.0001 \
  --output-dir songbird_nafld_LR-0.0001

1e-5 and 40000 epochs
qiime songbird multinomial \
  --i-table filtered_t1_cases_obstrl.qza \
  --m-metadata-file NAFLD_Mapping_JZ_SNA_V3.txt \
  --p-formula "Group" \
  --p-epochs 40000 \
  --p-differential-prior 0.5 \
  --p-summary-interval 1 \
  --p-num-random-test-examples 37 \
  --p-learning-rate 0.00001 \
  --output-dir songbird_nafld_LR-0.00001

The learning rate of 1e-4 looks better to my untrained eye; is that accurate? Is this as good as it's going to get, or should I keep decreasing the learning rate until it's smooth? Should I have increased the number of epochs by more than 10000?

Thanks for your help!
Samantha

Hi @saatkinson, those fits are looking much better and are likely good enough for your downstream analysis (I'm not sure how much further tuning will help). Good luck!


This topic was automatically closed 31 days after the last reply. New replies are no longer allowed.