Adam already adapts per-parameter step sizes, but the global learning rate is fixed at this.lr = 0.05. When the market regime shifts (concept drift), a higher LR helps the model adapt faster; once the loss has settled, a lower LR enables fine-tuning.
The algorithm, run every 5 minutes (sketched in code below):

- Read model.lossHistory (last 30 entries).
- Fit a linear regression: loss_t = a + b × t.
- If b > +0.001: loss is rising → drift detected → LR ×= 1.10.
- If b < -0.0008: loss is falling → converging → LR ×= 0.97.
- Otherwise: no change.
- Clamp LR to [0.01, 0.10].
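A minimal sketch of this loop in TypeScript, assuming an optimizer object with a mutable lr field and a plain number[] loss history (lossSlope and adaptLearningRate are illustrative names, not the module's actual API):

```typescript
// Slope b of an ordinary-least-squares fit loss_t = a + b * t.
function lossSlope(history: number[]): number {
  const n = history.length;
  const meanT = (n - 1) / 2;
  const meanL = history.reduce((sum, loss) => sum + loss, 0) / n;
  let num = 0;
  let den = 0;
  for (let t = 0; t < n; t++) {
    num += (t - meanT) * (history[t] - meanL);
    den += (t - meanT) * (t - meanT);
  }
  return num / den;
}

// Called every 5 minutes: nudge the global LR based on the loss trend.
function adaptLearningRate(optimizer: { lr: number }, lossHistory: number[]): void {
  const window = lossHistory.slice(-30); // last 30 entries
  if (window.length < 2) return;         // not enough points to fit a line
  const b = lossSlope(window);
  if (b > 0.001) {
    optimizer.lr *= 1.10;                // loss rising: drift detected
  } else if (b < -0.0008) {
    optimizer.lr *= 0.97;                // loss falling: converging
  }                                      // otherwise: leave LR unchanged
  optimizer.lr = Math.min(0.10, Math.max(0.01, optimizer.lr)); // clamp to [0.01, 0.10]
}

// Example wiring (hypothetical objects):
// setInterval(() => adaptLearningRate(optimizer, model.lossHistory), 5 * 60 * 1000);
```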
The Adam optimizer's internal state (the m and v moments) is preserved across LR changes, so the transition is smooth: only the global step-size scaling changes.
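To make the smoothness concrete, here is a hedged single-parameter Adam step: lr is read fresh on every call, while the m, v, and t buffers live on the optimizer and are untouched when lr is rescaled (the AdamState shape and adamStep name are assumptions for illustration):

```typescript
interface AdamState { lr: number; m: number; v: number; t: number; }

// One Adam update for a single scalar parameter. Only lr scales the step;
// m, v, and t carry over unchanged when lr is adjusted between calls.
function adamStep(s: AdamState, param: number, grad: number,
                  beta1 = 0.9, beta2 = 0.999, eps = 1e-8): number {
  s.t += 1;
  s.m = beta1 * s.m + (1 - beta1) * grad;        // first-moment estimate
  s.v = beta2 * s.v + (1 - beta2) * grad * grad; // second-moment estimate
  const mHat = s.m / (1 - Math.pow(beta1, s.t)); // bias correction
  const vHat = s.v / (1 - Math.pow(beta2, s.t));
  return param - s.lr * mHat / (Math.sqrt(vHat) + eps);
}
```

Because the moments persist, multiplying s.lr by 1.10 or 0.97 between steps rescales the whole update without disturbing the per-parameter adaptation Adam has already accumulated.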
Complements DriftPSI + AdversarialValidator: those modules detect drift; this module responds to it by raising the learning rate so the model adapts faster.