⚡ Manual update
The adaptive-LR loop runs every 5 minutes. Force one update now:
📊 Recent updates
📚 How it works
Adam already adapts per-parameter step sizes, but the global learning rate is fixed at this.lr = 0.05. When the market regime shifts (concept drift), a higher LR helps the model adapt faster; once loss has settled, a lower LR enables fine-tuning.
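To show where the global rate enters, here is a minimal sketch of one Adam step for a single parameter (function and variable names are illustrative, not the module's actual internals):

```typescript
// Minimal Adam step for one parameter. `lr` is the global learning
// rate this module tunes; Adam's per-parameter adaptation comes from
// the m/v moment estimates, which this module never touches.
function adamStep(
  param: number,
  grad: number,
  state: { m: number; v: number; t: number },
  lr = 0.05,
  beta1 = 0.9,
  beta2 = 0.999,
  eps = 1e-8
): number {
  state.t += 1;
  state.m = beta1 * state.m + (1 - beta1) * grad;
  state.v = beta2 * state.v + (1 - beta2) * grad * grad;
  const mHat = state.m / (1 - Math.pow(beta1, state.t)); // bias correction
  const vHat = state.v / (1 - Math.pow(beta2, state.t));
  // The global lr scales the whole adaptive step, so changing it
  // shifts every parameter's effective step size together.
  return param - (lr * mHat) / (Math.sqrt(vHat) + eps);
}
```

Because `lr` is a plain multiplier on the final step, raising or lowering it rescales all updates uniformly without disturbing the moment estimates.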

The algorithm, run every 5 minutes:
  1. Read model.lossHistory (last 30 entries)
  2. Fit a linear regression: loss_t = a + b × t
  3. If b > +0.001: loss is rising → drift detected → LR ×= 1.10
  4. If b < -0.0008: loss is falling → converging → LR ×= 0.97
  5. Otherwise: no change
  6. Bound LR ∈ [0.01, 0.10]
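The steps above can be sketched as a single controller function. Names like `lossHistory` follow the description; treat this as an illustrative sketch under those assumptions, not the exact implementation:

```typescript
// One pass of the adaptive-LR controller: fit loss_t = a + b*t by
// ordinary least squares over the last 30 losses, then nudge the
// global learning rate based on the slope b.
function updateLearningRate(lossHistory: number[], lr: number): number {
  const window = lossHistory.slice(-30);
  const n = window.length;
  if (n < 2) return lr; // not enough data to fit a trend

  // OLS slope: b = cov(t, loss) / var(t), with t = 0..n-1
  const tMean = (n - 1) / 2;
  const lMean = window.reduce((sum, x) => sum + x, 0) / n;
  let cov = 0;
  let varT = 0;
  window.forEach((loss, t) => {
    cov += (t - tMean) * (loss - lMean);
    varT += (t - tMean) ** 2;
  });
  const b = cov / varT;

  if (b > 0.001) lr *= 1.10;        // loss rising: drift, speed up
  else if (b < -0.0008) lr *= 0.97; // loss falling: converging, ease off

  return Math.min(0.1, Math.max(0.01, lr)); // bound LR to [0.01, 0.10]
}
```

The asymmetric thresholds and multipliers mean the controller reacts quickly to drift (+10% per tick) but decays gently (−3% per tick), so a single noisy window cannot collapse the learning rate.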
The Adam optimizer's internal state (the m and v moments) is preserved across LR changes, so the transition is smooth: only the global step-size scaling changes.

Complements DriftPSI + AdversarialValidator: those modules detect drift; this module responds to it by raising the learning rate so the model adapts to the new regime faster.