Real machine learning. Logistic regression with binary cross-entropy loss, trained via stochastic gradient descent. Each finding the brain emits captures a
22-feature snapshot of market state. 30 minutes later the brain rates the outcome as hit / miss / flat. Hits and misses (not flats) become labeled training data; the model trains one sample at a time, updating weights to reduce prediction error.
- Feature vector (22 dims): RSI, ATR%, RVOL, MA distances, IV pct, sector strength, brain weight, regime score, VIX, time-of-day, setup type (one-hot), severity, coincident-finding count
- Loss: binary cross-entropy. Each per-sample update adjusts each weight by ~0.05 × prediction error × feature value
- Output: P(win) for any new setup, visible on Model Confidence + Conviction Stack
- Versioning: snapshot any time, A/B compare on Model Versions
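The per-sample update described above can be sketched in a few lines. This is a minimal illustration, not the actual implementation: the function and variable names are hypothetical, the 0.05 learning rate is taken from the text, and the 22-dim feature vector is stubbed with placeholder values.

```python
import math

LEARNING_RATE = 0.05   # the ~0.05 step size mentioned above
N_FEATURES = 22        # RSI, ATR%, RVOL, MA distances, etc.

def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + math.exp(-z))

def predict_p_win(weights: list, bias: float, features: list) -> float:
    """P(win) for a setup: sigmoid of the weighted feature sum."""
    z = bias + sum(w * x for w, x in zip(weights, features))
    return sigmoid(z)

def sgd_update(weights: list, bias: float, features: list, label: float):
    """One per-sample SGD step on binary cross-entropy.

    For logistic regression the BCE gradient is dL/dw_i = (p - y) * x_i,
    so each weight moves by learning_rate * prediction_error * feature_value,
    matching the update rule described in the bullets above.
    """
    p = predict_p_win(weights, bias, features)
    error = p - label  # prediction error (p - y); label is 1.0 = hit, 0.0 = miss
    for i in range(len(weights)):
        weights[i] -= LEARNING_RATE * error * features[i]
    bias -= LEARNING_RATE * error
    return weights, bias

# Hypothetical usage: one labeled sample arriving 30 minutes after a finding.
weights, bias = [0.0] * N_FEATURES, 0.0
snapshot = [0.5] * N_FEATURES          # placeholder 22-feature market snapshot
weights, bias = sgd_update(weights, bias, snapshot, 1.0)  # outcome rated a hit
```

Flats are simply never passed to `sgd_update`, so they contribute no gradient, and `predict_p_win` is what a new setup's Model Confidence score would be read from.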