SAMPLES TRAINED
--
--
HITS
--
positive labels
MISSES
--
negative labels
ROLLING ACCURACY
--
last 100 predictions
MODEL VERSION
--
--
📉 Training Loss (binary cross-entropy)
Lower = better. Spikes are individual hard samples. Trend down = learning.
📈 Rolling Accuracy
Random baseline = 50%. Anything above = learned edge.
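The rolling-accuracy card above tracks the last 100 predictions. A minimal sketch of that fixed-window counter, assuming a simple correct/incorrect comparison (the class and method names here are illustrative, not the dashboard's actual code):

```python
from collections import deque

class RollingAccuracy:
    """Accuracy over the most recent `window` predictions (illustrative sketch)."""

    def __init__(self, window=100):
        # 1 = correct call, 0 = wrong call; deque drops the oldest automatically
        self.results = deque(maxlen=window)

    def record(self, predicted_win, actual_win):
        self.results.append(1 if predicted_win == actual_win else 0)

    def value(self):
        if not self.results:
            return None  # no predictions yet: the card shows "--"
        return sum(self.results) / len(self.results)

acc = RollingAccuracy(window=100)
for predicted, actual in [(True, True), (True, False), (False, False), (True, True)]:
    acc.record(predicted, actual)
print(acc.value())  # 3 of 4 correct -> 0.75
```

Because the window is capped at 100, a lucky early streak ages out: the metric reflects recent edge, not lifetime performance.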
🧪 Hold-out Evaluation (full training set)
ACCURACY
—
PRECISION
—
RECALL
—
F1 SCORE
—
TP: -- FP: -- TN: -- FN: --
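The four hold-out metrics follow directly from the TP/FP/TN/FN counts on this card. A sketch of the standard formulas (the counts in the usage example are made up for illustration):

```python
def holdout_metrics(tp, fp, tn, fn):
    """Accuracy, precision, recall, F1 from confusion-matrix counts."""
    total = tp + fp + tn + fn
    accuracy = (tp + tn) / total if total else 0.0
    # precision: of the setups predicted as wins, how many actually hit
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    # recall: of the actual hits, how many the model predicted as wins
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    # F1: harmonic mean of precision and recall
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return accuracy, precision, recall, f1

# hypothetical counts, not real dashboard values
acc, prec, rec, f1 = holdout_metrics(tp=40, fp=10, tn=35, fn=15)
print(round(acc, 2), round(prec, 2), round(rec, 2), round(f1, 2))  # 0.75 0.8 0.73 0.76
```

Precision and recall matter here because hit/miss labels can be imbalanced; accuracy alone can look good while the model rarely calls a win.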
⚡ Training Events (last 80)
live · trains as outcomes get rated
TIME
OUTCOME
PREDICTED
LOSS
STATUS
🧠 How the model trains
Real machine learning: logistic regression with binary cross-entropy loss, trained via stochastic gradient descent. Each finding the brain emits captures a 22-feature snapshot of market state. Thirty minutes later the brain rates the outcome as hit / miss / flat. Hits and misses (not flats) become labeled training data; the model trains one sample at a time, updating its weights to reduce prediction error.
  • Feature vector (22 dims): RSI, ATR%, RVOL, MA distances, IV pct, sector strength, brain weight, regime score, VIX, time-of-day, setup type (one-hot), severity, coincident-finding count
  • Loss: binary cross-entropy. Each sample moves each weight by ~0.05 × prediction error × feature value
  • Output: P(win) for any new setup, visible on Model Confidence + Conviction Stack
  • Versioning: snapshot any time, A/B compare on Model Versions
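The per-sample update described above (logistic regression, BCE loss, SGD with a ~0.05 step on a 22-dim feature vector) can be sketched as follows. The dimensionality and learning rate come from the text; the function names and the toy training data are illustrative assumptions, not the brain's actual code:

```python
import math
import random

N_FEATURES = 22      # RSI, ATR%, RVOL, MA distances, etc., per the feature list
LEARNING_RATE = 0.05 # the ~0.05 step size mentioned above

weights = [0.0] * N_FEATURES
bias = 0.0

def predict_p_win(features):
    """Sigmoid of the weighted sum: the model's P(win) for a setup."""
    z = bias + sum(w * x for w, x in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))

def train_one(features, label):
    """One SGD step on binary cross-entropy for a single rated outcome.

    For logistic regression, dLoss/dw_i = (p - y) * x_i, so each weight
    moves by learning_rate * prediction_error * feature_value.
    """
    global bias
    p = predict_p_win(features)
    error = p - label  # prediction error; label is 1 for hit, 0 for miss
    for i, x in enumerate(features):
        weights[i] -= LEARNING_RATE * error * x
    bias -= LEARNING_RATE * error
    # BCE loss for this sample: what the loss chart plots point by point
    eps = 1e-12
    return -(label * math.log(p + eps) + (1 - label) * math.log(1 - p + eps))

# toy stream of rated outcomes (flats never reach this loop)
random.seed(0)
for _ in range(500):
    label = random.randint(0, 1)
    x = [random.gauss(label, 1.0) for _ in range(N_FEATURES)]  # synthetic features
    loss = train_one(x, label)
```

Training one sample at a time is why individual hard samples show up as spikes on the loss chart: a confidently wrong prediction produces a large BCE value for that single step, then the trend resumes.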