Current market regime
Ensemble prediction: —
Short (1d) n_trained: 0
Mid (5d) n_trained: 0
Long (20d) n_trained: 0
🎯 Per-(regime × horizon) accuracy matrix
Each cell shows the rolling accuracy of the matching horizon model in the matching regime. The BEST cell in the current regime's row gets ensemble priority (see the sketch after the column labels below).
SHORT (1d)
MID (5d)
LONG (20d)
⚖ Ensemble weights (current regime)
Weights = each horizon's edge above coin-flip (accuracy minus 50%, floored at 0) in the current regime, normalized. A horizon with no signal (≤50% accuracy) gets weight 0.
๐Ÿ” Active learning queue (uncertain predictions)
Symbols where the model is least confident (prediction in [40%, 60%]). These get re-captured every 5 minutes instead of every 30, so the brain explores its weak spots faster.
📊 Feature alpha map (which features predict wins vs losses)
For every resolved prediction we record each feature's contribution. A feature with a consistently positive average contribution on wins (and negative on losses) is REAL ALPHA. A feature with mixed contribution is noise. (Aggregation sketch after the header row below.)
FEAT | NAME | WINS / LOSSES | AVG CONTRIB ON WINS | ALPHA
🧬 What 'self-learning' means here
Multi-horizon (idea #1): three models trained in parallel, one per horizon. When you ask for a prediction, you get a weighted blend where the weights reflect each horizon's recent performance in THIS regime. If the 5-day model is winning in choppy markets while the 1-day model fails, the ensemble auto-routes to the 5-day.

Reward shaping (idea #2): training samples are weighted by |R-multiple|. A correct prediction on a +3R move trains the model with 6× the weight of a correct prediction on a 0.5R move. The brain learns to care about BIG, ACTIONABLE moves more than tiny drifts.

Active learning (idea #3): when the model says "I'm 50/50 confident" on a symbol, that prediction has the highest information value when resolved. So those symbols get re-captured 6× more often (5min vs 30min cooldown), accelerating learning where it matters most.

Feature alpha (idea #4): every resolved prediction is back-attributed, recording how much each feature contributed to the logit. Over thousands of predictions, this surfaces which features are REAL predictors vs noise. The brain can then prune low-alpha features or up-weight high-alpha ones.