Findings (live-tagged)
0
Training rows (live)
0
Model n_trained
0
Recent test accuracy
—
📉 Recent training loss (last 100 steps)
🎯 Probability distribution of last 50 findings
📡 Live event feed
Every emit, outcome, and training step the brain processes.
⚖ Top feature weights
Which inputs the brain has learned matter. Large positive weights predict wins; large negative weights predict losses.
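As a rough illustration of how a weights panel like this can be built, here is a minimal sketch that ranks features by absolute weight. The feature names and values are hypothetical placeholders, not the brain's real state, and the dictionary stands in for whatever the model actually stores.

```python
# Hypothetical feature weights from a linear/logistic-style model.
# Names and values are illustrative only.
weights = {
    "volume_spike": 0.82,
    "rsi_14": -0.61,
    "spread": 0.05,
    "momentum_5m": -0.03,
}

# Rank by absolute magnitude: big positive predicts wins, big negative losses.
ranked = sorted(weights.items(), key=lambda kv: abs(kv[1]), reverse=True)
for name, w in ranked:
    label = "predicts wins" if w > 0 else "predicts losses"
    print(f"{name:>12}  {w:+.2f}  ({label})")
```

Sorting by magnitude rather than raw value keeps strongly negative features near the top, where they belong in a "what matters" view.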
🔬 What "self-learning" looks like
A working learning brain shows three signals on this page over time:
1. Loss curve trending down. Each training step should nudge the loss toward zero. If loss is flat or rising, the brain is stuck.
2. Test accuracy > baseline. If accuracy beats the 'always guess up' baseline, real signal is being learned. Check via the Historical Trainer.
3. Feature weights diverging from zero. A trained brain has clear opinions: some features have weight > 0.5, others < -0.5. A brain with all weights near zero hasn't learned anything yet.
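The three signals above can be demonstrated end to end on synthetic data. This is a self-contained sketch of an online logistic learner, not the brain's actual training code: one feature carries the label, the other is noise, and after a few hundred steps the loss falls, accuracy beats the guess-the-majority baseline, and the informative weight diverges from zero.

```python
import math
import random

random.seed(0)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Synthetic findings: the label depends on feature 0; feature 1 is pure noise.
data = []
for _ in range(500):
    x = [random.gauss(0, 1), random.gauss(0, 1)]
    y = 1 if x[0] + random.gauss(0, 0.3) > 0 else 0
    data.append((x, y))

w, b, lr = [0.0, 0.0], 0.0, 0.1
losses = []
for x, y in data:
    p = sigmoid(w[0] * x[0] + w[1] * x[1] + b)
    # Per-step cross-entropy loss (signal 1 tracks this over time).
    losses.append(-(y * math.log(p + 1e-9) + (1 - y) * math.log(1 - p + 1e-9)))
    g = p - y  # gradient of the loss w.r.t. the logit
    w = [w[i] - lr * g * x[i] for i in range(2)]
    b -= lr * g

# Signal 1: average loss over the last 100 steps is below the first 100.
early = sum(losses[:100]) / 100
late = sum(losses[-100:]) / 100

# Signal 2: accuracy beats the 'always guess up' baseline.
acc = sum(
    (sigmoid(w[0] * x[0] + w[1] * x[1] + b) > 0.5) == (y == 1) for x, y in data
) / len(data)
baseline = sum(y for _, y in data) / len(data)

# Signal 3: the informative feature's weight diverges from zero; the noise
# feature's weight stays small.
print(f"loss {early:.2f} -> {late:.2f}, acc {acc:.2f} vs baseline {baseline:.2f}")
print(f"weights: informative {w[0]:+.2f}, noise {w[1]:+.2f}")
```

A stuck brain shows the opposite picture on the same checks: flat loss, accuracy pinned at the baseline, and both weights hovering near zero.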