The brain scans real-time quotes, volume, technical levels, and order flow. Every few seconds it checks for setups: breakouts, pullbacks, squeezes, gamma walls, anomalies; about 30 different setup types in total.
When something interesting happens, it generates a finding: a structured snapshot of the setup with all the surrounding context (price, volume, RSI, MACD, sector strength, time of day, and 20+ other features).
For every finding, the model produces a number between 0 and 1: the probability this setup will be a winner. Internally, this is a logistic regression: it multiplies each feature by a learned weight, sums the products, and squashes the result into the 0-1 range.
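That scoring step can be sketched in a few lines. Note this is a minimal illustration, not the app's actual code; the feature names and weight values here are made up:

```javascript
// Squash any real number into the (0, 1) range.
function sigmoid(z) {
  return 1 / (1 + Math.exp(-z));
}

// Multiply each feature by its learned weight, add the bias, squash.
// Unseen features default to a weight of 0 (no opinion yet).
function score(features, weights, bias) {
  let z = bias;
  for (const [name, value] of Object.entries(features)) {
    z += (weights[name] ?? 0) * value;
  }
  return sigmoid(z);
}

// Hypothetical finding with a few normalized features:
const finding = { rsi: 0.62, volumeSpike: 1.0, sectorStrength: 0.4 };
const weights = { rsi: 0.8, volumeSpike: 1.2, sectorStrength: 0.5 };
const p = score(finding, weights, -1.5); // a probability in (0, 1)
```

A positive weight pushes the probability up when the feature is present; a negative weight pushes it down.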
Some features get a positive weight (the model has learned they predict wins). Others get a negative weight. The model started "blank" but learned which features matter from the 200 synthetic training rows we seeded on first visit, and it keeps learning from every real trade you log.
Every prediction eventually has a known outcome: the trade either wins or loses. When you log a trade in Journal PRO and close it, that trade becomes a labeled training row: features + outcome (1 or 0).
The autopilot paper trader also creates labeled rows automatically: every high-confidence finding gets paper-traded and its outcome recorded.
When a labeled trade comes in, the model adjusts its weights via stochastic gradient descent. If it predicted 80% and the trade lost, the weight of every feature that contributed to that prediction gets nudged down, in proportion to the feature's value. If it predicted 30% and the trade won, the weights nudge up.
Each individual adjustment is tiny (learning rate ~0.01), but over hundreds of trades, the model starts to favor the features that actually work in YOUR market environment and trading style.
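The update rule can be sketched as standard online logistic-regression SGD. This is an assumption about the app's internals (the names and the exact rule are illustrative), but it matches the behavior described above:

```javascript
// One online SGD step for logistic regression.
// features: { name: value }, outcome: 1 (win) or 0 (loss).
function sgdUpdate(features, weights, bias, outcome, lr = 0.01) {
  // Current prediction with the existing weights.
  let z = bias;
  for (const [name, value] of Object.entries(features)) {
    z += (weights[name] ?? 0) * value;
  }
  const p = 1 / (1 + Math.exp(-z));

  // err > 0 means the model was too confident in a win;
  // err < 0 means it underestimated the win probability.
  const err = p - outcome;

  const newWeights = { ...weights };
  for (const [name, value] of Object.entries(features)) {
    // Nudge each weight against the error, scaled by the feature value
    // and the small learning rate (~0.01).
    newWeights[name] = (weights[name] ?? 0) - lr * err * value;
  }
  return { weights: newWeights, bias: bias - lr * err };
}
```

An 80% prediction on a loser gives err = 0.8, so weights on positive features shrink; a 30% prediction on a winner gives err = -0.7, so they grow.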
"Calibration" means: when the model says 70% confident, does it actually win 70% of the time? If yes, the model is well-calibrated and you can trust its probabilities. If not (e.g., it says 70% but only wins 50% of the time), it's overconfident and you should be skeptical.
The brain measures this with the Brier score and Expected Calibration Error (ECE); lower is better for both. It also watches for drift, when recent performance diverges from the lifetime average, which signals a regime change.
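Both metrics have standard definitions, sketched below. The 10-bin scheme for ECE is an assumption; the app may bin differently:

```javascript
// preds: array of { p, outcome } where p is the predicted probability
// and outcome is 1 (win) or 0 (loss).

// Brier score: mean squared gap between prediction and outcome.
function brierScore(preds) {
  const sum = preds.reduce((s, { p, outcome }) => s + (p - outcome) ** 2, 0);
  return sum / preds.length;
}

// ECE: bucket predictions by confidence, then compare each bucket's
// average confidence to its actual win rate, weighted by bucket size.
function expectedCalibrationError(preds, bins = 10) {
  const binned = Array.from({ length: bins }, () => []);
  for (const pr of preds) {
    const i = Math.min(bins - 1, Math.floor(pr.p * bins));
    binned[i].push(pr);
  }
  let ece = 0;
  for (const bin of binned) {
    if (bin.length === 0) continue;
    const avgConf = bin.reduce((s, x) => s + x.p, 0) / bin.length;
    const winRate = bin.reduce((s, x) => s + x.outcome, 0) / bin.length;
    ece += (bin.length / preds.length) * Math.abs(avgConf - winRate);
  }
  return ece;
}
```

A model that says 75% and wins 75% of those calls contributes nothing to ECE; the "says 70%, wins 50%" case from the previous paragraph contributes a 0.20 gap.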
The model's probability is used everywhere across the desk: it ranks the Conviction Stack, picks the daily Trade of the Day, sizes positions in Risk Sizer (model-weighted Kelly), filters Alerts (only high-conf trigger pushes), and shows up as a 🧬 chip on every signal feed.
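For the sizing piece, the textbook Kelly criterion driven by the model's probability looks like the sketch below. How Risk Sizer actually weights or caps this is an assumption; the fractional multiplier here is illustrative:

```javascript
// Full Kelly: f* = p - (1 - p) / b, where p is the model's win
// probability and b is the average win / average loss ratio.
// A fraction < 1 (e.g. half-Kelly) trades growth for lower variance.
function kellyFraction(p, winLossRatio, fraction = 0.5) {
  const full = p - (1 - p) / winLossRatio;
  // Never size a negative-edge trade.
  return Math.max(0, full * fraction);
}

// e.g. 60% win probability, wins 1.5x the size of losses:
const size = kellyFraction(0.6, 1.5); // about 0.167 of bankroll at half-Kelly
```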
When confidence ≥ 78% AND severity is high, the brain fires a desktop push notification (with a 30-min cooldown per symbol so you're not spammed).
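The gate is simple to state in code. This is a sketch of the rule as described, not the app's actual implementation; the field names are assumptions:

```javascript
const COOLDOWN_MS = 30 * 60 * 1000; // 30 minutes per symbol
const lastAlertAt = new Map(); // symbol -> timestamp of last push

// Returns true if a push should fire for this finding right now.
function shouldPush(finding, now = Date.now()) {
  if (finding.confidence < 0.78 || finding.severity !== "high") return false;
  const last = lastAlertAt.get(finding.symbol);
  if (last !== undefined && now - last < COOLDOWN_MS) return false;
  lastAlertAt.set(finding.symbol, now);
  return true;
}
```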
No magic. It's logistic regression (and a small MLP neural net in the ensemble page), which are well-understood statistical methods. The "AI" you hear about in chatbots is a different thing entirely: transformer-based LLMs trained on text. Our brain is much simpler and much more interpretable; you can literally read its weights.
Don't trust it blindly; verify. Every prediction comes with calibration metrics, a confidence interval, and an explanation of which features drove the prediction (SHAP-style). If the brain says 80% confident but the calibration page shows 80%-confidence calls win 50% of the time, you have evidence it's miscalibrated.
100% on your machine, in your browser's localStorage. Nothing is sent to a server. You can export the full brain state at any time (Model Backup) and restore it on a different device. If you clear your browser data, the brain forgets, so back it up if you care.
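The backup/restore round trip is ordinary localStorage plumbing. The storage key and state shape below are hypothetical, not the app's actual ones; the functions take the storage object as a parameter so the sketch is testable outside a browser:

```javascript
const BRAIN_KEY = "brainState"; // hypothetical storage key

// Export: a JSON string you can save to a file and move to another device.
function exportBrain(storage) {
  return storage.getItem(BRAIN_KEY) ?? "{}";
}

// Restore: validate the JSON first so a corrupt backup can't
// overwrite a working brain.
function restoreBrain(storage, backupJson) {
  JSON.parse(backupJson); // throws on invalid JSON
  storage.setItem(BRAIN_KEY, backupJson);
}
```

In the browser you'd pass `window.localStorage`; clearing site data wipes that key, which is exactly why the export exists.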
It needs labeled examples. The first 10-20 real trades will improve it noticeably. After 100+ trades, it's customized to your trading style. After 500+ trades, it's a real tool. Synthetic seed data gets it useful from minute 1, but real trades are what matter.
Yes. LR Tuner changes how aggressively it adapts to new data. Model Trainer runs a full retrain from scratch. Ensemble A/B lets you compare two models and promote the winner.
The brain has drift detection: if recent performance diverges from lifetime performance, it flags it. You'll see a warning on Brain Hub and Calibration. The fix is usually a full retrain. If you're moving from a low-vol to high-vol regime (or vice versa), expect a few weeks of recalibration.
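A drift check of this kind can be sketched as a comparison of recent versus lifetime win rates. The window size and threshold below are illustrative assumptions, not the app's actual tuning:

```javascript
// outcomes: chronological array of 1 (win) / 0 (loss) results.
// Flags drift when the recent win rate diverges from the lifetime
// win rate by more than the threshold.
function detectDrift(outcomes, window = 50, threshold = 0.15) {
  if (outcomes.length < window * 2) return false; // not enough history
  const mean = (arr) => arr.reduce((s, x) => s + x, 0) / arr.length;
  const lifetime = mean(outcomes);
  const recent = mean(outcomes.slice(-window));
  return Math.abs(recent - lifetime) > threshold;
}
```

A regime change shows up here as a sustained gap: the lifetime average moves slowly, while the recent window tracks the new environment.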
Check Brain vs SPY: that page benchmarks the brain's returns against the passive S&P 500. Be honest about it. If the brain isn't beating SPY net of effort and risk, simplify and use index funds.