Short version: The brain is a self-learning machine learning model that watches the market, makes predictions, tracks its own accuracy, and gets smarter over time. It's not magic; it's a statistical pattern-matcher that runs entirely in your browser.

Below is the full picture in 6 steps. Each step links to a page where you can see that part of the brain in action.
1. Watch the market

The brain scans real-time quotes, volume, technical levels, and order flow. Every few seconds it checks for setups: breakouts, pullbacks, squeezes, gamma walls, anomalies, and more (about 30 different setup types in total).

When something interesting happens, it generates a finding: a structured snapshot of the setup with all the surrounding context (price, volume, RSI, MACD, sector strength, time of day, and 20+ other features).

๐Ÿ“ See it: Brain Live Feed shows findings as they happen.
2. Make a prediction

For every finding, the model produces a number between 0 and 1: the probability this setup will be a winner. Internally, this is a logistic regression: it multiplies each feature by a learned weight, sums them up, and squashes the result into the 0-1 range.

Some features get a positive weight (the model has learned they predict wins). Others get a negative weight. The model started "blank" but learned which features matter from the 200 synthetic training rows we seeded on first visit, and it keeps learning from every real trade you log.

๐Ÿ“ See it: What-If lets you drag the feature sliders and watch the prediction change live.
3. Track outcomes

Every prediction eventually has a known outcome: the trade either wins or loses. When you log a trade in Journal PRO and close it, that trade becomes a labeled training row: features + outcome (1 or 0).

The autopilot paper trader also creates labeled rows automatically: every high-confidence finding gets paper-traded and the outcome recorded.

๐Ÿ“ See it: Model Results shows accuracy, confusion matrix, and which trades the model called right vs wrong.
4. Learn from mistakes

When a labeled trade comes in, the model adjusts its weights via stochastic gradient descent. If it predicted 80% confidence and was wrong, the weight of every feature that contributed to that prediction gets nudged down. If it predicted 30% confidence and was right, the weights nudge up.

Each individual adjustment is tiny (learning rate ~0.01), but over hundreds of trades, the model starts to favor the features that actually work in YOUR market environment and trading style.

๐Ÿ“ See it: Training History plots the loss curve over time โ€” flat means learning has converged; rising means something is broken.
5. Stay calibrated

"Calibration" means: when the model says 70% confident, does it actually win 70% of the time? If yes, the model is well-calibrated and you can trust its probabilities. If not (e.g., it says 70% but only wins 50% of the time), it's overconfident and you should be skeptical.

The brain measures this with the Brier score and Expected Calibration Error (ECE); lower is better. It also watches for drift (when recent performance diverges from the lifetime average), which signals a regime change.

๐Ÿ“ See it: Calibration page shows the reliability diagram โ€” actual win % vs predicted on a 45ยฐ line.
6. Drive the UI

The model's probability is used everywhere across the desk: it ranks the Conviction Stack, picks the daily Trade of the Day, sizes positions in Risk Sizer (model-weighted Kelly), filters Alerts (only high-confidence findings trigger pushes), and shows up as a 🧬 chip on every signal feed.

When confidence ≥ 78% AND severity is high, the brain fires a desktop push notification (with a 30-minute cooldown per symbol so you're not spammed).

๐Ÿ“ See it: Brain Hub ties it all together on one page.

โ“ Frequently asked

Is this AI? GPT? Magic?

No magic. It's logistic regression (plus a small MLP neural net on the ensemble page), both well-understood statistical methods. The "AI" you hear about in chatbots is a different thing entirely: transformer-based LLMs trained on text. Our brain is much simpler and much more interpretable; you can literally read its weights.

Why should I trust it?

Don't trust it blindly; verify. Every prediction comes with calibration metrics, a confidence interval, and an explanation of which features drove the prediction (SHAP-style). If the brain says 80% confident but the calibration page shows 80%-confidence calls win 50% of the time, you have evidence it's miscalibrated.

Where does the data live?

100% on your machine, in your browser's localStorage. Nothing is sent to a server. You can export the full brain state at any time (Model Backup) and restore it on a different device. If you clear your browser data, the brain forgets โ€” back it up if you care.
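Export and restore reduce to serializing one blob. A sketch against a localStorage-like interface (the `brainState` key is a made-up placeholder, not the app's actual storage key):

```typescript
// Minimal subset of the Web Storage API we need.
interface StorageLike {
  getItem(key: string): string | null;
  setItem(key: string, value: string): void;
}

// Export the full brain state as one portable JSON blob.
function exportBrain(store: StorageLike): string {
  return store.getItem("brainState") ?? "{}";
}

// Restore a previously exported blob (validate before writing).
function restoreBrain(store: StorageLike, blob: string): void {
  JSON.parse(blob); // throws on corrupt input instead of clobbering state
  store.setItem("brainState", blob);
}
```

Because the interface matches `window.localStorage`, the same functions work in the browser and against an in-memory stub for testing.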

How fast does it learn?

It needs labeled examples. The first 10-20 real trades will improve it noticeably. After 100+ trades, it's customized to your trading style. After 500+ trades, it's a real tool. Synthetic seed data gets it useful from minute 1, but real trades are what matter.

Can I tune it?

Yes. LR Tuner changes how aggressively it adapts to new data. Model Trainer runs a full retrain from scratch. Ensemble A/B lets you compare two models and promote the winner.

What if the market changes?

The brain has drift detection: if recent performance diverges from lifetime performance, it flags it. You'll see a warning on Brain Hub and Calibration. The fix is usually a full retrain. If you're moving from a low-vol to a high-vol regime (or vice versa), expect a few weeks of recalibration.
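A minimal version of that drift check, assuming win rate as the performance measure (the 0.15 threshold and the function name are illustrative, not the app's actual rule):

```typescript
// Flag drift when the recent win rate diverges from the lifetime
// win rate by more than a threshold. Outcomes are 0/1 labels.
function driftDetected(
  lifetimeWinRate: number,
  recentOutcomes: number[],
  threshold = 0.15, // illustrative cutoff
): boolean {
  if (recentOutcomes.length === 0) return false;
  const recent =
    recentOutcomes.reduce((a, b) => a + b, 0) / recentOutcomes.length;
  return Math.abs(recent - lifetimeWinRate) > threshold;
}
```

A real implementation would also want a minimum sample size for the recent window, since a handful of trades can swing the recent rate wildly by chance.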

Does it actually beat the market?

Check Brain vs SPY: that page benchmarks the brain's returns against the passive S&P 500. Be honest about it. If the brain isn't beating SPY net of effort and risk, simplify and use index funds.

Bottom line: The brain is a tool, not a guru. It surfaces patterns faster than you can scan them, gives every signal a probability with an honest accuracy track record, and adapts as you trade. Use it. Don't worship it.