The problem with a single probability: when the brain says "70%", it doesn't tell you HOW confident it is in that 70%. Could be a robust 70% (drop any 20% of features and you still get 70%) or a fragile 70% (drop different features and you get anywhere from 50% to 90%).

MC Dropout estimates this cheaply: randomly mask 20% of features, predict, repeat 20 times, average + std. The math:
  • Mean = avg of 20 samples = point estimate of probability
  • Std = standard deviation across samples = uncertainty
  • p5, p95 = 5th and 95th percentiles = 90% confidence interval

Confidence categories:
  • HIGH std < 3pp → full 1.0× size multiplier
  • MEDIUM std 3-6pp → 0.75× size
  • LOW std 6-10pp → 0.50× size
  • VERY LOW std > 10pp → 0.25× size
The brain-bet page reads this and shrinks position size automatically when uncertainty is high. So you risk less on shaky picks.
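The tier lookup is simple enough to sketch directly. This is an illustrative mapping using the thresholds listed above; how the boundary values (exactly 3, 6, or 10pp) are binned is my assumption, not confirmed by the source.

```python
def size_multiplier(std_pp: float) -> tuple[str, float]:
    """Map prediction std (in percentage points) to a confidence tier
    and position-size multiplier. Boundary handling is assumed."""
    if std_pp < 3:
        return "HIGH", 1.00
    if std_pp < 6:
        return "MEDIUM", 0.75
    if std_pp <= 10:
        return "LOW", 0.50
    return "VERY LOW", 0.25
```

So a pick with std = 2pp keeps full size, while std = 12pp is cut to a quarter.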
📊 Live uncertainty per symbol
For each universe symbol with a live quote, computes Bayesian uncertainty. Wide intervals = low confidence even at high mean.
Table columns: SYM · MEAN · 90% INTERVAL · STD · CONFIDENCE · SIZE MULT · VISUAL
🔬 Why this matters for risk
Two trades both with 70% probability but very different uncertainty:

Trade A: 70% ± 2% → 90% interval [67%, 73%]. Robust. Take full 1% risk.
Trade B: 70% ± 12% → 90% interval [55%, 85%]. Fragile. Take 0.25% risk.

Trade A and Trade B have identical "best estimate" but very different actual risk profiles, and Bayesian uncertainty quantifies that difference automatically.
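The two trades can be worked through numerically. A minimal sketch, assuming the 1% base risk from the example and the tier thresholds listed earlier (boundary handling is my assumption):

```python
def risk_multiplier(std_pp: float) -> float:
    """Size multiplier from prediction std, per the tiers above."""
    if std_pp < 3:
        return 1.00   # HIGH confidence
    if std_pp < 6:
        return 0.75   # MEDIUM
    if std_pp <= 10:
        return 0.50   # LOW
    return 0.25       # VERY LOW

BASE_RISK_PCT = 1.0  # full risk per trade, from the example

for name, mean_pct, std_pp in [("Trade A", 70, 2), ("Trade B", 70, 12)]:
    risk = BASE_RISK_PCT * risk_multiplier(std_pp)
    print(f"{name}: {mean_pct}% ± {std_pp}pp -> risk {risk:.2f}%")
# Trade A: 70% ± 2pp  -> risk 1.00%
# Trade B: 70% ± 12pp -> risk 0.25%
```

Same mean, 4× difference in capital at risk, driven entirely by the uncertainty estimate.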

The intuition behind MC Dropout: if removing different subsets of features from your input barely changes the prediction, the model is finding robust signal across many features = high confidence. If different subsets produce wildly different predictions, the model is relying on a few features that might be noise = low confidence.