Enter a predicted probability and see the conformal interval with rigorous coverage.
📚 What is this?
MC Dropout uncertainty is heuristic. The "90% interval" produced by sampling the network under random dropout masks has no guarantee that it actually covers the true probability 90% of the time. It might be 60%. It might be 99%. You don't know.
Split Conformal Prediction fixes this. The algorithm:
1. Maintain a pool of (predicted, actual_win) pairs from resolved trades.
2. Score each by residual: s = |y - p|.
3. Find the (1-α)-quantile q of those residuals, using the finite-sample level ⌈(n+1)(1-α)⌉/n, where n is the pool size.
4. For any new prediction p, the interval [p - q, p + q] contains the realized outcome with probability ≥ 1-α. This guarantee requires no distributional assumptions, only exchangeability of the resolved and new data.
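The four steps above can be sketched in a few lines. This is a minimal illustration, not the app's actual implementation; the function name, the `pairs` format of `(predicted_prob, actual_win)` tuples, and the clipping of the interval to [0, 1] are assumptions made here for clarity.

```python
import numpy as np

def conformal_interval(pairs, p_new, alpha=0.10):
    """Split conformal interval for a new predicted probability.

    pairs: list of (predicted_prob, actual_win) from resolved trades,
           with actual_win in {0, 1}. Illustrative sketch only.
    """
    preds = np.array([p for p, _ in pairs], dtype=float)
    outcomes = np.array([y for _, y in pairs], dtype=float)
    scores = np.abs(outcomes - preds)  # residual scores s = |y - p|
    n = len(scores)
    # finite-sample corrected quantile level, capped at 1.0
    level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    q = float(np.quantile(scores, level))
    # clip to the valid probability range (an assumption of this sketch)
    lo, hi = max(p_new - q, 0.0), min(p_new + q, 1.0)
    return lo, hi, q
```

With only a handful of resolved trades, the corrected level hits 1.0 and q is the largest residual in the pool, so intervals are wide until the calibration pool grows.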
Empirical coverage = fraction of past pairs where the true outcome was within ±q of the prediction. If we say "90%" and it's only delivering 70%, the model is overconfident and we know it.
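Empirical coverage as defined above is a one-liner. A minimal sketch, reusing the assumed `(predicted_prob, actual_win)` pair format:

```python
def empirical_coverage(pairs, q):
    """Fraction of resolved trades whose outcome fell within ±q of the prediction.

    pairs: list of (predicted_prob, actual_win) tuples; q: conformal halfwidth.
    """
    hits = [abs(y - p) <= q for p, y in pairs]
    return sum(hits) / len(hits)
```

Comparing this number against the nominal 1-α is the overconfidence check: nominal 90% but empirical 70% means q is too small for the model's actual error distribution.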
Why this matters for sizing: if conformal halfwidth is 0.30 (i.e. ±30%) on a "65% LONG" prediction, the true probability could be anywhere from 35% to 95% — that's basically a coinflip. Size down accordingly. The unified predictor uses this signal.
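One simple way to turn the halfwidth into a sizing signal is a linear de-rating. This is an illustrative rule invented here, not the unified predictor's actual logic; the function name and the `max_halfwidth` cutoff are assumptions.

```python
def size_multiplier(halfwidth, max_halfwidth=0.5):
    """Scale position size down as the conformal interval widens.

    Linear de-rating: full size at halfwidth 0, zero size at
    max_halfwidth (a coinflip-width interval). Illustrative only.
    """
    return max(0.0, 1.0 - halfwidth / max_halfwidth)
```

Under this rule, the ±30% example above would cut position size to 40% of baseline, and any interval wide enough to straddle 50/50 would take the trade off entirely.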