🧪 Try it
Enter a raw probability and see what the isotonic calibration maps it to:
📊 Fitted bins
The PAV algorithm produced these monotone bins from the (predicted, actual) pairs:
📚 PAV explained
Platt scaling fits a sigmoid. But the true calibration curve may be bumpy or have flat regions, which a sigmoid can't capture. Isotonic regression is non-parametric: it fits an arbitrary monotone step function via the Pool Adjacent Violators (PAV) algorithm:

  1. Sort (raw, actual) pairs by raw probability ascending
  2. Start each pair as its own block with weight=1
  3. Walk left-to-right. If a block violates the monotone constraint (its mean y exceeds the next block's), merge the two blocks and re-check leftward, since pooling can create a new violation
  4. The result is a piecewise-constant, monotone non-decreasing function
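The steps above can be sketched in Python. This is a minimal, illustrative implementation; the pair data in the example is invented:

```python
def pav(pairs):
    """Pool Adjacent Violators: fit a monotone non-decreasing step
    function to (raw_probability, actual_outcome) pairs."""
    # Step 1: sort by raw probability ascending.
    pairs = sorted(pairs)
    # Step 2: each pair starts as its own block: [x_start, sum_of_y, weight].
    merged = []
    for x, y in pairs:
        merged.append([x, y, 1])
        # Step 3: while the previous block's mean exceeds the new one's,
        # pool them and keep checking leftward.
        while len(merged) >= 2 and merged[-2][1] / merged[-2][2] > merged[-1][1] / merged[-1][2]:
            _, s, w = merged.pop()
            merged[-1][1] += s
            merged[-1][2] += w
    # Step 4: emit (x_start, block_mean) bins -- piecewise-constant, monotone.
    return [(x, s / w) for x, s, w in merged]


# Example: the (0.3, 0) pair violates monotonicity and gets pooled with (0.2, 1).
bins = pav([(0.1, 0), (0.4, 1), (0.2, 1), (0.3, 0)])
print(bins)  # [(0.1, 0.0), (0.2, 0.5), (0.4, 1.0)]
```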
Trade-offs vs Platt: isotonic is more flexible (better fit when the data isn't sigmoidal) but needs more samples to avoid overfitting individual outliers. Use Platt for <100 calibration pairs, isotonic for >500.

Where it fits: the unified predictor currently uses Platt (global) → regime-Platt (per-regime). Isotonic is a third option you can wire in if Platt's coverage diverges from target.
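If the wiring happens in Python, scikit-learn ships a ready-made isotonic fit that could stand in for a hand-rolled PAV. A hedged sketch, with invented calibration pairs (the unified predictor's actual data layout is not shown here):

```python
from sklearn.isotonic import IsotonicRegression

# Held-out (raw probability, outcome) calibration pairs -- invented for illustration.
raw_probs = [0.10, 0.20, 0.30, 0.40, 0.60, 0.80]
outcomes = [0, 1, 0, 1, 1, 1]

# y_min/y_max keep outputs in [0, 1]; out_of_bounds="clip" handles queries
# outside the fitted range instead of returning NaN.
iso = IsotonicRegression(y_min=0.0, y_max=1.0, out_of_bounds="clip")
iso.fit(raw_probs, outcomes)

# Map raw probabilities through the fitted monotone function.
calibrated = iso.predict([0.05, 0.35, 0.90])
```

Note that scikit-learn linearly interpolates between the fitted bin boundaries at predict time, so the mapping is piecewise-linear rather than a pure step function; the monotonicity guarantee is the same.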