📊 Multiplier distribution
Distribution of active-learning multipliers applied to resolved examples. A healthy mix spreads examples across all buckets; too many in the "low" bucket suggests the uncertainty signals are weak.
📈 Gain signal
Average prediction error for high-multiplier examples versus low-multiplier examples. If the system is working as intended, high-multiplier examples should show larger errors: those really were the harder examples.
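A minimal sketch of how this diagnostic could be computed, assuming each resolved example carries its predicted probability, outcome, and multiplier (the 2.0 bucket threshold and the tuple layout are illustrative assumptions, not this module's actual interface):

```python
def gain_signal(examples, threshold=2.0):
    """Mean absolute prediction error for high- vs low-multiplier examples.

    examples  -- iterable of (predicted_prob, outcome, multiplier) tuples,
                 where outcome is 1.0 for a win and 0.0 for a loss
    threshold -- multiplier cutoff separating "high" from "low" (assumed)
    """
    high = [abs(p - y) for p, y, m in examples if m >= threshold]
    low = [abs(p - y) for p, y, m in examples if m < threshold]

    def mean(xs):
        return sum(xs) / len(xs) if xs else 0.0

    # A working system should show mean(high) > mean(low).
    return mean(high), mean(low)
```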
🧪 Try it
See what multiplier a hypothetical resolved trade would receive.
📚 How it works
Classical training treats every example the same. But "easy" examples (model says 90%, wins 90%) carry little new information. The real learning signal comes from examples the model was uncertain about.

This module computes a sample-weight multiplier in [1.0, 3.0] from three uncertainty signals captured at prediction time:
  1. Boundary distance: how close to 0.5 was the prediction?
  2. MC dropout std: how much did random feature masks shift the prediction?
  3. Bootstrap divergence: how much did the K=5 bagged models disagree?
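One way the three signals might be combined into a [1.0, 3.0] multiplier is sketched below. The equal weighting, the std normalization cap, and the function name are illustrative assumptions; the source does not specify the actual formula:

```python
def active_learning_multiplier(prob, mc_dropout_std, bootstrap_std,
                               max_std=0.25):
    """Combine three uncertainty signals into a multiplier in [1.0, 3.0].

    prob            -- model probability at prediction time
    mc_dropout_std  -- std of predictions under random feature masks
    bootstrap_std   -- std across the K=5 bagged models' predictions
    max_std         -- assumed cap used to normalize the stds to [0, 1]
    """
    # Boundary distance: 1.0 at prob == 0.5, falling to 0.0 at 0 or 1.
    boundary = 1.0 - abs(prob - 0.5) / 0.5
    # Normalize the two spread signals to [0, 1] (illustrative cap).
    dropout = min(mc_dropout_std / max_std, 1.0)
    bootstrap = min(bootstrap_std / max_std, 1.0)
    # Equal-weight average, mapped linearly from [0, 1] into [1.0, 3.0].
    u = (boundary + dropout + bootstrap) / 3.0
    return 1.0 + 2.0 * u
```

Under this sketch, a confident, stable prediction stays near 1.0 while a borderline, high-variance prediction approaches 3.0.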
When a trade resolves, the standard sample weight (derived from the R-multiple) is multiplied by this active-learning multiplier before being passed to Model.train(). The gradient gets a larger push from uncertain examples, biasing learning toward the decision boundary where it matters most.
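The weighting step above can be sketched as follows. The R-multiple-to-weight mapping shown here (scale with |R|, floored at 1.0) and the `model.train` signature are assumptions about the surrounding code, not confirmed details:

```python
def resolved_trade_weight(r_multiple, al_multiplier):
    """Final sample weight for a resolved trade.

    r_multiple    -- realized R-multiple of the trade
    al_multiplier -- active-learning multiplier in [1.0, 3.0]
    """
    # Illustrative base weight: scale with |R|, floored at 1.0 so every
    # resolved example still contributes something.
    base_weight = max(1.0, abs(r_multiple))
    # The active-learning multiplier amplifies uncertain examples.
    return base_weight * al_multiplier

# Usage sketch with an assumed training call:
# model.train(features, label,
#             sample_weight=resolved_trade_weight(r_multiple, multiplier))
```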

Reference: Settles (2009) "Active Learning Literature Survey".