The principle: three independent uncertainty estimators (multi-horizon ensemble, bootstrap K=5 models, MC dropout) all output probabilities. When they ALL agree on direction AND magnitude, the prediction is robust. When they split, the prediction is fragile.

The math:
  • direction_agreement = fraction of estimators on the majority side (1.0 = unanimous)
  • magnitude_agreement = 1 - (pooled std / 0.20). Tight spread = high agreement.
  • score = (direction + magnitude) / 2, range [0, 1]
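The two agreement terms can be sketched in a few lines of Python. This is a minimal sketch, not the system's actual code: the function name is made up, signed return predictions stand in for whatever the estimators actually emit (if they emit probabilities, direction can be taken relative to 0.5 instead of 0), and the floor at zero on the magnitude term is an assumption made so the score stays in [0, 1]:

```python
from statistics import pstdev

def consensus_score(preds, scale=0.20):
    """Combine direction and magnitude agreement into a [0, 1] score.

    preds: one signed prediction per estimator (hypothetical interface).
    scale: spread (std) at which magnitude agreement hits zero;
           0.20 matches the formula in the text.
    """
    longs = sum(1 for p in preds if p >= 0)
    # Fraction of estimators on the majority side (1.0 = unanimous).
    direction = max(longs, len(preds) - longs) / len(preds)
    # Tight pooled spread = high agreement; floored at 0 (assumption).
    magnitude = max(0.0, 1 - pstdev(preds) / scale)
    return (direction + magnitude) / 2
```

Three estimators clustered near the same value score close to 1.0; two estimators on opposite sides with a wide spread fall into the fragmented range.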

Tiers:
  • ≥0.85 STRONG: amplify size 1.10×
  • 0.65-0.85 MODERATE: full size
  • 0.45-0.65 MIXED: half size
  • <0.45 FRAGMENTED: 0.40× size, or skip
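The tier table maps directly to a lookup function. A sketch (the function name is hypothetical, and the "or skip" decision for FRAGMENTED is left to the caller):

```python
def tier_and_size(score):
    """Map a consensus score in [0, 1] to (tier, position-size multiplier).

    Thresholds and multipliers follow the tier table above; boundary
    values resolve to the higher tier (an assumption).
    """
    if score >= 0.85:
        return "STRONG", 1.10
    if score >= 0.65:
        return "MODERATE", 1.00
    if score >= 0.45:
        return "MIXED", 0.50
    return "FRAGMENTED", 0.40
```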
Table columns: SYM | SCORE | DIRECTION | MAGNITUDE | METHODS | TIER | VISUAL
🔬 Reading the methods
Multi-horizon ensemble: three time-horizon models (1d, 5d, 20d). If the short horizon says LONG but the long horizon says SHORT, the setup sits at a regime transition, which is risky.
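The regime-transition check reduces to a sign comparison across horizons. A sketch (helper name and the signed-prediction interface are assumptions):

```python
def horizons_conflict(pred_1d, pred_5d, pred_20d):
    """True when the three horizon models disagree on direction,
    i.e. the setup may be at a regime transition."""
    signs = {p >= 0 for p in (pred_1d, pred_5d, pred_20d)}
    return len(signs) > 1  # more than one sign present = conflict
```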
Bootstrap ensemble: five models trained on random 60% subsets of data. If their outputs cluster tightly, signal is robust. If they spread, model is uncertain.
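The bootstrap loop described here (five models, random 60% subsets) can be sketched generically. The `fit(rows) -> model` callable is a hypothetical stand-in for whatever trainer the system actually uses:

```python
import random

def bootstrap_predictions(train, fit, x, k=5, frac=0.60, seed=0):
    """Train k models on random `frac` subsets of `train` and return
    their predictions for input x; a tight cluster = robust signal."""
    rng = random.Random(seed)
    n = max(1, int(len(train) * frac))
    preds = []
    for _ in range(k):
        subset = rng.sample(train, n)   # random 60% subset, no replacement
        model = fit(subset)
        preds.append(model(x))
    return preds
```

The spread of the returned list is exactly the pooled-std input to the magnitude-agreement formula above.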
MC Dropout: 20 samples with random 20% feature masking. Measures fragility to feature noise.
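The masking pass can be sketched as follows; `model` here is any callable from a feature list to a scalar, and the parameter names are assumptions, not the system's real API:

```python
import random

def mc_dropout_predictions(model, features, n_samples=20, mask_rate=0.20, seed=0):
    """Run the model n_samples times, zeroing a random ~20% of features
    each pass; the spread of outputs measures fragility to feature noise."""
    rng = random.Random(seed)
    preds = []
    for _ in range(n_samples):
        masked = [0.0 if rng.random() < mask_rate else f for f in features]
        preds.append(model(masked))
    return preds
```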

METHODS column: count of estimator methods loaded and producing predictions. Should be 3 once enough training data accumulates.

Why all three: each captures a different uncertainty source. Multi-horizon = time-frame consensus. Bootstrap = data-subset robustness. Dropout = feature robustness. Real edge comes when all three say yes.