📋 Pick a prediction
🔬 Explanation
Select a finding on the left to see the per-feature contributions.
🧠 How to read SHAP-style explanations
Each prediction is the sum of (feature value × feature weight) contributions plus a bias term, pushed through a sigmoid. This page breaks that sum down: the top contributors (largest |value × weight| products) are the features driving the prediction the most. Green bars push toward WIN, red bars push toward LOSS. The bias term is the model's "baseline": passing it alone through the sigmoid gives the probability the model would predict before seeing any features.
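The breakdown above can be sketched in a few lines of Python. This is a minimal illustration, not the app's actual code: the feature names, values, and weights below are hypothetical, and the model is assumed to be a plain logistic (linear + sigmoid) scorer as described.

```python
import math

def explain(features, weights, bias):
    """Break a logistic prediction into per-feature contributions.

    features / weights: dicts mapping feature name -> value / learned weight.
    Returns (win probability, contributions sorted by absolute size).
    """
    # Each feature's contribution is value x weight.
    contributions = {name: features[name] * weights[name] for name in features}
    # The raw score (log-odds) is the bias plus all contributions.
    logit = bias + sum(contributions.values())
    probability = 1 / (1 + math.exp(-logit))  # sigmoid
    # Top contributors: largest |value x weight| products first.
    top = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    return probability, top

# Hypothetical example: positive contributions push toward WIN,
# negative ones push toward LOSS.
prob, top = explain(
    features={"elo_diff": 1.2, "home": 1.0, "rest_days": -0.5},
    weights={"elo_diff": 0.8, "home": 0.3, "rest_days": 0.2},
    bias=-0.1,
)
```

Here `top[0]` is the feature the explanation panel would list first, and `1 / (1 + math.exp(0.1))` is the baseline probability implied by the bias alone.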