Model Accuracy Tracker
Transparent tracking of our prediction model's performance. We log the model's rating for each recent match and, once it completes, check whether the favoured team won. No cherry-picking: every match with sufficient data is included.
How CSDB tracks prediction model accuracy
Most prediction models keep their performance opaque. CSDB takes the opposite approach: every match the model rates with sufficient data is logged, and the actual outcome is recorded after the match completes. The headline numbers below are computed from the full match log, not a hand-picked subset.
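To make "logged" concrete, one entry in such a match log could look like the sketch below. The field names are hypothetical illustrations, since CSDB's actual schema isn't published here.

```python
from dataclasses import dataclass

@dataclass
class MatchRecord:
    """One entry in the match log (hypothetical fields for illustration)."""
    match_id: str        # e.g. an internal or HLTV match id
    favourite: str       # team the model favoured before the match
    confidence: float    # model's win probability for the favourite (0.5-1.0)
    favourite_won: bool  # recorded after the match completes
```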
What the metrics mean
- Overall accuracy — percentage of matches where the model's favourite won. A 60%+ hit rate on CS2 BO3s is meaningful given how high-variance the game is (all four metrics here are computed in the sketch after this list).
- Current streak — consecutive correct (positive) or incorrect (negative) predictions, useful for spotting hot or cold periods.
- Calibration — when the model says "70% confidence," does the favourite actually win ~70% of the time? Calibration measures whether the model's confidence scores are honest.
- Brier score — a probabilistic accuracy measure (lower is better) that rewards the model for being confident when it's right and humble when it's wrong.
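To make these definitions concrete, here is a minimal sketch of how the four metrics could be computed from a completed match log. It assumes the hypothetical `MatchRecord` above and a log ordered oldest to newest; the calibration bucketing (nearest 10%) is an illustrative choice, not necessarily what CSDB uses.

```python
from collections import defaultdict

def accuracy(log: list[MatchRecord]) -> float:
    """Overall accuracy: fraction of matches where the favourite won."""
    return sum(r.favourite_won for r in log) / len(log)

def current_streak(log: list[MatchRecord]) -> int:
    """Consecutive correct (positive) or incorrect (negative) picks,
    counted backwards from the most recent match."""
    streak = 0
    for r in reversed(log):
        if streak == 0:
            streak = 1 if r.favourite_won else -1
        elif streak > 0 and r.favourite_won:
            streak += 1
        elif streak < 0 and not r.favourite_won:
            streak -= 1
        else:
            break
    return streak

def calibration(log: list[MatchRecord]) -> dict[float, float]:
    """Observed favourite win rate per confidence bucket (rounded to 10%).
    A calibrated model wins ~70% of its 70%-confidence picks."""
    buckets = defaultdict(list)
    for r in log:
        buckets[round(r.confidence, 1)].append(r.favourite_won)
    return {c: sum(wins) / len(wins) for c, wins in sorted(buckets.items())}

def brier_score(log: list[MatchRecord]) -> float:
    """Mean squared error of the stated win probabilities; lower is better.
    Penalises confident misses far more than cautious ones."""
    return sum((r.confidence - r.favourite_won) ** 2 for r in log) / len(log)
```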
What we do and don't include
- ✅ Every BO1, BO3, and BO5 played by HLTV-eligible teams
- ✅ Both upset wins and chalk results
- ❌ Matches where the model lacked sufficient data on one or both teams (typically tier-3 newcomers)
- ❌ Forfeits and walkovers, since no actual play occurred (see the filter sketch after this list)
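Expressed as a filter, the inclusion rules might look like the following sketch; the `match` fields are assumptions for illustration, not CSDB's actual API.

```python
def is_tracked(match) -> bool:
    """Apply the inclusion rules above; field names are illustrative."""
    if match.is_forfeit:                           # forfeits/walkovers: no actual play
        return False
    if not (match.team1_has_data and match.team2_has_data):
        return False                               # model lacks data on a team
    return match.best_of in (1, 3, 5) and match.hltv_eligible
```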
Why transparency matters
Any model can look good on hand-picked matches. By publishing the full track record, CSDB lets you decide whether the model's edge is real. If you're using the hot picks page or the predictions tool for actual decisions, this page is where you check whether to trust them.