Classification Model Accuracy Metrics, Confusion Matrix — and Thresholds

Paul Simpson
Published in Grow The Unicorn
7 min read · Nov 2, 2022


How do you compare several predictive binary classification models that you have built? Many consider the ROC curve's AUC the metric of choice for that, and I agree. What else, then? How about the Confusion Matrix: can it help us gauge a model's performance? Yes, but first we should review the basics of some popular metrics derived from it.
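To make those two ideas concrete, here is a minimal sketch in base R using simulated data (the variable names and the simulated scores are my own illustration, not output from the app): it builds a confusion matrix at a chosen threshold and computes the ROC AUC via the rank-based (Mann-Whitney) formulation.

```r
set.seed(42)
n      <- 1000
actual <- rbinom(n, 1, 0.4)                # hypothetical true classes (0/1)
score  <- plogis(actual * 1.5 + rnorm(n))  # hypothetical predicted probabilities

# Confusion matrix at a 0.5 threshold
predicted <- as.integer(score >= 0.5)
cm <- table(Predicted = predicted, Actual = actual)
print(cm)

# AUC as the probability that a random positive outscores a random negative
r   <- rank(score)
n1  <- sum(actual == 1)
n0  <- sum(actual == 0)
auc <- (sum(r[actual == 1]) - n1 * (n1 + 1) / 2) / (n1 * n0)
auc
```

Note that moving the threshold changes the confusion matrix (and every metric built from it), while the AUC stays fixed, which is exactly the interplay this article explores.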

Read on, or jump straight to my R Shiny app for Predictive Thresholds, Performance Measures and the Confusion Matrix.

Paul is a data scientist, web developer, musician/songwriter, and adjunct professor of master’s Data Analytics courses. linkedin.com/in/paulsimpson4datascience/