Recent innovations in Machine Learning and Deep Learning have opened the door for public and private decision-makers to expand the use of these predictive techniques. Traditionally, most attention has focused on the predictive accuracy and performance of these models rather than on their interpretability. In this study, we compare various Machine Learning models on a crime prediction task. We present weekly predictions for five types of crime at the block-group level using seven distinct algorithms, comparing their performance in predicting future crime while assessing their interpretability to inform community decision-makers. First, we rely on highly interpretable and widely used algorithms, such as Kernel Density Estimation (KDE), L1-penalized logistic regression, and decision trees, and then on Extreme Gradient Boosting (XGBoost) to provide a benchmark for the best achievable performance on this prediction task. Next, we train three recent interpretable ML models (Explainable Boosting Machine, RiskSLIM, and SIRUS) and compare their performance with that of the more traditional algorithms and XGBoost. Finally, we discuss the resulting metrics to characterize the potential performance-interpretability trade-off facing decision-makers who use these models to inform crime reduction and prevention efforts.
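
The sketch below illustrates the kind of comparison the abstract describes: fitting interpretable baselines alongside an XGBoost benchmark and scoring each on held-out data. It is not the authors' code; the synthetic classification data, the specific hyperparameters, and the AUC metric are assumptions for illustration (the study itself uses weekly block-group crime data and also evaluates EBM, RiskSLIM, and SIRUS).

```python
# Illustrative sketch only: interpretable baselines vs. a gradient-boosting
# benchmark. Synthetic data stands in for the study's crime counts.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import roc_auc_score
from xgboost import XGBClassifier

# Hypothetical stand-in for block-group-level crime features/labels.
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

models = {
    # Highly interpretable, widely used baselines
    "L1 logistic regression": LogisticRegression(penalty="l1", solver="liblinear"),
    "Decision tree": DecisionTreeClassifier(max_depth=4),
    # Black-box performance benchmark
    "XGBoost": XGBClassifier(n_estimators=200, eval_metric="logloss"),
}

for name, model in models.items():
    model.fit(X_train, y_train)
    auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
    print(f"{name}: test AUC = {auc:.3f}")
```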