Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
- 13 May 2019
- Research article (journal article)
- Published by Springer Nature in Nature Machine Intelligence
- Vol. 1 (5), 206-215
- https://doi.org/10.1038/s42256-019-0048-x
Abstract
Black box machine learning models are currently being used for high-stakes decision making throughout society, causing problems in healthcare, criminal justice and other domains. Some people hope that creating methods for explaining these black box models will alleviate some of the problems, but trying to explain black box models, rather than creating models that are interpretable in the first place, is likely to perpetuate bad practice and can potentially cause great harm to society. The way forward is to design models that are inherently interpretable. This Perspective clarifies the chasm between explaining black boxes and using inherently interpretable models, outlines several key reasons why explainable black boxes should be avoided in high-stakes decisions, identifies challenges to interpretable machine learning, and provides several example applications where interpretable models could potentially replace black box models in criminal justice, healthcare and computer vision.