Interpretable Machine Learning for COVID-19: An Empirical Study on Severity Prediction Task
- 25 June 2021
- journal article
- research article
- Published by Institute of Electrical and Electronics Engineers (IEEE) in IEEE Transactions on Artificial Intelligence
- Vol. 4 (4), 764-777
- https://doi.org/10.1109/tai.2021.3092698
Abstract
The black-box nature of machine learning models hinders the deployment of some high-accuracy models in medical diagnosis. It is risky to put one's life in the hands of models that medical researchers do not fully understand. However, through model interpretation, black-box models can promptly reveal significant biomarkers that medical practitioners may have overlooked amid the surge of infected patients during the COVID-19 pandemic. This research leverages a database of 92 patients with laboratory-confirmed SARS-CoV-2 tests, collected between 18 January 2020 and 5 March 2020 in Zhuhai, China, to identify biomarkers indicative of infection severity. By interpreting four machine learning models (decision trees, random forests, gradient boosted trees, and neural networks) with permutation feature importance, Partial Dependence Plots (PDP), Individual Conditional Expectation (ICE), Accumulated Local Effects (ALE), Local Interpretable Model-agnostic Explanations (LIME), and Shapley Additive Explanations (SHAP), we find that increases in N-terminal pro-brain natriuretic peptide (NTproBNP), C-reactive protein (CRP), and lactate dehydrogenase (LDH), together with a decrease in lymphocytes (LYM), are associated with severe infection and an increased risk of death, consistent with recent medical research on COVID-19 and with other studies using dedicated models. We further validate our methods on a large open dataset of 5644 confirmed patients from the Hospital Israelita Albert Einstein in São Paulo, Brazil, available on Kaggle, and identify leukocytes, eosinophils, and platelets as three indicative biomarkers for COVID-19.
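The interpretation pipeline the abstract describes can be sketched with permutation feature importance on a gradient boosted model. This is a minimal illustration, not the authors' code: it uses synthetic data in place of the clinical dataset, and the feature names are stand-ins for the biomarkers named above.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 400
# Synthetic stand-ins for the biomarkers named in the abstract.
features = ["NTproBNP", "CRP", "LDH", "LYM"]
X = rng.normal(size=(n, 4))
# Toy severity rule: rises with NTproBNP, CRP, LDH; falls with LYM.
logits = 1.5 * X[:, 0] + 1.0 * X[:, 1] + 1.0 * X[:, 2] - 1.5 * X[:, 3]
y = (logits + rng.normal(scale=0.5, size=n) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)

# Permutation importance: the drop in held-out accuracy when one
# feature column is shuffled, averaged over repeats.
result = permutation_importance(model, X_te, y_te, n_repeats=20, random_state=0)
for name, imp in sorted(zip(features, result.importances_mean), key=lambda t: -t[1]):
    print(f"{name}: {imp:.3f}")
```

The same fitted model can then be passed to PDP/ICE utilities (`sklearn.inspection.PartialDependenceDisplay`) or to SHAP explainers for the local explanations the study reports.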
Funding Information
- HY Medical Technology, Scientific Research Department, Beijing
- Offshore Robotics for Certification of Assets
- Partnership Resource Fund
- Accountable and Explainable Learning-Enabled Autonomous Robotic Systems (EP/R026173/1)