Abstract
Deep learning models are receiving increasing attention in clinical decision-making; however, their lack of interpretability and explainability impedes their deployment in day-to-day clinical practice. We propose REM, an interpretable and explainable methodology for extracting rules from deep neural networks and combining them with other data-driven and knowledge-driven rules. This allows machine learning and reasoning to be integrated for investigating applied and basic biological research questions. We evaluate the utility of REM on the predictive tasks of classifying histological and immunohistochemical breast cancer subtypes from genotype and phenotype data. We demonstrate that REM efficiently extracts accurate, comprehensible, and biologically relevant rulesets from deep neural networks, and that these rulesets can be readily integrated with rulesets obtained from tree-based approaches. REM provides explanation facilities for predictions and enables clinicians to validate and calibrate the extracted rulesets against their domain knowledge. With these functionalities, REM caters for a novel and direct human-in-the-loop approach to clinical decision-making.
Competing Interest Statement
The authors have declared no competing interest.
Abbreviations
- ML: Machine Learning
- DNN: Deep Neural Network
- REM: Rule Extraction Methodology
- REM-D: Rule Extraction Methodology from Deep Neural Networks
- REM-T: Rule Extraction Methodology from Trees