ABSTRACT
Deep learning models are receiving increasing attention in clinical decision-making; however, their lack of explainability impedes their deployment in day-to-day clinical practice. We propose REM, an explainable methodology for extracting rules from deep neural networks and combining them with rules from non-deep-learning models. This allows machine learning and reasoning to be integrated for investigating basic and applied biological research questions. We evaluate the utility of REM in two case studies on the predictive tasks of classifying histological and immunohistochemical breast cancer subtypes from genotype and phenotype data. We demonstrate that REM efficiently extracts accurate, comprehensible rulesets from deep neural networks that can be readily integrated with rulesets obtained from tree-based approaches. REM provides explanation facilities for predictions and enables clinicians to validate and calibrate the extracted rulesets against their domain knowledge. With these functionalities, REM caters for a novel and direct human-in-the-loop approach to clinical decision-making.
Competing Interest Statement
The authors have declared no competing interest.
Abbreviations
- ML: Machine Learning
- MR: Machine Reasoning
- DNN: Deep Neural Network
- REM: Rule Extraction Methodology
- REM-D: Rule Extraction Methodology from Deep Neural Networks
- REM-T: Rule Extraction Methodology from Trees
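To make the notion of a "comprehensible ruleset" concrete, the sketch below applies a toy IF-THEN ruleset to a sample, returning the conclusion of the first rule whose conditions all hold. This is a minimal illustration only: the rule representation, the feature names, and the `predict` helper are assumptions for exposition, not REM's actual rule format or API.

```python
# Toy sketch: classify a sample with an extracted IF-THEN ruleset.
# Rule format and feature names are illustrative assumptions,
# not the actual output of REM-D or REM-T.

def predict(ruleset, sample, default="unclassified"):
    """Return the conclusion of the first rule whose conditions all hold."""
    for conditions, conclusion in ruleset:
        if all(cond(sample) for cond in conditions):
            return conclusion
    return default

# Hypothetical rules: each is (list of conditions, predicted subtype).
ruleset = [
    ([lambda s: s["gene_a"] > 0.5, lambda s: s["gene_b"] <= 0.2], "subtype_1"),
    ([lambda s: s["gene_a"] <= 0.5], "subtype_2"),
]

print(predict(ruleset, {"gene_a": 0.7, "gene_b": 0.1}))  # subtype_1
```

Because each prediction is traced to an explicit rule, a clinician can inspect, validate, or remove individual rules, which is the kind of human-in-the-loop calibration the abstract describes.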