Abstract
Background and Purpose Accurate identification of acute ischemic stroke (AIS) patient cohorts is essential for a wide range of clinical investigations. Automated phenotyping methods that leverage electronic health records (EHRs) represent a fundamentally new approach to cohort identification. Unfortunately, the current generation of these algorithms is laborious to develop, generalizes poorly across institutions, and relies on incomplete information. We systematically compared and evaluated the ability of several machine learning algorithms and case-control combinations to phenotype acute ischemic stroke patients using data from an EHR.
Methods Using structured patient data from the EHR at a tertiary-care hospital system, we built machine learning models to identify patients with AIS based on 75 different case-control and classifier combinations. We then determined the models’ classification ability for AIS on an internal validation set, and estimated the prevalence of AIS patients across the EHR. Finally, we externally validated the ability of the models to detect self-reported AIS patients without AIS diagnosis codes using the UK Biobank.
Results Across all models, we found that the mean area under the receiver operating characteristic curve for detecting AIS was 0.963±0.0520 and the mean average precision score was 0.790±0.196 with minimal feature processing. Logistic regression classifiers with an L1 penalty gave the best performance. Classifiers trained with cases with AIS diagnosis codes and controls with no cerebrovascular disease diagnosis codes had the best average F1 score (0.832±0.0383). In the external validation, we found that the top probabilities from a model-predicted AIS cohort were significantly enriched for self-reported AIS patients without AIS diagnosis codes (65-250 fold over expected).
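The classifier-and-metric workflow reported above can be illustrated with a minimal scikit-learn sketch. This is a hypothetical stand-in, not the study's actual pipeline: the synthetic features, class balance, and hyperparameters below are assumptions, since the paper's structured EHR features are not shown in the abstract.

```python
# Hypothetical sketch of one case-control classifier combination:
# an L1-penalized logistic regression (the best-performing model class
# reported) evaluated with AUROC, average precision, and F1.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score, average_precision_score, f1_score

# Synthetic stand-in for structured EHR features; the 10% positive rate
# is an illustrative assumption, not the study's AIS prevalence.
X, y = make_classification(n_samples=2000, n_features=50, n_informative=10,
                           weights=[0.9, 0.1], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, test_size=0.25, random_state=0)

# L1 penalty requires a solver that supports it, e.g. liblinear.
clf = LogisticRegression(penalty="l1", solver="liblinear", C=1.0)
clf.fit(X_train, y_train)

proba = clf.predict_proba(X_test)[:, 1]
print(f"AUROC:             {roc_auc_score(y_test, proba):.3f}")
print(f"Average precision: {average_precision_score(y_test, proba):.3f}")
print(f"F1:                {f1_score(y_test, clf.predict(X_test)):.3f}")
```

In the study's setting, the same three metrics would be computed on the held-out internal validation set for each of the 75 case-control and classifier combinations.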
Conclusions Our findings support machine learning algorithms as a way to accurately identify AIS patients without relying on diagnosis codes or on process-intensive manual feature curation. When a curated set of AIS patients is unavailable, diagnosis codes may be used to label cases for training classifier models. Our approach is potentially generalizable to other academic institutions, and further external validation is needed.
Footnotes
Figures revised, Validation study with UK Biobank added