RT Journal Article
SR Electronic
T1 Paired evaluation defines performance landscapes for machine learning models
JF bioRxiv
FD Cold Spring Harbor Laboratory
SP 2022.09.07.507020
DO 10.1101/2022.09.07.507020
A1 Nariya, Maulik K.
A1 Mills, Caitlin E.
A1 Sorger, Peter K.
A1 Sokolov, Artem
YR 2022
UL http://biorxiv.org/content/early/2022/09/12/2022.09.07.507020.abstract
AB The true accuracy of a machine learning model is a population-level statistic that cannot be observed directly. In practice, predictor performance is estimated against one or more test datasets, and the accuracy of this estimate strongly depends on how well the test sets represent all possible unseen datasets. Here we present paired evaluation, a simple approach for increasing the robustness of performance evaluation by systematic pairing of test samples, and use it to evaluate predictors of drug response in breast cancer cell lines and of disease severity in patients with Alzheimer’s Disease. Our results demonstrate that the choice of test data can cause estimates of performance to vary by as much as 30%, and that paired evaluation makes it possible to identify outliers, improve the accuracy of performance estimates in the presence of known confounders, and assign statistical significance when comparing machine learning models.
Competing Interest Statement: P.K.S. is a member of the SAB or BOD of Applied Biomath, RareCyte Inc., and Glencoe Software; P.K.S. is also a member of the NanoString SAB. In the last 5 years, the Sorger laboratory has received research funding from Novartis and Merck. A.S. is a paid consultant for FL84 Inc. All other authors declare that they have no competing interests.
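The abstract's central idea, evaluating a predictor over systematic pairs of test samples, can be illustrated with a short sketch. The following Python code is a minimal illustration under assumed semantics, not the authors' published implementation: each pair of test samples is scored by whether the model ranks the pair concordantly with the ground truth, and two models are compared with a sign test over the pairs on which they disagree. The names pair_scores and compare_models are hypothetical, and the sign test treats overlapping pairs as independent (which they are not), so it stands in only loosely for the paper's significance assignment.

    # Minimal sketch of pair-based evaluation (an assumption, not the paper's
    # exact method): score every pair of test samples by whether the model
    # ranks the pair concordantly with the true values, then compare two
    # models with a sign test over the pairs where exactly one model is right.
    from itertools import combinations
    import numpy as np
    from scipy.stats import binomtest

    def pair_scores(y_true, y_pred):
        """Return {(i, j): 1 if the predicted ordering matches the true ordering, else 0}."""
        scores = {}
        for i, j in combinations(range(len(y_true)), 2):
            if y_true[i] == y_true[j]:
                continue  # skip pairs tied in the true values
            concordant = (y_true[i] - y_true[j]) * (y_pred[i] - y_pred[j]) > 0
            scores[(i, j)] = int(concordant)
        return scores

    def compare_models(y_true, pred_a, pred_b):
        """Sign test on pairs where the two models disagree (illustrative only)."""
        sa, sb = pair_scores(y_true, pred_a), pair_scores(y_true, pred_b)
        wins_a = sum(sa[p] > sb[p] for p in sa)
        wins_b = sum(sb[p] > sa[p] for p in sa)
        n = wins_a + wins_b
        return binomtest(wins_a, n, 0.5).pvalue if n else 1.0

    # Toy data: model A predicts with less noise than model B.
    rng = np.random.default_rng(0)
    y = rng.normal(size=30)
    pred_a = y + rng.normal(scale=0.5, size=30)
    pred_b = y + rng.normal(scale=2.0, size=30)
    print("mean pair score A:", np.mean(list(pair_scores(y, pred_a).values())))
    print("mean pair score B:", np.mean(list(pair_scores(y, pred_b).values())))
    print("sign-test p-value:", compare_models(y, pred_a, pred_b))

In this toy run, the lower-noise model should score closer to 1 on the per-pair metric, and the sign test should report a small p-value, mirroring the abstract's claim that pairing test samples supports statistical comparison of models.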