bioRxiv

Beyond accuracy: Measures for assessing machine learning models, pitfalls and guidelines

Richard Dinga, Brenda W.J.H. Penninx, Dick J. Veltman, Lianne Schmaal, Andre F. Marquand
doi: https://doi.org/10.1101/743138
Richard Dinga
aDepartment of Psychiatry, Amsterdam UMC, Amsterdam, the Netherlands
bDonders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, the Netherlands
For correspondence: dinga92@gmail.com
Brenda W.J.H. Penninx
aDepartment of Psychiatry, Amsterdam UMC, Amsterdam, the Netherlands
Dick J. Veltman
aDepartment of Psychiatry, Amsterdam UMC, Amsterdam, the Netherlands
Lianne Schmaal
cOrygen, The National Centre of Excellence in Youth Mental Health, Parkville, Australia
dCentre for Youth Mental Health, The University of Melbourne, Melbourne, Australia
Andre F. Marquand
bDonders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, the Netherlands

Abstract

Pattern recognition predictive models have become an important tool for the analysis of neuroimaging data and for answering questions in clinical and cognitive neuroscience. Regardless of the application, the most commonly used method to quantify model performance is to calculate prediction accuracy, i.e., the proportion of correctly classified samples. Although accuracy is simple and intuitive, other performance measures are often more appropriate for many common goals of neuroimaging pattern recognition studies. In this paper, we review alternative performance measures, focusing on their interpretation and on practical aspects of model evaluation. Specifically, we focus on four families of performance measures: 1) categorical performance measures such as accuracy, 2) rank-based performance measures such as the area under the curve, 3) probabilistic performance measures based on quadratic error, such as the Brier score, and 4) probabilistic performance measures based on information criteria, such as the logarithmic score. We examine their statistical properties in various settings using simulated data and real neuroimaging data derived from public datasets. Our results showed that accuracy performed worst with respect to statistical power, detection of model improvement, selection of informative features, and reliability of results; in most cases, it should therefore not be used to make statistical inferences about model performance. Accuracy should also be avoided when evaluating the utility of clinical models, because it does not take into account clinically relevant information such as the relative costs of false-positive and false-negative misclassifications or the calibration of probabilistic predictions. We recommend choosing evaluation criteria according to the goals of the specific machine learning model.
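As an orientation to the four families of measures named above, the sketch below computes one example of each on a simulated binary classification task using scikit-learn. This is a toy illustration under assumed synthetic data, not the authors' analysis code; their implementation is in the repository linked under Footnotes.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import (accuracy_score, roc_auc_score,
                             brier_score_loss, log_loss)
from sklearn.model_selection import train_test_split

# Toy binary classification problem (synthetic data, for illustration
# only) and a simple probabilistic classifier.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
proba = model.predict_proba(X_test)[:, 1]  # predicted P(y = 1)

# 1) Categorical: accuracy of hard labels at the default 0.5 threshold.
print("accuracy:   ", accuracy_score(y_test, (proba >= 0.5).astype(int)))
# 2) Rank-based: AUC, the probability that a randomly chosen positive
#    case is ranked above a randomly chosen negative case.
print("AUC:        ", roc_auc_score(y_test, proba))
# 3) Probabilistic, quadratic error: Brier score, the mean squared
#    error of the predicted probabilities (lower is better).
print("Brier score:", brier_score_loss(y_test, proba))
# 4) Probabilistic, information-based: log loss, i.e. the negative
#    logarithmic score (lower is better).
print("log loss:   ", log_loss(y_test, proba))

# Unlike accuracy, probabilistic predictions let the decision threshold
# reflect misclassification costs: e.g. if a false negative is taken to
# be 4x as costly as a false positive, the decision-theoretic threshold
# for calibrated probabilities is 1 / (1 + 4) = 0.2 rather than 0.5.
cost_ratio = 4.0
labels_cost_weighted = proba > 1.0 / (1.0 + cost_ratio)
```

Note that accuracy collapses the probabilistic predictions to hard labels at a fixed threshold, whereas the rank-based and probabilistic scores evaluate the predictions directly; this is why only the latter can reflect calibration and asymmetric misclassification costs, as discussed in the abstract.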

Footnotes

  • https://github.com/dinga92/beyond-acc

Copyright 
The copyright holder for this preprint is the author/funder, who has granted bioRxiv a license to display the preprint in perpetuity. It is made available under a CC-BY-NC 4.0 International license.
Posted August 22, 2019.
Subject Area

  • Neuroscience