
Novel Comparison of Evaluation Metrics for Gene Ontology Classifiers Reveals Drastic Performance Differences

Ilya Plyusnin, Liisa Holm, Petri Törönen
doi: https://doi.org/10.1101/427096
Ilya Plyusnin
1 Institute of Biotechnology, University of Helsinki, Helsinki, Finland
Liisa Holm
1 Institute of Biotechnology, University of Helsinki, Helsinki, Finland
2 Department of Biosciences, University of Helsinki, Helsinki, Finland
Petri Törönen
1 Institute of Biotechnology, University of Helsinki, Helsinki, Finland

Abstract

GO classifiers and other methods for the automatic annotation of novel sequences play an important role in modern biosciences, so it is important to assess the quality of different GO classifiers. The evaluation of GO classifiers depends heavily on the evaluation metrics used, yet there has been little research on how different metrics affect the resulting method ranking. Indeed, most evaluation metrics are simply borrowed from machine learning without any testing of their applicability to GO classification.

We propose a novel, simple method for comparing metrics, called the Artificial Dilution Series (ADS). We start by selecting a set of annotations that are known a priori to be correct. From this set we create multiple copies and introduce a different amount of errors into each copy. This creates a “series” of annotation sets in which the percentage of original correct annotations (the “signal”) decreases from one end of the series to the other. Next, we test metrics to see which of them are good at separating annotation sets at different signal levels. In addition, we test the metrics on various false positive annotation sets and show where these rank within the generated signal range.
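As an illustration, a dilution series of this kind could be generated along the following lines. This is a minimal Python sketch assuming a flat gene-to-GO-term mapping; the function name, data layout, and the way errors are sampled are our own illustrative choices and do not reproduce the authors' ADS implementation, which must also account for the GO hierarchy.

    import random

    def artificial_dilution_series(correct_annotations, all_go_terms, signal_levels, seed=0):
        """Create one perturbed copy of the annotation set per signal level.

        correct_annotations: dict mapping gene id -> set of GO terms known to be correct
        all_go_terms:        pool (list) of GO terms from which false annotations are drawn
        signal_levels:       e.g. [1.0, 0.9, ..., 0.0]; fraction of correct annotations kept
        """
        rng = random.Random(seed)
        series = {}
        for signal in signal_levels:
            copy = {}
            for gene, terms in correct_annotations.items():
                perturbed = set()
                for term in terms:
                    if rng.random() < signal:
                        perturbed.add(term)                      # keep the correct annotation
                    else:
                        perturbed.add(rng.choice(all_go_terms))  # replace it with a random error
                copy[gene] = perturbed
            series[signal] = copy
        return series

Each copy in the returned series keeps, on average, the given fraction of correct annotations and replaces the rest with randomly drawn terms, so the signal decreases from one end of the series to the other.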

We compared a large set of evaluation metrics with ADS, revealing drastic differences between them. In particular, we show how some metrics rate false positive datasets as highly as 100% correct datasets, and how some metrics perform poorly at separating the different error levels. This work (A) shows that evaluation metrics should be tested for their performance; (B) presents software that can be used to test different metrics on real-life datasets; (C) gives guidelines on which evaluation metrics perform well with the Gene Ontology structure; and (D) proposes improved versions of some well-known evaluation metrics. The presented methods are also applicable to other areas of science where the evaluation of prediction results is non-trivial.
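Separation between signal levels can be illustrated with a simple ordering check: score each copy in the series with the metric under test and ask how often a copy with more signal also receives a better score. The rank-consistency function below is an assumed stand-in for the separation statistic used in the paper, not the paper's actual test.

    from itertools import combinations

    def ranking_consistency(scored_series):
        """Fraction of signal-level pairs whose metric scores are ordered
        consistently with the true signal (higher signal -> higher score)."""
        levels = sorted(scored_series)
        pairs = list(combinations(levels, 2))
        consistent = sum(1 for lo, hi in pairs if scored_series[hi] > scored_series[lo])
        return consistent / len(pairs)

    # A hypothetical metric that barely separates the error levels:
    scores = {0.0: 0.62, 0.25: 0.61, 0.5: 0.65, 0.75: 0.63, 1.0: 0.66}
    print(ranking_consistency(scores))  # 0.8 -> some error levels are confused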

Author Summary The comparison of predictive methods is one of the central tasks in science, and bioinformatics is no exception. Predictive methods are increasingly needed as the biosciences produce novel sequences at an ever higher rate. These sequences require Automated Function Prediction (AFP), as manual curation is often impossible. Unfortunately, selecting an AFP method is a confusing task, as current AFP publications use a mixed set of functions, called Evaluation Metrics (metrics for short), for method comparison. Furthermore, many existing popular metrics can generate misleading or unreasonable results in AFP comparison. We argue that the use of badly performing metrics in AFP comparison stems from the lack of methods for benchmarking the metrics themselves. We propose such a testing method, called the Artificial Dilution Series (ADS). It can be used to test any group of metrics on a selected real-life test dataset. ADS uses the selected dataset to create a large set of artificial AFP results, each with a controlled amount of errors. We use ADS to compare how well different metrics separate the generated error proportions. Our results show drastic differences between metrics.

Copyright 
The copyright holder for this preprint is the author/funder, who has granted bioRxiv a license to display the preprint in perpetuity. It is made available under a CC-BY-NC 4.0 International license.
Posted October 09, 2018.
Subject Area

  • Bioinformatics