Review
Publication and other reporting biases in cognitive sciences: detection, prevalence, and prevention

https://doi.org/10.1016/j.tics.2014.02.010

Highlights

  • Publication and other reporting biases can have major detrimental effects on the credibility and value of research evidence.

  • There is substantial empirical evidence of publication and reporting biases in cognitive science disciplines.

  • Common types of bias are discussed and potential solutions are proposed.

Recent systematic reviews and empirical evaluations of the cognitive sciences literature suggest that publication and other reporting biases are prevalent across diverse domains of cognitive science. In this review, we summarize the various forms of publication and reporting biases and other questionable research practices, and provide an overview of the available methods for detecting them. We discuss the empirical evidence for the presence of such biases across the neuroimaging, animal, other preclinical, psychological, clinical trials, and genetics literature in the cognitive sciences. We also highlight emerging solutions, from study design to data analysis and reporting, that can prevent bias and improve the fidelity of cognitive science research.

Introduction

The promise of science is that evidence is generated and evaluated transparently, supporting an efficient, self-correcting process. However, multiple biases can make knowledge building inefficient. In this review, we discuss the importance of publication and other reporting biases, and present potential correctives that may reduce bias without disrupting innovation. Consideration of publication and other reporting biases is particularly timely for the cognitive sciences because the field is expanding rapidly. Preventing or remedying these biases will therefore have a substantial impact on the efficient development of a credible corpus of published research.


Definitions of biases and relevance for cognitive sciences

The terms ‘publication bias’ and ‘selective reporting bias’ refer to the differential choice to publish studies or to report particular results, respectively, depending on the nature or directionality of the findings [1]. Several forms of such bias appear in the literature [2], including: (i) study publication bias, where studies are less likely to be published when they reach statistically nonsignificant findings; and (ii) selective outcome reporting bias, where multiple outcomes are evaluated in a study but only a subset, typically those reaching statistical significance, is reported.
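To make the consequence of study publication bias concrete, the following minimal simulation (our illustration, not an analysis reported in this review) shows how selectively publishing statistically significant results inflates the average published effect size. The true effect, group sizes, and publication probabilities are arbitrary assumptions.

```python
# Minimal sketch: how study publication bias inflates published effects.
# All parameters (true_d, n_per_group, the 20% publication rate for null
# results) are illustrative assumptions, not values from this review.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
true_d, n_per_group, n_studies = 0.2, 30, 1000

published = []
for _ in range(n_studies):
    treat = rng.normal(true_d, 1.0, n_per_group)
    ctrl = rng.normal(0.0, 1.0, n_per_group)
    _, p = stats.ttest_ind(treat, ctrl)
    # Cohen's d with pooled standard deviation
    d_hat = (treat.mean() - ctrl.mean()) / np.sqrt(
        (treat.var(ddof=1) + ctrl.var(ddof=1)) / 2)
    # Significant studies are always published; null results only 20% of the time.
    if p < 0.05 or rng.random() < 0.20:
        published.append(d_hat)

print(f"true d = {true_d}, mean published d = {np.mean(published):.2f}")
# The mean of the published literature typically lands well above the true 0.2.
```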

Tests for single studies, specific topics, and wider disciplines

Explicit documentation of publication and other reporting biases requires access to the protocols, data, and primary analysis results of conducted studies, so that these can be compared against the published literature. Such materials are often unavailable, however. A few empirical studies have retrieved study protocols from authors or trial data from submissions to the US Food and Drug Administration (FDA) [6,7,8,9]. These studies have shown that deviations between the analyses planned in the protocol and those in the published report are common.
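When protocols are unavailable, indirect tests are used instead; one widely applied family examines funnel-plot asymmetry, as discussed in the recommendations of Sterne et al. [10]. Below is a sketch of an Egger-type regression test; the effect sizes and standard errors are invented inputs for illustration, not data from any cited study.

```python
# Sketch of an Egger-type regression test for funnel-plot asymmetry.
# The effect sizes and standard errors below are made up for illustration.
import numpy as np
import statsmodels.api as sm

effects = np.array([0.51, 0.43, 0.62, 0.30, 0.55, 0.18, 0.70, 0.25])
se = np.array([0.25, 0.20, 0.30, 0.12, 0.28, 0.08, 0.33, 0.10])

# Regress each study's standardized effect (effect/SE) on its precision (1/SE);
# an intercept that differs from zero suggests small-study effects,
# one possible signature of publication bias.
z = effects / se
precision = 1.0 / se
fit = sm.OLS(z, sm.add_constant(precision)).fit()
print(f"Egger intercept = {fit.params[0]:.2f} (p = {fit.pvalues[0]:.3f})")
```

Note that asymmetry can also arise from genuine heterogeneity and other small-study effects rather than bias, so such tests should be interpreted cautiously [11].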

Neuroimaging

With over 3000 publications annually in the past decade, including several meta-analyses [27] (Figure 1), identifying potential publication or reporting biases is of crucial interest for the future of neuroimaging. The first study to evaluate evidence for an excess of statistically significant results in neuroimaging focused on brain volumetric studies based on region-of-interest (ROI) analyses of psychiatric conditions [28]. That study analyzed 41 meta-analyses (461 data sets) and found evidence of an excess of statistically significant findings.
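The excess significance test used in that work [19,28] compares the observed number of nominally significant results (O) with the number expected (E) given each study's power at a plausible effect size, typically the meta-analytic summary. The sketch below uses made-up inputs rather than the actual 41 meta-analyses, and a binomial comparison based on average power, which is one of several variants of the test.

```python
# Sketch of the excess significance test: compare observed vs expected
# counts of significant results. Inputs are illustrative only.
import numpy as np
from scipy import stats

effects = np.array([0.45, 0.38, 0.52, 0.20, 0.60])  # per-study effect estimates
se = np.array([0.18, 0.15, 0.22, 0.10, 0.25])       # per-study standard errors

# Fixed-effect summary serves as the assumed plausible "true" effect.
w = 1.0 / se**2
summary = (w * effects).sum() / w.sum()

# Each study's power to detect the summary effect at two-sided alpha = 0.05.
z_crit = stats.norm.ppf(0.975)
mu = summary / se
power = 1 - stats.norm.cdf(z_crit - mu) + stats.norm.cdf(-z_crit - mu)

O = int((np.abs(effects / se) > z_crit).sum())  # observed significant studies
E = power.sum()                                 # expected under no bias
# Binomial comparison using mean power; an approximation when power varies.
res = stats.binomtest(O, len(effects), E / len(effects), alternative="greater")
print(f"O = {O}, E = {E:.1f}, p(excess) = {res.pvalue:.3f}")
```

A small p-value here indicates more significant results than the studies' power can plausibly account for, hinting at publication or reporting bias in the set.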

Approaches to prevent bias

Most scientists embrace fundamental scientific values, such as disinterestedness and transparency, even while believing that other scientists do not [60]. However, noble intentions may not suffice to decrease biases when people are unable to recognize or control their own biases [61,62], rationalize them through motivated reasoning [63], and are embedded in a culture that implicitly rewards the expression of bias for personal career advancement (i.e., publication over accuracy).

Concluding remarks

Overall, publication and other selective reporting biases are probably prevalent and influential across the diverse cognitive sciences. Different approaches have been proposed to remedy these biases; Box 2 summarizes some outstanding questions in this process. As shown in Figure 2, there are multiple entry points for addressing each of these diverse ‘problems’, from designing and conducting studies to analyzing data and reporting results. ‘Solutions’ are existing, but uncommon, practices that should improve the fidelity of the published record.

Acknowledgments

S.P.D. acknowledges funding from the National Institute on Drug Abuse US Public Health Service grant DA017441 and a research stipend from the Institute of Medicine (IOM) (Puffer/American Board of Family Medicine/IOM Anniversary Fellowship). M.R.M. acknowledges support for his time from the UK Centre for Tobacco Control Studies (a UKCRC Public Health Research Centre of Excellence) and funding from the British Heart Foundation, Cancer Research UK, the Economic and Social Research Council, the Medical Research Council, and the National Institute for Health Research, under the auspices of the UK Clinical Research Collaboration.

References (96)

  • S. Green, Cochrane Handbook for Systematic Reviews of Interventions, version 5.1.0 (2011)
  • K. Dwan, Systematic review of the empirical evidence of study publication bias and outcome reporting bias, PLoS ONE (2008)
  • G. Vogel, Scientific misconduct. Psychologist accused of fraud on ‘astonishing scale’, Science (2011)
  • G. Vogel, Scientific misconduct. Fraud charges cast doubt on claims of DNA damage from cell phone fields, Science (2008)
  • G. Vogel, Developmental biology. Fraud investigation clouds paper on early cell fate, Science (2006)
  • N. Saquib, Practices and impact of primary outcome adjustment in randomized controlled trials: meta-epidemiologic study, BMJ (2013)
  • E.H. Turner, Publication bias in antipsychotic trials: an analysis of efficacy comparing the published literature to the US Food and Drug Administration database, PLoS Med. (2012)
  • E.H. Turner, Selective publication of antidepressant trials and its influence on apparent efficacy, N. Engl. J. Med. (2008)
  • K. Dwan, Systematic review of the empirical evidence of study publication bias and outcome reporting bias – an updated review, PLoS ONE (2013)
  • J.A. Sterne, Recommendations for examining and interpreting funnel plot asymmetry in meta-analyses of randomised controlled trials, BMJ (2011)
  • J. Lau, The case of the misleading funnel plot, BMJ (2006)
  • J.L. Peters, Comparison of two methods to detect publication bias in meta-analysis, JAMA (2006)
  • R.M. Harbord, A modified test for small-study effects in meta-analyses of controlled trials with binary endpoints, Stat. Med. (2006)
  • J.B. Copas et al., A robust P-value for treatment effect in meta-analysis with publication bias, Stat. Med. (2008)
  • J. Copas et al., A bound for publication bias based on the fraction of unpublished studies, Biometrics (2004)
  • L.V. Hedges et al., Estimating effect size under publication bias: small sample properties and robustness of a random effects selection model, J. Educ. Behav. Stat. (1996)
  • T. Pfeiffer, Quantifying selective reporting and the Proteus phenomenon for multiple datasets with similar bias, PLoS ONE (2011)
  • G.L. Gadbury et al., Inappropriate fiddling with statistical analyses to obtain a desirable p-value: tests to detect its presence in published literature, PLoS ONE (2012)
  • J.P. Ioannidis et al., An exploratory test for an excess of significant findings, Clin. Trials (2007)
  • J.P.A. Ioannidis, Clarifications on the application and interpretation of the test for excess significance and its extensions, J. Math. Psychol. (2014)
  • G. Francis, The psychology of replication and replication in psychology, Perspect. Psychol. Sci. (2012)
  • S.P. David, Potential reporting bias in fMRI studies of the brain, PLoS ONE (2013)
  • G. Rücker, Detecting and adjusting for small-study effects in meta-analysis, Biom. J. (2011)
  • S. Duval et al., Trim and fill: a simple funnel-plot-based method of testing and adjusting for publication bias in meta-analysis, Biometrics (2000)
  • J.P. Ioannidis, Adjusting for bias: a user's guide to performing plastic surgery on meta-analyses of observational studies, Int. J. Epidemiol. (2011)
  • S. Thompson, A proposed method of bias adjustment for meta-analyses of published observational studies, Int. J. Epidemiol. (2011)
  • R.G. Jennings et al., Publication bias in neuroimaging research: implications for meta-analyses, Neuroinformatics (2012)
  • J.P. Ioannidis, Excess significance bias in the literature on brain volume abnormalities, Arch. Gen. Psychiatry (2011)
  • S. Borgwardt, Why are psychiatric imaging methods clinically unreliable? Conclusions and practical guidelines for authors, editors and reviewers, Behav. Brain Funct. (2012)
  • P. Fusar-Poli, Evidence of reporting biases in voxel-based morphometry (VBM) studies of psychiatric and neurological disorders, Hum. Brain Mapp. (2013)
  • C.G. Begley et al., Drug development: raise standards for preclinical cancer research, Nature (2012)
  • F. Prinz, Believe it or not: how much can we rely on published data on potential drug targets?, Nat. Rev. Drug Discov. (2011)
  • G. ter Riet, Publication bias in laboratory animal research: a survey on magnitude, drivers, consequences and potential solutions, PLoS ONE (2012)
  • E.S. Sena, Publication bias in reports of animal stroke studies leads to major overstatement of efficacy, PLoS Biol. (2010)
  • J.P. Ioannidis, Extrapolating from animals to humans, Sci. Transl. Med. (2012)
  • M.R. Macleod, Evidence for the efficacy of NXY-059 in experimental focal cerebral ischaemia is confounded by study quality, Stroke (2008)
  • K.S. Button, Power failure: why small sample size undermines the reliability of neuroscience, Nat. Rev. Neurosci. (2013)
  • T.D. Sterling, Publication decisions and their possible effects on inferences drawn from tests of significance, or vice versa, J. Am. Stat. Assoc. (1959)