RT Journal Article
SR Electronic
T1 A systematic review of sample size and power in leading neuroscience journals
JF bioRxiv
FD Cold Spring Harbor Laboratory
SP 217596
DO 10.1101/217596
A1 Alice Carter
A1 Kate Tilling
A1 Marcus R Munafò
YR 2017
UL http://biorxiv.org/content/early/2017/11/23/217596.abstract
AB Adequate sample size is key to reproducible research findings: low statistical power can increase the probability that a statistically significant result is a false positive. Journals are increasingly adopting methods to tackle issues of reproducibility, such as by introducing reporting checklists. We conducted a systematic review comparing articles submitted to Nature Neuroscience in the 3 months prior to checklists (n=36) that were subsequently published with articles submitted to Nature Neuroscience in the 3 months immediately after checklists (n=45), along with a comparison journal Neuroscience in this same 3-month period (n=123). We found that although the proportion of studies commenting on sample sizes increased after checklists (22% vs 53%), the proportion reporting formal power calculations decreased (14% vs 9%). Using sample size calculations for 80% power and a significance level of 5%, we found little evidence that sample sizes were adequate to achieve this level of statistical power, even for large effect sizes. Our analysis suggests that reporting checklists may not improve the use and reporting of formal power calculations.