Abstract
Unbiased scientific reporting is crucial for data and research synthesis. Previous studies suggest that statistically significant results are more likely to be published and more likely to be submitted to high-impact journals. However, the most recent research on statistical significance in relation to journal impact factors in ecology was either published more than two decades ago or addressed only a small subset of the literature. Here, we extract p-values from all articles published in 11 journals in 2012 and 2014, spanning a wide range of impact factors, with six journals sampled in both years. Our results indicate that the proportion of statistically significant results increases with rising impact factor. Such a trend can have important consequences for syntheses of ecological data, and it highlights the importance of covering a wide range of impact factors when identifying published studies for data synthesis. This trend can also lead to a biased understanding of the probability of true effects in ecology and conservation. We caution against the possible downplaying of non-significant results by either journals or authors.