PT - JOURNAL ARTICLE
AU - Ulrich Knief
AU - Wolfgang Forstmeier
TI - Violating the normality assumption may be the lesser of two evils
AID - 10.1101/498931
DP - 2018 Jan 01
TA - bioRxiv
PG - 498931
4099 - http://biorxiv.org/content/early/2018/12/20/498931.short
4100 - http://biorxiv.org/content/early/2018/12/20/498931.full
AB - 1. Researchers are often uncertain about the extent to which it may be acceptable to violate the assumption of normality of errors, which underlies the most frequently used tests for statistical significance (regression, t-test, ANOVA, and linear mixed models with Gaussian error). 2. Here we use Monte Carlo simulations to show that such Gaussian models are remarkably robust to even the most dramatic deviations from normality. 3. We find that P-values are generally reliable if either the dependent variable Y or the predictor X is normally distributed, and that bias occurs only if both are heavily skewed (resulting in outliers in both X and Y). In the latter case, judgement of significance at an α-level of 0.05 is still safe unless sample size is very small. Yet, with more stringent significance criteria, as are used when conducting numerous tests (e.g. α = 0.0001), there is a greater risk of making erroneous judgements. 4. Generally, we conclude that violating the normality assumption appears to be the lesser of two evils when compared to alternative solutions that are either unable to account for levels of non-independence in the data (most non-parametric tests) or much less robust (e.g. Poisson models, which require control of overdispersion and sophisticated resampling). We argue that the latter may pose a more substantial threat to the reliability of research findings once one pragmatically acknowledges that, in the majority of publications, statistical expertise is limited.
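
The abstract describes a Monte Carlo check of P-value calibration under non-normality. The sketch below is not the authors' published simulation code; it is a minimal illustration of the same idea, with assumed choices (log-normal skew, a single sample size, simple linear regression via SciPy) standing in for the paper's actual simulation design: under a true null effect, draw skewed X and Y, fit an ordinary Gaussian model, and record how often the slope's P-value falls below α.

```python
# Minimal sketch of a type I error check under non-normality.
# Distribution choices (log-normal, sigma=1.5), sample size, and the number of
# simulations are illustrative assumptions, not values from the paper.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def type_one_error_rate(n=50, n_sim=20_000, alpha=0.05, skew_x=True, skew_y=True):
    """Proportion of simulations with P < alpha when X and Y are independent (true null)."""
    hits = 0
    for _ in range(n_sim):
        x = rng.lognormal(sigma=1.5, size=n) if skew_x else rng.normal(size=n)
        y = rng.lognormal(sigma=1.5, size=n) if skew_y else rng.normal(size=n)
        # Ordinary least-squares regression of Y on X; P-value for the slope
        res = stats.linregress(x, y)
        hits += res.pvalue < alpha
    return hits / n_sim

for sx, sy in [(False, False), (True, False), (True, True)]:
    rate = type_one_error_rate(skew_x=sx, skew_y=sy)
    print(f"skewed X={sx!s:5} skewed Y={sy!s:5} -> type I error at alpha=0.05: {rate:.4f}")
```

A nominal test should reject in roughly 5% of simulations at α = 0.05; rates well above that would indicate the anti-conservative bias the abstract attributes to the case where both X and Y are heavily skewed, and repeating the exercise with a stricter α (e.g. 0.0001) and small n probes the scenario the authors flag as riskier.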