Abstract
After an experiment has been completed, a trend may be observed that is “not quite significant”. Sometimes in this situation, researchers collect more data in an effort to achieve statistical significance. Such “N-hacking” is condemned because it can lead to an excess of false positive results. I use simulations to demonstrate how N-hacking causes false positives. However, in a parameter regime relevant for many experiments, the increase in false positives is quite modest. Moreover, results obtained this way have higher Positive Predictive Value than non-incremented experiments of the same sample size and statistical power. In other words, adding a few more observations to shore up a nearly-significant result can increase the reproducibility of results, counter to some current rhetoric. Many experiments are non-confirmatory, and unplanned sample augmentation with reasonable decision rules would not cause rampant irreproducibility in that context.
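The kind of simulation described above can be sketched as follows. This is a minimal illustration, not the paper's actual code: the parameters (initial sample size, increment size, cap, and the "promising" p-value window that triggers adding data) are assumptions chosen for demonstration. Each simulated experiment draws from the null (no true effect), and data are added only when the result is near-significant, so any excess over the nominal alpha reflects N-hacking.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def n_hacked_experiment(n0=10, n_add=5, n_max=30, alpha=0.05, promising=0.10):
    """One simulated experiment under the null hypothesis (no true effect).

    Start with n0 observations per group. If the p-value is 'promising'
    (between alpha and the promising threshold), add n_add observations per
    group and re-test, up to n_max per group. Parameter values are
    illustrative assumptions, not taken from the paper.
    Returns True if the experiment ends significant (a false positive).
    """
    a = list(rng.normal(size=n0))
    b = list(rng.normal(size=n0))
    while True:
        p = stats.ttest_ind(a, b).pvalue
        if p < alpha:
            return True                      # declared significant: false positive
        if p >= promising or len(a) >= n_max:
            return False                     # give up: correct negative
        a += list(rng.normal(size=n_add))    # "N-hack": collect a few more
        b += list(rng.normal(size=n_add))

n_sim = 5000
fp_rate = sum(n_hacked_experiment() for _ in range(n_sim)) / n_sim
print(f"False positive rate with N-hacking: {fp_rate:.3f} (nominal alpha = 0.05)")
```

Under these assumed settings the realized false positive rate exceeds the nominal 0.05, but only modestly, consistent with the claim that the inflation is small when data are added only for near-significant results and the total sample is capped.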
Competing Interest Statement
The authors have declared no competing interest.
Footnotes
Revision notes: improvements in figures for clarity, reorganization of the order of points, and refinement of discussion points.