The difficulty of reproducing published biomedical research studies has become a matter of increasing concern; if left unaddressed, it will waste limited research funding and may erode public support for research. Nature Methods is therefore adopting new editorial measures intended to improve the consistency and quality of reporting in submitted manuscripts.

We join Nature and the other Nature research journals in this effort. We will all be using a checklist (http://www.nature.com/authors/policies/checklist.pdf) intended to prompt authors to disclose technical and statistical information in their submissions and to encourage referees to consider aspects important for research reproducibility. The checklist was developed on the basis of community discussions aimed at addressing the problems underlying irreproducibility, including workshops held last year by the US National Institute of Neurological Disorders and Stroke and the National Cancer Institute. Inspiration was also taken from published studies and guidelines about reporting standards (or the lack thereof) and from the collective experience of editors at Nature journals.

The checklist focuses on experimental and analytical design elements that are critical for the interpretation of research results but that are often reported incompletely. For example, authors will need to describe methodological parameters that may introduce bias or influence robustness, and they must provide precise characterization of key reagents, such as antibodies and cell lines.

More broadly, we will require fuller and more precise descriptions of statistics. To help improve the statistical robustness of papers, Nature journals will now employ statisticians as consultants on certain papers, at the Editors' discretion and as suggested by referees. By focusing on reporting, we avoid dictating how authors should perform their experiments, for example, by always requiring biological replicates. Biological replicates are required for drawing biological conclusions, whereas technical replicates establish only the technical variability of a method. Reviewers, however, must know what kinds of replicates underlie the error bars in a figure, and they must know it at a point in the review process when deficiencies can still be corrected.
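As a purely illustrative aside, and not part of the journal's policy, the short Python sketch below uses invented numbers to show why this distinction matters: an error bar computed over technical replicates of a single sample reflects only measurement precision, whereas one computed over independent biological replicates reflects the variability that biological conclusions rest on.

    # Hypothetical illustration; replicate counts and values are invented.
    import numpy as np

    rng = np.random.default_rng(0)

    n_biological = 3   # independent samples (e.g., different animals or cultures)
    n_technical = 3    # repeated measurements of the same sample

    # Simulate a case where biological variation exceeds measurement noise.
    biological_means = rng.normal(loc=10.0, scale=2.0, size=n_biological)
    measurements = np.array([
        rng.normal(loc=m, scale=0.3, size=n_technical) for m in biological_means
    ])

    # Error bar over technical replicates of one sample: measurement precision only.
    sem_technical = measurements[0].std(ddof=1) / np.sqrt(n_technical)

    # Error bar over biological replicates (one mean per sample): the variability
    # that matters for biological conclusions.
    per_sample_means = measurements.mean(axis=1)
    sem_biological = per_sample_means.std(ddof=1) / np.sqrt(n_biological)

    print(f"SEM over technical replicates:  {sem_technical:.2f}")
    print(f"SEM over biological replicates: {sem_biological:.2f}")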

To further increase transparency, we now also encourage authors to provide, in tabular form, the data underlying the graphical representations used in figures. This is in addition to our well-established data-deposition policy for specific types of experiments and large data sets. The source data will be made accessible directly from the figure legend for readers interested in seeing them firsthand. We also continue to encourage authors to use resources for sharing detailed methods and reagent descriptions by providing direct online linking between primary research articles and Protocol Exchange (http://www.nature.com/protocolexchange/), an open resource into which authors can deposit the detailed step-by-step experimental protocols used in their study.
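To make the practice concrete, here is a hypothetical sketch (file names, values and column labels are invented) of how the values plotted in a figure panel might be written out as an accompanying source-data table:

    # Hypothetical example: save the exact values behind a figure panel as a table.
    import csv

    import matplotlib.pyplot as plt

    concentrations = [0.1, 0.3, 1.0, 3.0, 10.0]   # e.g., concentration in µM
    responses = [2.1, 5.8, 12.4, 18.9, 21.7]      # e.g., response in arbitrary units

    # Plot the figure panel.
    plt.plot(concentrations, responses, marker="o")
    plt.xlabel("Concentration (µM)")
    plt.ylabel("Response (a.u.)")
    plt.savefig("figure1a.pdf")

    # Write the underlying values so readers can inspect or reanalyze them.
    with open("figure1a_source_data.csv", "w", newline="") as handle:
        writer = csv.writer(handle)
        writer.writerow(["concentration_uM", "response_au"])
        writer.writerows(zip(concentrations, responses))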

In addition, Nature Methods will be requesting more information about the custom software used to implement the methods we publish. Too often we discover only after a manuscript has been published that other laboratories cannot implement the method without control or analysis software that was neither supplied with nor mentioned in the manuscript. Similarly, the precise processing applied to the data may be unknown. Although neither problem is likely to invalidate the methodology, this lack of transparency hinders method implementation and reproduction.

Ensuring systematic attention to reporting and transparency is only a small step toward solving the issues of reproducibility that have been highlighted across the life sciences. As underscored in three Correspondences in this issue, statistical analysis of reproducibility itself is still immature. But most research studies can be robustly assessed with existing statistical methods, when they are applied properly. Unfortunately, too many biologists still do not receive adequate training in statistics and other quantitative aspects of their area of study, and mentoring of young scientists on matters of rigor and transparency is inconsistent at best. Institutions must put more emphasis on training future scientists in these areas.

Finally, as discussed in these pages last year (doi:10.1038/nmeth.1926), there is little recognition or incentive for implementing and validating newly described methods. Those who put effort into documenting the validity, or irreproducibility, of a method, or into making practical but incremental improvements to it, have little prospect of seeing that work valued by journals and funders; meanwhile, funding and effort are wasted because others are unable to benefit from such follow-up work.

Tackling these issues is a long-term endeavor that will require the commitment of funders, institutions, researchers and publishers. Our effort is but a single step that we hope will be one of many, because what is ultimately at stake is public trust in science.