Abstract
Research institutions and researchers have become increasingly concerned about poor research reproducibility and replicability, and research waste more broadly. Research institutions play an important role in addressing these problems, and understanding their intervention options is therefore valuable. This review aims to identify and classify possible interventions to improve research quality, reduce waste, and improve reproducibility and replicability within research-performing institutions.
Taxonomy development steps: 1) use of an exemplar paper on journal-level research quality improvement interventions; 2) a two-stage PubMed search using seed and exemplar articles, plus forward and backward citation searching, to identify articles evaluating or describing research quality improvement; 3) feedback on the draft taxonomy elicited from researchers at an open-science conference workshop; and 4) cycles of revision by the research team.
The search identified 11 peer-reviewed articles on relevant interventions. Overall, 93 interventions were identified from the peer-reviewed literature and researcher reporting. Interventions covered the before-, during-, and after-study research stages, as well as the whole of institution. Types of intervention included: Tools, Education & Training, Incentives, Modelling & Mentoring, Review & Feedback, Expert Involvement, and Policies & Procedures. Identified areas of focus for research institutions and for further research include: improving incentives to implement quality research practices, evaluating current interventions, encouraging no- or low-cost/high-benefit interventions, examining institutional research culture, and encouraging mentor-mentee relationships.
Introduction
Over the past decade, the problems of research waste and the reproducibility crisis have been extensively documented.(1-4) A 2014 series in the Lancet demonstrated that approximately 85% of biomedical research goes to waste through the combination of poor design, non-publication, and poor reporting,(1) with a similar percentage recently reported for ecology research.(5) Studies in disciplines as diverse as economics, cancer biology, psychology, machine learning, ecology, and social sciences have found disappointingly low reproducibility and replicability.(1-4)
Low reproducibility means that the original protocol, materials, or data sets may not be available to conduct the analysis needed to reproduce the results, while low replicability refers to the inability to re-conduct, or re-conduct well, an entire study or experiment, regardless of whether the results replicate.(6) Both reproducibility and replicability exist on a spectrum from ‘direct’, following the original methods strictly, to ‘conceptual’, where researchers selectively alter aspects of the original methods to test for robustness and generalisability.(7, 8) Both are important to reducing research waste and improving the overall quality of research practice.
Poor research reproducibility and replicability are partly attributable to flaws in study design and partly to incomplete or poor documentation of research processes. The flow-on effects impact research end-users, such as industries that rely on research to inform practice. Many of these problems are avoidable and might be reduced with sustained interventions at the research-systems level. The key stakeholders in improving the research system are research funders and research institutions.
What might research institutions do to improve the quality and reproducibility of their research? This work builds on a previous taxonomy of interventions for journals and publishers developed by Blanco et al in their scoping review of interventions to improve adherence to reporting guidelines in health research, which classifies interventions by type of intervention and by research stage.(9) The current taxonomy expands the behaviour change categories used by Blanco, drawing on Michie’s behaviour change wheel, which covers Training, Incentivisation, Modelling, Persuasion, Education, and Coercion.(10) This approach is useful not just for identifying existing interventions, but also for identifying where there are gaps. We are not aware of any previous study that classifies interventions to improve quality and increase reproducibility at institutions.
This review aims to identify and classify possible interventions to improve research quality, reduce waste, and improve reproducibility and replicability within research-performing institutions. We also identified studies that assessed the interventions.
Methods
Research institutions included academic institutions such as universities, government research institutes, and privately funded institutions. Within institutions, interventions could occur at a range of levels, from individual actions to department and whole-of-institution levels, including policy changes.
The interventions could be training or education, institutional incentives or regulations, or the provision of infrastructure and tools; the only requirement was that the intervention must be aimed at some aspect of reducing research waste, improving quality, or improving reproducibility. For example, interventions aimed at better study design or conduct of research, increased or timelier publication, and better reporting of research, including “open science” practices such as the provision of protocols, other research process details, and research data, would all be eligible.
The Search
Because the range of potential interventions and the terms used to describe them was broad and unknown, we used a two-stage process for the search. Stage 1 used a set of seed articles and reviews identified by the authors from a preliminary search, which identified several articles including a review of journal interventions.(9) We then used forward and backward citation searching on this set of articles to widen the pool of potential articles. Stage 2 then applied a word frequency analysis to these eligible articles to identify key terms with which to build a search strategy for the full database searches.
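The Stage 2 word-frequency step can be sketched roughly as follows. This is an illustrative sketch only: the document corpus, stop-word list, and tooling shown here are hypothetical, as the paper does not specify what software was used for the analysis.

```python
from collections import Counter
import re

# Hypothetical titles/abstracts standing in for the Stage 1 eligible
# articles; the actual corpus is not reproduced in the paper.
documents = [
    "Interventions to improve adherence to reporting guidelines",
    "Improving reproducibility of biomedical research",
    "Reducing research waste through better study design",
]

# A small illustrative stop-word list.
STOPWORDS = {"to", "of", "through", "the", "a", "and"}

def term_frequencies(docs):
    """Count how often each non-stopword term appears across documents."""
    counts = Counter()
    for doc in docs:
        tokens = re.findall(r"[a-z]+", doc.lower())
        counts.update(t for t in tokens if t not in STOPWORDS)
    return counts

# The most frequent terms become candidate keywords for building the
# full database search strategy.
top_terms = [term for term, _ in term_frequencies(documents).most_common(5)]
```

In this toy corpus, “research” surfaces as the most frequent term; in practice such candidate terms would be screened by the review team before inclusion in the search string.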
This Stage 1 search identified a key review article on interventions to improve adherence to reporting guidelines for journals, which included a suggested taxonomy.(9) We then drafted a potential taxonomy and used the other interventions identified from the searches to test and modify this proposed intervention taxonomy.
Next, we sought input from others for further examples and on the taxonomy. To ensure the taxonomy was reflective of research practice in institutions, we invited possible end-users to assist in co-design. During the 2021 AIMOS Association for Interdisciplinary Meta-Research and Open Science Conference (https://www.ivvy.com.au/event/aimos2021), we held a workshop with approximately 40 participants to further refine the draft taxonomy. Workshop participants included researchers at different career levels, ranging from PhD students to professors.
Briefly, the steps of the workshop process were:
1) List any interventions you have conducted, attended, or heard of.
2) Map these interventions onto the taxonomy using a Google Doc accessible to all participants (note: if they do not fit, put them into a second list).
3) Discuss interventions that do not fit the proposed taxonomy (do these warrant a change to the taxonomy?).
4) General discussion on next steps.
Following the workshop, we used the participant input to develop a revised taxonomy, collect further potential examples, and revise the taxonomy again.
Results
The taxonomy
Interventions were first classified according to the research stage of their implementation: before study conduct, during study conduct, and after study conduct. Research stages were then further subclassified into education; grant writing; protocol writing; research conduct & analysis; manuscript writing; manuscript submission; and post-publication. Table 1 highlights which type of behaviour change interventions, as classified by Michie’s behaviour wheel, are represented at each research stage.
“Whole of institution” was included as an additional category, separate from the research stages, as some interventions relate to two or more stages of research or support overall research practices in an institution. Similarly, “Institutional Culture and Individual Ethos” was added to the taxonomy to highlight the influence of the culture of the institution, including its overall research aims and mission, and the individual ethos, values, and attitudes towards research practices of those who work there.
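The taxonomy’s two-axis structure (intervention type by research stage) can be sketched as a simple lookup table. The cell entries below are illustrative examples drawn from interventions mentioned in the text, not the full contents of Table 1.

```python
# Illustrative sketch of the taxonomy's two-axis structure:
# intervention types (rows) x research stages (columns).
STAGES = [
    "Education", "Grant writing", "Protocol writing",
    "Research conduct & analysis", "Manuscript writing",
    "Manuscript submission", "Post-publication", "Whole of institution",
]

# Sparse mapping from (type, stage) cells to example interventions;
# entries are examples from the text, not the complete Table 1.
taxonomy = {
    ("Policies & Procedures", "Whole of institution"):
        ["Hiring and promotion criteria emphasising rigour and transparency"],
    ("Modelling & Mentoring", "Education"):
        ["Mentor-mentee relationships"],
    ("Tools", "Research conduct & analysis"):
        ["Shared version-control repositories", "Data dictionaries"],
}

def interventions_for(intervention_type, stage):
    """Return known interventions for a (type, stage) cell; an empty
    result highlights a gap, such as the sparse 'Incentives' row."""
    return taxonomy.get((intervention_type, stage), [])
```

Representing the taxonomy this way makes the gaps explicit: any (type, stage) pair without an entry, such as most of the “Incentives” row, returns an empty list.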
Table 1 gives examples of the interventions identified. The full set of interventions is displayed in Supplementary File 1. Overall, we identified 93 different possible interventions.
The types of intervention varied widely, from whole-of-institution policies, such as modifying hiring and promotion criteria to emphasise rigorous research design, reproducibility, and transparency, to highly specific departmental-level interventions such as developing mentor-mentee relationships. Most interventions are applicable to researchers at all levels of experience. Several interventions are specific to particular areas of research, e.g., registration of clinical trials in healthcare, but others, such as mentoring or journal clubs, are relevant to multiple disciplines. We did not subclassify by discipline.
In reviewing the taxonomy, several themes emerged. Many of the interventions require a substantial and long-term investment in people, e.g., hiring of specific experts; training of research assistants and others in data collection methods and techniques; and co-design with patients and public/end-users. Though ad hoc seminars have value, most of the interventions require individuals or teams to be embedded in institutions, even if the intervention is to provide “just in time” advice. For example, a publications officer who checks the adherence of papers to reporting guidelines would have to be well established to provide on-the-spot advice at the time of need.
Education of researchers and research support staff can happen through a variety of formal and informal methods. There was some suggestion that some of the training had to be compulsory, e.g., included in the curriculum for undergraduate and postgraduate research training. However, there was also specific recognition of the role of informal networks, including peer-to-peer learning and mentor-mentee relationships. We note that mentor-mentee learning can run in both directions, as more junior staff are sometimes the instigators of novel research practices learned during their research skill development. As training in undergraduate and postgraduate programs is constantly changing, more experienced researchers can be exposed to these changes by mentoring a student or Early Career Researcher.
There were surprisingly few technical interventions suggested. Most of these also included an element of human intervention, e.g., the use of pull requests and code commentary by collaborators and/or external peers on shared codebases. Notably, one of the technical interventions was to cease subsidising (through the purchase of site licenses) certain software programs (e.g., statistical and spreadsheet packages) that are not conducive to reproducibility, and instead to promote and encourage the use of open-source software and open science practices.
Surprisingly, the “Incentives” row has more empty cells than the other rows. This indicates either a lack of awareness among the participants who reviewed the taxonomy, and/or a lack of incentives implemented and available at research institutions to encourage researchers to participate in quality research practices.
Following the classification, we searched for papers that had described and/or assessed these interventions. During the search processes, eleven articles evaluating interventions were found. All the interventions that had been assessed were in the manuscript and grant writing, or education phase of research (Table 2).
Discussion
To improve institutional interventions that might improve research quality, an understanding of the range and types of interventions is vital. Based on the work of Blanco et al (2019) in their scoping review of interventions to improve adherence to reporting guidelines in health research, we have developed a taxonomy of possible interventions to improve research quality and reproducibility within institutions. At the institutional level, interventions are possible at all stages of research, and for each stage there are several possible mechanisms of intervention.
Through an iterative crowdsourced process, we identified interventions that the authors or the hackathon participants had experienced, that they had seen conducted in their or others’ institutions, or which they wanted to see. Very few of the interventions have been evaluated. In several areas where interventions are possible none were identified, or the interventions suggested are only aspirational at this point.
Research quality has become a much-discussed topic in Australia and internationally, but there is no systematic approach to improving it, especially regarding what interventions are needed at institutions. In Australia, research quality is most prominently assessed through the Excellence in Research for Australia (ERA) process; however, ERA assesses research outputs, not any part of the research process. The national focus on research quality has widened through the work of the NHMRC’s Research Quality Steering Committee (RQSC), established in 2018. In 2019, the RQSC oversaw a survey of Australian research institutions and researchers.(22) Key opportunities identified from that survey relevant to interventions at institutions were: the need for effective training and mentorship (especially of junior researchers) in responsible research practice; addressing factors that adversely affect research quality, such as poor research practices; promoting positive initiatives and processes rather than competition where possible; and encouraging more rigorous reproducibility procedures.
A recent Australian Chief Scientist declared a need to “shift from quantity to quality” and to challenge the status quo of “a passive apprenticeship system” of researcher training.(23, 24)
In the recently released National Research Infrastructure Roadmap, the importance of research quality is recognised in specific, limited areas, primarily data, e.g., “An important driver for maintaining quality research output is Australia’s ability to generate and analyse data as well as improving the digital skills of researchers”.(22, 25)
On a global level, the United Nations Educational, Scientific and Cultural Organization (UNESCO) developed its Recommendation on Open Science in 2021.(26) These recommendations align with the taxonomy in this paper, as they cover institution-related interventions including open scientific publications, research data, educational resources, source software and source code, and hardware. Open scientific publications relate to the ‘Education and Training’ row at the ‘Manuscript Submission’ stage, such as training on how to access funds for publication fees for open access journals. Open research data relates to many of the examples in the ‘Research Conduct & Analysis’ column of the taxonomy, including shared version-control repositories and data dictionaries. Open educational resources relate to examples in the ‘Education and Training’ row, including making training sessions available in hybrid delivery. Open-source software and source code relate to examples outlined in the ‘Tools’ row. Lastly, open hardware relates to ‘training manuals/data collection protocol, including use of equipment’ in the ‘Education and Training’ row at the ‘Research Conduct & Analysis’ stage.
There have been other classifications of potential interventions, notably the Michie behaviour change wheel and the Nosek pyramid.(3, 10) Both align with ours in that they range from interventions that simply change the environment (our “Tools”, Nosek’s “make it easy”, Michie’s “enablement”) through to required actions (our “Policies & Procedures”, Nosek’s “make it required”, Michie’s “coercion”). What all these classifications demonstrate is the need for a range of approaches. By mapping interventions to specific research stages and intervention types, we have demonstrated the range of possible interventions, where there are gaps, and especially the relative lack of assessment of these interventions.
Limitations
The interventions in the taxonomy we present are not a comprehensive list of all possible interventions. We did not assess adherence to any of the interventions or examine their effectiveness, except where there were previously published papers. As the participants and authors were largely or exclusively from the sciences, the list of interventions may not include interventions within Humanities and Social Sciences.
A strength of this research is the use of both published peer-reviewed literature and end-user engagement to develop the taxonomy.
Future Directions
Given that most interventions outlined in the taxonomy have not been evaluated for their impact on research quality and reproducibility, there is a clear need for more institutional interventions to be evaluated. Priority areas for evaluation should be those currently in common use at institutions, to assess their value. New or different interventions to implement could be those that are no- or low-cost, such as open-access tools and software that enhance research practices (e.g., Overleaf and JASP), and adaptation of policies and the research environment to promote open science practices.
Institutional culture and individual researcher ethos have a strong influence over the reproducibility, quality, and transferability of research practices. The UK Reproducibility Network encourages institutions to examine their research culture and how it may or may not be supportive of producing robust and credible research.(27) The implementation and evaluation of interventions outlined in our taxonomy should be considered alongside the institution’s current culture and the potential shifts that could be made to encourage and promote open science practices.
Finally, it is vital to explore the paucity of “incentive” interventions that research institutions might use. Incentivisation is important in the workload models of research institutions: much as universities provide incentives for their education and teaching of degrees and coursework, they need incentives for research quality. The kinds of incentives depend heavily on institutional structures and the availability of resources to create or fund them. Future research could be guided by this taxonomy to identify how the incentivisation of quality research practices could be better implemented.
Competing Interests
The authors declare no competing interests.
Acknowledgements
The authors would like to thank the AIMOS 2021 Conference Hackathon Workshop attendees who participated in providing feedback for the draft taxonomy.