Abstract
Background Success shapes the lives and careers of scientists. But success in science is difficult to define, let alone to translate into indicators that can be used for assessment. In the past few years, several groups have expressed their dissatisfaction with the indicators currently used for assessing researchers. But given the lack of agreement on what should constitute success in science, most proposals have not been acted upon. This paper aims to complement our understanding of success in science and to document areas of tension and conflict in research assessments.
Methods We conducted semi-structured interviews and focus groups with policy makers, funders, institution leaders, editors or publishers, research integrity office members, research integrity community members, laboratory technicians, researchers, research students, and former researchers who changed careers, to inquire into the topics of success, integrity, and responsibilities in science. We used the Flemish biomedical research landscape as a baseline to capture the views of interacting and complementary actors within a single system.
Results Given the breadth of our results, we divided our findings into a two-paper series, with the current paper focusing on what defines and determines success in science. Respondents depicted success as a multi-factorial, context-dependent, and mutable construct. Success appeared to be an interaction between characteristics of the researcher (Who), research outputs (What), processes (How), and luck. Interviewees noted that current research assessments overvalue outputs but largely ignore the processes deemed essential for research quality and integrity. Interviewees maintained that we need a diversity of indicators to allow a balanced and diverse view of success; that assessments should not depend blindly on metrics but should also value human input; that we must value quality over quantity; and that any indicators used must be transparent, robust, and valid.
Conclusions The objective of research assessments may be to encourage good researchers, to benefit society, or simply to advance science. Yet we show that current assessments fall short of each of these objectives. Open and transparent dialogue between actors is needed to understand what research assessments aim for and how they can best achieve their objectives.
Trial Registration osf.io/33v3m
LIST OF ABBREVIATIONS
- COREQ: COnsolidated criteria for REporting Qualitative research checklist
- DORA: San Francisco Declaration on Research Assessment
- EP: Editor(s) or publisher(s)
- ESF: European Science Foundation
- EUA: European Universities Association
- FA: Funding agency(ies)
- LT: Laboratory technician(s)
- PMI: Policy maker(s) or influencer(s)
- PostDoc: Post-doctoral researcher(s)
- QUAGOL: Qualitative Analysis Guide of Leuven
- RCC: Researcher(s) who changed career
- Re-SInC: Rethinking Success, Integrity, and Culture in Science
- RIL: Research institution leader(s)
- RIN: Research integrity network member(s)
- RIO: Research integrity office member(s)
- WCRI: World Conference on Research Integrity