Ethical Shades of Gray: Questionable Research Practices in Health Professions Education

Anthony R. Artino Jr, Erik W. Driessen, Lauren A. Maggio
doi: https://doi.org/10.1101/256982
Anthony R. Artino Jr: Department of Medicine, F. Edward Hébert School of Medicine, Uniformed Services University of the Health Sciences, Bethesda, Maryland. Twitter: @mededdoc

Abstract

Purpose To maintain scientific integrity and engender public confidence, research must be conducted responsibly. Whereas scientific misconduct, like data fabrication, is clearly irresponsible and unethical, other behaviors—often referred to as questionable research practices (QRPs)—exploit the ethical shades of gray that color acceptable practice. This study aimed to measure the frequency of self-reported QRPs in a diverse, international sample of health professions education (HPE) researchers.

Method In 2017, the authors conducted an anonymous, cross-sectional survey study. The web-based survey contained 43 QRP items that asked respondents to rate how often they had engaged in various forms of scientific misconduct. The items were adapted from two previously published surveys.

Results In total, 590 HPE researchers took the survey. The mean age was 46 years (SD=11.6), and the majority of participants were from the United States (26.4%), Europe (23.2%), and Canada (15.3%). The three most frequently reported QRPs were adding authors to a paper who did not qualify for authorship (60.6%), citing articles that were not read (49.5%), and selectively citing papers to please editors or reviewers (49.4%). Additionally, respondents reported misrepresenting a participant’s words (6.7%), plagiarizing (5.5%), inappropriately modifying results (5.3%), deleting data without disclosure (3.4%), and fabricating data (2.4%). Overall, 533 (90.3%) respondents reported at least one QRP.

Conclusions Notwithstanding the methodological limitations of survey research, these findings indicate that a substantial proportion of HPE researchers report a range of QRPs. In light of these results, reforms are needed to improve the credibility and integrity of the HPE research enterprise.

“Researchers should practice research responsibly. Unfortunately, some do not.” –Nicholas H. Steneck, 20061

The responsible conduct of research is the foundation of sound scientific practice.1,2 The need to conduct research in a responsible manner is self-evident—if science is to inform our understanding of how the world works, it must be done in an honest, accurate, and unbiased way.3

Whereas behaviors like data fabrication are clearly irresponsible and highly unethical, other forms of research misconduct exploit the ethical shades of gray that color acceptable research practice. Often referred to as questionable research practices (QRPs), these behaviors “by nature of the very fact that they are often questionable as opposed to blatantly improper, also offer considerable latitude for rationalization and self-deception.”4 Consequently, QRPs are more prevalent and, many have argued, more damaging to science and its public reputation than obvious fraud.4–8 Ultimately, QRPs can waste resources, provide an unfair advantage to some researchers over others, damage the scientific record, and provide a poor example for other researchers, especially trainees.7

Health professions education (HPE) is not immune to the damaging effects of irresponsible research practices. In the HPE context, we define QRPs as poor data management; inappropriate research procedures, including questionable procedures for obtaining informed consent; insufficient respect and care for study participants; improper research design; carelessness in observation and analysis; suboptimal trainee and mentor partnerships; unsuitable authorship or publishing practices; and derelictions in reviewing and editing.7 The need to guard against such practices is described in the author instructions of most scientific journals. For example, Academic Medicine’s author instructions clearly describe ethical considerations related to authorship, prior and duplicate publications, conflicts of interest, and ethical treatment of human subjects (http://journals.lww.com/academicmedicine/_layouts/15/1033/oaks.journals/informationforauthors.aspx). Such journal guidelines are often patterned on the recommendations of the Committee on Publication Ethics (https://publicationethics.org/).

In the last decade, commentaries by several HPE journal editors have highlighted instances of QRPs in article submissions, including self-plagiarism, so-called “salami slicing” (i.e., inappropriately dividing a single study into multiple papers), and unethical authorship practices.9–11 Additionally, a recent review of four HPE journals found that 13% of original research articles published in 2013 did not address approval by an ethics review board or stated that it was unnecessary, without further discussion.12 Moreover, a 2017 study of senior HPE leaders highlighted multiple problematic authorship practices, including honorary authorship and the exclusion of authors who deserved authorship.13 Notably, only about half of the senior researchers surveyed were able to correctly identify the authorship standards used by most medical journals (i.e., the International Committee of Medical Journal Editors (ICMJE) authorship criteria14).

Notwithstanding these examples, QRPs have received limited attention in the HPE literature. In a recent article,7 we attempted to raise the community’s awareness of QRPs and highlight the need to examine their pervasiveness among HPE researchers. With this call to action in mind, we conducted the present study to measure the frequency of self-reported QRPs in a diverse, international sample of HPE researchers. In doing so, we hope to continue the conversation about QRPs in our growing HPE field, with the ultimate goal of promoting the responsible conduct of high-quality, ethical research.

Method

To measure the frequency of serious research misconduct and other QRPs, many different approaches have been employed in the literature. These include counts of confirmed cases of researcher fraud and paper retractions, as well as research audits by government funders.8 Such methods are limited because they capture only misconduct that has already been discovered, and detecting such misconduct is difficult.15 Moreover, distinguishing intentional misconduct from honest mistakes is challenging. Therefore, such approaches significantly underestimate the real frequency of QRPs, since only researchers themselves know whether they have willfully acted in a questionable or unethical manner.

To address these challenges, survey methods have been used to ask scientists directly about their research behaviors.4–6,16,17 Like the measurement of any socially undesirable behavior, assessing QRPs via self-report likely underestimates the true prevalence or frequency of the behaviors. Nonetheless, when employed appropriately, survey methods can generate reasonable estimates that provide a general sense of the problem’s scope.18,19

Therefore, we administered an anonymous, cross-sectional survey to determine the frequency of QRPs in a sample of HPE researchers. The Ethical Review Board Committee of the Netherlands Association for Medical Education approved this study (Dossier #937).

Survey development

We developed our survey instrument by adapting several existing surveys. The final version of the survey featured a total of 66 items divided into three sections (see Supplemental Digital Appendix). The first section included 43 items derived from two previously published surveys assessing QRPs in biomedicine.5,6 The QRP items asked respondents to identify how often they had engaged in the particular research practice. The practices spanned the research continuum, from data collection and storage to study reporting, collaboration, and authorship. The QRP items employed a six-point, Likert-type, frequency-response scale: never, once, occasionally, sometimes, frequently, and almost always. Each item also included the response option not applicable to my work.

We slightly modified the original QRP items to improve their clarity and relevance to the HPE research context. For example, the original item “inadequately handled or stored data or (bio)materials” was revised to “inappropriately stored sensitive research data (e.g., data that contains personally identifiable information).” Following these modifications, 19 experienced HPE researchers reviewed the adapted survey items and provided detailed qualitative feedback.20 Ten of the expert reviewers were women, and all held doctoral degrees (13 PhDs, 4 MD/PhDs, and 2 MDs). On average, the reviewers had published 98.9 PubMed-indexed journal articles (SD=66.1). Based on the date of their first HPE publication, they had been publishing in the field for an average of 20.7 years (SD=9.3). Expert reviewers reported their work location as the United States (n=9), Canada (n=3), Europe (n=2), South America (n=1), Africa (n=1), and Australia (n=1), and all but two were full professors.

The expert feedback included comments on item relevance, clarity, missing facets, and suggestions for overall survey improvement. Based on the expert feedback, we revised the survey again; revisions included wording modifications and the development of several new items specific to HPE research. For example, based on the recommendations of several experts, we created the following items related to qualitative research methods, among several others: “misrepresented a participant’s words or writings” and “claimed you used a particular qualitative research approach appropriately (e.g., grounded theory) when you knowingly did not.”

The second section of the survey included nine publication pressure items16 (these data are not reported here), and the final section included 13 demographic items. The survey ended with a single, open-ended question, which gave respondents the opportunity to give feedback on the survey instrument itself or provide additional thoughts on QRPs in HPE.

Sampling and survey distribution procedures

To create our sample, we used two separate approaches. First, we created a “curated sample” by searching Web of Science, Scielo (a database focused on South America), African Journals Online, and Asia Journals Online for articles in over 20 HPE journals published in 2015 and 2016. From these articles, we extracted all available author email addresses, removing duplicate authors. This process generated a sample of 1,840 unique HPE researchers. All names and emails were entered into Qualtrics, an online survey tool (Qualtrics, Provo, Utah), and the survey was then distributed in four waves of email invitations: wave 1 (sent November 13, 2017), wave 2 (sent November 20, 2017), wave 3 (sent November 27, 2017), and wave 4 (sent December 11, 2017).
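To make the curated-sample construction concrete, the sketch below shows one way the email-deduplication step could be implemented. This is an illustration under assumed inputs, not the authors’ actual pipeline; the file name and column names (hpe_articles_2015_2016.csv, author_name, email) are hypothetical.

```python
# Hypothetical sketch of the deduplication step: given article records exported
# from the bibliographic searches, keep one entry per unique author email.
import csv

seen = set()
sample = []
with open("hpe_articles_2015_2016.csv", newline="") as f:  # hypothetical export
    for row in csv.DictReader(f):  # assumes "author_name" and "email" columns
        email = row.get("email", "").strip().lower()
        if email and email not in seen:  # skip missing emails and duplicates
            seen.add(email)
            sample.append({"name": row["author_name"], "email": email})

print(len(sample))  # the authors report 1,840 unique researchers
```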

Next, we collected a “social media sample” by posting anonymous links to the survey on our Twitter and Facebook accounts (posted on December 11, 2017). All survey responses obtained from the social media links were tracked separately from those sent to the curated sample. To prevent duplicate submissions, respondents in the social media sample were given the option to select “I have already completed this survey” on the informed consent page.

Statistical analyses

Prior to analysis, we screened the data for accuracy and missing values. Next, we calculated the response rate in the curated sample using response rate definition #6, as delineated by the American Association for Public Opinion Research.21 Then, to assess potential nonresponse bias in the curated sample, we used wave analysis to calculate a nonresponse bias statistic.22 In wave analysis, late respondents are considered proxies for non-responders, and their responses are compared to responses from the first wave. In addition, to determine whether or not it was appropriate to combine the curated and social media samples for analysis, we conducted a multivariate analysis of variance (MANOVA) to compare respondents on several demographic characteristics: age, experience doing HPE research, percentage of work time dedicated to HPE research, and number of peer-reviewed publications. Finally, we calculated descriptive statistics for the total sample, with particular emphasis on the frequency of self-reported QRPs. All data analyses and visualizations were conducted using IBM SPSS Statistics (IBM Corporation, New York, NY) and Microsoft Excel (Microsoft Corporation, Redmond, WA), respectively.
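As a concrete illustration of the sample-comparison step, the following is a minimal sketch of the MANOVA in Python. The authors report running their analyses in SPSS; this statsmodels version is our reconstruction, and the data file and column names are hypothetical.

```python
# Minimal sketch (assumptions noted above) of the MANOVA comparing the curated
# and social media samples on the demographic variables named in the text.
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

df = pd.read_csv("qrp_survey.csv")  # hypothetical pooled-response file

dvs = ["age", "years_hpe_research", "pct_time_research", "n_publications"]
mv = MANOVA.from_formula(
    "age + years_hpe_research + pct_time_research + n_publications"
    " ~ sample_source",  # curated vs. social media
    data=df.dropna(subset=dvs + ["sample_source"]),
)
print(mv.mv_test())  # Wilks' lambda, Pillai's trace, and related statistics
```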

Results

Of the 1,840 email invitations sent to HPE researchers in the curated sample, 199 were returned as undeliverable, leaving 1,641 potential respondents. Of these, 463 (28.2%) researchers completed at least a portion of the survey.21 Results from the wave analysis revealed a nonresponse bias statistic of 0.36. On a six-point, frequency-response scale, this represents a 6% difference, which is unlikely to have a meaningful effect on the practical interpretation of the results.22
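For readers who want to trace the reported figures, the arithmetic reconstructs as follows (our reconstruction; the 6% conversion assumes the bias statistic is expressed relative to the six points of the response scale):

```latex
\text{response rate} = \frac{463}{1840 - 199} = \frac{463}{1641} \approx 28.2\%
\qquad
\text{bias} = \frac{0.36}{6\ \text{scale points}} = 0.06 = 6\%
```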

The social media sample yielded an additional 127 responses. Results from the MANOVA comparing respondents in the curated sample to those in the social media sample revealed statistically significant differences between the two groups, F(5, 524) = 6.67, P < .001. In particular, post-hoc analyses revealed that respondents in the social media sample were slightly younger (M=40.7 years) and less experienced in HPE research (M=7.5 years) than those in the curated sample (M=47.4 years and M=11.0 years, respectively). That said, the two groups did not differ in terms of percentage of work time spent on HPE research activities or mean number of peer-reviewed publications. Therefore, because our goal was to explore the frequency of QRPs among a diverse, international sample of HPE researchers, we elected to pool the two samples and analyze the data together.

Of the 590 respondents in the pooled sample, the mean age was 46 years (SD=11.6), and there were 305 (51.7%) women, 246 (41.7%) men, and 39 (6.6%) individuals who did not report their gender. As indicated in Table 1, the sample consisted of HPE researchers from across the World Health Organization’s six world regions. The majority reported their location as the United States (26.4%), Europe (23.2%), and Canada (15.3%). Respondents’ education, area of study, work context and role, academic rank, and primary research activities are also presented in Table 1. In addition, respondents reported the following: years involved in HPE in any capacity (M=14.9 years, SD=9.7), years involved in HPE research (M=11.3 years, SD=8.5), percentage of work time spent conducting HPE research (M=27.3%, SD=23.7%), and total number of peer-reviewed publications (M=40.1, SD=55.0).

Table 1 Demographic characteristics of an international sample of 590 health professions education researchers

Table 2 summarizes the frequency of self-reported QRPs, and the Figure provides a visual representation of these results. To simplify the figure, we collapsed the response options of occasionally, sometimes, frequently, and almost always into a single frequency option labeled more than once. Finally, the Box lists the top 10 most frequently reported QRPs among our respondents, from highest to lowest frequency. As indicated, the most frequently reported QRPs were related to authorship and study reporting practices, as well as issues around data storage, collection, and interpretation. Additionally, 39 (6.7%) respondents reported misrepresenting a participant’s words, 31 (5.5%) reported using sections of text from another author’s copyrighted material without permission or proper citation, 30 (5.3%) reported inappropriately modifying study results due to pressure from a research advisor or collaborator, 20 (3.4%) reported deleting data before performing analysis without disclosure, and 14 (2.4%) reported fabricating data. Overall, 533 (90.3%) respondents reported at least one QRP.
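The collapsing of response options described above is a simple recoding; the sketch below shows one way it could be done. The response labels come from the survey itself; the pandas usage and the example data are our own illustration.

```python
# Minimal sketch: collapse the six-point frequency scale into three categories
# ("never", "once", "more than once") for the stacked bar graph in the Figure.
import pandas as pd

collapse = {
    "never": "never",
    "once": "once",
    "occasionally": "more than once",
    "sometimes": "more than once",
    "frequently": "more than once",
    "almost always": "more than once",
}

# Example responses to a single QRP item (illustrative data only).
responses = pd.Series(["never", "once", "sometimes", "frequently", "never"])
print(responses.map(collapse).value_counts())
```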

Table 2 Frequency of self-reported questionable research practices among an international sample of 590 health professions education researchers, reported as No. (%)a

Box The top 10 most frequently reported QRPs among an international sample of 590 health professions education researchers (listed from highest to lowest frequency).

  1. Added one or more authors to a paper who did not qualify for authorship (so-called “honorary authorship”)

  2. Cited articles and/or materials that you have not read

  3. Selectively cited certain papers just to please editors or reviewers

  4. Inappropriately stored sensitive research data (e.g., data that contains personally identifiable information)

  5. Selectively cited your own work just to improve your citation metrics

  6. Ignored a colleague’s questionable interpretation of data

  7. Collected course or curriculum data under the guise of “program evaluation” without human-subjects ethics (IRB) approval, with the ultimate intent of using the data for research purposes

  8. Inappropriately emailed sensitive research data (e.g., data that contains personally identifiable information)

  9. Accepted authorship for which you did not qualify (so-called “honorary authorship”)

  10. Spread study results over more papers than is appropriate (so-called “salami slicing”)

Discussion

This study examined the frequency of self-reported QRPs among HPE researchers, practices that may be detrimental to scientific inquiry.1,2 To our knowledge, this is the first study to explore QRPs across the HPE research continuum. Taken together, our findings indicate that a substantial proportion of HPE researchers admit to having engaged in a range of QRPs. These results are consistent with the extant literature on research misconduct in other fields,4–6,8,16 and they are important because QRPs can waste resources, provide an unfair advantage to some researchers over others, and ultimately impede scientific progress. Therefore, this study raises significant concerns about the credibility of HPE research, suggesting that our community may need to take a hard look at its ethical norms and research culture.

In our survey, we asked respondents about a range of problematic behaviors, from clear misconduct (data fabrication) and falsification (distortion of results) to plagiarism (copying ideas or words without attribution) and authorship manipulation (e.g., honorary authorship). As Fanelli8 noted in his review of research misconduct, certain activities (e.g., honorary authorship or excluding study limitations) are qualitatively different from fabrication and falsification because they do not directly distort the quality of the science, per se. However, the damage done to the scientific enterprise by these “less severe” and more ambiguous QRPs may be proportionally greater than that of deliberate misconduct, for the simple reason that such practices occur more frequently. For example, 20.1% of HPE respondents reported one or more instances of “salami slicing.” While some may think of this as a minor offense, the practice fills the literature with more articles than necessary.1,10 So, not only does this activity unfairly reward authors and waste resources (e.g., editorial time and journal space), it can also inflate the significance of a given finding, which in turn can distort the outcomes of meta-analyses and other types of systematic reviews.10,23

It is worth stating that the interpretation of our results is limited by the nature of the survey methodology employed and, in particular, by the threat of nonresponse bias (especially considering the sensitive nature of the topic under study).5,6 Therefore, it is reasonable to ask: (1) how reliable are these frequency estimates, and (2) what can they really say about the actual frequency of QRPs among HPE researchers? Although we did not assess score reliability in the present study, we argue here, as others have previously,5,8,16 that self-reports of QRPs likely underestimate the real frequency of questionable behaviors. Researchers who have acted unethically are undoubtedly hesitant to reveal such activities in a survey, despite all assurances of anonymity. What is more, the opposite—researchers admitting to unethical or otherwise questionable practices that they did not do—seems unlikely.8 Therefore, we speculate that QRPs may be even more widespread in our community than our estimates imply. Nevertheless, rather than establishing an absolute prevalence of QRPs in HPE, we believe these data are better suited for helping the community understand the nature of the most common QRPs and begin finding feasible solutions to improve our research enterprise.

Questionable research practices related to authorship were some of the most frequently reported behaviors in this study, particularly the practice of giving or accepting unwarranted authorship (so-called “honorary authorship”). Honorary authorship is unethical in academic publishing because individuals who have not sufficiently contributed to the work unfairly receive credit as an author and misrepresent their contributions in the scientific literature.24 Our findings corroborate the results of a recent survey of established HPE researchers,13 and it seems we are not alone in these practices.24–26 For example, a 2008 study of six high-impact medical journals found that 17.6% of corresponding authors admitted to including honorary authors.24 In a separate survey of radiology researchers, 58.9% of respondents reported that they had written a paper with a co-author whose contributions did not merit authorship.27 In some fields, including HPE, this practice has led journals to require authors to sign an author contribution agreement to verify their explicit authorship roles.28 The effectiveness of such requirements is unknown and could be a fruitful area for additional research.

Of note in our findings, the frequency of authors giving honorary authorship and those accepting honorary authorship were not equivalent: 60.6% admitted to adding undeserving authors whereas only 22.7% admitted to accepting honorary authorship. This mismatch suggests that HPE researchers may not fully understand authorship criteria. It may also be a concrete example of the so-called “Muhammad Ali effect”—the idea that individuals often see themselves as more likely to perform good acts and less likely to perform bad acts than others.29 Regardless of the mechanism, this finding indicates the need for increased author communication and better shared understanding of author roles and responsibilities, such as those set forth by the ICMJE (http://www.icmje.org/recommendations/browse/roles-and-responsibilities/defining-the-role-of-authors-and-contributors.html).

A complete discussion of all the QRPs assessed in this study is outside the scope of this paper. While readers may debate the degree to which some of these behaviors are unethical or otherwise problematic, we should note, as described above, that seemingly minor infractions can have far-reaching negative consequences. For example, by employing QRPs like p-hacking (i.e., manipulating data or analyses until nonsignificant results become significant)30,31 or taking advantage of other types of “researcher degrees of freedom,”32 scientists can discover “illusory results”33 that actually represent artifacts of their study design and analytic approach, as opposed to legitimate findings that can be replicated.34,35 Some have argued that such behaviors are the result of individual researchers responding to a set of incentives, the most important of which are rewards for the quantity (not the quality) of their publications.16,36 A compelling way to view these behaviors is through the evolutionary lens of natural selection. Smaldino and McElreath37 made this point, contending that “The persistence of poor methods results partly from incentives that favor them, leading to the natural selection of bad science. This dynamic requires no conscious strategizing—no deliberate cheating nor loafing—by scientists, only that publication is a principal factor for career advancement.” If one accepts this thesis, then the most effective way to improve research practices, and the quality of the corresponding science, is to change incentive structures at the institutional level.37

Limitations and future directions

The current study has limitations. First, nonresponse bias is a legitimate concern, especially considering the modest response rate (28.2%) in the curated sample. That said, recent research suggests that response rate may be a flawed indicator of response quality and representativeness.38,39 Moreover, the wave analysis results indicate that nonresponse bias was limited in our sample. Nonetheless, investigators should build on these initial results by examining QRPs among a larger, more global sample of HPE researchers.

A second limitation relates to the inherent challenge of assessing complex, context-specific research behaviors with a survey that requires respondents to self-assess their own practices.40,41 Because some of the QRPs on our survey are judgment calls, their evaluation likely requires more detail and nuance than an individual survey item can provide.40 So, while the practice of fabricating data is fairly straightforward (and never justified), the same cannot be said for something like inappropriately storing sensitive research data. The latter practice is open to interpretation: what is considered “inappropriate storage” to one researcher might seem perfectly fine to another. We attempted to reduce this type of ambiguity in our survey items by employing a rigorous expert review process, which resulted in several revised (and we believe, improved) survey items. But, ultimately, “survey self-reports can never fully rule out ambiguities in meaning, limitations in autobiographical memory, or motivated biases.”40 Thus, future work might apply qualitative research methods to address some of these limitations and further unpack the nature of QRPs in HPE.

Finally, we administered our survey in English and did not ask respondents to focus on a particular time period. These implementation and design choices could have negatively affected data quality. For example, several respondents noted in their written comments that ethical standards related to human-subjects research had evolved over time. Future research could address this problem by limiting the time period that respondents are asked to consider.5

Recommendations for practice

As HPE continues to mature as a field, it is essential that we explicitly confront our obligation to conduct our research in an ethical and responsible manner. Previously, we suggested a number of recommendations to improve practice,7 and the findings reported here indicate that now is the time to implement these and other policy changes. Our recommended approaches included: (1) empowering research mentors as role models, (2) openly airing research dilemmas and infractions, (3) protecting whistleblowers, (4) establishing mechanisms for facilitating responsible research (e.g., creating HPE-specific institutional review boards42), and (5) rewarding responsible researchers (e.g., providing grant funding and publication opportunities for replication studies34). These are institution-level approaches; they embrace the idea that QRPs are not simply the result of individual researchers acting badly. Instead, there are important contextual factors that influence researcher behavior (e.g., social norms, power disparities, institutional policies, and academic incentives).7 Examples of initiatives being tried in other fields and institutions include promotion and tenure guidelines that privilege publication quality over quantity,43 study pre-registration plans,33 and other open-science practices (e.g., open data and materials sharing).44 All of these approaches require study to determine their efficacy in HPE.

In summary

Cultivating the responsible conduct of research is essential if we are to maintain scientific integrity and engender public confidence in our research.1,2 This study raises significant concerns about the credibility of HPE research and presents a somewhat pessimistic picture of our community. We should be clear though—most HPE scientists surveyed did not report the vast majority of QRPs. They are presumably doing good science. Nevertheless, we believe reforms are needed. In addition, we recommend future research to monitor QRPs in HPE and evaluate the effectiveness of policies designed to improve the integrity of our research enterprise.

Funding/Support: None reported.

Other disclosures: None reported.

Ethical approval: The Ethical Review Board Committee of the Netherlands Association for Medical Education approved this study (Dossier #937).

Disclaimer

The views expressed in this article are those of the authors and do not necessarily reflect the official policy or position of the Uniformed Services University of the Health Sciences, the U.S. Navy, the Department of Defense, or the U.S. Government.

Written work prepared by employees of the Federal Government as part of their official duties is, under the U.S. Copyright Act, a “work of the United States Government” for which copyright protection under Title 17 of the United States Code is not available. As such, copyright does not extend to the contributions of employees of the Federal Government.

Figure 1 Stacked bar graph showing the frequency of self-reported questionable research practices among an international sample of 590 health professions education researchers.

Acknowledgments

The authors wish to sincerely thank all of the HPE researchers who took the time (and had the courage) to complete our survey.


References

  1. Steneck NH. Fostering integrity in research: definitions, current knowledge, and future directions. Sci Eng Ethics. 2006;12(1):53–74.
  2. Martinson BC, Anderson MS, de Vries R. Scientists behaving badly. Nature. 2005;435(7043):737–738. doi:10.1038/435737a.
  3. Steneck NH. Introduction to the Responsible Conduct of Research. https://ori.hhs.gov/sites/default/files/rcrintro.pdf. Accessed January 15, 2018.
  4. John LK, Loewenstein G, Prelec D. Measuring the Prevalence of Questionable Research Practices With Incentives for Truth Telling. Psychol Sci. 2012;23(5):524–532. doi:10.1177/0956797611430953.
  5. Tijdink JK, Bouter LM, Veldkamp CLS, van de Ven PM, Wicherts JM, Smulders YM. Personality Traits Are Associated with Research Misbehavior in Dutch Scientists: A Cross-Sectional Study. PLoS One. 2016;11(9):e0163251. doi:10.1371/journal.pone.0163251.
  6. Bouter LM, Tijdink J, Axelsen N, Martinson BC, ter Riet G. Ranking major and minor research misbehaviors: results from a survey among participants of four World Conferences on Research Integrity. Res Integr Peer Rev. 2016;1(1):17. doi:10.1186/s41073-016-0024-5.
  7. Maggio LA, Artino AR, Picho K, Driessen EW. Are You Sure You Want to Do That? Fostering the Responsible Conduct of Medical Education Research. Acad Med. July 2017:1. doi:10.1097/ACM.0000000000001805.
  8. Fanelli D. How Many Scientists Fabricate and Falsify Research? A Systematic Review and Meta-Analysis of Survey Data. PLoS One. 2009;4(5):e5738. doi:10.1371/journal.pone.0005738.
  9. Brice J, Bligh J, Bordage G, et al. Publishing ethics in medical education journals. Acad Med. 2009;84(10 Suppl):S132–S134. doi:10.1097/ACM.0b013e3181b36f69.
  10. Eva KW. How would you like your salami? A guide to slicing. Med Educ. 2017;51(5):456–457. doi:10.1111/medu.13285.
  11. ten Cate O. Why the ethics of medical education research differs from that of medical research. Med Educ. 2009;43(7):608–610. doi:10.1111/j.1365-2923.2009.03385.x.
  12. Hally E, Walsh K. Research ethics and medical education. Med Teach. 2016;38(1):105–106. doi:10.3109/0142159X.2014.956068.
  13. Uijtdehaage S, Mavis B, Durning SJ. Whose paper is it anyway? Authorship criteria according to established scholars in health professions education. Acad Med.
  14. International Committee of Medical Journal Editors. Recommendations for the Conduct, Reporting, Editing, and Publication of Scholarly Work in Medical Journals. 2017. http://www.icmje.org/icmje-recommendations.pdf. Accessed January 15, 2018.
  15. White C, ed. The COPE Report 2000: Annual Report of the Committee on Publication Ethics. London, UK: BMJ Books; 2000.
  16. Tijdink JK, Verbeke R, Smulders YM. Publication Pressure and Scientific Misconduct in Medical Scientists. J Empir Res Hum Res Ethics. 2014;9(5):64–71. doi:10.1177/1556264614552421.
  17. Anderson MS, Horn AS, Risbey KR, Ronning EA, De Vries R, Martinson BC. What Do Mentoring and Training in the Responsible Conduct of Research Have To Do with Scientists’ Misbehavior? Findings from a National Survey of NIH-Funded Scientists. Acad Med. 2007;82(9):853–860. doi:10.1097/ACM.0b013e31812f764c.
  18. Schaeffer NC, Dykema J. Questions for Surveys: Current Trends and Future Directions. Public Opin Q. 2011;75(5):909–961. doi:10.1093/poq/nfr048.
  19. Tourangeau R, Yan T. Sensitive questions in surveys. Psychol Bull. 2007;133(5):859–883. doi:10.1037/0033-2909.133.5.859.
  20. Artino AR Jr, La Rochelle JS, Dezee KJ, Gehlbach H. Developing questionnaires for educational research: AMEE Guide No. 87. Med Teach. 2014;36:463–474. doi:10.3109/0142159x.2014.889814.
  21. The American Association for Public Opinion Research. Standard Definitions: Final Dispositions of Case Codes and Outcome Rates for Surveys. 9th ed. AAPOR; 2016.
  22. Phillips AW, Reddy S, Durning SJ. Improving response rates and evaluating nonresponse bias in surveys: AMEE Guide No. 102. Med Teach. 2016;38(3):217–228. doi:10.3109/0142159X.2015.1105945.
  23. Supak Smolcić V. Salami publication: definitions and examples. Biochem Med. 2013;23(3):237–241. doi:10.11613/BM.2013.030.
  24. Wislar JS, Flanagin A, Fontanarosa PB, DeAngelis CD. Honorary and ghost authorship in high impact biomedical journals: a cross sectional survey. BMJ. 2011;343:d6128.
  25. Kornhaber RA, McLean LM, Baber RJ. Ongoing ethical issues concerning authorship in biomedical journals: an integrative review. Int J Nanomedicine. 2015;10:4837–4846. doi:10.2147/IJN.S87585.
  26. Vera-Badillo FE, Napoleone M, Krzyzanowska MK, et al. Honorary and ghost authorship in reports of randomised clinical trials in oncology. Eur J Cancer. 2016;66:1–8. doi:10.1016/j.ejca.2016.06.023.
  27. Eisenberg RL, Ngo L, Boiselle PM, Bankier AA. Honorary Authorship in Radiologic Research Articles: Assessment of Frequency and Associated Factors. Radiology. 2011;259(2):479–486. doi:10.1148/radiol.11101500.
  28. Lundberg GD, Flanagin A. New Requirements for Authors: Signed Statements of Authorship Responsibility and Financial Disclosure. JAMA. 1989;262(14):2003. doi:10.1001/jama.1989.03430140121037.
  29. Allison ST, Messick DM, Goethals GR. On Being Better but not Smarter than Others: The Muhammad Ali Effect. Soc Cogn. 1989;7(3):275–295. doi:10.1521/soco.1989.7.3.275.
  30. Head ML, Holman L, Lanfear R, Kahn AT, Jennions MD. The Extent and Consequences of P-Hacking in Science. PLoS Biol. 2015;13(3):e1002106. doi:10.1371/journal.pbio.1002106.
  31. Nuzzo R. Scientific method: Statistical errors. Nature. 2014;506(7487):150–152. doi:10.1038/506150a.
  32. Simmons JP, Nelson LD, Simonsohn U. False-positive psychology: undisclosed flexibility in data collection and analysis allows presenting anything as significant. Psychol Sci. 2011;22(11):1359–1366. doi:10.1177/0956797611417632.
  33. Gehlbach H, Robinson CD. Mitigating Illusory Results through Preregistration in Education. J Res Educ Eff. October 2017:1–20. doi:10.1080/19345747.2017.1387950.
  34. Picho K, Maggio LA, Artino AR. Science: the slow march of accumulating evidence. Perspect Med Educ. 2016;5(6):350–353. doi:10.1007/s40037-016-0305-1.
  35. Ioannidis JPA. Why Most Published Research Findings Are False. PLoS Med. 2005;2(8):e124. doi:10.1371/journal.pmed.0020124.
  36. Horton R. Offline: What is medicine’s 5 sigma? Lancet. 2015;385(9976):1380. doi:10.1016/S0140-6736(15)60696-1.
  37. Smaldino PE, McElreath R. The natural selection of bad science. R Soc Open Sci. 2016;3(9):160384. doi:10.1098/rsos.160384.
  38. Johnson TP, Wislar JS. Response rates and nonresponse errors in surveys. JAMA. 2012;307:1805–1806. doi:10.1001/jama.2012.3532.
  39. Halbesleben JRB, Whitman MV. Evaluating survey quality in health services research: a decision framework for assessing nonresponse bias. Health Serv Res. 2013;48(3):913–930. doi:10.1111/1475-6773.12002.
  40. Fiedler K, Schwarz N. Questionable Research Practices Revisited. Soc Psychol Personal Sci. 2016;7(1):45–52. doi:10.1177/1948550615612150.
  41. Eva KW, Regehr G. Self-assessment in the health professions: a reformulation and research agenda. Acad Med. 2005;80(10 Suppl):S46–S54. http://www.ncbi.nlm.nih.gov/pubmed/16199457.
  42. DeMeo SD, Nagler A, Heflin MT. Development of a Health Professions Education Research-Specific Institutional Review Board Template. Acad Med. 2016;91(2):229–232. doi:10.1097/ACM.0000000000000987.
  43. Nazim Ali S, Young HC, Ali NM. Determining the quality of publications and research for tenure or promotion decisions. Libr Rev. 1996;45(1):39–53. doi:10.1108/00242539610107749.
  44. Kidwell MC, Lazarević LB, Baranski E, et al. Badges to Acknowledge Open Practices: A Simple, Low-Cost, Effective Method for Increasing Transparency. PLoS Biol. 2016;14(5):e1002456. doi:10.1371/journal.pbio.1002456.