Abstract
Many postdoctoral fellows in STEM fields enter the academic job market with little knowledge of the process and expectations, and without any means to assess their qualifications relative to the general applicant pool. Demystifying this process is critical, as little information is publicly available. In this work, we provide insight into the academic job search by gathering data to establish background metrics for typical faculty job applicants, and further correlate these metrics with job search outcomes. We analyzed 317 responses to an anonymous survey of faculty job applicants from the May 2018 - May 2019 market cycle. Responses were about evenly split by gender, largely North American-centric and life science focused, and highly successful, with 58% of applicants receiving at least one offer. Above a certain threshold of qualification, traditional metrics (funding, publications, etc.) of a positive research track record were unable to completely differentiate applicants who did and did not receive a job offer. Our findings suggest that there is no single clear path to a faculty job offer and that criteria not captured by our survey may also influence landing a faculty position. Furthermore, our survey did capture applicants’ perception of the faculty job application process as unnecessarily stressful, time-consuming, and largely lacking in feedback, irrespective of a successful outcome. We hope that this study will provide an avenue for better data-driven decision making by applicants and search committees, better evidence-based mentorship practices by principal investigators, and improved hiring practices by institutions.
Introduction
In the past three decades, the number of doctoral degrees (PhDs) awarded across the globe has increased dramatically in most STEM fields (Box 1) (1). The number of available faculty positions has been essentially stagnant, specifically in the United States since 2003, when the NIH received its last major budget increase (2). Not only are there an insufficient number of faculty positions for the number of PhDs produced (3), but trainees typically emerge from the academic training environment feeling under-prepared and under-mentored to undertake any other type of job search (4). This means that there are a large number of applicants per position, many of whom are uncertain about their chances of obtaining a faculty job offer (5). The applicant pool has changed in many ways. PhD graduates are generally no longer of a similar demographic as their peers and as those on hiring committees (2). Academic science has become more diverse, with strong calls to push diversity initiatives further (6). Scientific publishing is also faster-paced, with the curriculum vitae (CV) of applicants now looking very different than even 10 years ago. In one study, evolutionary biologists successfully recruited as “junior researchers” at the French National Center for Scientific Research (CNRS) needed to publish nearly twice as many articles to be hired in 2013 (22 ± 3.4) as in 2005 (12.5 ± 2.4) (7). The length of academic training (the time between the beginning of the first postdoctoral position and recruitment as a faculty member) has also increased, from 3.25 (± 0.6) years in 2005 to 8.0 (± 1.7) years in 2013 (7, 8). This increase in training time has been reported repeatedly in most STEM fields, and is perceived as detrimental to both the greater scientific community and individuals in these temporary employment positions (9–13).
Despite these significant changes, the nature of the academic job search process and the structure of the market itself have largely remained the same. This has resulted in the perception of the academic job search as an opaque system, with no clear standards or guidelines for navigating the lengthy application process. Beyond the requirement of a doctoral degree and possibly postdoctoral training, faculty job advertisements rarely contain information on the specific preferred qualifications. Furthermore, the criteria used to judge job applicants are typically determined by a small departmental or institutional committee and are not transparent or made public. The materials applicants must submit to this committee vary widely among hiring institutions and are possibly increasing in number (1). This places a heavy burden not only on applicants’ time, performance, and well-being, as each job application package is tailored and submitted individually, but also on the members of the search committees, who spend long hours poring over numerous documents in hopes of identifying the best pool of candidates (1).
Previous studies have come to at least one common conclusion on how to address this problem: the need to increase transparency (14–16). Currently, there is little systematic evidence for what makes a good faculty candidate. Though many aspects of early career training are hotly debated as markers of success, usually by established principal investigators (PIs), there are no resources for potential faculty candidates to gauge their preparation when considering entering the academic job market. This prevents trainees and their mentors from reasonably assessing whether to enter the academic job market and how best to prepare.
The annual pool of faculty job applicants is large and provides a unique opportunity for data collection on the aforementioned issues. Here we aimed to demystify the academic job market by surveying recent applicants about their faculty application experience. We created an anonymous survey asking for both common components of research and scholarly activity found on an academic CV, as well as information on applicants’ success through the 2018-2019 job cycle. The survey was distributed through social media platforms such as Slack and Twitter, and through postdoctoral association mailing lists of various universities in North America and Europe. Of the 322 responses we received, 317 contained analyzable data, and we found our survey respondents to be particularly advanced in traditional metrics (publications, funding, etc.) of academic success. However, we did not observe strong correlations between many of these metrics and the offer of a faculty job. In addition, a survey of search committee members uncovered several discrepancies between how search committees and applicants perceive the process. Here we present qualitative and quantitative data on the process as a whole, including information on the number of off-site and on-site interviews, offers, rejections, and the lack of feedback.
The application process
In order to initiate an academic job search, the typical job applicant searches for relevant job postings on a wide variety of platforms. Table S1 summarizes resources that were often mentioned by our applicant survey respondents and cited by others as helpful for locating academic job ads across different fields. The initial electronic application generally consists of the following: a cover letter addressing the search committee, a teaching philosophy statement, a CV, and a research plan (Box 2). Diversity statements are becoming increasingly common as well. The length and content of these materials can vary drastically based on year, region, institution, or particular search committee. In the current system, the overwhelming expectation is that effective application materials require individual tailoring for each specific institution and/or department to which the applicant is applying. Tailoring should include department-specific cover letters, but may also involve a range of changes to the research, teaching, and diversity statements. Applicants are short-listed and contacted for interviews somewhere between 6 weeks and 6 months after application materials are due. Although many searches include an off-site or remote interview conducted by Skype, Zoom, or telephone, we found that this was not a standard part of every search, as the typical applicant received more on-site interviews than off-site interviews. The on-site interview typically lasts 1 to 2 days and consists of a research seminar, likely a “chalk-talk” (an oral presentation of the research proposal), and possibly a teaching demonstration. The on-site interview also usually includes many one-on-one meetings with faculty in the hiring department, including a meeting with the department chair, as well as meetings with current trainees and possibly the administrative staff.
After the interviews are conducted, candidates may be contacted and offered a position (Box 2). The time to offer is also variable, but is usually shorter than the time between application and first contact. Importantly, a single search can result in multiple offers (e.g. the department may be able to fund multiple competitive candidates, or the first-choice candidate may decline and the second candidate is given an offer). Searches can also “fail” if the committee does not find a candidate who suits their program/department, or if all applicants deemed qualified by the search committee decline their offers. Most applicants receive no notification of receipt of their application, of its current status, of a final rejection, or that the search has failed. As our data (the job applicant survey) focused on candidates and not on search committees, it is unclear how many individual searches are represented in our dataset.
An overview of the academic job search process
Results
Demographics of our applicant survey respondents
A total of 322 early career researchers volunteered to answer survey questions regarding their experience on the academic job market in the 2018-2019 application cycle (see supplemental materials) and data from 317 responses were used for analysis. Survey respondents reported a large range in the number of submitted applications from a minimum of one to a maximum of 250 (median: 15). The respondent pool was notably enriched in applicants who received at least one off-site interview (70%), at least one on-site interview (78%) and at least one offer (58%) in the 2018-2019 cycle; this may represent a significant bias in our sample.
Respondents represented a wide variety of fields, including engineering, life sciences, and computer science (Figure 1A, Table S2). The largest group of responses (85%) came from those in the life sciences and related fields (Biomedical/Biology/Chemistry/Bioengineering), with relatively equal numbers of applications from men and women across the life sciences (Figure 1A, Table S2). Our survey captured data from an international applicant pool, with representation from 13 different countries across Europe, Asia, and North America (Figure 1B, Table S3). However, the majority (72%) of our respondents reported currently working in the United States, which may reflect larger circulation of our survey on social media platforms and postdoctoral associations there. Most of these candidates applied to jobs within the United States (82%), Canada (33%), and the United Kingdom (24%) (Table S4). Thus, the data presented in this manuscript should be qualified when applied to broader populations and demographics.
The large majority (96%) of our applicant survey respondents entered the job market as postdocs (Figure 1C, Table S5). The applicants spent 1 to 13 years (median: 4 years) in a postdoctoral position. These data are consistent with a recent report suggesting that postdocs in the United States in the field of biomedical sciences spend an average of 3.6 years in their positions (17).
The reported range of years spent as a postdoctoral researcher was quite large, from the 4% of respondents spending 1 year or less as a postdoc to the 9% reporting 8 or more years (maximum: 13 years) in their postdoctoral positions (Figure 1D, Table S6). Notably, postdocs in the life sciences spent significantly more time in postdoc positions (median: 5 years) than those in other fields (median: 2.75 years) before applying for a faculty position (Figure 1D, p = 6.5 x 10⁻⁶), which is consistent with previous findings on increased training times in the life/biomedical sciences before junior faculty recruitment (7, 9–12). The large majority of our applicant survey respondents went on the job market while in their first postdoctoral position (68% of respondents to this question), with the remaining 32% reporting time spent in previous postdoctoral positions (Figure 1E, Table S7).
With regard to academic qualifications, our survey indicated that those applying for faculty positions had a large range in publication record, including number of papers co-authored, h-index, and total citation count. Respondents reported a median of 13 total publications (including co-authorships and lead authorships), with a median of 6 first author publications when entering the job market (Figure 1F, Table S8).
Our respondents were relatively evenly distributed across self-identified gender categories, with 51% of applicants identifying as male and 48% as female, while 1% preferred not to disclose this information (no applicants identified as non-binary) (Figure 1A, Table S2 first row). While there were only slightly fewer female applicants, this may reflect an over-representation of women in our survey population relative to the general population in STEM hiring, which has historically reported a minority of women applicants (18–20).
Interestingly, men had significantly more first author publications than women (medians of 7 and 5, respectively; p = 0.00014; Wilcoxon rank-sum test), more total publications (medians of 16 and 11; p = 0.003), and more overall citations (medians of 343 and 228; p = 0.015) (Figure 2B, Table S8). Men in our survey also reported a statistically significantly higher h-index than women (medians of 9.0 and 7.0, respectively; p = 0.0054; Wilcoxon rank-sum test), as well as more authorships in the high impact factor journals Cell, Nature, and Science (“CNS” journals; p = 0.04; Wilcoxon rank-sum test) (Figure 2C, Table S14). However, despite popular discussions on the need for a CNS publication (21), the majority (74%) of the applicants who took our survey did not have any authorship on a CNS paper, and a greater majority (84%) did not have a first author publication in a CNS journal (Figure 2C, Table S14). Of the 51 respondents with CNS papers, 49 (96%) were in a life science related field, indicating that the valuation of these journals is highly field-specific (Figure 2C, p = 0.01; Wilcoxon rank-sum test). Further, a majority of applicants (78%) reported having obtained fellowship funding at some point in their early career (Figure 2D). Interestingly, these distributions (no fellowship, graduate fellowship, postdoctoral fellowship, or both) were significantly different between women and men (𝛘² = 0.01, p = 0.002; Chi-squared test). Overall, 88% of women received a fellowship of some kind, compared to 72% of men; women had better success at receiving both PhD fellowships (42% of women vs 36% of men) and postdoctoral fellowships (72% of women vs 58% of men) (Figure 2D, Table S9).
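For readers unfamiliar with the test, the Chi-squared comparison of fellowship distributions between groups can be sketched in a few lines of pure Python. The counts below are purely illustrative placeholders, not the survey's raw data:

```python
def chi2_stat(table):
    """Pearson's chi-squared statistic for a contingency table
    (rows x columns of observed counts)."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            expected = row_totals[i] * col_totals[j] / grand
            stat += (obs - expected) ** 2 / expected
    return stat

# Hypothetical counts (NOT the survey's data): rows = women, men;
# columns = no fellowship, PhD only, postdoc only, both.
observed = [[18, 15, 45, 64],
            [44, 22, 57, 56]]
stat = chi2_stat(observed)
```

The p-value is then obtained from the chi-squared distribution with (rows − 1) × (columns − 1) degrees of freedom, e.g. via `scipy.stats.chi2_contingency` in practice.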
Preprints, manuscripts posted to an open-access server prior to peer-reviewed publication, are becoming increasingly popular among early career researchers, particularly in the life sciences (33–35), and have been shown to boost article citations and mentions (29–31, 37). We therefore asked whether this increase in preprint submissions had an impact on the academic job market. While not all respondents answered our questions about preprints, we did receive 270 responses on this issue. Our survey data showed that 55% of these respondents (148 candidates) had posted at least one preprint during their career and 20% had posted between 2 and 6 preprints, with an average of 1.57 preprints per person thus far. At the time of faculty job application, 40% of these respondents had an active preprint that was not yet published in a journal, with an average of 0.69 active preprints per person (Figure 2E, Tables S8, S15). A number of candidates explicitly commented that preprints were enormously helpful, serving to demonstrate productivity before a paper was formally published (Tables S26, S27).
In aggregate, our survey respondents (n = 317) submitted a total of 7,644 applications (median: 15) (Figure 3A, Table S11). A separate Twitter poll indicated that applicants in general, not specifically in this cycle, typically spend more than 3 hours tailoring each application (22) (Table S12). This number does not take into account how long the initial creation of “base” application materials takes, which is often a much longer process. Our survey respondents were then invited for 805 off-site (phone, Zoom, or Skype) interviews (median: 1) and 832 on-site (campus) interviews (median: 2), and received 359 offers (median: 1) (Figure 3A, Table S11). Our data revealed that 42% of our participants received no offers, 33% received one offer, 14% received two offers, 6% received three offers, and 6% received more than three offers. The whole-population medians for the numbers of applications, interviews, and offers are presented in Figure 3B. Interestingly, these medians changed slightly by gender, with men submitting slightly more applications but receiving slightly fewer off-site interviews (Figure 3C). These small differences by gender were not statistically significant (applications p = 0.0725, off-sites p = 0.1479, on-sites p = 0.5813; Wilcoxon rank-sum test). The median number of offers also did not vary by gender (p = 0.1775; Wilcoxon rank-sum test), and a recent Twitter poll with over 700 respondents confirmed that most faculty received only 1 to 3 offers at the beginning of their career (23) (Table S12). Candidates who received offers also typically submitted more applications than those who received no offers, indicating that some candidates without offers may simply not have submitted enough applications to have a reasonable chance of receiving one.
Despite the fact that successful candidates submitted more applications, the number of applications per candidate only weakly correlated with the number of offers (R² = 0.0477), while correlating most strongly with the number of off-site interviews (R² = 0.2751) (Figure 3B). Not surprisingly, the number of on-site interviews strongly correlated with the number of offers received (R² = 0.6239) (Figure 3C), as an on-site interview is a prerequisite for an offer. Interestingly, when responses were split into two groups by application number, one group at or below the median (≤ 15 applications, n = 162) and the other above the median (> 15 applications, n = 155), there was a significant difference in success rates. Respondents who submitted more than 15 applications had a significantly higher average number of off-site interviews (p < 4.1 x 10⁻²⁴; Wilcoxon rank-sum test), on-site interviews (p = 1.2 x 10⁻¹³; Wilcoxon rank-sum test), and offers (p = 5 x 10⁻⁵; Wilcoxon rank-sum test; Figure 3D).
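The Wilcoxon rank-sum (Mann-Whitney U) test used for these group comparisons ranks the pooled observations rather than comparing means, which makes it robust to the skewed distributions typical of application counts. A minimal pure-Python sketch of the U statistic (p-values, which require the null distribution or a normal approximation, are omitted):

```python
def midranks(values):
    """1-based ranks, averaging tied values (midranks)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(values):
        j = i
        # extend j to cover the run of tied values
        while j + 1 < len(values) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average rank of the tied run
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def mann_whitney_u(x, y):
    """U statistic for group x versus group y."""
    ranks = midranks(list(x) + list(y))
    rank_sum_x = sum(ranks[:len(x)])
    return rank_sum_x - len(x) * (len(x) + 1) / 2
```

Under no group difference U centers on len(x) * len(y) / 2; values near 0 or len(x) * len(y) indicate that one group's values systematically exceed the other's. In practice a library routine such as `scipy.stats.mannwhitneyu` supplies the p-value.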
We also asked whether survey respondents applied for non-faculty positions during this cycle (Table S13). Seventy-one percent of applicants did not apply for other jobs, and these applicants had a small but significant increase in offer percentage (Figure 3E, p = 0.002). Taken together, these data indicate that increasing the number of applications submitted can lead to more interviews, as suggested by others (24), with the typical candidate requiring at least 15 applications to achieve one offer. However, those who divided their attention between academic positions and other job searches generated fewer offers per application. Lastly, the weaker correlation between application number and offers (compared to application number and interviews) suggests that while higher application numbers can generate more interview opportunities, other criteria (e.g. the strength of the interview) are important in turning an interview into an offer.
No single metric guarantees a faculty job offer
A critical component in evaluating faculty job applicants is their research record and the perceived impact of their future research program (21). Assessing the quality of application materials is highly context-specific (given the field, search committee, and institutional needs) and therefore beyond the scope of this work. However, we did collect multiple pieces of data describing applicants’ research track records. Though the metrics of a research track record sufficient to obtain a faculty position are frequently debated, almost no data or evidence is available to prospective applicants to assess their chances. The need for transparency around this issue was a major driving force behind this study.
Cell, Nature, or Science (CNS) publications
The number of publications authored and the impact factor of the journals in which early career researchers publish can be predictive of who will become independent faculty members (20, 24, 25). A common perception is that a CNS (Cell, Nature, or Science) paper is critical to landing a life science faculty position (27, 28). Our data demonstrate that a CNS paper is not essential to receiving a faculty job offer: the vast majority (74%) of our survey respondents did not have authorship of any kind on a CNS paper (Table S14, Figure 4A), while a majority (58%) still received at least one offer (Table S11). Though a CNS paper was not essential, candidates with authorship of any kind on a CNS paper did have better success in obtaining at least one offer (𝛘² = 4.4871, p = 0.03). Of our respondents, 16% had first authorship on a CNS paper and had a significantly higher percentage of offers (Wilcoxon rank-sum test; p = 0.00015; median offer: 11% (1+ CNS) vs 2% (no CNS)) and on-site interviews (p = 0.00027; median on-site: 21% (1+ CNS) vs 10% (no CNS)) (Figure 4A). As the numbers of on-site interviews and offers are highly correlated (Figure 3C), it is unclear whether this increased success simply represents a higher chance of landing more on-site interviews. It is important to note that this effect is correlative, and these candidates undoubtedly had other attributes that made them appealing. While we measured some of these attributes (e.g. funding track record) and discuss them below, others are less easily quantified (e.g. lab pedigree, letters of recommendation).
Quantity of publications
We also examined several publication metrics and found little correlation with the number of offers. Specifically, the total number of publications (R² = 0.08), the number of first author publications (R² = 0.02), the number of corresponding author publications (R² = 0.0009), and h-index (R² = 0.004) did not significantly correlate with offer percentage (Figure S1). When we separated candidates who were above and below the median for each of these metrics and compared the distributions of offer percentages, only the total number of citations was significantly associated with a higher offer percentage (p = 0.029) (Figure 4B). Although the median offer percentage was often higher for applicants above the median of the other metrics, none of these differences were statistically significant (first author publications p = 1.0, total publications p = 0.190, h-index p = 0.572; Wilcoxon rank-sum test with Holm correction) (Figure 4B, Table S21).
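The R² values reported here are coefficients of determination from simple linear fits of one metric against another. A minimal sketch of how such a value is computed from paired data (the inputs are arbitrary example vectors, not survey data):

```python
def r_squared(x, y):
    """Coefficient of determination: the squared Pearson
    correlation between paired observations x and y."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    # sums of squared deviations and cross-deviations
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy ** 2 / (sxx * syy)
```

An R² near 0 (as for h-index vs offers above) means the metric explains almost none of the variance in outcomes, while an R² near 1 would indicate a nearly linear relationship.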
Independent Funding & Fellowships
Receiving funding as a trainee is often seen as part of a favorable research track record (32), and a recent study indicates faculty are more likely to receive a large research program grant (e.g. an NIH R01) if they have a history of funding as a trainee (33). We differentiated the types of funding a trainee can receive into independent funding (in which the trainee is listed as the PI and funds can often transition with the trainee to a hiring institution, e.g. a K99/R00 award), postdoctoral fellowships, and PhD fellowships. Our survey respondents were highly successful in obtaining fellowship funding during their training (80% received a fellowship; Figure 2D, Table S9). Independent funding was rarer, with 25% of respondents receiving awards on which they were PI/co-PI (Table S10). We found that respondents with independent funding received a higher percentage of offers (p = 0.025; Wilcoxon rank-sum test, Holm correction; Figure 4B). Receiving a postdoctoral fellowship also correlated with greater success, although the effect was not significant after correcting for multiple comparisons (p = 0.169; Wilcoxon rank-sum test, Holm correction; Figure 4B). PhD fellowships did not seem to influence an applicant’s offer percentage (p = 1.0; Wilcoxon rank-sum test, Holm correction; Figure 4B).
Patents
Patents are also considered a positive metric of a research track record, although their importance and frequency vary between fields. Only 18% of survey respondents reported having one or more patents on file from their work when entering the job market (Table S16). The number of patents held by an applicant did not correlate with the number of offers received (R² = 0.003) (Figure S2), and the percentage of offers did not differ between those with and without a patent (p = 1.0; Wilcoxon rank-sum test, Holm correction) (Figure 4B).
Years on the Job Market
We also asked respondents how many application cycles they had participated in when applying for PI positions. Roughly half (55%) of our respondents were applying for the first time, and these candidates fared significantly better in terms of offer percentage than those who were applying again (p = 0.035; Wilcoxon rank-sum test, Holm correction) (Figure 4B). Analyses such as the work presented here may help applicants refine and present their materials and track record in a manner that improves success and decreases repeated failed cycles.
Interplay between significant criteria
We next examined the relationships between the traditional criteria that were significantly associated with an increase in offer percentage (CNS first authorship, total citations, and independent funding). Overall, 241 applicants responded to all of our questions about these metrics. Pairwise testing of these criteria found no statistically significant relationships (p = 0.446, independent funding vs CNS; p = 0.264, total citations vs CNS; p = 0.289, funding vs total citations; Chi-squared tests). Regardless, we plotted subgroups based on offer status and each of these criteria to see if there was evidence for any general trends in our dataset (Figure 4C). Notably, respondents with a CNS first authorship who received offers had a greater number of total citations than those with a CNS first authorship who received no offers (Figure 4C). No clear relationship was seen between offers, CNS first authorship, and independent funding (Figure 4C), although applicants without CNS papers who secured offers had a higher rate of independent funding than applicants without CNS papers and without offers. Taken together, these trends suggest that combinations of criteria influence the ability to obtain an offer (e.g. securing independent funding is less important for applicants with a CNS paper, and a CNS paper plus a high citation count is more strongly associated with success than a CNS paper alone).
Most applicants fulfill the teaching requirements for any university type
A major focus of the discussion surrounding the academic job market centers on applicants’ publications and/or funding, while teaching experience and requirements generally receive much less attention. Accordingly, a candidate’s expected teaching credentials and experience vary vastly and largely depend on the type of institution at which the candidate is seeking a position. Higher education institutions in the United States are categorized by the Carnegie classification system, which defines R1s as “very high research activity” institutions, R2s as “high research activity” institutions, and R3s as more teaching-focused institutions, commonly referred to as “Primarily Undergraduate Institutions” (PUIs) or “Small Liberal Arts Colleges” (SLACs) (Box 1). In this survey we limited response options to being R1-focused, PUI-focused, or submitting applications to both types of institutions. A majority of our respondents applied for jobs at R1 institutions (Figure 5A, Table S17). This may be the reason that most discussions focus on research-centric qualifications. Despite this, almost every application to an R1 institution requires a teaching philosophy statement. During the off-site and on-campus interviews there are also discussions centered on teaching and mentoring experience, leading candidates to question how much teaching experience is required to be offered one of these positions.
Teaching Experience
Our data suggest that most applicants fulfill the teaching requirements. Nearly all respondents (99%) had some type of teaching experience (Figure 5B, Table S18). We asked respondents to describe this experience as either limited to serving as a Teaching Assistant (TA; Box 1) or as extending beyond a TA position, such as serving as an instructor of record (Table S19). This split the data roughly 50/50 (Figure 5B, Table S18). The degree of teaching experience did not change based on the target institution of the applicant (p = 0.5592; Chi-squared test, Figure 5C). The percentage of offers received also did not significantly differ between groups based on teaching experience (p = 0.1633; Wilcoxon rank-sum test, Figure 5D).
Research versus Teaching-intensive institutions
Despite the fact that the majority of applicants sampled in this study applied for R1-type positions, a sub-group of applicants specifically targeted PUIs (Table S18). To our knowledge, there is a lack of non-anecdotal evidence describing the process or expected qualifications of a PUI-focused job search (34). We received 25 “PUI only” responses to our survey and, despite this small number, we aimed to describe this important sub-group here. Many characteristics of the whole survey population were reflected in this sub-group, including gender identity, number of publications, funding, and teaching experience (Figure 6A-D). The median numbers of remote interviews, on-site interviews, and offers were also similar to those of R1-focused applicants, although PUI-focused applicants submitted fewer applications (Figure 6E). Interestingly, when asked to describe their “beyond TA” teaching experience, this sub-group was specifically enriched in “adjunct”, “visiting professor”, “instructor of record”, “community college”, or “contract-based” teaching experiences compared to the “R1 only” or “both” applicant groups (p = 5 x 10⁻⁴; Chi-squared test, Figure 6F, Table S19). Having “adjunct” experience as described in Figure 6F did not significantly increase the median number of offers received by PUI-focused applicants (p = 0.5538; Wilcoxon rank-sum test, Figure 6G). There was also no difference in the median number of offers based on adjunct experience for applicants targeting R1s or both types of institutions (p = 0.9896; Wilcoxon rank-sum test, Figure 6G).
Applicants perceive the process to be time-consuming and opaque, with minimal to no feedback
In addition to our quantitative analyses of population demographics and outcomes, we also performed a descriptive analysis of short answer responses to general questions on how applicants perceived the application process.
We asked whether applicants had any comments, including whether they thought any aspect of their career was particularly helpful or harmful to their faculty applications (Table S22). We used word clouds and follow-up Twitter polls (Tables S12, S24-25) to analyze recurrent themes in these open-ended questions. We created three word clouds (Figure 7) grouped as follows: 1) What did you find helpful to the application process? (Figure 7A, Table S26); 2) What did you find harmful or an obstacle to your application? (Figure 7B, Table S27); and 3) What were your general thoughts on the application process? (Figure 7C, Table S22). Applicants identified “funding” as one of the most helpful things to their applications, and “no-funding” as correspondingly harmful; this perception agrees with the data presented above, as funding was one of the best predictors of receiving an offer and of increasing the average number of offers received (Figures 7A, 3C, S1). Beyond identifying funding as important, applicants’ perceptions were also in line with the rest of the data in that they largely did not agree on other measurable aspects of their careers that were helpful. Non-measurable aspects perceived as particularly helpful included networking and attending/presenting at conferences. Interestingly, “interdisciplinary-research”, which is often highlighted as a strength and encouraged by institutions and funders, was perceived by candidates to be seen as a weakness by search committees. Indeed, interdisciplinary candidates may pose an evaluation challenge for committees given the differences in research metric valuation across fields, the extended training time required to master multiple fields, and the valuation of interdisciplinary teams of specialists over interdisciplinary individuals (35).
Notably, many applicants found the amount of time spent on applications and the subsequent lack of feedback from searches frustrating (Figure 7B-C, Tables S22,S25). Most applicants never received any communication regarding their various submissions. For instance, one applicant who applied to 250 positions received only 30 rejections. Overall, our pool of applicant survey respondents submitted 7,644 applications (Figure 3A) and did not hear anything back in 4,365 cases (57% of applications), receiving only 2,920 formal rejection messages. Application rejection messages (if received at all) most often do not include any sort of feedback. Additionally, a considerable amount of time is spent on writing each application; as previously noted, each application likely takes at minimum 3 hours to tailor and submit, excluding time spent on the initial curation of materials (22). Our pooled applicants therefore spent, at minimum, a combined 22,932 hours (or 2.62 years) on these applications. Individually, this amounts to an average of 72 hours per applicant. Although this number seems manageable, it does not account for the intellectual time and effort required to tailor applications. In a follow-up Twitter poll, not specific to this application cycle, a majority of respondents felt that time spent on preparing faculty job applications impeded their ability to push other aspects of their career forward (Table S25) (36). Combining these insights, it is unsurprising that almost all applicants, including those who received at least one offer (Table S28), found the process “time-consuming”, a “burden-on-research”, and “stressful” (Figure 7B-C, Table S27, Table S28).
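The time estimate above is a simple back-of-the-envelope calculation, which can be sketched as follows (a minimal illustration using the respondent and application totals reported in the text; the 3-hour-per-application figure is the lower-bound estimate cited from reference 22):

```python
# Back-of-the-envelope estimate of applicant time spent, using survey totals.
APPLICANTS = 317              # survey respondents
APPLICATIONS = 7_644          # total applications submitted by the pool
NO_RESPONSE = 4_365           # applications that received no reply at all
HOURS_PER_APPLICATION = 3     # lower-bound tailoring/submission time (ref. 22)

total_hours = APPLICATIONS * HOURS_PER_APPLICATION   # 22,932 hours
years = total_hours / (24 * 365)                     # ~2.62 calendar years
hours_per_applicant = total_hours / APPLICANTS       # ~72 hours each
no_response_rate = NO_RESPONSE / APPLICATIONS        # ~57% unanswered

print(f"{total_hours} h total, ~{years:.2f} y, "
      f"~{hours_per_applicant:.0f} h/applicant, "
      f"{no_response_rate:.0%} of applications unanswered")
```

Note that the 2.62-year figure treats the pooled total as continuous, round-the-clock time; it is a collective lower bound, not any individual's experience.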
Forty-four percent of our respondents had applied for PI jobs for more than one cycle (Table S30). Though applicants who applied across multiple cycles had significantly lower offer percentages (p = 0.0345; Wilcoxon rank-sum test) (Figure 4B), many reported benefiting from substantial feedback from their current PI gathered over previous application cycles. Though mentorship was not often reported as specifically helpful (Table S26), the lack of mentorship was a commonly cited obstacle (Figure 7B, Table S27). Lastly, multiple candidates felt that issues pertaining to family, a two-body problem (the need for a spousal/significant-other hire), or parental leave significantly harmed their success.
Search Committees Value the Future
In an effort to better understand the academic search process, we performed a purposefully narrow survey of academic search committee members. Fifteen faculty members responded, with 67% having been involved in search committees for over ten years (Table S31). In keeping with the academic and geographical contours of our applicant survey respondents, we focused on faculty members at R1 academic centers working in life sciences (93% of those polled) and engineering (7%) within the USA (Table S32).
We sought to understand what factors search committees felt were most important, what their perception of the market was, and how they felt it had changed since they first became involved in hiring. Figure 8 shows how heavily distinct factors were weighted in making a decision, rated from 1 (not weighted at all) to 5 (heavily weighted). From this, the most important reported factors appeared to be those most closely related to what a candidate would do going forward, such as their proposed research and current funding, rather than their previous track record (Table S35).
Our limited survey of the search committee faculty members provided various insights. The majority (67%) said preprints were viewed favorably, although they may not carry the same weight as published peer-reviewed work (Table S33). Sixty-seven percent of our search committee survey respondents said that they received over 200 applications per job posting, while 33% said that their committee received 100-199 applications per cycle (Table S31). Despite these high numbers, just 5-8 applicants are typically invited to interview, with around a third (33%) of the faculty respondents noting they did not perform off-site (phone or Skype) interviews (Table S31). These statistics help demonstrate the challenge hiring committees face; the sheer volume of applicants is overwhelming, as mentioned explicitly by several search committee respondents.
We took this opportunity to ask the search committee survey respondents whether there were additional factors they wished applicants knew when applying (Figure 9, Table S36). Several emphasized that the quality of research and papers is the most important factor for assessing prior achievement, but that a compelling and coherent research proposal is also critical and can be underdeveloped in otherwise competitive candidates. Respondents also emphasized the importance of departmental fit, and that at the interview stage a candidate’s assessment is in no small part predicated on interpersonal interactions with faculty members. Intriguingly, while one faculty respondent noted that they rarely interview anyone without a K99/R00 award, the NIH-based “Pathway to Independence Award” (a situation they noted as problematic), another lamented that applicants worried too much about metrics/benchmarks anecdotally perceived to be important, such as receiving K99/R00 awards. Finally, most (73%) faculty respondents noted it was easy to identify good candidates from their submitted applications, that there were too many good applicants (67%), and that candidates often underperformed at the interview stage (67%) (Figure 9, Table S34).
Discussion
We set out to better understand the academic job search process by examining the relationship between applicant metrics and job search outcomes. We analyzed over 300 responses to an anonymous survey of faculty job applicants specifically describing the outcomes of their job search, CV qualifications, and perception of the process. Responses were limited to applicants within a single job market cycle (85% life science, 15% other fields). We also performed a narrow survey of academic search committee members (93% life science, 7% engineering) in the USA. After combining metric and outcome data with analysis of both applicants’ and search committee members’ perceptions of the process, we find that participation in the academic job market is perceived as a highly stressful endeavor overall, lacking in feedback, and with seemingly no clear path to success for applicants. Our data are consistent with the well-established belief that it has become harder to obtain a faculty position, with high numbers of applicants for every available position. Below we discuss our results further, acknowledge the limitations of this study, and provide suggestions for improving this process.
As with any opaque, high-pressure environment, an absence of clear guidelines and expectations coupled with anecdotal advice can lead individuals to focus on tangible goals and metrics that they feel will help them stand out in the system. We were able to confirm several common faculty application advice anecdotes: the number of applications submitted, CNS paper authorship, total citation count, and funding were associated with obtaining offers. Despite this association, these criteria were neither necessary nor sufficient for securing an offer. We found that most metrics were differentially valued by candidates and committees. Qualitatively, the impact of funding on research career success was well-recognized by a number of applicants and search committee members alike in our results. Quantitatively, our applicant survey found funding and publication record to be the most important factors we measured that were associated with obtaining a faculty job offer. Additionally, we report that candidates with a track record of independent funding were likely to have more job offers, which was in line with candidate perceptions. The search committee respondents confirmed the benefit of independent funding (e.g. a K99/R00 award) as a major strength of an application.
CNS publications have anecdotally been regarded as a major benchmark for trainees in the life sciences and as markers of fellowship and job application success for early career researchers. Qualitative comments from our applicant survey indicated that not having a CNS paper was perceived by some applicants to be detrimental to offer prospects. However, the majority of our respondents received offers without CNS publications. Interestingly, faculty respondents on search committees were not as focused on CNS papers or journal impact factor, but rather emphasized the quality of applicant publications. Our data affirm that CNS papers reflect positively on a candidate’s record, but that a lack of such papers is not disqualifying, suggesting that fixation on this metric by some PIs and trainees in academia is misplaced.
Nearly half of our applicant survey respondents reported posting at least one preprint, with several commenting that preprinting provided evidence of productivity outside of formal publication. Search committee survey respondents further confirmed that while published papers carry the most weight, preprints are generally viewed favorably. Future use of preprints as evidence of productivity may have significant positive impacts on early career researchers, for whom the timing of publications relative to job searches requires careful consideration.
The respondents to this survey were generally highly qualified according to the metrics we measured, yet reported high stress and frustration with their experiences of the faculty job search. It is worth noting, however, that the weight of these factors may vary with the field of research and the target institution/department; some search committee respondents felt that potential applicants may worry too much about these aspects of their application, which in turn may inhibit their desire to apply for faculty positions despite being a good fit for the job.
Interestingly, applicants indicated that they perceived poor mentorship as a major obstacle to their application. Specifically, applicants mentioned a lack of feedback from mentors for preparing written materials and grants. This has profound implications when considered in the context of the search committee responses. The most valuable metric to search committees was reported to be the research proposal. Several search committee members perceived that applicants seemed to underestimate the importance of the proposal, emphasizing an expectation that the proposal will communicate the applicant’s future promise (independence, original ideas, creativity). Beyond the science, search committee members also suggested that candidates examine their motivations for pursuing faculty positions in the first place. It is important to note that the search committee members felt that it was easy to assess ‘good’ candidates for interviews on paper, but that many applicants fail to meet expectations during the interviews. This suggests a non-trivial difference in focus between the applicants and the search committees. We speculate that this is a natural result of a process lacking in clear benchmarks that relies heavily on anecdotal information, and we further suggest that these concerns could be addressed by improving mentoring practices.
Gender differences in the academic job market
Numerous studies demonstrate that women are underrepresented at the faculty level in most STEM departments, particularly in higher faculty ranks (38–41). However, our data suggest very few differences in outcomes in the May 2018-May 2019 female applicant pool relative to their male counterparts, with both genders receiving similar numbers of interviews and offers (Figure 3A, Table S11). Importantly, our data suggest that, of our survey respondents, nearly equal numbers of men and women applied for faculty positions in the May 2018-2019 job cycle in select fields such as the life sciences (Figure 2A). While this may be due to inherent bias in our survey population, our data suggest a much smaller gap in the male/female application ratio than has been suggested for other academic appointments and/or awards, such as the DP5 NIH award for independent fellows, or applications for independent fellowships themselves (49).
Despite similar application rates from both genders in select fields, our survey did reveal statistically significant gender differences in many metrics traditionally regarded as markers of success: men reported a higher number of first-author publications (Figure 2B, Table S8) and a higher average h-index, while women reported more funding over the duration of their training (i.e. graduate training) (Figure 2D). These gender differences may be influenced by systematic biases against one gender in funding decisions and publication practices by funders, principal investigators, and journals (20, 52). The differences may also be influenced by the postdoctoral training period intersecting with other major life events, such as starting a family, which is often accompanied by time off from the laboratory for parental leave. Consistently, multiple survey respondents reported that maternity leave negatively impacted their job search and/or prospects because this period of leave and the potential publication gap is negatively perceived by some search committees as a lack of productivity (Figure 7). However, our survey data indicate neither a shortage of interest among women in seeking a faculty position (Figure 2A) nor a deficit in women’s outcomes in obtaining interviews and offers relative to their male counterparts (Figure 3A). These data argue that there are few gender-based differences in outcomes in our applicant pool, and highlight a need to further examine the structural barriers (such as a fixation on metrics that may preserve inherent biases) that women face during their postdoctoral training, while seeking faculty positions in STEM, and in their retention after receiving a faculty job offer.
Engaging stakeholders to improve the culture of academic hiring
In recent years, suggestions for improving the academic job market have been voiced continually (14). This study has provided comprehensive data to motivate those changes. We believe that both immediate, easily implementable changes at the level of individual search committees, as well as wide, long term changes to the culture of academic science will benefit both applicants and search committee members (Box 3).
Applicants
Trainees need space to thoroughly explore career possibilities. Indeed, the large number of applicants in the market at any given time may represent trainees who have not explored other career options and “defaulted” into pursuing an academic career (Table S13). We hope that use of this open-access data can help applicants make this critical career decision rationally by comparing how their interests, strengths, and challenges are valued by academia. We urge trainees to conduct broad informational interviews with individuals who are currently working in the trainee’s potential career role, as these interviews can provide critical information about the kinds of skills and experiences that are valued in different organizations, which may be distinct from those valued by academia. Moreover, we believe that this is especially important for trainees seeking faculty positions who may not have a clear view of faculty roles and priorities at research-intensive academic institutions. Faculty-focused applicants can also take steps to understand the funding landscape in their field to assess the likely priorities of the funding agencies during their potential early-career phase. Additionally, a number of applicants expressed a desire to receive independent teaching experience, highlighting trainees’ own desire to share and communicate knowledge as part of their career. Trainees should take advantage of free, open-access teaching resources as well as local teaching institutes and workshops in order to prepare themselves for future teaching roles (56); however, trainees should understand that it is not clear whether institutions or search committees value such experience. Lastly, we believe that trainees should be encouraged to apply for independent funding but not fixate on this criterion.
Both applicants and search committees are acutely aware of the difficulties in acquiring funding, and committee members encouraged applicants not to see a lack of funding as an immediate disqualification. Indeed, trainees should avoid overreliance on any single criterion as either a guarantee of or disqualification from obtaining a position, but should instead focus on communicating why their proposed research program is of interest, why it is likely to succeed in advancing the field and training good scientists (given their research and mentorship track record), and why they would make a good member of the hiring department.
Mentors
Mentorship practices need improvement (57, 58). Insufficient mentoring for academic and non-academic career paths in STEM fields has also been noted (59). Mentors and career development offices at universities need to prepare trainees and familiarize them with other career options in industry, science policy, etc. Mentors should maintain a running dialogue as to the kinds of skills (both technical and transferable) that trainees can expect to build during training. For trainees interested in academia, mentors can encourage trainees to apply for independent funding and work with their mentees to prepare them for the job search using the data presented here as a guide. In particular, mentors need to be aware of the immense amount of time a successful faculty job search requires of applicants and that feedback may be critical to this success. Mentor participation in mock job/chalk talks and interviews, and editing of application materials, would be of great benefit to applicants. A recent study found that the odds of continuing in academia improved with the seniority of the postdoc mentor (i.e., the postdoc mentor’s academic age) and with the average number of trainees the graduate and postdoc mentors had per decade (i.e., proliferation rate), as well as with the trainee’s ability to obtain training in disparate areas of expertise as a graduate student and postdoc and then integrate that into their own work (60). The fact that proliferation rate (i.e., number of trainees in a lab) is associated with academic success hints at a prestige and pedigree effect on who will become a future faculty member. It also amplifies the excess supply of trainees in the biomedical workforce, heightening competition for faculty jobs. Our findings demonstrate a need for better, evidence-based career development and mentoring practices that can enable trainees to achieve improved work-life balance and training satisfaction (61).
This includes educating graduate student and postdoctoral researchers about career paths in industry, government and other job sectors. Mentors also need to be open to trainees who are interested in non-academic careers, and connect them with resources of interest.
Search Committees
Faculty search committees can work to clarify the requirements and expectations of the application process posted within the job advertisement. Improved communication between search committees and potential applicants would help demystify the process and possibly decrease the sheer number of applicants per position. We argue that clear, detailed job advertisements would significantly improve the process. While it is understandable that search committees want to see applicants demonstrate “fit” for their advertised position, a screening process based on very transparent criteria published in the job advertisement could be used to limit the number of initial documents submitted by applicants. For example, based on a review of minimal materials outlining the applicant’s interest in the position, search committees could narrow down the pool of candidates they are interested in, and then ask that reduced pool to submit a larger batch of more in-depth materials (e.g., a teaching statement, diversity statement, and letters of reference). Such a process would limit the number of full application packets search committees need to review, save candidates who are not a good “fit” from preparing all materials, and provide much-needed feedback by allowing candidates to know which materials they were evaluated on. We speculate that a candidate who knows that a search committee is interested in hearing more about their interests, work, and future plans will be more invested in crafting compelling application materials. These application materials would be higher in quality if they are select in number (i.e., representing 20% of all materials submitted) than if they are just one of dozens of sets of materials the candidate generates for searches under the current system. Search committees can also ask for fewer application materials at the outset.
We also believe that it is critically important that search committees place considerable thought into the types of screenings they will use. Even when criteria are transparent to the applicants, not all individuals have equal access to attaining those criteria. The current system relies heavily on a CV to communicate candidate quality and credentials. However, the diverse experiences and backgrounds of trainees can mean that they encounter barriers that limit the number of traditional metrics of academic success on their CVs. As noted above, family leave was cited by several applicants as a significant barrier to publishing, and women significantly underperformed men in metrics such as total number of publications, indicating that over-reliance on these metrics can perpetuate systemic biases. In addition, several qualitative comments from applicants in our survey mentioned citizenship or green card status as a barrier, particularly in receiving funding, the metric most significantly correlated with receiving an offer (Table S27). This suggests academic science as a whole, and in the United States in particular, is contributing to its own lack of diversity by limiting who can receive early career funding (6). Search committees should consider this when using funding record as a significant judgment criterion for extending offers. Undoubtedly, numerous qualitative metrics not easily measured play a large role in obtaining a faculty position; one such metric is recommendation letters, which were not part of our surveys but were referenced in search committee survey comments. Many of these qualitative metrics were also referenced by our applicants in their perceptions of the process (Figure 7). These include, but are not limited to, networking, mentorship quality, lab or background prestige, and nepotism. These concepts are deeply rooted in the gate-keeping culture of academic science.
As previously discussed, search committee respondents felt that applicants underestimated the value of the research proposal. Perhaps considering the research proposals in an early round of screening (potentially along with a cover letter) would better align applicants’ efforts with the value placed on future research by the search committee members surveyed here. Additionally, a ‘research-first’ approach has been shown to reduce unconscious bias in grant reviewers compared to CV-focused review (62).
Search committees should also aim to acknowledge receipt of applications and provide feedback after their review of the candidates. Our qualitative data on applicants’ perception of the job search process repeatedly indicated that the time spent on applications combined with the lack of feedback from searches made for a frustrating experience. Quantitatively, our applicants did not hear back from 57% of the applications they submitted, after spending over an estimated 22,000 hours on them. We recognize that search committees are often screening over 200 applications per open position (Table S31), and providing significant feedback on each of these is a daunting burden distributed across a small group of people. However, we note that providing feedback is part of every researcher’s contributions to the scholarly community, such as reviewing manuscripts and grant applications. In our view, it is quite reasonable for applicants to expect some level of feedback, however little. We believe that, at an absolute minimum, a timely rejection email is a low-cost, high-impact step that would save applicants time, stress, and effort. In addition, a simple email message, even if automatically generated, stating to all applicants that a candidate was selected based on an outlined set of criteria would provide some general feedback, although providing a general rubric for initial screening as part of the position description would also be effective in this regard. Whether an applicant receives any feedback or a rejection/status email can also depend on the country. In the US system, the applicant may not even receive an acknowledgement that an application was submitted, let alone a rejection email after the application is reviewed, whereas applicants applying to institutions in the UK have consistently received some form of response on the fate of their faculty job application.
More detailed feedback should also be provided to applicants who are selected for at least an off-site interview; a much more reasonably sized cohort of around 10 applicants per position based on our survey of search committee members. Providing feedback in this manner acknowledges the time and energy spent by the applicants, and promotes the mentorship ideal that academic science strives toward.
While we recognize that systemic changes take time, we believe it is important to promote inclusive practices in research environments that includes special focus on hiring and workplace balance. The most significant changes to the academic job market could be implemented at the level of individual search committees willing to put in those efforts. These efforts would directly and immediately impact applicants and their mentors. We also believe that if these changes started at the level of the search committee this would begin necessary shifts at the institutional level as discussed below.
Institutions
Hiring institutions should consider making the application centralized and streamlined. This could benefit applicants and search committees, as preparing materials is a time consuming process for applicants and reading those materials is a time consuming process for search committee members. For instance, some graduate school applications have been conducted online through centralized platforms. Although some platforms have sought to centralize the submission process (Table S1), greater standardization of materials requested and distribution systems could considerably improve the process.
To improve faculty mentoring, universities should provide mandatory training programs in mentoring, not only for early career faculty (63, 64), but for anyone involved in a mentoring relationship, including graduate and postdoctoral researchers (65). Mandatory mentoring training should continue throughout a faculty member’s career to keep pace with relevant job market changes. Mentoring training should equip mentors to successfully engage with the specific interests, talents, and goals of the trainees to ensure open communication throughout the training period. They can also ensure access to resources for career exploration at the graduate and postdoctoral level. Graduate and postdoctoral associations can also play a role in mitigating disadvantages faced by trainees who lack constructive career mentoring. They can also (and, in many cases, do) serve as a connection point for non-academic employers and trainees. Open houses, seminars, panels, and other events provide valuable information sources for trainees who may otherwise not have these opportunities. An emphasis on in-person networking at conferences or other events disproportionately affects trainees from labs with less funding or trainees of lower socioeconomic status who would then need to fund themselves. More remote networking events need to be made available to early career researchers. This is something that could be directly implemented by field-specific societies and funding institutions. Search committees will also need to value these experiences as equivalent to the current in-person system (66, 67). Investments in this area would equip trainees with sufficient information to confidently pursue the career options best-suited to their individual goals and needs.
How various stakeholders can improve the faculty job application process
Limitations of this study and measuring outcomes in the academic job market
Several limitations of this study should be considered, imposed both by the original survey design and by general concerns such as preserving respondent anonymity and the measurability of various contributing factors. For future data collection we suggest keeping surveys focused on region-specific job markets. Our results are largely from those seeking a position in North America. We believe these results can be aggregated, but the survey questions may not be applicable to other large markets (e.g. Europe, China, India). We did not receive a sizable enough response from applicants looking outside of North America to make useful comparisons. If a similar survey were circulated in each market individually, with comparable response numbers in each, meaningful comparisons could be drawn. We also suggest circulation of a similar survey across a wider number of fields. Again, we did not receive a large enough response from fields outside of the life sciences to make useful comparisons. If a higher response number were achieved, these data and results would have broader impact.
We purposely did not ask for race or ethnicity demographics, PhD or postdoc institution, or the region or institution where offers were received. We believe the addition of these metrics could potentially jeopardize survey respondents’ anonymity. Despite this, these factors could be significant contributors to who receives an academic job offer. Racial inequalities in all STEM fields at all levels exist and need to be addressed (70), specifically with regard to how they intersect with gender (39). The reputation of a training institution is difficult to measure reliably, but it is often listed in anecdotal advice pieces as important (71). Recently it was reported that a majority of new faculty are hired from a minority of institutions providing postdoc training (71, 72). It is possible that adding institutional reputation to the other traditional metrics we measured could provide a more complete picture of the current path to a faculty position.
Other aspects which are not directly measurable and are often cited as important for applicants in the academic job market are “fit” and “networking” (16). Applicants who responded to this survey did agree that “networking”, “conferences”, “collaborations”, and “connections” were helpful in their job search (Figure 7A). Conference organizers are also starting to include badge stickers or tags for faculty job seekers to self-identify at events. For future data collection, the number of conferences or networking events attended while applicants were on the academic job market could be measured and a relationship could be established between eventual faculty offers received and these networking metrics. Departmental or institutional “fit” is largely determined by the search committee on an individual basis. The metrics that determine a good “fit” are wildly variable and will likely never be adequately measured. We also did not collect data on certain job packet materials (e.g. recommendation letters) that comments from our search committee survey revealed to be important. Data collection on these items would be highly recommended in future surveys.
It is also important to note that all questions in our survey were optional; respondents could choose which questions to answer. This design choice may leave some questions unanswered precisely when the answer would be negatively perceived and/or zero in value. For example, some individuals may not have felt comfortable indicating that they had received zero offers, which would inflate the apparent offer percentage in our dataset. It is also possible that participation in the survey suffers from survivorship bias, in that applicants who had a more positive experience are more likely to reflect upon it and complete a survey about the process. Future surveys may benefit from making responses to all questions mandatory. Relatedly, our survey was undoubtedly completed by a highly engaged group of aspiring faculty: the Future PI Slack group is a space for postdocs most interested in obtaining a faculty career. Our survey data therefore likely reflect a highly motivated and accomplished group that may not represent the full pool of applicants to faculty positions each year. Wider dissemination of future surveys will hopefully be aided by the publication of these results and increased awareness of the survey among trainees in various research communities.
Conclusions
The faculty job search process is in need of change. Of over 300 responses by job applicants, we did not receive a single clearly positive comment on the process. This is in spite of the fact that 58% of our applicants received at least one job offer. The current system is not adequately serving the applicants, mentors, search committees, or institutions. Clear and measurable criteria can be used to streamline this process. Transparency can be achieved in the academic job market through systematic data collection for subsequent creation of evidence-based best practices by mentors and search committees as well as incentives and mandates by funders and research institutions. It is our hope that collecting and analyzing these data in this manner will not only allow all stakeholders to make rational choices, but will also allow critical examination, discussion and reassessment of the implicit and explicit values and biases being used to select the next generation of academic faculty. We believe that such discussions are critical in building an academic environment that supports all of its members.
Materials & Methods
Survey Materials
The text of both surveys used in this work is included in the supplemental material as appendices (pages 84 and 88). A Google form was used to conduct both surveys. The applicant survey was distributed on various social media platforms, including the Future PI Slack group, Twitter, and Facebook, and by several postdoc association mailing lists in North America, Europe, and Asia. The applicant survey was open for approximately 6 weeks to collect responses. The search committee survey was distributed to specific network contacts of the various authors; although this distribution was more targeted, a Google form link was still used to maintain anonymity. The search committee survey was open for approximately 3 weeks to collect responses. In both cases, respondents were asked to self-report, and the information collected was not independently verified.
Data Analysis
Microsoft Excel and RStudio were used to graph the results of both surveys shown in Figures 1-6 and 8. Wherever statistical analyses were used, the exact tests, p-values, and χ² values are reported in the appropriate figure legend or caption, in the results section, and in Table S21. A p-value of less than 0.05 was considered significant. Where demographics are combined in the reporting throughout this study, any analysis group with fewer than 5 respondents was combined with other similar groups, rather than reported at its raw n value, in an effort to protect the anonymity of participants. Briefly, the statistical methods are as follows: in general, the two-tailed Wilcoxon rank-sum test with Holm correction or the Chi-squared test was used to report p-values (see Table S21 for a detailed breakdown). The qualitative survey comments were categorized by theme (keywords/context), and the frequency of comments pertaining to each theme was tabulated (Tables S22, S26-27, S36-37). Word clouds were generated using the WordItOut platform (73) (Figures 7 and 9).
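For readers unfamiliar with the Holm correction applied to the Wilcoxon p-values above, the step-down procedure can be sketched as follows. This is an illustrative pure-Python sketch of the standard Holm method (equivalent in behavior to R's `p.adjust(..., method = "holm")`, which our R-based analysis would rely on), not the analysis code itself; the function name `holm_adjust` is our own.

```python
def holm_adjust(pvalues):
    """Holm step-down adjustment of a list of raw p-values.

    Returns the adjusted p-values in the original input order.
    """
    m = len(pvalues)
    # Rank hypotheses from smallest to largest raw p-value.
    order = sorted(range(m), key=lambda i: pvalues[i])
    adjusted = [0.0] * m
    running_max = 0.0
    for rank, i in enumerate(order):
        # Multiply by the number of hypotheses not yet rejected,
        # capping the adjusted value at 1.
        p = min((m - rank) * pvalues[i], 1.0)
        # Enforce monotonicity: an adjusted p-value can never be
        # smaller than that of a smaller raw p-value.
        running_max = max(running_max, p)
        adjusted[i] = running_max
    return adjusted

print(holm_adjust([0.01, 0.04, 0.03, 0.005]))
```

Note that Holm's method controls the family-wise error rate under arbitrary dependence between tests, which makes it a conservative but safe default when several pairwise comparisons are reported against a single 0.05 threshold.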
Data availability
The authors confirm that, for approved reasons, access restrictions apply to the data underlying the findings. The raw data underlying this study cannot be made publicly available, in order to safeguard the anonymity of participants and their organizations. Ethical approval for the project was granted on the basis that only aggregated, appropriately anonymized data are provided as part of this publication (as in the supplementary tables).
Statement of Ethics
This survey was created by the researchers listed as authors on this publication, who are affiliated with universities in the United States, in an effort to promote increased transparency about the challenges early-career researchers face during the academic job search process. The authors respect the confidentiality and anonymity of all respondents. No identifiable private information was collected by the surveys presented in this publication. Participation in both surveys was voluntary, and respondents could choose to stop responding at any time. Both the “Job Applicant” and “Search Committee” surveys were verified by the University of North Dakota Institutional Review Board (IRB) as Exempt according to 45 CFR 46.101(b)(2): Anonymous Surveys No Risk, on 08/29/2019 (IRB project number: IRB-201908-045). Please contact Dr. Amanda Haage (amanda.haage{at}und.edu) for further inquiries.
Conflicts of Interest
The authors declare no competing financial interests. The authors are all members of the Future PI Slack worldwide community of postdoctoral researchers. SS, JDF and NMJ are members of the eLife Community Ambassadors program to promote responsible behaviors in science. SS is a member of the eLife Early Career Advisory Group (ECAG).
Acknowledgements
This work was supported by start-up funds from the University of North Dakota to AH, F32GM125388 (NIGMS) to JDF, T32HL007749 (NHLBI) to AJK, and support from the Washington Research Foundation Fund for Innovation in Data-Intensive Discovery and the Moore/Sloan Data Science Environments Project at the University of Washington to VRP. The authors would like to thank the entire Future PI Slack community, and those who support them, for their support of this work.