ABSTRACT
Evaluating how wildlife conservation laws are implemented is critical for safeguarding biodiversity. Two agencies, the U.S. Fish and Wildlife Service and National Marine Fisheries Service (FWS and NMFS; Services collectively), are responsible for implementing the U.S. Endangered Species Act (ESA), which requires federal protection for threatened and endangered species. FWS and NMFS’ comparable role for terrestrial and marine taxa, respectively, provides the opportunity to examine how implementation of the same law varies between agencies. We analyzed how the Services implement a core component of the ESA, section 7 consultations, by objectively assessing the contents of >120 consultations on sea turtle species against the requirements in the Services’ consultation handbook, supplemented with in-person interviews of Service biologists. Our results showed that NMFS consultations were 1.40 times as likely to have higher quality scores than FWS consultations. Consultations tiered from an FWS programmatic consultation inherited the higher quality scores of the programmatic consultation, indicating that programmatic consultations could increase the efficiency of the section 7 process. Both agencies commonly neglected to account for the effects of previous consultations and the potential for compounded effects on species. From these results, we recommend actions that can improve quality of consultations, such as a single database to track and integrate previously authorized harm in new analyses and the careful but more widespread use of programmatic consultations. Our study reveals several critical shortfalls in the current process of conducting ESA section 7 consultations that the Services could address to better safeguard North America’s most imperiled species.
1. INTRODUCTION
The U.S. Endangered Species Act (ESA) is considered one of the strongest wildlife laws in the world (Gosnell 2001). Signed into law in 1973 by President Richard Nixon in response to rising concern over the number of species threatened by extinction, the ESA protects over 1,650 U.S. species by prohibiting negative impacts on species and their habitats and guiding the recovery of populations (USFWS 2017). Today, the ESA remains the primary piece of environmental legislation for protecting imperiled species and recovering them to the point that the law’s protections are no longer needed. With such a crucial role, the ESA must be implemented correctly. Yet agencies often struggle with gaps in effective implementation as they face funding shortfalls and staff limitations alongside a rising number of listed species. Although the ESA is a strong law, effective implementation in the face of these challenges is key, and opportunities for improvement in efficiency and effectiveness are crucial if the ESA is to continue successfully recovering and delisting species.
Section 7 of the ESA directs federal agencies to use their authorities to conserve listed species and is a key aspect of the law’s strength. Under section 7(a)(2), federal agencies (“action agencies”) are instructed to consult with the U.S. Fish and Wildlife Service (FWS) or the National Marine Fisheries Service (NMFS) if any action authorized, funded, or carried out may jeopardize listed endangered or threatened species or destroy or adversely modify species’ critical habitat (for definitions see Box 1, Glossary). If an action agency initially concludes that the action is not likely to adversely affect species or their critical habitat, the agency must request Service concurrence on its finding. If the Service concurs, the consultation is complete; this assessment is classified as an “informal consultation.” Conversely, if an action is deemed likely to adversely affect species or critical habitat, a “formal consultation” is initiated, and the consulted Service issues a biological opinion with its findings on the project’s impact on imperiled species. FWS and NMFS share administration of the ESA, with NMFS generally overseeing marine species and FWS managing terrestrial and freshwater species (USFWS and NOAA 1974). However, both Services have authority over some listed species that cross jurisdictional boundaries, such as sea turtles, and consult with action agencies on these joint-jurisdiction species. If done properly, consultations ensure that federal agency actions do not violate the jeopardy and adverse modification prohibitions of the ESA, thereby minimizing negative effects on listed species.
The consultation process is guided by the Section 7 Handbook, which was created by the Services to “promote efficiency and nationwide consistency [of consultations] within and between the Services” (USFWS and NMFS 1998). The Handbook guides biologists to ensure consultations are serving their purpose of adequately protecting listed species and lays out a framework for what should be included in each section of a biological opinion issued by the Service. However, the Handbook is a guidance document only and does not prescribe all details of a consultation. This results in variation in consultation quality, which could become problematic if differences introduce inefficiencies or inconsistencies that ultimately reduce the protection or conservation of imperiled species.
Two preliminary observations suggest consultation quality may differ between the Services in ways that reduce consultation effectiveness. First, recent analysis of data on all section 7 consultations recorded by FWS from 2008-2015 (Malcom and Li 2015) revealed discrepancies in the duration of consultations between the Services. Whereas FWS completed 80% of formal consultations within the 135-day time limit set by the Handbook (the proportion of on-time consultations is likely higher because the data do not include information on legitimate “pauses” during consultation; JWM and Y-WL, pers. obs.), NMFS completed only 30% (NMFS 2014). This discrepancy in timing could indicate a problem in the conservation process if, for instance, FWS is compromising quality for quantity in order to complete its required number of consultations, which is substantially greater than that of NMFS despite the agencies receiving similar levels of funding (USFWS 2017; NOAA 2017). Second, based on the authors’ combined experience of reading hundreds of consultation documents, we observed high variation in the general quality and consistency of consultation documents (authors, pers. obs.). Variation appears to be structured (e.g., by species or office) rather than random, and especially large differences occur between consultations produced by the two Services. There are numerous reasons why FWS and NMFS could differ in their approach to or process for consultations. For example, the two agencies have different organizational histories and cultures and receive different levels of funding, all of which can vary by region and office even within each Service (see, e.g., Lowell and Kelly 2016). Understanding the type and degree of variation between consultations could help identify the cause and outcome of differences and assist in designing solutions that minimize inconsistencies and maximize quality to support the Services in enforcing the ESA.
Yet to our knowledge, there has never been a systematic analysis of differences in consultation quality, creating a knowledge gap with direct implications for biodiversity conservation and environmental policy.
We addressed this need by quantitatively evaluating variation in how the Services implement section 7 for threatened and endangered species of sea turtles. Sea turtles are one of the few taxa that fall under the jurisdiction of both FWS and NMFS, offering a unique opportunity for a direct comparison of consultation quality.
Our study examined the quality of ESA consultations by developing a novel, objective scoring system based on the requirements of the Section 7 Handbook. We expect that consultations following the requirements of the Handbook are more likely to result in better conservation outcomes because the Handbook provides the best available description of the protections needed to comply with section 7. We tested the null hypothesis that the quality of section 7 consultations, as measured against the Services’ joint Handbook, is equal between the Services, taking advantage of a natural experiment to analyze differences in how the Services implement the consultation process. Based on our preliminary observations, however, we expected NMFS consultations to be of higher quality than FWS consultations. We report significant differences in the quality of both formal and informal consultations between the Services. Our results highlight several pathways by which the Services can systematically improve the quality of consultations to strengthen the ESA and ensure the protection and recovery of North America’s most imperiled species.
2. METHODS
2.1 Sampling
The Services have carried out hundreds of thousands of consultations since the ESA was established, so we devised a sampling design to make our assessment feasible. Because consultations are often context-specific and can differ along predictable categories such as action type and species, simple random sampling was not suitable for our objective. Following prior methods (Owen 2012), we chose a defined subset of consultations to make comparisons between the Services more direct and insightful. We controlled for extraneous sources of variation by restricting our analysis to consultations completed from January 2008 through April 2015 on actions proposed by the Army Corps of Engineers that could potentially affect sea turtles in Florida. This focus minimized confounding that might be introduced by time period, type of action, species natural history, or geographic variation, allowing us to focus on differences in the Services’ consultation processes and outputs. Sea turtles were among the species most frequently consulted on by the Army Corps of Engineers; the species included the green sea turtle (Chelonia mydas), loggerhead sea turtle (Caretta caretta), Kemp’s ridley sea turtle (Lepidochelys kempii), leatherback sea turtle (Dermochelys coriacea), and hawksbill sea turtle (Eretmochelys imbricata).
2.2 Consultation Selection
We obtained consultations that met our sample criteria from several publicly available databases. We accessed NMFS consultations using the Public Consultation Tracking System (PCTS; https://pcts.nmfs.noaa.gov/pcts-web/homepage.pcts), which allows users to download consultation documents directly. FWS maintains a similar database of consultation records, the Tracking And Integrated Logging System (TAILS), which is designed to coordinate record-keeping between FWS field and regional offices; TAILS has no public interface and does not provide the consultation documents themselves. We therefore accessed TAILS records using the Section 7 Explorer web application (https://cci-dev.org/shiny/open/section7_explorer; Malcom and Li 2015), which allows the public to search for consultations using TAILS data. Using PCTS and the Section 7 Explorer, we randomly selected 30 formal and 30 informal consultation records from each Service during the study time period. We acquired the NMFS consultation documents directly from PCTS; those from FWS we acquired through the FWS South Florida Field Office’s online document library for biological opinions (https://www.fws.gov/verobeach/verobeach_old-dontdelete/sBiologicalOpinion/index.cfm) or through a Freedom of Information Act (FOIA) request. While evaluating the original selection of NMFS formal consultations, we discovered some that did not assess sea turtles in the biological opinion despite search parameters constrained to sea turtles. To account for this discrepancy, we removed those not assessing sea turtles and randomly selected an additional 10 formal NMFS consultations from the PCTS database. All of the consultations analyzed in this work are archived at the Open Science Framework (OSF) at https://dx.doi.org/10.17605/OSF.IO/KAJUQ.
2.3 Evaluation Criteria
For each consultation we recorded the start and end dates, the year completed, the regional office through which it was filed, the sea turtle species assessed, page length, and other general information. All evaluated consultations and data are provided at OSF (https://dx.doi.org/10.17605/OSF.IO/KAJUQ). We developed different scoring methodologies for formal and informal consultations because each type involves different content. Scoring rubrics are provided in SI Appendix 1 (formal consultations) and Appendix 2 (informal consultations). It was not feasible to blind scorers to the Service that wrote each consultation because of the nature of the documents; any familiarity with the consultation process makes the Service immediately apparent. Therefore, reviewers were not blind to the Service when scoring quality. When there was any ambiguity about the appropriate score, a second reviewer (JWM) read the consultation in question, then decided on the appropriate score with the primary reviewer (ME).
For formal consultations, we selected the four core sections from the Handbook to score the quality of each biological opinion: “Status of the Species,” “Environmental Baseline,” “Effects of the Action,” and “Cumulative Effects.” Although not an exhaustive list of biological opinion sections, these four sections contain the bulk of the information and analysis of the species and proposed action. The Status of the Species and Environmental Baseline sections each received a score from 0-5, and the Effects of the Action and Cumulative Effects sections each received a score from 0-2, based on how well each met the Handbook’s specific requirements for that section. Rating the quality of these core sections was straightforward because the criteria described by the Handbook allowed a simple present/absent scoring system. The present/absent scores were summed within each of the four core sections, for a maximum possible score of 5 points (Status of the Species, Environmental Baseline) or 2 points (Effects of the Action, Cumulative Effects). We calculated total quality by summing the scores across all four sections and normalized overall quality as the ratio of the summed score to the total points possible for each consultation.
Scoring the informal consultations used a simpler rubric because informal consultations are shorter, rarely have individual sections, and the Services generally do not prescribe the required contents. We surveyed a selection of informal consultation documents from both Services and considered what information Services personnel need in order to evaluate the effects of actions and monitor the action after consultation is complete. We identified five criteria to evaluate the quality of informal consultations: stating the action, analyzing the action, analyzing the affected species, stating the reason the consultation remained informal, and including a map of the area affected by the action. Though a map is not required by the Handbook, the action area is central to much of the consultation analysis, so the inclusion or omission of a map was scored. Each criterion was assigned 1 point, for a total possible score of 5 points.
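The present/absent scoring and normalization described above can be sketched as follows. This is a minimal illustration in Python (the published analyses were conducted in R), and the section scores shown are hypothetical, not taken from any evaluated consultation:

```python
# Sketch of the present/absent scoring and normalization (hypothetical data).
# Each section maps to (criteria_met, criteria_possible); in the formal rubric,
# Status of the Species and Environmental Baseline have 5 possible points and
# Effects of the Action and Cumulative Effects have 2.

def consultation_quality(sections):
    """Sum present/absent points and normalize by the total points possible."""
    earned = sum(met for met, _ in sections.values())
    possible = sum(total for _, total in sections.values())
    return earned, possible, earned / possible

# Hypothetical scores for one formal consultation:
formal = {
    "Status of the Species": (4, 5),
    "Environmental Baseline": (3, 5),
    "Effects of the Action": (2, 2),
    "Cumulative Effects": (1, 2),
}
earned, possible, proportion = consultation_quality(formal)
```

The same function applies to the informal rubric, where all five criteria are worth 1 point each, so `possible` is simply 5.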
During preliminary work we noticed the use of “sticker concurrences,” in which the FWS South Florida Office recorded its concurrence as a sticker of consent applied directly to the request for concurrence provided to FWS (SI Figure 1). This sticker of approval served in lieu of a complete informal consultation, and no additional consultation documentation was supplied. Despite their lack of analysis, sticker concurrences were scored in the same manner as all other informal consultations.
2.4 Statistical Analyses
Our goal was to understand patterns and associations of variation in consultation quality. We used summary statistics (mean and standard deviation) and Pearson’s correlations to describe patterns. To examine relationships between quality and associated factors, we used two modeling approaches: (1) a binomial generalized linear model (GLM; McCullagh and Nelder 1989) to examine the proportions of total possible points, and (2) ordinal logistic regression (OLR; Kleinbaum and Klein 2010) to analyze the individual quality component scores. We considered the six variables most likely to affect consultation quality: the Service performing the consultation, whether the consultation was formal or informal, the year the consultation took place, the species of sea turtle assessed, the type of action assessed, and whether the consultation was part of a programmatic consultation (see Glossary). We incorporated these variables into a global model (Model 1) and eight additional candidate subset models for the GLM analysis of overall quality (Table 1). We also considered that the particular office within each Service might be an important predictor of consultation quality; however, because our focus is on potential differences between the Services and the offices are nested within the Services, the office variable was not included in our candidate model set. Because of the fundamental differences between formal and informal consultations and the difference in total possible score, we calculated the response variable as the proportion of possible points for each consultation. When we analyzed formal and informal consultations separately, we used reduced candidate model sets, removing the formal/informal variable from the formal-only analyses and the formal/informal and programmatic variables from the informal-only analyses.
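For intuition about the binomial GLM on proportions of possible points: with a single binary predictor such as the Service, the fitted odds ratio equals the ratio of the pooled group odds, where each consultation contributes its points earned and points missed. The following sketch (Python rather than the R used in the study, with made-up point totals, not the study data) illustrates this:

```python
# Illustration (hypothetical data): with one binary predictor, the binomial
# GLM's fitted odds ratio equals the ratio of pooled group odds, where each
# consultation contributes (points earned, points possible).

def group_odds(consultations):
    """consultations: list of (points_earned, points_possible) pairs."""
    earned = sum(e for e, _ in consultations)
    missed = sum(p - e for e, p in consultations)
    return earned / missed

fws = [(7, 10), (8, 10)]    # hypothetical FWS quality scores
nmfs = [(9, 10), (9, 10)]   # hypothetical NMFS quality scores
odds_ratio = group_odds(nmfs) / group_odds(fws)  # >1 favors NMFS
```

The full candidate models add further covariates, so the fitted odds ratios there are adjusted rather than raw, but the response encoding is the same.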
We used a set of three candidate ordinal regression models (Table 1) with a random effect for the consultation document in which the components were nested. Although programmatic consultation was an important predictor of quality in the overall analysis, the Hessian was singular for the component models (presumably because of the lack of NMFS programmatic consultations), and we were not able to include the programmatic variable in these analyses; we instead used summary statistics to investigate the role of programmatic consultations in shifting quality scores. We used the R package ‘ordinal’ (Christensen 2015) to conduct the ordinal regressions and performed univariate analyses to screen candidate predictor variables.
We carried out model selection (Anderson and Burnham 2002) based on Akaike’s Information Criterion adjusted for small sample sizes (AICc) using the AICcmodavg package (Mazerolle 2011). We considered models with ΔAICc < 2.0 as having strong support (Anderson and Burnham 2002). All analyses were done in R 3.3 (R Core Team 2016) and are available as a package vignette in the project’s OSF repository (https://dx.doi.org/10.17605/OSF.IO/KAJUQ).
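The small-sample correction behind AICc is AICc = −2 ln L + 2k + 2k(k + 1)/(n − k − 1), where k is the number of parameters and n the sample size. A short sketch of ranking candidates by ΔAICc (Python for illustration; the study used the AICcmodavg R package, and the log-likelihoods below are hypothetical, not from the fitted models):

```python
def aicc(log_lik, k, n):
    """Akaike's Information Criterion with small-sample correction."""
    return -2.0 * log_lik + 2.0 * k + (2.0 * k * (k + 1)) / (n - k - 1)

# Hypothetical candidate models: name -> (log-likelihood, number of parameters)
candidates = {"Model 1": (-60.0, 7), "Model 2": (-61.5, 5), "Model 9": (-62.0, 4)}
n = 123  # number of consultations scored

scores = {m: aicc(ll, k, n) for m, (ll, k) in candidates.items()}
best = min(scores.values())
# Models within 2 AICc units of the best are treated as strongly supported.
strong_support = [m for m, s in scores.items() if s - best < 2.0]
```

Note that a model with more parameters can have a higher likelihood yet rank worse once the penalty terms are applied, which is the point of the criterion.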
2.5 Biologist Interviews
To better understand the consultation process, one of the authors (ME) conducted semi-structured interviews (see, e.g., Pienaar 2015) with one biologist from NMFS and six biologists from FWS who consulted on sea turtles in Florida. All biologists interviewed were on the list of Service personnel who worked directly on the consultations evaluated for this study. Interviews were not meant to be representative of a larger sample but were instead intended to provide further insight into results. Interviews were conducted concurrent with our scoring of the consultations (in August 2015), and the interview questions were based on our understanding of the Handbook and preliminary examination of the consultations we reviewed. Following a set of questions, we asked interviewees about their opinions on the consultation process and how well consultations serve the intended purpose (SI Appendix 3). We then coded answers into categories of similar themes. We interviewed all biologists under the condition of anonymity. Although the sample size is too small for statistical analysis, we reviewed and scored the notes from the interviews to summarize recurring themes.
3. RESULTS
We retrieved, read, and scored 55 consultations produced by FWS (30 formal and 25 informal) and 68 consultations produced by NMFS (38 formal and 30 informal) for a total of 123 consultations. Consultations assessed the effects of the action on seven species on average (Table 2). Formal consultations ranged in length from 1 to 120 pages and required over a year to complete on average. Of the core quality sections evaluated, ‘Status of the Species’ was by far the longest, with an average of 19 pages. This section often contained lengthy content that was neither relevant to the species’ life history in the geographic area of the action nor to the effects of the action. In our random sample of FWS informal consultations, only one featured the sticker concurrence that we observed in the preliminary work.
3.1 Overall Consultation Quality
Generalized linear modeling suggested that consultation quality was best explained by Model 9, which had the lowest AICc (Table 3). This model, which included all predictors except action type and year, indicated that a consultation done by NMFS was 1.40 times (95% CI = 1.25 - 1.57; Figure 1a) as likely to receive a higher quality score as a consultation done by FWS. FWS’s programmatic consultations provided a significant quality boost (OR = 1.35; 95% CI = 1.17 - 1.56), but formal consultations were about as likely (OR = 1.0; 95% CI = 0.89 – 1.13; Figure 1b) as informal consultations to score higher (Table 4). The duration of consultations was positively associated with overall quality in a univariate GLM (r = 0.20; p = 1.04e-6) but did not rank as an important variable in the multivariate analysis. Page length was likewise correlated with quality in a univariate analysis (r = 0.2, p = 0.0037), but after accounting for the Service performing the consultation and for programmatic consultations in a binomial GLM, there was no relationship (z = 1.024, p = 0.306). Model 2, which included the same predictors as Model 9 plus the year the consultation was completed, was also supported (ΔAICc ≈ 2; Table 3). Model 2 indicated a slight decrease in consultation quality over the study period, though the association was not statistically significant (OR = 0.993; 95% CI = 0.97 – 1.02); we therefore focus on Model 9.
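The odds ratios and confidence intervals reported above follow from the GLM coefficients on the log-odds scale: OR = exp(b) and 95% CI = exp(b ± 1.96·SE). A sketch of the conversion (Python for illustration; the standard error below is back-calculated to roughly match the reported interval, not taken from the fitted model):

```python
import math

def odds_ratio_ci(b, se, z=1.96):
    """Convert a log-odds coefficient and its SE to an odds ratio with 95% CI."""
    return math.exp(b), math.exp(b - z * se), math.exp(b + z * se)

# Illustrative values chosen to roughly reproduce the reported OR of 1.40
b = math.log(1.40)   # log-odds coefficient for Service = NMFS
se = 0.058           # hypothetical standard error
or_est, ci_lo, ci_hi = odds_ratio_ci(b, se)
```

Because the exponential is monotonic, the CI endpoints are simply the transformed endpoints of the symmetric interval on the log-odds scale; the resulting interval for the OR is asymmetric around the point estimate.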
3.2 Quality Components
We examined sources of variation in the components of overall consultation quality. The only component of formal consultations that exhibited a strong association with any predictor variable was the Environmental Baseline, for which Service was a strong predictor of quality and NMFS was more likely to produce higher-quality sections (z = 5.3993, p = 6.691e-8; ORNMFS = 2.6e4 [95% CI = 6.5e2 – 1.1e6]; Figure 2). For the Environmental Baseline, NMFS consultations were consistently more comprehensive, tending to include previous consultations in the action area and to discuss critical habitat (or the lack thereof) as per the Handbook; neither element was consistently present in FWS consultations. Most quality components of informal consultations were similar between the Services, with two exceptions (Figure 3). First, the analysis of the action and the reason the consultation was informal were associated with the duration of the consultation (at a nominal α = 0.05): generally, the longer an informal consultation took to complete, the more likely these components were included. Second, although a map is not required by the Consultation Handbook, half of NMFS but only 15% of FWS informal consultations included a map of the proposed action.
3.3 Interviews
We interviewed six biologists from FWS and one from NMFS and coded their responses into categories of similar themes (Table 5; full response notes in SI Appendix 4). When asked how the consultation process could be improved, most biologists (6/7) mentioned they found the process frustrating, and many stated that they were overwhelmed with work. One biologist pointed to the fear of possible litigation resulting from shorter consultations as a reason for the overly comprehensive and highly time-consuming consultations that are currently the norm. Five of seven biologists also favored expanding the use of consultation keys, which are designed to help biologists improve the timing and consistency of consultations when appropriate for a species or on a case-by-case basis (see, e.g., http://www.fws.gov/panamacity/resources/WoodStorkConsultationKey.pdf; SI Appendix 5). All biologists interviewed except one mentioned that they keep a record of cumulative incidental take, which varied in form from notes kept on a whiteboard to Excel spreadsheets. However, only three consultations (all from NMFS) incorporated previously authorized take in the analysis of the effects of the current action on sea turtle populations.
4. DISCUSSION
The ESA is considered one of the strongest national wildlife protection laws in the world (Schwartz 2008), and section 7 is a foundation of this strength. The content and quality of section 7 consultations can alter conservation outcomes, but such protections can only be realized if the scientific and regulatory analyses are robust. Despite the importance of consistently high-quality consultations, no analyses have critically evaluated the strengths and weaknesses of these regulatory documents. Our analysis offers an urgently needed first step towards assessing the quality of consultations to inform and improve future consultations. Across all 123 consultations evaluated, we found that quality varied significantly between the Services and that the quality of NMFS consultations was higher than that of FWS consultations. In combination with the biologist interviews, which illuminate some of the possible causes of variation, our results reveal specific areas of improvement to ensure that future consultations achieve their objective of protecting threatened and endangered species.
4.1 Consultation Quality
The quality of both formal and informal consultations was higher in documents produced by NMFS than in those produced by FWS. This result is consistent with prior findings that NMFS scored higher than FWS in three of seven metrics characterizing the use of “Best Available Science” in recovery plans, lawsuits, listing decisions, and literature cited in biological opinions, with no difference detected between the agencies in the other four metrics (Lowell and Kelly 2016). Although the cause of the difference is beyond the scope of our study, our interviews with Service biologists suggested one possible explanation: the lack of time and resources available for the agencies’ ever-increasing consultation workload may limit quality. The FWS biologists we interviewed especially stressed this point, which reflects the funding shortfall experienced by the FWS endangered species program. This program receives approximately the same funding as the Office of Protected Resources at NMFS even though Ecological Services within FWS is responsible for 15 times as many ESA-listed species (Lowell and Kelly 2016). Expenditures per consultation are therefore likely much lower for FWS. Future research should investigate how the Services allocate funding to consultations compared to other endangered species program components, such as listing and recovery.
Our scoring of the individual sections of biological opinions provides further insight into why FWS consultations are lower quality than NMFS consultations and for which content both Services deviate from the expectations of the Handbook. Although documents from both Services consistently showed low quality in the Environmental Baseline section because previously authorized incidental take in the action area was rarely analyzed, FWS scored lower than NMFS because the take analysis was missing from all of its consultations. The lack of this analysis is one of the most pernicious problems with implementing the ESA (Owen 2012). The omission of hundreds or thousands of minor take actions from analysis in consultations can compound to result in “death by a thousand cuts,” whereby individual actions are insignificant for the species but the cumulative effects across many actions severely damage their populations (USFWS 2012). A 2009 Government Accountability Office report on FWS’s implementation of the ESA highlighted this concern and recommended that the Services track authorized take across a species’ entire range to better inform consultations (GAO 2009). The only three consultations that included an analysis of previously authorized take were all produced by NMFS, widening the quality difference between the Services for this core section. However, it is worth noting that FWS’s programmatic consultation for beach work across Florida listed previous formal consultations (Activity Code 41910-2010-F-284), yet those data were not analyzed in the evaluated consultation, and there was no evidence they played a role in the Environmental Baseline or the Effects Analysis. It is unclear why previously authorized take in the action area was not analyzed, especially since many of the biologists we interviewed stated that they record cumulative take.
Future research should investigate the disconnect between the information that Services biologists record and the information included in consultations.
Although the Handbook requires certain analyses for each section, sections of many FWS consultations contained little or no analysis and instead merely repeated the boilerplate language from the Handbook. This was particularly true of the Cumulative Effects section of FWS consultations, which often mentioned the obligation to “include the effects of future State, tribal, local or private actions that are reasonably certain to occur,” followed by a statement that there would be no cumulative effects. In contrast, most NMFS consultations more thoroughly analyzed the cumulative effects, which are critical to understanding the effects on species recovery.
The Handbook guidance for informal consultations is less prescriptive than for formal consultations, but our analysis revealed that the quality of FWS consultations is similarly lower than that of NMFS. Three components — the analysis of the action, the species analysis, and a map of the action area — were consistently missing or insufficient in the informal FWS consultations that we reviewed. On one hand, because informal consultation is merely a prerequisite to determine whether formal consultation is warranted, we recognize that detailed informal consultation analysis is unlikely to benefit ESA-listed species. Nonetheless, omission of content means that the administrative record is inconsistent, incomplete and, most alarming of all, differs from the Services’ expert recommendations for informal consultations. We see this in the use of “sticker” concurrences, observed both in our preliminary work and in one randomly sampled informal consultation. While these stickers may save time, they provide no record of why FWS approved the action and no method for assessing whether FWS properly implemented that component of the ESA. In contrast, all informal consultations from NMFS explained why the consultation was informal. The shortcomings of FWS informal consultations can likely be explained by resource constraints, yet we highlight this example as an invitation for the agency to critically evaluate whether such shortcuts appropriately achieve greater efficiency, or whether different improvements could make the process more effective.
4.2 Opportunities for Improving Consultation Efficiency
The stark difference between FWS and NMFS in consultation quality highlights a gap in the way section 7 is implemented. This discrepancy, coupled with the known disparity in workload and in the resources (both financial and personnel) available per consultation, means that improving the efficiency with which the Services carry out consultations is essential to properly implementing the ESA. Ideally, the Services should spend enough time on each consultation to maximize the conservation benefit to a listed species. Awareness of this optimal threshold, and of the content required to reach it, would avoid overspending precious resources (Converse et al. 2011). Here we discuss some critical inefficiencies, and potential pitfalls of efficiency-oriented approaches, indicated by our results.
The higher quality scores associated with consultations tiered from the FWS programmatic consultation indicate that programmatic consultations are, in concept, one promising way to improve consultation efficiency. The effects analysis of programmatic consultations should provide a better description of cumulative effects because many planned or potential projects within a program are evaluated together rather than individually. We expect that when cumulative impacts are properly acknowledged, the assessment of jeopardy or adverse modification is more likely to reflect real-world conditions. Another benefit is that because the overall program has already been evaluated, the consultations for future individual projects are faster and can contain less analysis. Malcom and Li (2015) found that project-level consultations tiered from a program-level consultation were completed nearly three times faster than the average standard consultation. In the set of consultations we evaluated, the single FWS program-level consultation for beach renourishment across Florida was a “tide that raised all boats”: the project-level consultations tiered from it “inherited” its (generally) high scores and significantly increased the quality of FWS consultations. Whether this is an outlier or representative of programmatic consultations in general is unclear and deserves further investigation. The converse is also possible: low-quality program-level consultations would mean that tiered consultations inherit low-quality analyses, which would likely lead to poor conservation outcomes. While the results from this set of consultations are promising, the Services need to continually evaluate their programmatic consultations to ensure that the speed benefits do not overshadow the need for high-quality analyses.
Our interviews with biologists from the Services provided important context for interpreting the results and indicated other possibilities for improving consultation efficiency. The lack of consistency among offices and between Services was frequently mentioned during the interviews as a frustrating aspect of the consultation process. The differing approaches to consultations can be difficult for action agencies as well, which can see the approval of a project depend largely on the consulting office (Y-WL and JWM, pers. obs.). One possible solution that we did not test is the use of consultation keys, as have been developed for Army Corps of Engineers consultations for a few species, including wood storks (Mycteria americana) and indigo snakes (Drymarchon couperi). The Services use these documents to promote appropriate standards for certain construction activities. Creating similar documents for other frequently consulted species may streamline consultations and increase inter-office and inter-Service consistency. The use of consultation keys would also increase the transparency of the consultation process, making it easier for action agencies or their applicants to plan their projects.
Last, we note one particular aspect of consultations that was not amenable to quantitative analysis but suggests efficiency improvements: inclusion of extensive material seemingly irrelevant to evaluating the effects of the action. For example, several consultations we reviewed included >20 pages of information on red knots (Calidris canutus), of which one paragraph was relevant to evaluating the action (JWM, pers. obs.). Including such inconsequential background information requires additional time not only from Services’ biologists, but also from the action agency or their applicants who read the opinion. By way of explanation, one FWS biologist mentioned that such information was included to buffer against any potential legal action, ensuring all “bases are covered.” However, this approach conflates “more” with “better” — the added time and cost does not always produce commensurate benefits for legal defensibility or conservation (Restani and Marzluff 2002). We encourage the Services to critically evaluate the information in biological opinions and exclude irrelevant material. The Recovery Planning and Implementation (RPI) initiative currently being developed by FWS (SI Appendix 6) can help with this problem of extraneous information. One component of RPI is a single, continually updated Species Status Assessment (SSA) for each ESA-listed species, which would be incorporated by reference in consultations, conservation permits, five-year reviews, and other aspects of ESA implementation (SI Appendix 7). Widespread adoption of SSAs would improve efficiency and, because SSAs should include an analysis of previously authorized take, improve the effectiveness of section 7 consultations.
4.3 Policy Recommendations
Our results provide a basis for several policy recommendations that would improve the Services’ implementation of section 7 of the ESA:
Develop and require the use of a single database for recording and querying authorized take. The component most commonly missing from consultations we reviewed was an analysis of previously authorized take in the action area. This is not surprising because FWS and NMFS have not yet established a unified, systematic way for their biologists to record authorized take, much less to comprehensively quantify and track previously authorized take to use in the jeopardy and adverse modification analyses. A centralized take database was recommended by the GAO seven years ago (GAO 2009) but has not yet been implemented by the Services. Implementing this recommendation would dramatically improve the quality of the Environmental Baseline analysis of consultations. In turn, we expect better conservation outcomes for consulted-on species. Beyond consultations, an authorized take database would be invaluable for informing ESA-required five-year status reviews, such that harmful effects from consultations can be compared to beneficial effects from conservation activities.
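As a concrete illustration, a centralized take database could begin as a single table keyed by species and action area. The sketch below is purely hypothetical — the schema, tracking numbers, and figures are invented for illustration and do not describe any actual Services system — but it shows how previously authorized take could be tallied for a new Environmental Baseline analysis:

```python
import sqlite3

# Hypothetical minimal schema for a centralized authorized-take database.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE authorized_take (
        consultation_id TEXT,     -- tracking number of the consultation
        species         TEXT,     -- listed species, e.g. 'Caretta caretta'
        action_area     TEXT,     -- coarse geographic unit for the action
        take_authorized INTEGER,  -- individuals (or nests) authorized
        year            INTEGER
    )
""")

# Invented example records, for illustration only.
records = [
    ("FWS-2012-001", "Caretta caretta", "Brevard County", 15, 2012),
    ("FWS-2014-087", "Caretta caretta", "Brevard County", 8, 2014),
    ("NMFS-2013-042", "Caretta caretta", "Gulf of Mexico", 30, 2013),
]
conn.executemany("INSERT INTO authorized_take VALUES (?, ?, ?, ?, ?)", records)

def prior_take(species, action_area):
    """Total take previously authorized for a species in an action area,
    as would be needed for a new Environmental Baseline analysis."""
    (total,) = conn.execute(
        "SELECT COALESCE(SUM(take_authorized), 0) FROM authorized_take "
        "WHERE species = ? AND action_area = ?",
        (species, action_area),
    ).fetchone()
    return total

print(prior_take("Caretta caretta", "Brevard County"))  # → 23
```

A query of this kind would let a consulting biologist retrieve the accumulated, previously authorized harm in seconds rather than reconstructing it from scattered paper files.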
Establish a systematic review protocol to ensure that programmatic consultations, which can increase efficiency, do not reduce the effectiveness of consultation. Programmatic consultations can, in theory, increase consultation effectiveness and efficiency, but the Services must ensure that the quality of project-level consultations is not sacrificed. In our results, the programmatic consultation was the “rising tide that lifted all boats.” Ensuring that other and future programmatic consultations are similarly well crafted can result in high-quality, consistently implemented consultations. The Services have expressed an interest in increasing the use of programmatic consultations, but such an increase must formally guard against a loss of effectiveness. Regular reviews at the field office, regional, and national levels, guided by a robust “checklist” of effectiveness measures, would help safeguard an expanded use of programmatic consultations.
Require more widespread development and use of consultation keys. Our results revealed variation in consultation quality between the Services; had we chosen a wider selection of consultations, this variation might have been even greater. This highlights the need to promote standardization as a means of improving the efficiency and effectiveness of consultations. The biologists we interviewed suggested that the use of consultation keys could improve consistency. Although not every species and every type of action is amenable to consultation keys, wider use of keys could significantly improve the parts of consultations where they are relevant.
Reduce workload by referencing prior documents. To reduce the rote workload for consultation biologists and consulting agencies, the Services could reference Species Status Assessments (SSAs), created as part of the Recovery Planning and Implementation strategy, in consultations. This would dovetail with FWS’s current revision of the recovery planning program, which places SSAs at the center of the process. Improving efficiency through standardization should not mean cutting corners, however. The informal concurrence stickers are a form of standardization but, as currently used, do not provide an adequate record of why decisions were made. They may be sufficient if modified slightly, such as by adding simple check boxes and short note fields to indicate the reason a consultation qualified as informal.
Implementing the above recommendations could significantly increase efficiency to better utilize the precious resources of the Services, and thus would improve the conservation benefit conferred by section 7 consultations. Strengthening the quality of the consultations through these methods would enable the Services to improve the overall effectiveness of the ESA, thereby reinforcing its critical role in conserving imperiled species.
ACKNOWLEDGMENTS
We thank the personnel from the Florida offices of the U.S. Fish and Wildlife Service; the St. Petersburg office of the National Marine Fisheries Service; and the Florida Fish and Wildlife Conservation Commission for their work on consultations and for the insights they provided us during this project. This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.
Appendix
SI APPENDIX 1: SCORING RUBRIC FOR FORMAL ESA SECTION 7 CONSULTATIONS
Environmental Baseline Quality (Total Points: 5)
Does the Environmental Baseline address the status of the species in the action area? (1)
Is there a mention of past/ongoing threats to the species in the action area? (1)
Does the Environmental Baseline take past consultations in the action area into consideration? (1)
Is there mention of critical habitat (or lack thereof) for the species? Does said critical habitat overlap with the action area? (1)
Does the baseline include State, tribal, local and private actions already affecting the species that will occur contemporaneously with the consultation in progress, as per the handbook? (1)
Effects of the Action Quality (Total Points: 2)
Is there a clear and defined cause-and-effect analysis of the action? (1)
Does the consultation explain whether and how the action will negatively affect sea turtles? (1)
Species Status Quality (Total Points: 5)
Does the consultation adequately describe the species and its habitat/critical habitat? (1)
Is the life history of the species addressed? (1)
Is there a detailed demographic analysis (if available for the species), including population size, variability and stability? (1)
Is the status and distribution of the species addressed, including reasons for listing? (1)
Is there an analysis of the species/critical habitat likely to be affected by the action? (1)
Cumulative Effects Quality (Total Points: 2)
Does the consultation consider the likelihood that the species will be able to recover? (1)
Does the consultation consider the effects of future State, tribal, local or private actions that are reasonably certain to occur, as per the handbook? (1)
SI APPENDIX 2: SCORING RUBRIC FOR INFORMAL ESA SECTION 7 CONSULTATIONS
Informal Criteria Baseline (Total Points: 5)
Mentions the action (1)
Some analysis of the action (1)
Some analysis of the impacted species (1)
Reason the consultation stayed informal is mentioned (1)
Map of the area affected by the action (1)
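Because every criterion in the rubrics above is binary, a consultation’s score reduces to a simple tally per section. The sketch below illustrates that tabulation with invented criterion values; it is not the scoring script actually used in this study:

```python
# Maximum points per section, matching the rubrics above.
FORMAL_SECTIONS = {
    "environmental_baseline": 5,
    "effects_of_action": 2,
    "species_status": 5,
    "cumulative_effects": 2,
}
INFORMAL_MAX = 5  # the five binary criteria for informal consultations

def score_section(criteria):
    """Sum binary criterion scores (True = criterion met) for one section."""
    return sum(1 for met in criteria if met)

def score_formal(answers):
    """Per-section and total scores for a formal consultation.

    `answers` maps section name -> list of booleans, one per criterion.
    """
    per_section = {name: score_section(c) for name, c in answers.items()}
    return per_section, sum(per_section.values())

# Invented example: a formal consultation meeting some criteria in each section.
example = {
    "environmental_baseline": [True, True, False, True, False],  # 3/5
    "effects_of_action": [True, True],                           # 2/2
    "species_status": [True, True, True, False, True],           # 4/5
    "cumulative_effects": [False, True],                         # 1/2
}
sections, total = score_formal(example)
print(sections, total)  # total out of 14 possible points
```

The same tally applies to the informal rubric, with a single five-criterion section.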
SI APPENDIX 3: INTERVIEW QUESTIONS FOR FISH AND WILDLIFE SERVICE AND NATIONAL MARINE FISHERIES SERVICE BIOLOGISTS
Can you tell me a bit about how the consultation process usually begins for you?
How frequently do you work on consultations? Has this number increased or decreased in recent years? Why might that be so?
How common is it to ask the action agency to provide more information on the action?
Have you seen a change over time in the way consultations are completed?
The number of consultations for FWS in Florida has been steadily decreasing since 2008 (according to the TAILS database there were 1099 in 2008 vs. 347 in 2014). Do you have an impression of how often you aren’t consulted on things?
Is there a consultation key for sea turtles, similar to the FWS Wood Stork Consultation Key? If not, is this something the Service would consider doing? Would this be an improvement to the process? Would you be in favor of a more standardized way to approach the consultation process? (Keys, a standardized ITP, etc.)
Can you explain the process of going through the literature and files on hand to satisfy the “best possible science” condition?
How do you exercise precaution when dealing with scientific uncertainty surrounding the effects of an action on a species/critical habitat? How much benefit of the doubt do you give to the species? Does it differ depending on the situation? Is this an issue you deal with on a regular basis?
How much time do you spend on the average consultation? FWS TAILS database says the average days for approval for formal consultations is 89 (13 for informal) days. Does that seem right?
Is previous take ever tallied (formally or informally) to get a sense of how much has been done to a species over time? In your view, would this be a feasible/helpful thing to implement?
How often do you consult the section 7 Handbook?
Do you ever get requests for re-initiation of consultations?
NMFS is taking the lead on the revision of the handbook this year. What would you like to see in the revision? In your opinion, is there something that should be clarified?
What is your opinion on making all of the final documents publicly available (NMFS has PCTS, Vero Beach has the formal consultations online but not the informal documents)?
Where is there the most room for improvement in the consultation process? Does it work well as is?
SI APPENDIX 4: INTERVIEW RESPONSES
Included in Open Science Framework archive at https://dx.doi.org/10.17605/OSF.IO/KAJUQ
SI APPENDIX 5: WOOD STORK CONSULTATION KEY
Included in Open Science Framework archive at https://dx.doi.org/10.17605/OSF.IO/KAJUQ
SI APPENDIX 6: RECOVERY PLANNING AND IMPLEMENTATION FACT SHEET
Included in Open Science Framework archive at https://dx.doi.org/10.17605/OSF.IO/KAJUQ
SI APPENDIX 7: SPECIES STATUS ASSESSMENT PRESENTATION
Included in Open Science Framework archive at https://dx.doi.org/10.17605/OSF.IO/KAJUQ
Footnotes
We updated the Introduction and the Discussion slightly and clarified the methods. There are no substantive changes from the previous version.