ABSTRACT
Animals communicate using sounds in a wide range of contexts, and auditory systems must encode behaviorally relevant acoustic features to drive appropriate reactions. How feature detection emerges along auditory pathways has been difficult to solve due to both challenges in comprehensively mapping the underlying circuits, particularly in large brains, and in characterizing tuning for behaviorally relevant features. Here, we take advantage of the small size, genetic tools, and connectomic resources for the Drosophila melanogaster brain to investigate feature selectivity for the two main modes of fly courtship song, sinusoids and pulse trains. By building a large collection of genetic enhancer lines, we identify 24 new cell types of the intermediate layers of the auditory pathway. Using a new connectomic resource, FlyWire, we map connections among these cell types, in addition to connections to known early and higher-order auditory neurons. We characterize auditory responses throughout this pathway, and find that the newly discovered neurons show diverse preferences for courtship song modes. However, rather than being sorted into separate streams, neurons with different preferences are highly interconnected. Among this population, frequency tuning is centered on frequencies present in song, whereas rate tuning is biased towards rates below those present in song, suggesting that these neurons form a basis set for the generation of pulse feature tuning downstream. Our study provides new insights into the organization of auditory coding within the Drosophila brain.
INTRODUCTION
Sounds are an integral part of the social life of animals, and are also critical for mate choice, finding food, caring for young, and avoiding harm. Accordingly, the brains of a wide range of animals have evolved to recognize behaviorally salient acoustic signals. For instance, courtship songs often contain information about sender status and species, and receivers must decode this information by analyzing patterns within songs (Akre et al., 2011; Baker et al., 2019; Hedwig, 2016; Nieder and Mooney, 2020). Several species that use sound to communicate produce songs comprising multiple acoustic types, often referred to as syllables or modes (Behr and von Helversen, 2004; Holy and Guo, 2005; Wohlgemuth et al., 2010). While prior work has examined where selectivity for conspecific sounds emerges across a variety of systems including crickets (Schöneich et al., 2015), songbirds (Moore and Woolley, 2019), primates (Romanski and Averbeck, 2009), and mice (Roberts and Portfors, 2015), how coding for different acoustic types is sorted within auditory pathways has not yet been delineated. Doing so would require mapping the underlying circuitry and recording responses of neural subsets to different song types - a significant challenge in larger brains. Here we focus on Drosophila melanogaster, a species with a smaller brain and both genetic and connectomic tools for circuit dissection. Drosophila males sing songs of two main modes, alternating between them to compose song bouts (Arthur et al., 2013; Bennet-Clark and Ewing, 1967). This provides an opportunity to ask how these two modes are encoded along the auditory pathway by mapping circuit architecture and systematically studying functional responses. One model is that selectivity for each mode emerges early and stays separate. Another model is that selectivity for each mode emerges late, with neurons early in the pathway forming a basis set for downstream neurons to build their selective responses.
In this second model, interconnections between neurons with different preferences would be advantageous for generating tuning for the main features of song. We test these two possibilities here.
D. melanogaster songs are composed of two main modes: sinusoids and pulse trains. Males alternate between these two modes, based on feedback they receive from a female, to compose song bouts (Coen et al., 2014). Receptive females reduce locomotor speed in response to features within each mode, from the frequency of the individual elements to longer timescale patterns such as the duration of each mode (Clemens et al., 2015; Deutsch et al., 2019). In general, both males and females respond selectively to features and timescales present in conspecific song, but how tuning for these features arises along the auditory pathway is not yet known.
Sound detection begins at the Drosophila antenna as Johnston’s organ neurons (JONs) within the antennal second segment detect sound-evoked vibrations of the arista (Albert et al., 2007). Similar to vertebrate auditory systems, the fly auditory system parses sound frequency at the periphery (Ishikawa et al., 2017; Kamikouchi et al., 2009; Patella and Wilson, 2018; Yorozu et al., 2009). JONs appear to be coarsely tuned to frequency and project roughly tonotopically to the antennal mechanosensory and motor center (AMMC) in the brain (Fig. 1A). The auditory responses of a handful of AMMC neurons, such as A1, A2, B1, and B2, have been characterized (Kamikouchi et al., 2009; Lai et al., 2012; Tootoonian et al., 2012). B1 in particular appears to be part of a major pathway for song processing, as silencing these neurons impacts song-evoked behaviors in both sexes (Vaughan et al., 2014; Yamada et al., 2018). Auditory information is then sent to the wedge (WED), ventrolateral protocerebrum (VLP), saddle (SAD), and gnathal ganglia (GNG) (Lai et al., 2012; Matsuo et al., 2016), but only a few auditory neurons in these areas have been identified (Clemens et al., 2015; Lai et al., 2012; Zhou et al., 2014). As no previous study has systematically examined pulse versus sine preference, there is insufficient information to rule in or out the two models (early vs. late selectivity) above.
In higher-order brain areas, we now know of at least two cell types with pulse song feature preference. pC2l neurons in both males and females respond selectively to several distinct features that define pulse song, from the frequency of individual pulses to the length of pulse trains, but they respond only weakly to sine song (Deutsch et al., 2019). vpoEN neurons also respond more strongly to pulse vs. sine song (Wang et al., 2020b). Neurons selective for sine song features are not yet known. Understanding how preference for pulse vs. sine song emerges along the pathway is critical to revealing how the fly brain encodes and decodes courtship song, since both modes are known to affect female behavior during courtship (Clemens et al., 2018a; Deutsch et al., 2019; Schilcher, 1976).
Pan-neuronal imaging of the fly brain during stimulation of the arista has facilitated mapping song feature tuning (Pacheco et al., 2019; Patella and Wilson, 2018). This approach has revealed that auditory responses occur throughout the brain, have diverse temporal dynamics, and tend to be strongest for sound features present in conspecific song (Pacheco et al., 2019). In particular, the tonotopy established in peripheral JONs is conserved and potentially sharpened as information traverses to the AMMC and WED (Patella and Wilson, 2018) and the VLP (Pacheco et al., 2019). The limitation of these approaches is that auditory responses cannot be assigned to individual cell types, so we do not yet know the identities of neurons that make up the auditory pathway, nor do we know how such neurons are connected to give rise to feature tuning.
We address these outstanding issues here, by identifying auditory neurons in the WED and VLP, hereafter referred to as WED/VLP. The overall goal was not to create a line for every cell type in the WED/VLP (of which over a thousand are estimated to exist (Scheffer et al., 2020)), but to identify a wide variety of new auditory neuron types. Using calcium imaging of split-GAL4 lines, we found 24 novel auditory cell types, as well as new lines for known auditory neurons. We tested each cell’s preference for sine vs. pulse song, and tuning for sine frequency and pulse rate. We mapped synaptic connectivity among auditory neurons, and found connections between neurons with different pulse vs. sine preference, suggesting that interactions between differently tuned neurons may be important for establishing feature tuning.
RESULTS
Identifying cell types of Drosophila auditory circuits
To identify new cell types downstream of known early auditory neurons (Dorkenwald et al., 2020), we focused on three of the five brain areas to which second-order mechanosensory neurons in the AMMC send the majority of their projections (Matsuo et al., 2016): the wedge (WED), anterior ventrolateral protocerebrum (AVLP), and posterior ventrolateral protocerebrum (PVLP) (Fig. 1A). Our general approach was to create sparse and specific split-GAL4 lines (Aso et al., 2014) targeting neurons (hereafter referred to as WED/VLP neurons) in any of these three areas and then to screen the most promising lines for auditory responses. In total, we examined the expression of 1041 split-GAL4 lines, generated stable lines for 117 with the sparsest expression in the brain, and selected 65 for functional recordings. These 65 lines target at least 50 WED/VLP cell types that include local, intra-hemispheric projection, and commissural neurons (Fig. 1B, Supp. Fig. 1–3).
To reveal cell type diversity within each split-GAL4 line, we collected multicolor flip-out (MCFO) images (see Methods) (Supp. Fig. 2). This uncovered some heterogeneity within particular lines. Recent work on the Drosophila hemibrain connectome resource (Scheffer et al., 2020) has shown that neuron types should be defined both by morphological criteria and by distinctions in input (presynaptic) and output (postsynaptic) partners; because we were unable to disambiguate individual neuronal subtypes within each line on morphological criteria alone, we consider the WED/VLP neurons labeled in each line a morphologically similar ‘cell class’. We named each cell class by identifying the neuropil with the highest density of neural processes (Supp. Fig. 3; Table 2). Hereafter we refer to the neurons imaged in each line by these names, except for neurons previously named and identified as auditory (i.e., A2, B1, B2, and WED-VLP) (Table 2) (Dorkenwald et al., 2020; Kamikouchi et al., 2009; Lai et al., 2012). To clarify which group of cells we targeted in each line for calcium imaging, we digitally segmented WED/VLP neurons from the broader expression pattern of each line (Supp. Fig. 4 and Fig. 1E-G).
We tested each WED/VLP line for auditory responses using GCaMP6s (Chen et al., 2013) and three stimulus categories: pulse, sine, and broadband noise (Fig. 1C; see Methods for stimulus details). Pulse and sine constitute the two major modes of D. melanogaster courtship songs, and the noise stimulus was included to identify responses to acoustic features outside of the range of parameters in song. Of the 65 lines screened, we found consistent auditory responses in 28 (43%) (Fig. 1D), including four cell classes representing previously known auditory neurons: A2, B1, B2, and WED-VLP (Fig. 2; Table 2) (Azevedo and Wilson, 2017; Clemens et al., 2015; Dorkenwald et al., 2020; Kamikouchi et al., 2009; Lai et al., 2012; Tootoonian et al., 2012; Vaughan et al., 2014). Some auditory neurons were previously classified as lateral horn neurons, but not known to be auditory (e.g., AVLP_pr05 resembles AVLP-PN1 and IPS_pr01 resembles WED-PN1) (Dolan et al., 2019). We used the new FlyWire resource to find these auditory neurons in both hemispheres within an electron microscopic (EM) volume of an entire female brain (Fig. 2; see Methods) (Dorkenwald et al., 2020; Zheng et al., 2018). In most cases, we found clear morphological matches between light microscopy (LM) and EM, but in two cases (AVLP_pr18 and AVLP_pr24) we were not able to resolve the EM reconstructions belonging to each cell class based on the available LM images. We use these EM data below to examine connectivity between auditory cell types. An additional 11 cell classes (17%) were classified as infrequent responders because responses occurred in fewer than 15% of imaged flies (Fig. 1D). The remaining 26 cell classes did not show responses to acoustic stimuli. In sum, our screen identified a total of 35 new neuron types with auditory activity (24 consistent and 11 infrequent responders). In the sections that follow, we investigate the song feature tuning of these neurons, the diversity of their responses, and their synaptic connectivity.
Diverse preferences for courtship song modes among WED/VLP neurons
Auditory neurons tested for song spectrotemporal selectivity so far have been found to be pulse-preferring (Deutsch et al., 2019; Wang et al., 2020b; Zhou et al., 2015), but pan-neuronal imaging suggests there are a greater number of sine-preferring neurons throughout the central brain (Pacheco et al., 2019). We aimed to identify such elusive cell types in our screen. Auditory responses of the WED/VLP neurons we screened consisted of increases or decreases in fluorescence indicative of net excitation (Fig. 1E), net inhibition (Fig. 1F), or a combination (Fig. 1G). To understand how these responses contribute to the encoding of courtship songs, we used the integral of the fluorescence trace during the stimulus to determine preference for pulse vs. sine stimuli in each fly imaged (Fig. 3, Supp. Fig. 5, and Methods). The effectiveness of this approach is supported by results from pan-neuronal imaging of auditory responses (Pacheco et al., 2019): when stimuli were morphed from a sine wave to a pulse train by varying the amplitude of a low-frequency envelope convolved with the stimulus, regions of interest in the WED and VLP preferred either the sinusoidal or the pulsatile stimulus, but not intermediate stimuli. For a given WED/VLP neuron to be classified as pulse- or sine-preferring, we required that at least 75% of the imaged flies from a split-GAL4 line fall into the same preference category (see Methods). With these criteria, 12/28 cell classes (43%) were sine-preferring, 9 (32%) were pulse-preferring, and the remaining 7 (25%) showed mixed preferences. Within these categories, we observed diversity in preference strengths. Many cell types, such as AVLP_pr01, AVLP_pr02, AVLP_pr22, AVLP_pr31, AVLP_pr12, AVLP_pr18, AVLP_pr24, and AVLP_pr26, were so selective that they barely responded to the other mode (Fig. 3A), whereas other cell types, such as AVLP_pr05, B2, IPS/WEDpr01, WED-VLP, A2, and AVLP/PVLP_pr01, still responded to the other mode, albeit more weakly (Fig. 3A).
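The preference classification above can be sketched as follows. This is a minimal illustration, not the paper's analysis code: the function names are hypothetical and per-fly response integrals are assumed to be precomputed, but the integral-based comparison and the 75% agreement criterion follow the text.

```python
import numpy as np

def stimulus_integral(dff, stim_mask, dt):
    """Integrate a delta-F/F trace over the stimulus window (assumed inputs)."""
    return np.sum(dff[stim_mask]) * dt

def classify_line(pulse_ints, sine_ints, agreement=0.75):
    """Classify a split-GAL4 line as pulse- or sine-preferring.

    pulse_ints / sine_ints: per-fly response integrals (one pair per fly).
    The line is labeled only if >= `agreement` of flies share the same sign
    of (pulse - sine); otherwise its preference is 'mixed'.
    """
    prefs = np.sign(np.asarray(pulse_ints, float) - np.asarray(sine_ints, float))
    if np.mean(prefs > 0) >= agreement:
        return "pulse-preferring"
    if np.mean(prefs < 0) >= agreement:
        return "sine-preferring"
    return "mixed"
```

With four hypothetical flies whose pulse integrals all exceed their sine integrals, `classify_line` returns "pulse-preferring"; if only half the flies agree, it returns "mixed".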
Auditory response variability differs across cell types
Within a cell class, the temporal dynamics of responses to song could vary across flies, even if the categorical preference for pulse vs. sine was consistent (e.g., Fig. 1G). This variability might be important - for example, for generating different preferences for courtship song features across individual female flies. To quantify this variability, we clustered the trial-averaged temporal responses to the three main stimulus types, similar to a recent study (Pacheco et al., 2019), and agnostic to the cell type each recording came from. In other words, each fly recording was treated as a separate data point. Trial-averaged responses from WED/VLP neurons could be grouped into 13 clusters (Fig. 4A, see Methods), based on differences in the strength of excitation or inhibition and the strength of adaptation.
We then determined how often responses from a given cell class (across flies) were grouped into the same cluster (Fig. 4B). The across-individual responses of 7 cell classes (including AVLP_pr02, AVLP_pr01, AVLP_pr18, AVLP_pr22, AVLP_pr36, and AVLP_pr32) were similar enough to always be assigned to the same functional clusters, whereas the responses of 3 cell classes (IPS_pr01, AVLP_pr35, AVLP_pr11) were different enough to never be assigned to the same clusters. We further quantified inter-individual variability by calculating the Euclidean distance between responses from every pair of flies within each cell class (Fig. 4C), and found that, as expected, recordings from neurons that tended to fall into multiple clusters showed greater pairwise distances, even though we controlled for developmental age, sex, receptivity state, and rearing conditions (see Methods). These results suggest that auditory responses from some cell types vary across individuals, with important implications for the processing of courtship songs among a population of flies (see Discussion).
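The clustering and pairwise-distance analysis above can be sketched as follows. The 13-cluster grouping and the Euclidean distance metric follow the text, but the Methods are not reproduced here: the choice of hierarchical clustering with Ward linkage, and all function names, are illustrative assumptions.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

def cluster_responses(traces, n_clusters=13):
    """Group trial-averaged traces (recordings x timepoints) into clusters.

    Each row is one fly recording's concatenated trial-averaged response to
    the three stimulus categories; cell-type labels are deliberately ignored.
    Ward linkage is an assumption, not necessarily the paper's method.
    """
    z = linkage(traces, method="ward")
    return fcluster(z, t=n_clusters, criterion="maxclust")

def within_class_distances(traces):
    """Pairwise Euclidean distances among all fly recordings of one cell class."""
    return pdist(traces, metric="euclidean")
```

Cell classes whose recordings always land in one cluster would show small `within_class_distances`; classes whose recordings scatter across clusters would show large ones, matching the relationship reported in Fig. 4C.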
Differences in sine frequency and pulse rate tuning among WED/VLP neurons
To assess the tuning of WED/VLP neurons for sine and pulse features, we varied the frequency of sine stimuli and the rate of pulse train stimuli to examine frequency and rate tuning across our dataset (Fig. 3B, 5A, 6A). Behavioral data show that both males and females change their locomotor speed most in response to conspecific sine frequencies (across a wide range from 150-400 Hz) and pulse rates (peaked at 35-40 ms interpulse intervals) (Deutsch et al., 2019). However, we observed a variety of tuning strengths and types (Figs. 5B, 6B). We analyzed responses by first sorting according to the selectivity of each cell type for sine versus pulse stimuli (Fig. 3). Within these categories, we then analyzed frequency and rate tuning for each individual, as responses from a given cell type could vary across individuals (Fig. 4). For sine frequency, the majority of responses showed low-pass or band-pass tuning, with roughly equal numbers of responses in these two categories (Fig. 5C-D). Recordings from neurons we had characterized as pulse-preferring (Supp. Fig. 5) tended to show band-pass frequency tuning, whereas recordings from neurons characterized as sine-preferring showed either low- or band-pass tuning. These results suggest that sine-preferring neurons may, as a population, tile the frequencies within the conspecific range (100-400 Hz) (Arthur et al., 2013), whereas pulse-preferring neurons are more likely to represent intermediate frequencies reflective of conspecific pulse carrier frequencies (Clemens et al., 2018a).
For pulse rate tuning, the majority of neurons we recorded exhibited short-pass responses, although this category was dominated by sine-preferring neurons (Fig. 6C-D). Neurons with mixed preference showed long-pass responses, whereas pulse-preferring neurons exhibited the most diversity in rate tuning, with short-, long- and band-pass tuned responses (Fig. 6C-D, Supp. Fig. 5). However, relatively few recordings (2 flies from AVLP_pr36 and 1 fly each from SAD_pr01, SAD/AMMC_pr01, and AVLP_pr35) exhibited band-pass tuning for the conspecific pulse interval (35-40 ms), suggesting that the responses of the neurons recorded in this study may serve as a set of building blocks for generating band-pass tuning downstream in the pathway, for example in pulse feature detector neurons such as pC2l (Deutsch et al., 2019; Wang et al., 2020a).
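The coarse tuning-curve labeling used above can be sketched as follows. This is a hypothetical classifier, not taken from the Methods: the 75%-of-peak threshold and function name are illustrative. Applied to sine stimuli the labels read low-/band-/high-pass; applied to pulse trains parameterized by interpulse interval, the same logic yields the short-/band-/long-pass labels used in the text.

```python
import numpy as np

def classify_tuning(values, responses, frac=0.75):
    """Coarsely label a tuning curve as low-pass, band-pass, or high-pass.

    values: tested stimulus parameters in ascending order (e.g., sine
    frequencies or interpulse intervals); responses: mean response at each.
    A curve counts as low-pass (high-pass) if the response at the lowest
    (highest) tested value stays within `frac` of the peak; otherwise the
    peak is internal to the tested range and the curve is band-pass.
    """
    responses = np.asarray(responses, dtype=float)
    peak = responses.max()
    if responses[0] >= frac * peak:
        return "low-pass"
    if responses[-1] >= frac * peak:
        return "high-pass"
    return "band-pass"
```

A recording whose response peaks at an intermediate interpulse interval, falling off at both shorter and longer intervals, would be labeled band-pass, as for the few conspecific-interval-tuned recordings noted above.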
WED/VLP neurons project throughout the central brain
Regardless of preference for sine versus pulse stimuli, WED/VLP auditory neurons sent projections throughout the central brain, including to regions considered primarily visual (i.e., posterior lateral protocerebrum or PLP) and olfactory (i.e., peduncle and calyx of the mushroom body (MB) and the lateral horn (LH)) (Fig. 7). Auditory neurons also frequently targeted neuropils known to be rich in the processes of sexually dimorphic neurons, such as the superior clamp (SCL), inferior clamp (ICL), and inferior posterior slope (IPS) (Fig. 7). These findings are consistent with results of pan-neuronal imaging that found auditory responses throughout 33/36 neuropils of the central brain (Pacheco et al., 2019). Ten of 12 WED/VLP cell classes with projections in the AMMC responded to auditory stimuli in our screen (Fig. 7B), which is consistent with the AMMC being the primary target of auditory receptor neurons (Matsuo et al., 2016). For other neuropils, there were no obvious differences in neuropils innervated by neurons based on auditory response type or lack thereof (Fig. 7), suggesting that the likelihood of responding to auditory cues could not be predicted based solely on which brain regions a WED/VLP neuron targeted.
WED/VLP neurons with different song mode preferences target distinct regions of the WED and VLP
There were no systematic differences between brain regions targeted by sine-preferring and pulse-preferring neurons, but what about organization within the WED and VLP? Do sine-preferring and pulse-preferring neurons target different territories within these neuropils? In the AVLP, we found that pulse-preferring neurons most frequently targeted the medial and anterior parts, whereas sine-preferring neurons were biased towards a posterior tract extending from ventral to dorsal (yellow in Fig. 8A). Whereas mixed preference lines targeted areas innervated by both pulse- and sine-preferring lines, non-auditory lines most frequently targeted the lateral half of the AVLP. In the PVLP, projections of sine-preferring neurons were most common in an anterior region roughly in the middle of the medial-lateral and dorsal-ventral axes, whereas projections of pulse-preferring lines were found more medially and dorsally (yellow in Fig. 8B). In contrast to the AVLP, mixed-preference lines innervated a region separate from the major areas innervated by pulse- and sine-preferring neurons. Non-auditory lines tended to innervate more lateral PVLP regions than auditory lines (yellow in Fig. 8B). In contrast to VLP projections, WED projections showed less pronounced differences between pulse- vs. sine-preferring, and auditory vs. non-auditory projections (Fig. 8C). Taken together, these results suggest that sine and pulse preference arises in the VLP in separate anatomical regions, and that this transformation in stimulus preference likely occurs between WED-to-VLP projection neurons and VLP projection neurons.
Neurons with different song preferences target distinct optic glomeruli
A portion of the PVLP and posterior lateral protocerebrum (PLP) innervated by many of our WED/VLP neurons (Fig. 7) contains the optic glomeruli (Panser et al., 2016; Wu et al., 2016). Neurons innervating the optic glomeruli play important roles in a number of visual behaviors (Bidaye et al., 2020; Keleş and Frye, 2017; Wu et al., 2016). We found that 11 auditory WED/VLP neurons and 6 infrequently responding WED/VLP neurons contained processes in at least one optic glomerulus (Supp. Fig. 6; see Methods). The two optic glomeruli with the most innervation by our WED/VLP cell classes were LC9 and LC11, both small object-detecting visual neurons that might be important for courtship behaviors (Bidaye et al., 2020; Keleş and Frye, 2017; McKinney and Ben-Shahar, 2019). The LC9 glomerulus is exclusively innervated by neurons that prefer pulse stimuli whereas the LC11 glomerulus is mostly innervated by neurons that prefer sine stimuli (Supp. Fig. 6). As sine song is produced at close range to females while pulse is produced farther away (Coen et al., 2014), it is interesting to note that LC11 neurons are tuned for small objects (Keleş and Frye, 2017) while LC9 neurons provide inputs to P9 neurons that drive female-directed turning and walking in courting males (Bidaye et al., 2020) (see Discussion).
Synaptic connectivity among auditory neurons
The auditory pathway begins in the Johnston’s Organ of the antenna, where JO-A and JO-B neurons detect sounds and relay information to the AMMC (Kamikouchi et al., 2009). Information then splits into a pathway including the giant fiber neuron and A1, which receive inputs largely from JO-A neurons, integrate this information with other sensory streams, and drive escape maneuvers (Allen et al., 2006; von Reyn et al., 2014); and another pathway including A2, B1, and B2, which receive inputs from a mix of JO-A and JO-B neurons and are thought to be part of the courtship song processing pathway (Kim et al., 2020; Vaughan et al., 2014; Yamada et al., 2018). However, recent connectomic work shows interconnections between these two pathways (Dorkenwald et al., 2020). From the AMMC, information ascends to regions including the WED and VLP (Lai et al., 2012; Matsuo et al., 2016). There was limited information on neurons downstream of the WED and VLP and little functional data for WED/VLP neurons, which motivated our functional screen. By identifying the neurons from split-GAL4 lines characterized above in a whole brain EM volume (FlyWire) (Dorkenwald et al., 2020; Zheng et al., 2018) (Fig. 2), we could then determine all synaptic connections (via automated synapse predictions (Buhmann et al., 2019)) between these neurons and earlier neurons in the pathway. We also examined connections to higher-order neurons shown to be pulse-preferring (vpoEN (Wang et al.) and pC2l (Deutsch et al., 2019)), to drive a persistent internal state in females (pC1d/e (Deutsch et al., 2020)), and descending neurons that drive song-responsive behaviors (pMN1/DNp13 (Wang et al.) and pMN2/vpoDN (Deutsch et al., 2019; Wang et al., 2020b)) (Fig. 9A).
We found that auditory WED/VLP neurons form synaptic connections with previously known auditory neurons such as A1, A2, B1, WED-VLP, and WV-WV (Azevedo and Wilson, 2017; Dorkenwald et al., 2020; Lai et al., 2012; Tootoonian et al., 2012). However, they also form connections with the giant fiber neuron (GFN), which is involved in escape behaviors. Indeed, instead of a clear separation of information into distinct “song” and “escape” pathways, cell classes such as SAD_pr01, GNG_pr01, and WED_pr02 receive synaptic inputs from other auditory neurons and then send outputs to the GFN (Fig. 9A), suggesting that song may influence GFN activity. This adds to results from Dorkenwald et al. (2020) that show interactions between B1 neurons and the escape pathway.
Several WED/VLP neurons (WED-VLP, WED_pr02, AVLP_pr01, AVLP_pr02, AVLP_pr23, AVLP_pr36, and PVLP_pr03) are synaptically connected to pC1d, pC2l, and pMN2/vpoDN. Two cell types in particular, AVLP_pr36 and PVLP_pr03, primarily get inputs from higher-order auditory neurons and then converge onto pMN1/DNp13 (Fig. 9A).
To understand how synaptic connectivity may relate to function, we took into account the pulse- vs. sine-preference of each auditory cell class, where available, either from this study or from prior work (Deutsch et al., 2019; Wang et al., 2020a, 2020b), keeping in mind that preference strength varies across the population (Supp. Fig. 5). We found that connections between neurons of different song preference were quite common (Fig. 9B). For instance, 2/3 of the postsynaptic targets of the pulse-preferring WED-VLP neuron are sine-preferring neurons. The pulse-preferring B1 sends outputs to, and vpoEN receives inputs from, at least one cell type of each preference category. These results suggest that, instead of segregating pulse- and sine-preferring responses, auditory WED/VLP neurons of opposite song mode preference likely influence one another to shape tuning to song features. This result also suggests that when examining tuning for one particular song feature (e.g., pulse rate) it is important to consider the tuning of all input neurons, not only those that prefer one song mode over another (e.g., pulse over sine). However, in higher-order pathways, such as those including pC1d/e, AVLP_pr36, and pMN2/vpoDN, synaptically connected auditory neurons are more likely to strongly prefer the same song mode (Fig. 9B). These results are in line with the finding that anatomical segregations between pulse- and sine-preferring neurons become more pronounced in the VLP (Fig. 8).
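The cross-preference connectivity summary described above can be sketched as a simple aggregation over an edge list. The edge-list format, cell names in the example, and function name are hypothetical stand-ins for connections exported from the EM reconstruction; only the idea of tallying synapses by the song-mode preference of pre- and postsynaptic partners comes from the text.

```python
from collections import Counter

def cross_preference_summary(edges, preference):
    """Tally synaptic connections by song-mode preference of pre/post neurons.

    edges: iterable of (pre_cell, post_cell, n_synapses) tuples, e.g. from
    automated synapse predictions in the EM volume (hypothetical format).
    preference: dict mapping cell class -> "pulse", "sine", or "mixed".
    Returns a Counter keyed by (pre_preference, post_preference).
    """
    tally = Counter()
    for pre, post, n_synapses in edges:
        key = (preference.get(pre, "unknown"), preference.get(post, "unknown"))
        tally[key] += n_synapses
    return tally
```

Comparing same-preference keys such as ("pulse", "pulse") against cross-preference keys such as ("pulse", "sine") in the resulting tally would quantify the degree of intermixing reported in Fig. 9B.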
DISCUSSION
Here we have identified and characterized 24 new central auditory neuron types. Prior work had focused on either early or higher-order auditory neurons, leaving open the characterization of intermediate cell types in the auditory pathway. We examined their tuning for the modes of courtship song and their synaptic connectivity to each other, to early auditory neurons, and to higher-order auditory neurons. We uncovered diversity in tuning for sines and pulses, but extensive interconnectivity between cell classes with different preferences. These results rule out a model in which selectivity for song arises early within the auditory pathway, and instead support a model in which interconnectivity along the pathway between neurons with different preferences acts to sharpen tuning of neurons that control song-responsive behaviors. The new neurons we discovered send projections throughout the brain, including to olfactory, visual, and pre-motor areas, suggesting a role in multisensory integration.
We identified new auditory neurons by targeting specific cell classes within the WED/VLP, putatively downstream from previously known auditory neurons (Lai et al., 2012; Matsuo et al., 2016). In the auditory connectome we built using FlyWire (Dorkenwald et al., 2020), which allows proofreading of EM reconstructions in an entire female brain (Zheng et al., 2018), we were able to examine inter-hemispheric connections, as well as ipsilateral vs. contralateral connection patterns (Fig. 9). However, several cell types formed no synaptic connections with other auditory neurons in our data set, and many other neurons had synaptic inputs but no outputs (and vice versa) within our data set. We also found one pathway, including AVLP_pr32, AVLP_pr04, and AVLP_pr11, that did not synaptically connect with any other auditory neurons. These results indicate that despite our attempts at identifying new auditory WED/VLP neurons, the auditory connectome is incomplete. In support of this, we were not able to identify split-GAL4 lines for several auditory VLP neurons characterized previously (Clemens et al., 2015). How many neurons in the auditory pathway remain unknown is difficult to estimate. Because higher-order neurons with strong feature tuning have processes in the VLP (pC2l and vpoEN), we posit that feature tuning arises within the VLP itself. Continued circuit tracing in EM combined with functional imaging of sparse and specific driver lines, as we have done here, should ultimately fill out the auditory connectome. It is important to note that since auditory information is present in 33 out of 36 central brain neuropils (Pacheco et al., 2019), we expect auditory circuits to be highly interconnected to circuits of other modalities and functions.
We found that the responses of some auditory cell types were highly stereotyped across animals, whereas others were more variable. Pan-neuronal imaging also indicated across-animal variability in auditory responses (Pacheco et al., 2019) but could not map this variability to individual cell types, as we have done here. In the mushroom body, Kenyon cells (KCs) of a single type also have variable responses across animals (Murthy et al., 2008) which arises through stochasticity in connectivity between antennal lobe projection neurons and KCs (Caron et al., 2013; Zheng et al., 2020). This stochasticity might be important for generating variation in odor preferences or odor learning capacity across animals. Across-animal wiring differences in the central complex lead to differences in left vs. right turn biases (Buchanan et al., 2015), which may be important for generating diversity in navigation strategies across a population of flies. The role of variability in auditory responses is currently unknown, but it may function to generate variation in song preference across animals, potentially useful for diversifying female mate selection preferences.
Our study uncovered differences in frequency and pulse rate tuning patterns between sine- and pulse-preferring neurons. Sine-preferring neurons exhibited diverse frequency tuning patterns but more uniform rate tuning (Figs. 5D, 6D). The short-pass rate tuning in sine-preferring neurons is consistent with their preference for sinusoidal, or continuous, stimuli. In contrast, pulse-preferring neurons exhibited more uniform frequency tuning but diverse rate tuning patterns (Figs. 5D, 6D). Pan-neuronal imaging of neurons in the AMMC and WED neuropils found tonotopy in JON axons that was maintained in the AMMC and potentially sharpened in the WED (Patella and Wilson, 2018). However, the extent to which the frequency and pulse rate tuning of WED/VLP neuron types is inherited from inputs and/or shaped downstream of the JONs remains unknown. Very few WED/VLP neurons showed band-pass tuning for conspecific pulse rate, suggesting that downstream processing is required for generating such selectivity, as is observed in neurons such as pC2l or vpoEN (Deutsch et al., 2019; Wang et al., 2020b; Zhou et al., 2015). This pattern would be consistent with that of vertebrate auditory systems, in which frequency tuning arises in the periphery, whereas tuning for temporal sound features, such as pulse rate, arises through central computations (Winer and Schreiner, 2005).
An imaging study showed that tonotopy was stronger in the AMMC than in the WED (Patella and Wilson, 2018), but that study did not examine pulse vs. sine song preference. We show here that it is not easy to sort pulse vs. sine preference based on anatomical location within the WED. In contrast, there seem to be distinct anatomical regions for pulse- vs. sine-preferring neurons within the VLP, suggesting that this transformation must occur in the projections between WED and VLP. Many auditory neurons have projections in both regions (Fig. 7), so this study suggests where to look for mapping the transformation from broad tonotopy to more specific feature selectivity.
We also found that many WED/VLP neurons sent projections to optic glomeruli. Because this analysis was performed at the level of light microscopic images, we do not yet know whether WED/VLP neurons receive inputs from visual projection neurons in each optic glomerulus, or whether they send auditory information there. Identifying LC neurons in FlyWire and analyzing their connections with WED/VLP neurons in the FAFB EM volume will be important for answering this question. Several pulse-preferring neurons projected to the LC9 glomerulus, whereas many sine-preferring neurons projected to LC11 (Supp. Fig. 6). The segregation we find in song preference and projections to these two glomeruli already suggests that specific visual information and specific song information must be integrated in the brain to shape behavior. Of interest, none of the auditory WED/VLP neurons studied here targeted the LC10 glomerulus. LC10 neurons are important for initiation and maintenance of courtship chasing in males, and potentially aggression in females (Schretter et al., 2020; Sten et al., 2020). If LC10 visual responses are integrated with song information, this likely occurs elsewhere in the brain or via neurons not represented in our auditory screen.
Mapping synaptic connectivity among auditory neurons revealed a high degree of intermixing between preferences for the two song modes (Fig. 9B). This finding, combined with the diversity of tuning patterns across neurons, suggests that the responses of WED/VLP neurons may serve as a set of building blocks from which downstream neurons selectively pool inputs to establish tuning to different sound features. One example is the use of inhibitory interactions to refine tuning. For instance, the convergence of weakly pulse-preferring excitatory inputs with sine-preferring inhibitory inputs would result in stronger pulse preference than the excitatory inputs alone. The use of inhibition to sharpen central tuning is widespread in auditory processing across taxa. In grasshoppers and katydids, the convergence of narrowly-tuned inhibition with broadly-tuned excitation shapes frequency tuning (Romer et al., 1981; Stumpner, 2002). In mammalian auditory circuits, sideband inhibition, in which stimuli just outside of a neuron’s preferred range elicit inhibition, also sharpens frequency tuning in central neurons (Woolley and Portfors, 2013). Determining the neurotransmitter used by each auditory WED/VLP cell class and confirming the excitatory vs. inhibitory nature of each connection with functional connectivity will be necessary for determining the role inhibition may play in cross-song-mode circuit interactions.
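The sharpening effect of convergent excitation and inhibition can be illustrated with a toy calculation. The response amplitudes below are hypothetical and chosen purely for illustration; they are not data from this study:

```python
import numpy as np

# Hypothetical response amplitudes to [pulse, sine] stimuli (illustrative only).
excitatory = np.array([1.2, 1.0])   # weakly pulse-preferring excitatory input
inhibitory = np.array([0.2, 0.9])   # sine-preferring inhibitory input

# A downstream neuron sums excitation minus inhibition, rectified at zero.
downstream = np.maximum(excitatory - inhibitory, 0.0)

def preference_index(r):
    """(pulse - sine) / (pulse + sine); +1 means pure pulse preference."""
    return (r[0] - r[1]) / (r[0] + r[1])

print(preference_index(excitatory))  # ~0.09: weak pulse preference
print(preference_index(downstream))  # ~0.82: sharpened by sine-tuned inhibition
```

The downstream preference index is far higher than that of the excitatory input alone, even though no individual input is strongly pulse-selective.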
Despite the interconnectivity of pulse- and sine-preferring neurons, there is evidence of circuit specialization by song mode in higher-order neurons. For instance, most connections in the pathway starting with vpoEN occur among only pulse-preferring neurons (Fig. 9B). This pathway also includes pC2l, which serves as a pulse feature detector: it responds to the species-typical range of multiple pulse song features, from the frequency of individual pulses to the durations of pulse trains, but only very weakly to sine song (Deutsch et al., 2019). These findings suggest that at the interface with behavior (convergence onto descending command neurons) there is specialization for one song mode. This is aligned with behavioral data showing different responses to sine versus pulse song in Drosophila (Deutsch et al., 2019). However, as yet, no sine-song feature detectors have been identified. A neuron would be considered a sine-song feature detector if it responded strongly and selectively to multiple species-typical features of sine song, such as its frequency, envelope dynamics, and duration, but did not respond to pulse song. Whether any of the sine-preferring WED/VLP neurons (Fig. 9) constitute sine song feature detectors will require more detailed tuning and behavioral characterization. In addition, sines and pulses are ultimately combined into song bouts, which have the strongest effect on female locomotor behaviors (Clemens et al., 2015). How the pathways characterized here analyze longer-timescale song features, in addition to handling extensive across-bout variability in song (Coen et al., 2014), remains to be determined. However, this study lays the groundwork for exploring fundamental questions of coding of acoustic communication signals along the auditory circuits of the Drosophila brain.
AUTHOR CONTRIBUTIONS
CAB, CM, and MM designed the study; CM, CAB and AN screened for split-GAL4 lines in collaboration with BJD; CAB collected and analyzed neural recording data; CAB conducted connectome analyses, with assistance from SD; CAB and MM wrote the manuscript.
FUNDING
We acknowledge funding from the Jane Coffin Childs Foundation (to CAB), NIH NINDS DP2 New Innovator Award and HHMI Faculty Scholar Award (to MM), NIH BRAIN Initiative R01 MH117815-01 (to CM, SD, and MM), and the Howard Hughes Medical Institute (to CM, AN, and BJD).
METHODS
Fly stocks
The split-GAL4 system (Luan et al., 2006; Pfeiffer et al., 2010) was used to express the activation domain (AD) and the DNA-binding domain (DBD) of GAL4 under the separate control of two genomic enhancers, to obtain the intersection of their expression patterns, with the goal of obtaining a sparser pattern. Split-GAL4 stocks were gifts of Gerald Rubin and Barry Dickson (Pfeiffer et al., 2010; Tirian and Dickson, 2017). 20xUAS-GCaMP6s, td-Tomato/CyO was generated by Diego Pacheco (Pacheco et al., 2019). The genotype of imaged flies was 20xUAS-GCaMP6s, td-Tomato/GAL4-AD; GAL4-DBD/+, with the “+” originating from NM91 stocks. For expression pattern staining, split-GAL4s were combined with 20xUAS-CsChrimson-mVenus trafficked in attP18, except that 6 lines (17963, 21914, 23281, 23627, 28822, 29146) were shared from a different project that instead used pJFRC51-3XUAS-IVS-syt::smHA in su(Hw)attP1, pJFRC225-5XUAS-IVS-myr::smFLAG in VK00005, and one (27932) used pJFRC200-10XUAS-IVS-myr::smGFP-HA in attP18, pJFRC216-13XLexAop2-IVS-myr::smGFP-V5 in su(Hw)attP8. For multicolor flip-out (MCFO) staining, all were crossed to MCFO-1 [pBPhsFlp2::PEST (attP3);; pJFRC201-10XUAS-FRT>STOP>FRT-myr::smGFP-HA (VK0005), pJFRC240-10XUAS-FRT>STOP>FRT-myr::smGFP-V5-THS-10XUAS-FRT>STOP>FRT-myr::smGFP-FLAG (su(Hw)attP1)]. Flies were kept at 25°C with a 12h:12h light:dark cycle.
Split-GAL4 creation
GAL4 images from the Rubin and Dickson collections (Jenett et al., 2012; Pfeiffer et al., 2010; Tirian and Dickson, 2017) were visually screened for lines targeting neurons in the WED and AVLP. For each cell type, a color depth MIP mask search (Otsuna et al., 2018) was conducted to find other GAL4 lines with expression in similar cells. Split-GAL4 hemidrivers for these lines were crossed in various combinations to find a combination that targeted the cell type of interest but with sparse expression elsewhere, determined by expression and staining of mVenus in one female fly of genotype 20xUAS-CsChrimson::mVenus (attP18)/w; Enhancer-p65ADZp (attP40)/+; Enhancer-ZpGAL4DBD (attP2)/+. 65 combinations were chosen by this method, representing over 50 cell types. In some cases, screening lines that targeted neurons with similar gross morphology revealed one line with a commissure (see 16374 and 27885; 41728 and 41730 in Supp. Fig. 1). For these examples, we characterized both lines due to the difficulty in determining whether they were truly similar cell types. Split-GAL4 combinations were then double balanced and combined in the same fly strain to make a stable split-GAL4, for which expression pattern staining was carried out in an additional female to confirm the expression seen in the initial screen. For multiple split-GAL4 lines targeting morphologically similar neurons, one (or sometimes a few) were selected according to the following criteria: least off-target expression in the central brain, largest number of neurons with expression, and/or most reliable calcium responses to acoustic stimuli across flies. Expression was further confirmed in multiple flies by multicolor flip-out staining (see below).
CNS immunohistochemistry & imaging
Brains and ventral nervous systems were dissected and stained using published methods (Nern et al., 2015). Antibodies used were rabbit anti-GFP (1:500, Invitrogen, #A11122), mouse anti-Bruchpilot (1:50, Developmental Studies Hybridoma Bank, University of Iowa, mAb nc82), Alexa Fluor 488-goat anti-rabbit (1:500, ThermoFisher A11034), and Alexa Fluor 568-goat anti-mouse (1:500, ThermoFisher A11031). Serial optical sections were obtained at 1μm intervals on a Zeiss 700 confocal with a Plan-Apochromat 20x/0.8NA objective.
Stochastic labeling
For multicolor flip-out (MCFO) stochastic labeling (Nern et al., 2015), approximately 8 females per split-GAL4 line received a 15 min heat shock at 37°C at 1-3 days old, and were dissected at 6-8 days. Staining was performed as described in (Nern et al., 2015), with the following antibodies: rat anti-FLAG (Novus Biologicals, LLC, Littleton, CO, #NBP1-06712), rabbit anti-HA (Cell Signaling Technology, Danvers, MA, #3724S), mouse anti-V5 (AbD Serotec, Kidlington, England, #MCA1360), AlexaFluor-488 (1:500; goat anti-rabbit, goat anti-chicken, goat anti-mouse; Thermo Fisher Scientific), AlexaFluor-568 (1:500; goat anti-mouse, goat anti-rat; Thermo Fisher Scientific), AlexaFluor-633 (1:500; goat anti-rat; Thermo Fisher Scientific).
Image processing
Images were adjusted for gain and contrast without obscuring data. Images were processed in ImageJ (https://imagej.nih.gov/ij/) and Photoshop (Adobe Systems Inc.). Where noted, neurons were rendered and segmented from confocal stacks with VVDviewer software (https://github.com/takashi310/VVD_Viewer) (Wan et al., 2009, 2012) to visualize them in isolation. For this rendering and for computational alignment of brain images used where noted, brain images were registered using the Computational Morphometry Toolkit (Rohlfing, 2011) to a standard brain template (“JFRC2014”) that was mounted and imaged with the same conditions.
Neuropil innervation
We registered the neuropil (Ito et al., 2014) and optic glomeruli (Panser et al., 2016) maps to the unisex JRC2018 template by first bridging from IBNWB to JFRC2 (Bates et al., 2020), and then JFRC2 to unisex JRC2018 (Rohlfing and Maurer, 2003). To determine which neuropils each neuron type targeted, we used the segmented neuron stacks with manually removed somata. We then calculated the percentage of the neuron’s voxels in each neuropil and optic glomerulus. In Fig. 7 and Supp. Fig. 3, and 6 we report all neuropils containing at least 1% of each segmented neuron’s total volume.
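The voxel-percentage computation can be sketched as follows. This is a minimal Python illustration of the described procedure (the actual analysis used registered confocal stacks and the neuropil/glomerulus maps cited above, and was not necessarily implemented this way):

```python
import numpy as np

def neuropil_innervation(neuron_mask, neuropil_labels, label_names, min_frac=0.01):
    """Return neuropils containing at least `min_frac` of a neuron's voxels.

    neuron_mask     : boolean 3D array, True where the segmented neuron is
                      present (somata assumed already removed, as in the text).
    neuropil_labels : integer 3D array of the same shape; 0 = outside neuropils.
    label_names     : dict mapping label integers to neuropil names.
    """
    total = neuron_mask.sum()
    labels_in_neuron = neuropil_labels[neuron_mask]
    result = {}
    for label, name in label_names.items():
        frac = float((labels_in_neuron == label).sum() / total)
        if frac >= min_frac:
            result[name] = frac
    return result

# Toy example: a 4x4x4 volume split between two "neuropils".
labels = np.zeros((4, 4, 4), dtype=int)
labels[:2] = 1  # "WED"
labels[2:] = 2  # "AVLP"
neuron = np.zeros_like(labels, dtype=bool)
neuron[1:3] = True  # neuron spans both regions equally
print(neuropil_innervation(neuron, labels, {1: "WED", 2: "AVLP"}))
# → {'WED': 0.5, 'AVLP': 0.5}
```

The 1% cutoff matches the reporting threshold used for Fig. 7 and Supp. Figs. 3 and 6.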
For naming auditory neurons not previously described as auditory, we used the neuropil in which the segmented neuron had the greatest percentage of expression. If a neuron had nearly equal (<1% difference) expression in two neuropils, we used both neuropil names. A two-digit number was added to the neuropil name in sequential order based on the split-GAL4 line number (Table 2). To disambiguate our names from those of the hemibrain project (Xu et al., 2020) (as our neuron types/cell classes can likely be further segregated into subtypes based on both morphological and connectivity differences within a line), we added a ‘pr’ before each number for Princeton.
EM reconstructions
We identified individual neurons in the FAFB volume using FlyWire.ai (Dorkenwald et al., 2020). Neurons were proofread using the editing tools in FlyWire to add missing pieces of arbor and remove incorrect pieces, focusing on the main backbone of a neuron rather than attempting to add very small missing twigs. Sister cells from the same cell type were used to verify that a cell’s overall morphology appeared generally correct. WED/VLP reconstructions were proofread by 12 proofreaders, consisting of both scientists and expert tracers from the Seung and Murthy labs. To find all candidates belonging to a given cell type, we found an EM cross-section of the primary neurite tract and investigated every cell in the cross-section. We compared candidate reconstructions with LM images of our auditory lines (Supp. Fig. 1–2). We were conservative about which reconstructions we included for each cell type; if a reconstruction did not match neurons present in MCFO images (Supp. Fig. 2), we excluded it. Since a limited number of MCFO images were available for each line, we may have missed some EM reconstructions belonging to each line. Two auditory cell types (AVLP_pr18 and AVLP_pr24) did not have enough cell-type resolution in MCFO to enable individual EM reconstruction identification and were excluded from the synaptic connectivity mapping. EM reconstructions for pC1d/e, pMN1/DNp13, and pMN2/vpoDN came from a previous study (Deutsch et al., 2020). We identified vpoEN by inspecting the inputs to pMN2/vpoDN neurons (Wang et al., 2020b). pC2l reconstructions were identified by D. Deutsch (personal communication).
Synaptic connectivity among auditory neurons
We used FAFB reconstructions corresponding to previously known auditory neurons (Dorkenwald et al., 2020), higher-order auditory neurons (Deutsch et al., 2020; Wang et al., 2020b), and all auditory WED/VLP neurons for which we found EM reconstructions. We then found all automatically detected synapses between every pair of individual neurons (Buhmann et al., 2019). We ignored synapses from one neuron onto itself and between individual neurons of the same cell type, due to the high rate of false positives. For the remaining indicated synaptic connections, we set a threshold of 15 synapses for a likely true connection between any pair of neurons. In the connectivity diagram in Fig. 9, we show connections in which at least 3% of all possible pairs of individual neurons between the two cell types crossed the 15-synapse threshold. This criterion was based on the findings of Dorkenwald et al. (2020), who spot-checked synapses between cell types to confirm which connections between early auditory cell types were due to true positives. Applying our criteria to early auditory neurons yields the same connectivity among the neurons in Dorkenwald et al. (2020), with one exception. Dorkenwald et al. found that one automatically indicated connection between cell types was due to a high number of inverted synapses (i.e., the automatically detected presynaptic partner was determined to be postsynaptic by a human observer). Our connectivity analysis method is unable to confirm synapse directionality, and it is not yet known how often this type of false positive occurs. Five WED/VLP cells (AVLP_pr05, AVLP_pr12, AVLP_pr22, IPS/WED_pr01, AVLP_pr35) did not have connections meeting these criteria with any other cell type in our dataset.
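The two-stage thresholding described above (at least 15 synapses per neuron pair, then at least 3% of possible pairs between two cell types) can be sketched as follows. This is an illustrative reimplementation, not the code used in the study:

```python
from collections import defaultdict

def cell_type_connections(pair_synapse_counts, cell_type_of,
                          min_synapses=15, min_pair_frac=0.03):
    """Apply the two-stage threshold described in the text.

    pair_synapse_counts : dict (pre_id, post_id) -> synapse count for every
                          automatically detected neuron pair.
    cell_type_of        : dict neuron id -> cell type name.
    Self-connections and within-cell-type connections are ignored, as in the
    text. Returns {(pre_type, post_type): fraction of pairs over threshold}.
    """
    # Count neurons per cell type to get the number of possible pairs.
    type_members = defaultdict(set)
    for nid, ctype in cell_type_of.items():
        type_members[ctype].add(nid)

    strong = defaultdict(int)  # (pre_type, post_type) -> pairs over threshold
    for (pre, post), n_syn in pair_synapse_counts.items():
        if pre == post:
            continue  # synapses from a neuron onto itself
        pre_t, post_t = cell_type_of[pre], cell_type_of[post]
        if pre_t == post_t:
            continue  # within-cell-type connections
        if n_syn >= min_synapses:
            strong[(pre_t, post_t)] += 1

    edges = {}
    for (pre_t, post_t), n_strong in strong.items():
        n_pairs = len(type_members[pre_t]) * len(type_members[post_t])
        frac = n_strong / n_pairs
        if frac >= min_pair_frac:
            edges[(pre_t, post_t)] = frac
    return edges
```

For example, with two type-A neurons and one type-B neuron, a single strong A-to-B pair yields an edge with a pair fraction of 0.5, which passes the 3% criterion.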
Calcium imaging
For calcium imaging experiments, we used 3-11 day-old virgin female flies reared at 25°C and housed in groups of 1-8 flies/vial after eclosion. Calcium imaging experiments were performed during the fly’s light cycle. Flies were mounted and dissected as described in (Murthy and Turner, 2013) with the following modifications. Before mounting the fly, we removed the wings and legs with forceps. We removed fat covering the brain with forceps or with mouth suction through a sharp glass pipette. We ensured the aristae were intact and free by gently blowing on the fly before and after each experiment. We also looked for abdominal contractions in response to gentle blowing to indicate the fly was alive and healthy before and after each experiment. We monitored temperature and humidity with a thermometer and hygrometer (Traceable 15-077-963, Webster, TX) placed on the air table with the two-photon microscope. Temperature and humidity were stable within an experiment, with fluctuations of <1°C and <5% relative humidity across 8 hours of imaging (~1 fly/hr).
Imaging was performed as previously described (Deutsch et al., 2019). Briefly, we used a custom-built two-photon laser scanning microscope controlled in MATLAB by ScanImage 5.1 (Vidrio). We imaged single planes at 8.5 Hz (256 x 256 pixels). Pixel size was 0.75 μm x 0.75 μm. After dissection, a fly was placed under the microscope with continuous saline perfusion delivered to the meniscus. Sound was delivered through an earbud speaker (Koss, 16 Ohm impedance; sensitivity, 112 dB SPL/1 mW), which was attached to a long thin tube (12cm, diameter: 1mm) placed ~2mm from the fly’s head (directed toward the aristae) and controlled by custom software in MATLAB (Clemens et al., 2018b). Sound intensity was calibrated by measuring the sound particle velocity component for a range of frequencies (100-800 Hz). The detailed procedures and cross-calibration between the pressure and the pressure gradient microphone were described in (Göpfert et al., 2006). To estimate the sound amplitude of each stimulus we placed the calibrated gradient microphone at the same position as the fly (2-3 mm from sound tube outlet) in separate experiments. The recorded voltage was then converted to particle velocity (with units mm/s). The output signal was corrected according to the measured intensities to ensure equivalent intensity across frequencies, as previously described (Tootoonian et al., 2012).
ROI selection
To decide which region of interest (ROI) to record from in each neuron, we first sampled from multiple ROIs, depending on the morphology of the neuron and on the level of baseline GCaMP6s expression (i.e., only ROIs with some baseline level of GCaMP expression were visible under the two-photon microscope). In some lines, multiple ROIs were visible and responded relatively robustly. In those cases, we narrowed down which ROI to focus on based on response strength and the ability to locate roughly the same ROI across flies. In other lines, baseline GCaMP expression was low, so we imaged from the ROI that was visible at moderate laser power, with the goal of choosing a roughly similar ROI in each fly. We sampled from both the left and the right hemisphere within each line, but from different flies. In all of the lines, responses occurred in both hemispheres, suggesting either that we stimulated both aristae or that ROIs at the level of the WED/VLP respond bilaterally.
Stimulus generation and delivery
Sound stimuli were generated at a sampling frequency of 10 kHz using custom MATLAB software and previously published techniques (Deutsch et al., 2019). To characterize a given neuron type as auditory or non-auditory, we used a stimulus set consisting of pulse song, sine song, and white noise, each with 10 sec duration and 10 sec pre- and post-stimulus silent periods. The intensity of the pulse and sine stimuli was 5 mm/sec and the intensity of the white noise stimulus was 2 mm/sec. To further characterize auditory tuning, we used a stimulus set consisting of 5 pure tones at a range of frequencies (100, 200, 300, 500, and 800 Hz) and 5 pulse stimuli with a range of interpulse intervals (16, 36, 56, 79, and 96 ms), all at an intensity of 4 mm/sec. Each of these stimuli was 4 sec in duration, with 5 sec pre-stimulus and 10 sec post-stimulus silent periods. Within a stimulus set, a block was defined as one presentation of each stimulus. Stimulus order was randomized within each block, and 3-7 blocks were presented to each fly.
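A minimal Python sketch of stimulus synthesis at the stated 10 kHz sampling rate follows. The pulse carrier frequency, pulse width, and Hanning envelope here are illustrative assumptions, not the published stimulus parameters (which came from Deutsch et al., 2019, in MATLAB):

```python
import numpy as np

FS = 10_000  # sampling rate (Hz), as stated in the text

def pure_tone(freq_hz, dur_s, amplitude=1.0):
    """Sinusoidal stimulus at a single carrier frequency."""
    t = np.arange(int(dur_s * FS)) / FS
    return amplitude * np.sin(2 * np.pi * freq_hz * t)

def pulse_train(ipi_ms, dur_s, carrier_hz=250, pulse_ms=10, amplitude=1.0):
    """Train of brief carrier pulses repeated at a given inter-pulse interval.

    Pulse shape details (carrier, width, envelope) are assumptions made for
    this illustration.
    """
    n = int(dur_s * FS)
    out = np.zeros(n)
    pulse_len = int(pulse_ms / 1000 * FS)
    t = np.arange(pulse_len) / FS
    pulse = np.hanning(pulse_len) * np.sin(2 * np.pi * carrier_hz * t)
    step = int(ipi_ms / 1000 * FS)  # onset-to-onset interval in samples
    for start in range(0, n - pulse_len, step):
        out[start:start + pulse_len] = pulse
    return amplitude * out

tone = pure_tone(300, 4.0)     # one of the tested frequencies
pulses = pulse_train(36, 4.0)  # one of the tested IPIs
```

In an experiment, the amplitude would then be scaled so that the measured particle velocity matches the target intensity (e.g., 4 mm/sec), following the calibration procedure described above.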
Analysis
Data were analyzed using custom software in MATLAB. ROIs were drawn manually based on a z-projection of the td-Tomato signal. We calculated the mean fluorescence within each ROI frame-by-frame. If any stimulus block contained drastic, brief (1-3 sec) increases in fluorescence, indicative of wax or other particles moving into the imaged region, we discarded the entire block. We used only flies in which 3 or more blocks were available for analysis. We corrected for gradual, modest changes in fluorescence during each recording by de-trending the data. To detrend, we concatenated fluorescence traces over an entire recording, applied a running percentile filter (20th percentile, 50 sec window, 5 sec shift), and subtracted this long-term trend from the recording. For each stimulus, we calculated dF/F as (F(t) − F0)/F0, where F0 was defined as the mean fluorescence during the 10 sec pre-stimulus silent period.
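The detrending and dF/F steps can be sketched in Python (the original analysis used custom MATLAB code; the interpolation of the running-percentile baseline to every frame, and whether F0 is computed from raw or detrended fluorescence, are assumptions here):

```python
import numpy as np

def detrend(trace, fs=8.5, pct=20, win_s=50.0, step_s=5.0):
    """Running-percentile detrend: the 20th percentile in a 50 s window,
    shifted every 5 s, is interpolated to every frame and subtracted."""
    win, step = int(win_s * fs), int(step_s * fs)
    centers, baseline = [], []
    for start in range(0, len(trace) - win + 1, step):
        centers.append(start + win // 2)
        baseline.append(np.percentile(trace[start:start + win], pct))
    trend = np.interp(np.arange(len(trace)), centers, baseline)
    return trace - trend

def dff(trace, fs=8.5, pre_s=10.0):
    """dF/F = (F(t) - F0)/F0, with F0 the mean over the pre-stimulus silence."""
    f0 = trace[: int(pre_s * fs)].mean()
    return (trace - f0) / f0
```

The 8.5 Hz frame rate matches the imaging parameters above; applied to a trace with slow linear drift, `detrend` removes the drift while leaving fast stimulus-locked transients intact.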
To quantify across-fly variability within a split-GAL4 line, we measured Euclidean distances between responses to pulse, sine, and white noise between every possible pair of flies imaged from a given line. For each fly, we calculated the mean calcium trace elicited by pulse, sine, and white noise stimuli, and then concatenated the mean traces in that order before measuring Euclidean distances.
Stimulus modulation of calcium signals
To determine whether the fluorescence within a given ROI was modulated by at least one of the acoustic stimuli, we followed the method previously described in (Pacheco et al., 2019). Briefly, we modeled the GCaMP fluorescence trace as a convolution of the stimulus history and a set of filters, one per stimulus (i.e., 3 filters for the screen stimuli and 10 filters for the frequency/IPI tuning stimuli), with the filter duration matching the duration of the stimulus. To estimate each filter, we used 80% of the stimulus repetitions as training data and the remaining 20% of the repetitions as test data. Filters were estimated using ridge regression (Park and Pillow, 2011). We convolved the estimated filters with the stimulus history to generate the predicted signal for each ROI. All possible combinations of training and test data repetitions were used, which gave a total of 3-15 predicted signals (3-6 total repetitions, respectively) per ROI. We then measured the Pearson correlation coefficient between the raw (i.e., test repetitions) and the predicted signals. We used bootstrapping to determine the statistical significance of the resulting correlation coefficients. We randomly shuffled each test fluorescence trace in 10 sec bins, and then calculated the Pearson correlation coefficient between each of 10,000 independently shuffled test signals and the predicted signal. P-values for the correlation coefficients were defined as the fraction of shuffled correlation coefficients > the 30th percentile of the estimated correlation coefficients. P-values were corrected for multiple comparisons with the Benjamini-Hochberg procedure, with a false discovery rate of 0.05.
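A simplified Python sketch of the filter-estimation and shuffle-test logic follows. The impulse-train stimulus encoding and closed-form ridge solution are assumptions, and the p-value here compares the null distribution against a single observed correlation, a simplification of the percentile-based definition above:

```python
import numpy as np

def design_matrix(stimulus_onsets, filter_len, n_frames):
    """Toeplitz-style design matrix: column j holds the stimulus indicator
    delayed by j frames, so X @ w convolves the stimulus with filter w."""
    stim = np.zeros(n_frames)
    stim[stimulus_onsets] = 1.0
    X = np.zeros((n_frames, filter_len))
    for j in range(filter_len):
        X[j:, j] = stim[: n_frames - j]
    return X

def ridge_fit(X, y, lam=1.0):
    """Closed-form ridge regression: w = (X'X + lam*I)^-1 X'y."""
    k = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(k), X.T @ y)

def shuffle_p_value(y_test, y_pred, fs=8.5, bin_s=10.0, n_shuffles=1000, rng=None):
    """Shuffle the test trace in 10 s bins; p = fraction of shuffled
    correlations at least as large as the observed correlation."""
    rng = np.random.default_rng(rng)
    r_obs = np.corrcoef(y_test, y_pred)[0, 1]
    bin_len = int(bin_s * fs)
    n_bins = len(y_test) // bin_len
    blocks = y_test[: n_bins * bin_len].reshape(n_bins, bin_len)
    r_null = np.empty(n_shuffles)
    for i in range(n_shuffles):
        shuffled = blocks[rng.permutation(n_bins)].ravel()
        r_null[i] = np.corrcoef(shuffled, y_pred[: len(shuffled)])[0, 1]
    return float(np.mean(r_null >= r_obs))
```

On simulated data (a known filter convolved with a sparse stimulus plus noise), the ridge fit recovers the filter shape and the shuffle test yields a small p-value for the matching prediction.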
Analysis of auditory responses
We measured the strength of calcium responses to auditory stimuli only in ROIs which we determined to be statistically modulated by the stimuli. In these cases, we calculated the integral of the dF/F signal starting with stimulus onset and ending 4 sec after stimulus offset, and then divided by the total time of the integral window.
We used the integrals elicited by pulse and sine during the screen stimuli to assess each fly’s song mode preference. If the integrals elicited by pulse and sine stimuli were both negative, we classified the response as inhibition-dominated. If the absolute value of the ratio of sine/pulse integrals was between 0.85 and 1.15, we classified the response as unselective. We classified the remaining responses as pulse- or sine-preferring based on the larger integral. To classify the preference of a given split-GAL4 line, we required 75% of the imaged flies to have the same preference class. Otherwise, the preference for a given line was classified as mixed.
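The classification rules above can be written directly as the following sketch (the handling of a zero pulse integral is left out, as the text does not specify it):

```python
from collections import Counter

def classify_preference(pulse_integral, sine_integral):
    """Classify one fly's response following the rules in the text."""
    if pulse_integral < 0 and sine_integral < 0:
        return "inhibition-dominated"
    ratio = abs(sine_integral / pulse_integral)
    if 0.85 <= ratio <= 1.15:
        return "unselective"
    return "pulse" if pulse_integral > sine_integral else "sine"

def classify_line(fly_classes, min_frac=0.75):
    """A line gets a preference only if >= 75% of its flies share one class;
    otherwise the line is classified as mixed."""
    cls, n = Counter(fly_classes).most_common(1)[0]
    return cls if n / len(fly_classes) >= min_frac else "mixed"
```

For example, a line with three pulse-preferring flies and one sine-preferring fly is classified as pulse-preferring (3/4 = 75%), while a 2:2 split is classified as mixed.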
Hierarchical clustering of auditory responses
We clustered responses to pulse, sine, and white noise stimuli according to previously published methods (Pacheco et al., 2019). Briefly, we first calculated the median dF/F trace from each fly across stimulus repetitions, and concatenated the traces starting with pulse and ending with white noise (including pre- and post-stimulus periods). We then z-scored this trace for each fly. Next we hierarchically clustered these traces based on Euclidean distances, using the inner squared distance between clusters (Ward’s method). To determine the number of clusters, we set a lower limit of three responses per cluster, which resulted in 13 total clusters across all our responses.
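Using SciPy, the clustering step might look like the sketch below (the cluster count is passed in directly here, whereas the study chose it by requiring at least three responses per cluster; the original implementation was in MATLAB):

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.stats import zscore

def cluster_responses(traces, n_clusters):
    """Cluster concatenated median dF/F traces (one row per fly; pulse, then
    sine, then white noise). Each row is z-scored, then clustered with Ward's
    method on Euclidean distances. Returns integer cluster labels (1..k)."""
    z = zscore(traces, axis=1)
    link = linkage(z, method="ward", metric="euclidean")
    return fcluster(link, t=n_clusters, criterion="maxclust")
```

On synthetic traces with two distinct response shapes, this cleanly separates the two groups.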
Tuning curve generation and classification
To generate frequency and interpulse interval tuning curves, we measured the integrals in response to each stimulus. Next we classified frequency and interpulse interval tuning curve types based on previously published methods (Baker and Carlson, 2014; Groh et al., 2003). Briefly, if the minimum tuning curve value was >85% of the maximum tuning curve value, we classified the tuning as all-pass (i.e., untuned to the tested stimulus parameter). For all other tuning curves, we fit both a sigmoid and a Gaussian function. If the r2 of both the sigmoid and Gaussian fits were <0.5, we classified the tuning as complex. Since high- and low-pass tuning curves can be well fit by both a Gaussian and a sigmoidal curve, we used the r2sigmoid/r2Gaussian ratio. If r2sigmoid/r2Gaussian was <0.85, we classified the tuning curve as band-pass if the Gaussian amplitude was positive and band-stop if the Gaussian amplitude was negative. For frequency tuning curves, if r2sigmoid/r2Gaussian was ≥ 0.85, we classified the tuning as high-pass if the ratio of the sigmoid slope to the sigmoid amplitude was positive, and low-pass if this ratio was negative. For IPI tuning curves, if r2sigmoid/r2Gaussian was ≥ 0.85, we classified the tuning as long-pass if the ratio of the sigmoid slope to the sigmoid amplitude was positive, and short-pass if this ratio was negative.
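A Python sketch of the curve-classification logic follows. The particular function forms and initial parameter guesses are assumptions, and the ratio test is rearranged into a multiplication to avoid dividing by a near-zero r2:

```python
import numpy as np
from scipy.optimize import curve_fit

def sigmoid(x, a, b, x0, k):
    return a / (1.0 + np.exp(-k * (x - x0))) + b

def gaussian(x, a, mu, sigma, b):
    return a * np.exp(-((x - mu) ** 2) / (2.0 * sigma ** 2)) + b

def r_squared(y, y_fit):
    return 1.0 - np.sum((y - y_fit) ** 2) / np.sum((y - y.mean()) ** 2)

def classify_tuning(x, y, high_label="high-pass", low_label="low-pass"):
    """Classify one tuning curve following the rules in the text.
    For IPI curves, pass high_label="long-pass", low_label="short-pass".
    Assumes positive-valued responses for the all-pass check."""
    if y.min() > 0.85 * y.max():
        return "all-pass"
    r2_sig = r2_gau = -np.inf
    ps = pg = None
    try:
        ps, _ = curve_fit(sigmoid, x, y,
                          p0=[np.ptp(y), y.min(), x.mean(), 0.01], maxfev=10000)
        r2_sig = r_squared(y, sigmoid(x, *ps))
    except RuntimeError:
        pass  # sigmoid fit failed to converge
    try:
        pg, _ = curve_fit(gaussian, x, y,
                          p0=[np.ptp(y), x[np.argmax(y)], np.ptp(x) / 4.0, y.min()],
                          maxfev=10000)
        r2_gau = r_squared(y, gaussian(x, *pg))
    except RuntimeError:
        pass  # Gaussian fit failed to converge
    if max(r2_sig, r2_gau) < 0.5:
        return "complex"
    # Rearranged form of the r2_sigmoid / r2_Gaussian < 0.85 rule.
    if r2_sig < 0.85 * r2_gau:
        return "band-pass" if pg[0] > 0 else "band-stop"
    return high_label if ps[3] / ps[0] > 0 else low_label
```

For a flat curve this returns all-pass without any fitting; for a hump centered at 300 Hz, the Gaussian fit is nearly exact while no monotone sigmoid can match it, so the curve is classified as band-pass.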
ACKNOWLEDGEMENTS
We thank Xiao-Juan Guan for help with crosses during split-GAL4 generation; Diego Pacheco for sharing analysis code and neuropil registrations; David Deutsch for help with calcium imaging pre-analysis and for identifying pC2l neurons in FlyWire; Sebastian Seung for assistance with FlyWire infrastructure; Nat Tabris for help with connectivity analysis; Lucas Encarnacion-Rivera, Ben Silverman, Jay Gager, Merlin Moore, Selden Koolman, James Hebditch, Sarah Morejohn, Kyle Willie, Austin Burke, and Celia David for proofreading FlyWire neurons; and the Janelia Fly Core, FlyLight, and Scientific Computing groups. A.N. thanks Gerry Rubin, in whose lab he performed this work, for his support and encouragement.