Systematic Integration of Structural and Functional Data into Multi-Scale Models of Mouse Primary Visual Cortex

Yazan N. Billeh, Binghuang Cai, Sergey L. Gratiy, Kael Dai, Ramakrishnan Iyer, Nathan W. Gouwens, Reza Abbasi-Asl, Xiaoxuan Jia, Joshua H. Siegle, Shawn R. Olsen, Christof Koch, Stefan Mihalas, Anton Arkhipov
doi: https://doi.org/10.1101/662189
All authors: Allen Institute for Brain Science, Seattle, WA
For correspondence: yazanb@alleninstitute.org, antona@alleninstitute.org

Abstract

Structural rules underlying functional properties of cortical circuits are poorly understood. To explore these rules systematically, we integrated information from extensive literature curation and large-scale experimental surveys into data-driven, biologically realistic models of the mouse primary visual cortex. The models were constructed at two levels of granularity, using either biophysically detailed or point-neuron models, with identical network connectivity. Both models were compared to each other and to experimental recordings of neural activity during presentation of visual stimuli to awake mice. Three specific predictions emerge from model construction and simulations: about connectivity between excitatory and parvalbumin-negative inhibitory neurons, functional specialization of connections between excitatory neurons, and the impact of the cortical retinotopic map on structure-function relationships. Finally, despite representing neurons at vastly different levels of granularity, both models perform similarly at the level of firing rate distributions. All data and models are freely available as a resource for the community.

Introduction

Mechanisms connecting structural properties of cortical circuits to patterns of neural activity are poorly understood. Elucidation of such mechanisms requires systematic data collection, sophisticated analyses, and modeling efforts to “understand” this data. Such an understanding is always relative to a particular domain of interest – be it modeling the physics of highly excitable brain tissue composed of a myriad of heterogeneous neurons (Koch, 1999), mimicking the computations that lead to a particular set of firing rates (Yamins and DiCarlo, 2016), or diagnosing and ultimately curing psychiatric and neurological brain diseases. The first option – biologically realistic modeling – appears necessary to disentangle the extreme biological complexity of the cortex (Harris and Mrsic-Flogel, 2013; Harris and Shepherd, 2015; Amunts et al., 2016; Koch and Jones, 2016; Martin and Chun, 2016; Chevée and Brown, 2018).

Simulating cortical circuits has a long history (e.g. (Wehmeier et al., 1989; Zemel and Sejnowski, 1998; Troyer et al., 1998; Krukowski and Miller, 2001; Traub et al., 2005; Zhu, Shelley and Shapley, 2009; Potjans and Diesmann, 2014; Markram et al., 2015; Arkhipov et al., 2018; Joglekar et al., 2018; Schmidt et al., 2018; Antolík et al., 2019)), with models incrementally building upon their predecessors. The simulations described here are a further instance of this evolution toward digital simulacra that predict new experiments, are insightful, and ever more faithful to the vast complexity of cortical tissue, in particular its heterogeneous neuronal cell classes, connections, and in vivo activity.

We developed data-driven models of the mouse primary visual cortex (area V1) for in silico visual physiology studies with arbitrary visual stimuli (Fig. 1A). The models, constrained by experimental measurements and reproducing multiple observations from in-house Neuropixels high-density electrical recordings in vivo (Jun et al., 2017), share the same network graph of ~230,000 nodes, instantiated at two different levels of granularity: biophysically detailed compartmental models of 17 different cell types (Gouwens et al., 2018) and Generalized Leaky Integrate and Fire (GLIF) point-neuron models of these 17 types (Teeter et al., 2018). Otherwise, the two V1 models are identical at the connectivity level.

Figure 1:

Overview of the V1 models. A: The models consist of one excitatory class and three inhibitory classes (Pvalb, Sst, Htr3a) in each of four layers - L2/3, L4, L5 and L6; L1 has a single Htr3a inhibitory class. Visual stimuli are conveyed by thalamocortical projections (from the LGN; see Figs. 2 and 3). B: Image of the mouse posterior cortex (from the Allen Brain Explorer), illustrating V1 and higher visual cortical areas, and the region covered by the models (400 μm radius for the core; 845 μm radius with surrounding annulus). C: Visualization of the biophysically detailed network (1% of neurons shown). For each class, the total number of neurons is indicated and one exemplary dendritic morphology is displayed.

While building and testing the models, we derived three specific predictions. The first concerns like-to-like connectivity of excitatory and non-parvalbumin-expressing interneurons. The second involves the dependence of synaptic weights on the similarity of preferred direction of motion of two connected neurons (while connection probability is orientation dependent (Ko et al., 2011)), together with a bias for the strongest cortical inputs to arise from a stripe perpendicular to the preferred direction of motion of target neurons. The third is a consequence of the dependence of cortical magnification on elevation and azimuth (Schuett, Bonhoeffer and Hübener, 2002; Kalatsky and Stryker, 2003), which leads to an asymmetry between neurons preferring vertical vs. horizontal directions of motion. We predict that this is compensated for by corresponding asymmetric specializations of the circuit architecture.

Our models use the Brain Modeling ToolKit (BMTK, github.com/AllenInstitute/bmtk) (Gratiy et al., 2018), which facilitates building large-scale networks and running parallel simulations with NEURON (Hines and Carnevale, 1997) and NEST (Gewaltig and Diesmann, 2007). The architecture and outputs are saved using the standardized SONATA format (github.com/AllenInstitute/sonata, (Dai et al., 2019)). All models, code, and meta-data resources are publicly available via the Allen Institute for Brain Science’s web portal (brain-map.org/explore/models/mv1-all-layers). As an open public resource, these models will be useful for making direct predictions as well as complementing other experimental and modeling endeavors. The in vivo extracellular recordings used for comparison were obtained from a standardized pipeline and will be freely available in October with the next Allen Institute data release.

Results

Populating Diverse Cortical Cell Classes With Neuronal Models

Our biophysical and GLIF models of V1 use the same connectivity graph (i.e., each neuron of the ~230,000 nodes in one model has an exact counterpart in the other, with the same coordinates, presynaptic sources, and postsynaptic targets). The first step in building this network is to instantiate and distribute neurons in a data-driven manner. The models span an 845 μm radius of mouse V1 (Fig 1B). For the biophysically detailed model, the “core” (400 μm radius) is composed of spatially extended neurons, surrounded by an annulus of leaky-integrate-and-fire neurons to avoid boundary artifacts (as described previously, (Arkhipov et al., 2018)). In both the GLIF and biophysical models, the focus of our analysis is on the network within this central core.

Neuron models are fit to slice electrophysiology recordings and are publicly available from the Allen Cell Types Database (Gouwens, Berg, et al., 2018; Teeter et al., 2018, celltypes.brain-map.org). Although the most recent transcriptomic and electrophysiological/morphological in vitro surveys suggest ~50-100 neuronal classes in V1 (Tasic et al., 2018; Gouwens et al., 2019), the currently available neuronal models, connectivity data, and in vivo recordings offer lower cell class resolution. Thus, we adopted a coarser classification, which nevertheless still reflects a substantial diversity of neuronal classes (Fig. 1A, C).

Specifically, we draw inhibitory neurons in layer 1 (L1) from one inhibitory class of Htr3a neurons and in layers 2/3 to 6 from three classes of interneurons - Parvalbumin- (Pvalb), Somatostatin- (Sst), and serotonin receptor 3a- (Htr3a) positive cells (Lee et al., 2010; Tremblay, Lee and Rudy, 2016). We note that the commonly studied Vasoactive Intestinal Polypeptide (VIP) interneurons are a subclass of Htr3a in L2/3 to L6 (Tremblay, Lee and Rudy, 2016). Since VIP is the subclass most extensively characterized experimentally within the Htr3a class, we resorted to using VIP studies to constrain the Htr3a class in our models (see below).

Excitatory neurons in L2/3, L4, L5, and L6 are represented by one class per layer. For L4 and L5, more specific sub-classes are used to draw neuron models: for L4, neurons reconstructed from the Scnn1a, Nr5a1, and Rorb Cre-lines as well as unlabeled neurons, and for L5, the Rbp4 Cre-line and unlabeled neurons. Cells from these lines exhibit some differences in their morphologies (Gouwens et al., 2019), but it is not known whether they differ in connectivity. Furthermore, they do not appear to show substantially distinct patterns of activity in vivo (de Vries et al., 2018). Therefore, for all subsequent steps and analyses, we lump excitatory neurons into a single class per layer (E2/3, E4, E5, E6).

In total, the two V1 networks contain 17 cell classes, represented by 112 unique individual neuron models for the biophysical and 111 for the GLIF network, copied and distributed in layers according to the best data available (see Methods). Cell densities (including inhibitory subclasses) across layers are estimated from anatomical data (Schüz and Palm, 1989; Lee et al., 2010), with an 85%:15% fraction for excitatory and inhibitory neurons. The final network contains 230,924 cells, of which 51,978 are in the core (see Methods).

We determined synaptic connectivity using three design iterations. In the first, discussed immediately below, we determined the feed-forward geniculate input into each isolated V1 cell. In the second iteration, we introduced massive synaptic recurrency that depended on the difference in preferred orientation among pairs of V1 cells. In the third and final step, we refined the connectivity by adding dependencies on the phase and on the difference of the preferred direction of motion among connected cells.

Thalamocortical Input To The V1 Models

The Lateral Geniculate Nucleus (LGN) of the thalamus mediates retinal input to V1. We created an LGN module that generates action potentials for arbitrary visual stimuli, as described below.

Creating LGN Units

The LGN module is composed of spatio-temporally separable filter units (released publicly via the Brain Modeling ToolKit, github.com/AllenInstitute/bmtk) fitted to electrophysiology recordings from mouse LGN (Durand et al., 2016). In a substantial elaboration over our previous work (Arkhipov et al., 2018), we developed filters for four classes of experimentally observed functional responses (Piscopo et al., 2013; Durand et al., 2016): sustained ON, sustained OFF, transient OFF, and ON/OFF (the latter is related to the DS/OS class of (Piscopo et al., 2013)). These four filter groups are further subdivided according to their maximal response to drifting gratings of different temporal frequencies (TF) (Fig. 2A). We average the experimentally recorded responses for each class to create linear-nonlinear filter models that process any spatio-temporal input and compute a firing rate output (see Methods, example in Fig. 2B). These filters are distributed in visual space according to occurrence ratios of the LGN cell classes (Durand et al., 2016), translating any visual stimulus into firing rates. The firing rate output is converted, via a Poisson process, into spike times (see Methods).
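
To make the filter construction concrete, the sketch below illustrates the generic idea of a spatio-temporally separable linear-nonlinear unit followed by Poisson spike generation. The function names, kernel shapes, and the rectifying nonlinearity are illustrative assumptions, not the exact parameterization of the released LGN module (see Methods and the BMTK documentation for the actual implementation).

```python
import numpy as np

def lnl_filter_response(stimulus, spatial_filter, temporal_filter, dt,
                        transfer=lambda x: np.maximum(x, 0.0)):
    """Separable linear-nonlinear unit: project the stimulus movie onto a spatial
    kernel, convolve the result with a temporal kernel, and pass it through a
    static nonlinearity to obtain a firing rate (Hz)."""
    # stimulus: (n_frames, n_y, n_x); spatial_filter: (n_y, n_x); temporal_filter: (n_taps,)
    spatial_projection = np.tensordot(stimulus, spatial_filter, axes=([1, 2], [0, 1]))
    drive = np.convolve(spatial_projection, temporal_filter, mode="full")[:len(spatial_projection)]
    return transfer(drive)

def poisson_spikes(rate, dt, rng):
    """Convert a firing-rate trace (Hz) into spike times via an inhomogeneous
    Poisson process (at most one spike per time bin)."""
    return np.nonzero(rng.random(len(rate)) < rate * dt)[0] * dt

# Toy example: a 1-s random movie at 1 ms resolution, a Gaussian spatial kernel,
# and an exponentially decaying temporal kernel (all purely illustrative).
dt = 0.001
rng = np.random.default_rng(0)
stim = rng.standard_normal((1000, 10, 10))
yy, xx = np.meshgrid(np.arange(10), np.arange(10), indexing="ij")
space = np.exp(-((yy - 5.0) ** 2 + (xx - 5.0) ** 2) / 8.0)
time_kernel = np.exp(-np.arange(0.0, 0.2, dt) / 0.05)
spike_times = poisson_spikes(lnl_filter_response(stim, space, time_kernel, dt), dt, rng)
```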

Figure 2:

LGN filter models. A: LGN classes fit from electrophysiological recordings (Durand et al., 2016) using spatiotemporally separable filters. Every major class has sub-classes that respond maximally to a specific temporal frequency (TF). The numbers in parentheses indicate the rate of occurrence in our model. B: Example filter for the sON-TF8 class. Top: the spatial and temporal components of the filter. Bottom: plots of the F0 (cycle averaged mean rate response) and F1 components (modulation of the response at the input stimulus frequency) of the data and the model fit (see Methods) in response to drifting gratings (mean ± s.e.m). C: Schematic of thalamocortical architecture for a candidate pool of LGN cells projecting to a V1 cell with matching retinotopic positions. The putative LGN units are separated into sustained and transient subfields. D: Schematic illustrating the direction selectivity mechanism. When a bar moves from left to right (the preferred direction), the responses from the sustained and transient components overlap and exceed a threshold, while movement in the opposite, null, direction prevents overlapping responses.

Direction Selective Input Into V1 Cells

Direction selectivity is a prominent characteristic of V1 neurons (Niell and Stryker, 2008; Durand et al., 2016). How it is generated is a central question in the field. We seek here to recapitulate physiological levels of direction selectivity (see below). Although some direction-selective neurons are observed in the LGN (Marshel et al., 2012; Piscopo et al., 2013; Scholl et al., 2013; Zhao et al., 2013; Sun et al., 2016), recent work indicates that direction selectivity is produced de novo in V1 from convergence of spatio-temporally asymmetric LGN inputs (Lien and Scanziani, 2018). Based on this, we assume that LGN innervation into V1 neurons has two subfields, one with slow (sustained) and the other with fast (transient) kinetics (Fig. 2C). These produce an asymmetry in responses to opposite directions of motion (Fig. 2D). A simplified theoretical framework demonstrates (see Methods, Fig. S1) that sufficiently high orientation and direction selectivity indices (OSI and DSI) can be achieved with such input subfields (Lien and Scanziani, 2013, 2018). Interestingly, this analytic treatment predicts reversal of preferred direction as the spatial frequency of a grating increases, which we confirmed experimentally to be a ubiquitous phenomenon in the mouse visual system (Billeh et al., 2019). This mechanism is analogous to aliasing found in the fly (Hassenstein and Reichardt, 1956; Barlow and Levick, 1965; Van Santen and Sperling, 1984; Borst and Egelhaaf, 1989) and parallels the OFF pathway motion detection system in fly T5 neurons (Serbe et al., 2016; Arenz et al., 2017).
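
As a toy illustration of this mechanism (not the actual model implementation), the sketch below sums the responses of two spatially offset subfields with slow and fast exponential kinetics; only motion in the preferred direction produces overlapping responses that exceed a threshold. The time constants, travel delay, and threshold are arbitrary illustrative values.

```python
import numpy as np

def subfield_response(bar_arrival_time, tau, t):
    """Response of one subfield to a bar crossing it at bar_arrival_time: a causal
    exponential with decay constant tau (sustained = long tau, transient = short tau)."""
    r = np.exp(-(t - bar_arrival_time) / tau)
    r[t < bar_arrival_time] = 0.0
    return r

t = np.arange(0.0, 1.0, 0.001)
travel_delay = 0.15                      # time for the bar to move between subfield centers
tau_sustained, tau_transient = 0.25, 0.05

# Preferred direction: the bar reaches the sustained subfield first, so its slowly
# decaying response is still present when the transient subfield is activated.
preferred = subfield_response(0.0, tau_sustained, t) + subfield_response(travel_delay, tau_transient, t)
# Null direction: the transient response has already decayed before the sustained one starts.
null = subfield_response(0.0, tau_transient, t) + subfield_response(travel_delay, tau_sustained, t)

threshold = 1.2
print(preferred.max() > threshold, null.max() > threshold)   # True False
```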

Creating And Testing Thalamocortical Connectivity

We instantiated individual filters to represent the diverse LGN responses (Fig. 2A), placing 17,400 LGN units in visual space. LGN axons project to all layers of V1 (Kloc and Maffei, 2014; Morgenstern, Bourg and Petreanu, 2016), selectively innervating excitatory neurons and Pvalb interneurons in L2/3-L6, as well as non-Pvalb interneurons in L1 (Ji et al., 2015). We targeted LGN inputs to V1 neuron classes accordingly and then established connections to individual neurons using the following three-step procedure (see Methods).

The first step selects the LGN units projecting to a particular V1 neuron, leveraging the fact that spatiotemporally asymmetric architecture yields direction and orientation selectivity (Lien and Scanziani, 2013, 2018). For each V1 neuron, we determined the visual center, size, and directionality (a pre-assigned preferred angle of stimulus motion) of elliptical subfields from which LGN filters will be sampled, according to the neuron’s class and position in the cortical plane (Fig. 3A). We then identified LGN receptive fields (RFs, parameterized during filter construction) that overlap with these elliptical subfields of the V1 neuron. One subfield always samples from transient OFF LGN filters and the other from sustained ON or OFF (see Methods).
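
A minimal sketch of this first step is shown below: given the receptive-field centers of all LGN filters, it returns those that fall within an elliptical subfield oriented along the V1 neuron's pre-assigned preferred direction of motion. The ellipse parameters and coordinate values are illustrative assumptions; the class-specific values actually used are described in Methods.

```python
import numpy as np

def lgn_units_in_subfield(lgn_rf_centers, subfield_center, preferred_angle_deg,
                          semi_major, semi_minor):
    """Indices of LGN filters whose receptive-field centers (visual degrees) fall
    inside an ellipse centered at subfield_center and oriented along the target
    V1 neuron's preferred direction of motion."""
    theta = np.deg2rad(preferred_angle_deg)
    d = np.asarray(lgn_rf_centers) - np.asarray(subfield_center)
    u = d[:, 0] * np.cos(theta) + d[:, 1] * np.sin(theta)     # along the preferred direction
    v = -d[:, 0] * np.sin(theta) + d[:, 1] * np.cos(theta)    # perpendicular component
    inside = (u / semi_major) ** 2 + (v / semi_minor) ** 2 <= 1.0
    return np.nonzero(inside)[0]

# One subfield samples from transient OFF filters and the other, offset along the
# preferred direction, from sustained ON or OFF filters (see Methods).
rng = np.random.default_rng(0)
lgn_rf_centers = rng.uniform(0.0, 120.0, size=(17400, 2))     # illustrative positions only
pool = lgn_units_in_subfield(lgn_rf_centers, subfield_center=(60.0, 30.0),
                             preferred_angle_deg=45.0, semi_major=6.0, semi_minor=3.0)
print(pool.size)
```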

Figure 3:

LGN inputs to the V1 models. A: LGN filters connecting to four different V1 neurons. Black triangles and colored circles indicate the centers of the receptive fields of the V1 neuron and those of presynaptic LGN neurons, respectively. Gray circles indicate all other LGN filters. The elliptical subfields used to select the projecting LGN filters are shown. B-G: Responses of the biophysical V1 model to LGN input without any intracortical connections or background activity. B: Postsynaptic currents in V1 neurons responding to 500 ms of gray screen followed by a drifting grating. The mean current is matched to experimental measurements and is largest for layer 4 neurons. C: Boxplots of postsynaptic currents for every neuron class (for the preferred drifting grating), after matching to target values (boxes span the 25th to 75th percentile and whiskers extend at most 1.5 times the interquartile range). D: Example tuning curve of an E4 neuron (mean ± s.e.m). E: Example raster plot, same stimulus as in B. Neuron classes with large EPSC values (boxplots in C) show significant spiking activity. F: Boxplots characterizing firing rates of neuronal classes. For reference, experimental data from in vivo extracellular electrophysiology recordings from awake mice (i.e., a fully connected cortical circuit) are shown. G: Characterization of DSI for responsive neurons (see Methods). Some DSI values are high because these simulations are purely feedforward and thus exhibited low firing rates, which bias the DSI metric.

The second step, only applicable to the biophysical model, determines the number and placement of synapses on V1 neurons, using data on LGN axonal density in V1 (Morgenstern, Bourg and Petreanu, 2016) and estimates of synapse numbers per neuron (Schoonover et al., 2014; Bopp et al., 2017). In the third and final step, the strength of synapses is established, constrained by experimental current measurements (Lien and Scanziani, 2013; Ji et al., 2015). The synapse strengths are scaled to match the target mean current (Fig. 3B, C) in response to a drifting grating (see Methods). Layer 4 is the primary target of LGN input, and therefore the current amplitudes are largest in this layer (Fig. 3B, C).

To test the outcome of this procedure, we carried out simulations of the entire V1 network without recurrent connections, using drifting grating stimuli. Individual neurons are direction selective (Fig. 3D), consistent with experimental measurements of LGN input currents (Lien and Scanziani, 2013, 2018). At the network level (example raster in Fig. 3E), the average firing rates, DSI, and OSI due to LGN-only input are calculated (Figs. 3F, 3G, and S2, respectively). For reference, data from in vivo extracellular electrophysiology recordings from awake mice (a fully recurrent biological network) are included in Fig. 3F. These experiments are performed with Neuropixels probes (Jun et al., 2017) from a standardized pipeline (data release by the Allen Institute in October 2019) and are used throughout the manuscript as a benchmark for the models (examples in Fig. S3). Note that experimental data are robustly classified into the regular-spiking (RS) and fast-spiking (FS) groups, roughly corresponding to excitatory and Pvalb inhibitory neurons (although small contributions from non-Pvalb inhibitory neurons are likely present in both groups). Hence here, and throughout the Results section, we compare model excitatory and Pvalb neurons with experimental RS and FS cells, respectively. Finally, we define a similarity score, S, between distributions to compare the populations of excitatory and Pvalb neurons in experiments and models (one minus the Kolmogorov–Smirnov distance, see Methods). If the distribution in the simulated population is close to the experimental one, the similarity measure will be close to unity; S close to zero indicates quite different distributions (Fig. S4). As expected, in the absence of intra-cortical amplification, S is low for firing rates (E-biophysical = 0.15, E-GLIF = 0.17, Pvalb-biophysical = 0.60, Pvalb-GLIF = 0.35), orientation selectivity (E-biophysical = 0.24, E-GLIF = 0.24, Pvalb-biophysical = 0.49, Pvalb-GLIF = 0.58), and direction selectivity (E-biophysical = 0.23, E-GLIF = 0.24, Pvalb-biophysical = 0.59, Pvalb-GLIF = 0.63). We also note that the two model resolutions compare well to one another (for example, S values: E-rates = 0.96, E-OSI = 0.95, E-DSI = 0.96).
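
The similarity score can be computed directly from the two samples; a minimal sketch (assuming SciPy's two-sample Kolmogorov-Smirnov statistic and synthetic firing-rate samples) is shown below.

```python
import numpy as np
from scipy.stats import ks_2samp

def similarity_score(model_values, experimental_values):
    """S = 1 - D_KS, where D_KS is the Kolmogorov-Smirnov distance between the two
    empirical distributions: S near 1 means similar, S near 0 means very different."""
    d_ks, _ = ks_2samp(model_values, experimental_values)
    return 1.0 - d_ks

rng = np.random.default_rng(0)
print(similarity_score(rng.lognormal(0, 1, 500), rng.lognormal(0, 1, 500)))   # close to 1
print(similarity_score(rng.lognormal(0, 1, 500), rng.lognormal(2, 1, 500)))   # much lower
```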

Finally, a background pool, mimicking the influence of the rest of the brain on V1, provides inputs from a single Poisson source firing at a constant rate of 1000 Hz to all V1 cells. The final weights of this background were adjusted with the recurrent connectivity in place, to ensure that the baseline firing rates of all neurons match experiments (see below). While more sophisticated models of background can be implemented (e.g., Arkhipov et al., 2018) depending on the question of interest (e.g., state transitions), these questions were not the focus of our current study. Therefore, we chose a simple background approximation.

Creating the recurrent connectivity in the V1 network

We now turn to the considerably more complex problem of determining cortico-cortical synaptic connections.

V1 circuits feature extensive recurrent connections, which amplify LGN inputs and shape V1 computations (Douglas, Martin and Whitteridge, 1989; Douglas et al., 1995; Douglas and Martin, 2007; Lien and Scanziani, 2013; Arkhipov et al., 2018). Despite many studies (e.g., (Cauli et al., 1997; Dantzker and Callaway, 2000; Beierlein and Connors, 2002; Thomson et al., 2002; Beierlein, Gibson and Connors, 2003; Mercer et al., 2005; Song et al., 2005; West et al., 2005; Yoshimura, Dantzker and Callaway, 2005; Lefort et al., 2009; Hofer et al., 2011; Ko et al., 2011; Levy and Reyes, 2012; Olsen et al., 2012; Pfeffer et al., 2013; Vélez-Fort et al., 2014; Bortone, Olsen and Scanziani, 2014; Cossell et al., 2015; Jiang et al., 2015)), data on the exact patterns and magnitude of V1 recurrent connectivity remain sparse, and no resource exists that comprehensively characterizes all connections under standardized conditions. We set out to construct recurrent connections in a data-driven manner via extensive curation of the literature, supplemented by Allen Institute data (Seeman et al., 2018) when available. This resulted in four key resources (Fig. 4) containing estimates of (1) connection probabilities, (2) synaptic amplitudes (strengths), (3) axonal delays, and (4) dendritic targeting of synapses. These resources are provided to the community (brain-map.org/explore/models/mv1-all-layers) with every estimate and assumption documented in interactive files. Our V1 network contains specific instantiations of these connectivity rules. Unfortunately, data do not exist for many connection classes in mouse V1; therefore we used other data in the following order of preference as a guiding principle: mouse visual cortex, followed by mouse non-visual or rat visual cortex, then rat non-visual cortical measurements. Additional entries were filled using assumptions of similarity and/or the rat somatosensory cortex model (Markram et al., 2015; Reimann et al., 2015). 89 out of the total 289 entries remained undetermined (empty cells in Fig. 4A, B) and were set to zero due to lack of data (see Methods).

Figure 4:

Summary of recurrent connectivity rules used in both models. A: Probability of connection at an intersomatic distance of 75 μm. B: Strength of connections (somatic unitary post-synaptic potential (PSP)). C: The distance-dependent connection probability profiles used for different classes of connections. D: The functional rules for connection probability (applied only to E-to-E connections) and synaptic strengths (applied to all connection classes) as a function of the difference in preferred angle between the source and target neurons. E: Axonal delays for connections between classes. F: Example schematics of dendritic targeting rules. For detailed descriptions, see Methods.

Fig. 4A reports connection probability values at 75 μm planar intersomatic distance, used as parameters for Gaussian distance-dependent connectivity rules for different source-target class pairs (Fig. 4C). Excitatory-to-excitatory (E-to-E) connections in L2/3 of mouse V1 also exhibit “like-to-like” preferences (Ko et al., 2011; Cossell et al., 2015; Wertz et al., 2015; Lee et al., 2016) – that is, cells preferring similar stimuli are preferentially connected. We here assume that such like-to-like rules are ubiquitous among all E-to-E connections, both within and across layers. These rules are illustrated in Fig. 4D (see Methods), based on the preferred direction of motion angle assigned to each neuron. No such rules were applied for E-to-I, I-to-E, and I-to-I connection probabilities, following experimental observations (Bock et al., 2011; Fino and Yuste, 2011; Packer and Yuste, 2011; Znamenskiy et al., 2018).
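
The sketch below shows one way the distance-dependent rule can be expressed: a Gaussian profile anchored so that its value at a 75 μm planar intersomatic distance equals the table entry in Fig. 4A. The Gaussian width used here is a placeholder; the models use class-specific profiles (Fig. 4C).

```python
import numpy as np

def connection_probability(distance_um, p_at_75um, sigma_um=75.0):
    """Gaussian distance-dependent connection probability, normalized so that the
    probability at 75 um equals the tabulated value (sigma_um is illustrative)."""
    gaussian = np.exp(-distance_um ** 2 / (2.0 * sigma_um ** 2))
    anchor = np.exp(-75.0 ** 2 / (2.0 * sigma_um ** 2))
    return p_at_75um * gaussian / anchor

def draw_connection(distance_um, p_at_75um, rng):
    """Bernoulli draw deciding whether a given source-target pair is connected."""
    return rng.random() < connection_probability(distance_um, p_at_75um)

print(connection_probability(np.array([0.0, 75.0, 150.0]), p_at_75um=0.2))
```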

Recent experiments indicate that, besides connection probability, the amplitude (strength) of E-to-E synaptic connections in L2/3 also exhibits a like-to-like dependence (Cossell et al., 2015; Lee et al., 2016). In earlier work, we found such strength rules to be even more important for response tuning than connection probability rules (Schaub et al., 2015; Arkhipov et al., 2018). A similar like-to-like rule for synaptic strength (but not connection probability) has been reported for I-to-E connections (Znamenskiy et al., 2018). Thus, we assume that all synaptic strength classes (Fig. 4B) are modulated by such a rule (Fig. 4D). At this point, all like-to-like connection probability and synaptic strength profiles were symmetric with respect to the opposite preferred directions (i.e., orientation-dependent but not direction-dependent).
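
At this orientation-based stage, the like-to-like modulation can be sketched as a multiplicative factor that depends only on the difference in preferred angle folded into the 0-90° range, so that opposite directions are treated as equivalent. The floor and width values below are illustrative placeholders, not the fitted parameters of Fig. 4D.

```python
import numpy as np

def orientation_difference(theta_source_deg, theta_target_deg):
    """Difference in preferred angle folded into [0, 90] degrees, so that opposite
    directions of motion (180 degrees apart) are treated as identical."""
    d = np.abs(theta_source_deg - theta_target_deg) % 180.0
    return np.minimum(d, 180.0 - d)

def like_to_like_factor(delta_ori_deg, floor=0.5, sigma_deg=30.0):
    """Orientation-based like-to-like scaling: maximal for identically tuned pairs,
    decaying toward `floor` for orthogonally tuned pairs (placeholder values)."""
    return floor + (1.0 - floor) * np.exp(-delta_ori_deg ** 2 / (2.0 * sigma_deg ** 2))

# Applied multiplicatively to E-to-E connection probabilities and, at this stage,
# to the synaptic strengths of all connection classes (Fig. 4D).
print(like_to_like_factor(orientation_difference(10.0, 100.0)))   # orthogonal pair -> near floor
```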

Notably, some of the first predictions from our models came from this data-driven building stage. One important rationale for imposing the like-to-like synaptic weights rule for all connection classes is that the Sst and Htr3a classes receive little to no LGN input (Fig. 3; (Ji et al., 2015)), yet exhibit orientation and direction tuning (Liu et al., 2009; Kerlin et al., 2010; Ma et al., 2010). We assumed that these classes become tuned due to like-to-like inputs from excitatory neurons, and, indeed, our simulations implementing these rules exhibit substantial orientation and direction selectivity for these interneuron classes (see below).

The third resource contains synaptic delays between different neuronal classes. Given that measurements of these properties were particularly sparse, our final table is of coarser resolution (Fig. 4E). The fourth resource, applicable to the biophysical model only, is a set of dendritic targeting rules for each connection class (examples illustrated in Fig. 4F). Experimental data for this (typically, from electron microscopy) are only available for a relatively small number of scenarios, and we used what was available from internal data and the literature (see Methods).

Optimization of synaptic weights

Although our data-driven approach systematically integrates a large body of available data, these data are still incomplete and were obtained under disparate conditions. It is therefore not surprising that, after construction, our models need to be tuned to obtain physiologically realistic spiking patterns and avoid runaway excitation or epileptic-like activity. While efficient optimization methods for recurrent spiking networks have been described (e.g., (Sussillo and Abbott, 2009; Nicola and Clopath, 2017)), their performance has not yet reached the level required for optimization of the computationally expensive and highly heterogeneous networks we constructed. We therefore use a heuristic optimization approach with identical criteria applied to the biophysical and GLIF models.

Following (Arkhipov et al., 2018), we used three criteria: (i) spontaneous firing rates should match experimental values, (ii) peak firing rates in response to a single trial of a drifting grating (0.5 s long) should match experiments, and (iii) the models should not exhibit epileptic activity. The optimization was applied to synaptic weights only, via grid searches along weights of connections between neuronal classes, using uniform scaling of the selected weight class. The LGN-to-L4 weights were fixed, as they were matched directly to experimental recordings in vivo (Lien and Scanziani, 2013) (Fig. 3), whereas the net current inputs from LGN to other layers could vary (within strict bounds) since the corresponding experimental data were obtained in vitro (Ji et al., 2015). Optimizing the full recurrent network at once was very challenging; instead, we followed a stepwise, layer-by-layer procedure. We first optimized the recurrent weights within L4, with all recurrent connections outside L4 removed. Then we added L2/3 recurrent connections and optimized the weights in both L4 and L2/3. This approach was repeated by adding L5, then L6, and finally L1 (see Methods for details).
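
The sketch below conveys the flavor of one such grid-search step using a toy, analytically solvable rate unit in place of the full spiking network; the stand-in dynamics, the scale grid, and the stability check are illustrative assumptions only, not the actual optimization code.

```python
import numpy as np

def steady_state_rate(external_input, recurrent_weight, scale, gain=1.0):
    """Steady state of a toy linear rate unit, r = gain * (input + w * scale * r)."""
    return gain * external_input / (1.0 - gain * recurrent_weight * scale)

def grid_search_scale(external_input, recurrent_weight, target_rate,
                      scale_grid=np.linspace(0.1, 2.0, 39)):
    """Pick the uniform weight-scaling factor that best matches a target rate while
    rejecting unstable regimes - a toy analogue of criteria (i)-(iii)."""
    losses = []
    for s in scale_grid:
        if recurrent_weight * s >= 1.0:            # reject runaway / "epileptic" regimes
            losses.append(np.inf)
            continue
        losses.append((steady_state_rate(external_input, recurrent_weight, s) - target_rate) ** 2)
    return scale_grid[int(np.argmin(losses))]

print(grid_search_scale(external_input=2.0, recurrent_weight=0.3, target_rate=5.0))   # ~2.0
```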

After optimization, a typical response to a drifting grating exhibits irregular activity, with the strongest spiking among neurons tuned to that particular grating (Fig. 5A). The firing rates for both V1 models across all neuronal classes are similar to those measured in vivo (Fig. 5B, S values: E-biophysical = 0.73, E-GLIF = 0.72, Pvalb-biophysical = 0.84, Pvalb-GLIF = 0.83). Excitatory neurons show orientation tuning that is improved relative to the LGN-only simulations (Fig. 3G), yet still unsatisfactory (Fig. S5, S values: E-biophysical = 0.49, E-GLIF = 0.56, Pvalb-biophysical = 0.27, Pvalb-GLIF = 0.42). Similarly, the direction selectivity match is also poor, particularly for Pvalb interneurons (Figs. 5C, 5D, S values: E-biophysical = 0.54, E-GLIF = 0.55, Pvalb-biophysical = 0.29, Pvalb-GLIF = 0.32). We therefore further explored the functional rules of recurrent connections, aiming to improve the DSI levels while keeping firing rates close to experimental values. Finally, we once again observed strong similarity between the two model resolutions (see Discussion; S values: E-rates = 0.90, E-OSI = 0.89, E-DSI = 0.95).

Figure 5:

Initial simulation results from the biophysical and GLIF recurrent V1 models. A: Raster plot in response to a drifting grating (biophysical model). Within each cell class, the cell IDs are sorted according to the cells’ preferred angles. B: Peak firing rate boxplots for both V1 models and in vivo extracellular electrophysiology recordings. C: Example tuning curves (mean ± s.e.m) for both the biophysical recurrent model and LGN-only model (same neuron as Fig. 3D). D: DSI boxplots for both V1 models and in vivo measurements.

Refined synaptic functional connections amplify direction selectivity

Up to this point, all like-to-like connectivity rules in our models were “orientation-based”, i.e., the probability and weights were symmetric with respect to Δθ=90°, where Δθ is the difference between the preferred angles of the two neurons (Fig. 4D). This must be contrasted with “direction-based” asymmetric rules, where a pair of neurons preferring opposite directions of motion is treated differently from a pair preferring the same direction (Fig. 6A). We reasoned that low levels of direction selectivity in our V1 models are due to the absence of such direction-based rules, since the orientation-based rules enhance neurons’ responses to their anti-preferred direction due to inputs from the oppositely tuned neurons. However, the models are also grounded in data, which show symmetric, orientation-based like-to-like rules for probability of E-to-E connections and no like-to-like rules for I-to-E connections (Fino and Yuste, 2011; Ko et al., 2011; Packer and Yuste, 2011; Lee et al., 2016; Znamenskiy et al., 2018) (although the data is mostly limited to connection classes in L2/3). In the absence of data to the contrary, we assumed that E-to-E connection probabilities obey the orientation-based rule and other connection probability classes do not follow like-to-like rules at all. Therefore, the only remaining flexibility is in the functional rules specifying synaptic weights of any connections formed.

Figure 6:

Refined synaptic functional connections. A: The original orientation-based (dotted black, “Sym”; Fig. 4D) and the refined, direction-based (colors) synaptic strength profiles as a function of the difference between the preferred angles of two connected neurons. The like-to-like rule for E-to-E connection probabilities remains orientation-based (Fig. 4D) and no like-to-like rules are applied to other connection probabilities. B: The phase-based rule for synaptic strengths of E-to-E connections. Left: Schematic example of neurons preferring the 0° direction, as they respond to a 0° drifting grating (background shows phase alignment with the drifting grating). Arrow lengths indicate magnitude of response. Neurons aligned vertically with the center neuron have a matching phase. Right: stronger weights are assigned to phase-matched than phase-unmatched neurons (the heat-map illustrates the scaling factor applied in the models). C: Log firing rates of excitatory neurons in response to their preferred grating direction (median ± s.e.m), for the biophysical model. Applying the rules from (A) and (B) results in a firing rate bias for vertical- vs. horizontal-preferring neurons due to differential cortical magnification (magenta); the bias is not observed experimentally (grey). The bias disappears after additional direction-dependent scaling is applied to synaptic weights according to the target neuron’s assigned preferred angle (black). D: Net synaptic inputs for horizontal- and vertical-preferring E4 biophysical neurons (rules in (A) and (B) plus the additional direction-dependent scaling), in retinotopic (left) and cortical (right) coordinates (averages over 100 neurons after aligning their centers). E: Histogram of incoming synaptic weights onto E4 neurons based on their preferred orientation. Horizontal-preferring neurons have a heavier tail than vertical-preferring neurons.

Less is known about functional rules for synaptic strength. Available data from L2/3 (Cossell et al., 2015; Znamenskiy et al., 2018) indicate that synaptic amplitude correlates with similarity of responses, for both E-to-E and I-to-E connected pairs. However, similarity of preferred direction alone is a poor predictor of synaptic strength for E-to-E connections, whereas similarity of receptive fields (ON-OFF overlap) is a better predictor (Cossell et al., 2015). Furthermore, in vivo patch-clamp measurements in L4 indicate that excitatory neurons responding in phase with each other to a drifting grating are preferentially connected (Lien and Scanziani, 2013). Motivated by these observations, we introduce two modifications to the synaptic strength rules: (1) a direction-of-motion-based like-to-like Gaussian profile applied to all connection classes (Fig. 6A) and (2), for the E-to-E classes only, a decrease of the synaptic strength with distance in retinotopic visual space between the source and target neurons, projected on the target neuron’s preferred direction (Fig. 6B). Rule (2) confines the sources of sufficiently strong connections to a stripe perpendicular to the target neuron’s preferred direction, biasing the inputs to come primarily from neurons that respond in phase with the target neuron when stimulated by a drifting grating or a moving edge (Fig. 6B). These assumptions are consistent with theory based on optimal Bayesian synaptic connectivity for integrating visual stimuli (Iyer and Mihalas, 2017).
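
A minimal sketch of these two modifications, assuming Gaussian profiles, could look as follows; the 30° width for rule (1) matches the E-to-E value mentioned later in this section, while the retinotopic width in rule (2) is a placeholder.

```python
import numpy as np

def direction_difference(theta_source_deg, theta_target_deg):
    """Difference in preferred direction of motion, folded into [0, 180] degrees,
    so that oppositely tuned pairs receive the weakest scaling."""
    d = np.abs(theta_source_deg - theta_target_deg) % 360.0
    return np.minimum(d, 360.0 - d)

def direction_based_scale(delta_dir_deg, sigma_deg=30.0):
    """Rule (1): Gaussian like-to-like scaling of synaptic strength with the
    difference in preferred direction of motion."""
    return np.exp(-delta_dir_deg ** 2 / (2.0 * sigma_deg ** 2))

def phase_based_scale(source_rf_center, target_rf_center, theta_target_deg, sigma_vis_deg=3.0):
    """Rule (2), E-to-E only: attenuate the weight with the retinotopic offset of the
    source from the target, projected onto the target's preferred direction, thereby
    confining strong inputs to a stripe perpendicular to that direction."""
    theta = np.deg2rad(theta_target_deg)
    offset = np.asarray(source_rf_center, dtype=float) - np.asarray(target_rf_center, dtype=float)
    projected = offset[0] * np.cos(theta) + offset[1] * np.sin(theta)
    return np.exp(-projected ** 2 / (2.0 * sigma_vis_deg ** 2))

# Combined multiplicative scaling for one E-to-E synapse:
w_scale = (direction_based_scale(direction_difference(10.0, 40.0))
           * phase_based_scale((2.0, 1.0), (0.0, 0.0), theta_target_deg=40.0))
print(w_scale)
```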

We tested 8 specific choices of rules (1) and (2), sampling multiple selections of parameters for each choice (over 100 variants in total), primarily employing the GLIF V1 model for this purpose (Fig. S6), before converging on a final set (Fig. 6A). With a sufficiently narrow Gaussian curve characterizing the direction-based dependence on Δθ (Fig. 6A), substantial improvement in the levels of DSI are obtained across all layers (Fig. S7). This allows us to predict that like-to-like rules (1) and (2) above may apply across all layers in the mouse V1, potentially with cell-class specific parameters – in fact, in our models we use relatively narrow rule (1) profiles for E-to-E connections, since excitatory populations typically exhibit high DSIs, and wider profiles for other connections (Fig. 6A). Given that multiple different values of parameters for rules (1) and (2) result in networks with robust levels of direction selectivity, we cannot reliably choose a single “optimal” parameter set. The set (Fig. 6A) we use for subsequent simulations should be considered a representative example among possible solutions. In the absence of direct experimental measurements, we simply note that application of rules (1) and (2) with sufficiently narrow profiles (e.g., a Gaussian with standard deviation of 30° for rule (1) in E-to-E connections) enables amplification of direction selectivity by the recurrent connections, consistent with available data.

In testing the connectivity, we notice that rules (1) and (2) are not sufficient by themselves as they introduce a firing rate bias depending on the neuron’s preferred direction of motion. Vertical-preferring neurons exhibit higher peak firing rates than do horizontal-preferring neurons, but such a bias is not present in experimental data (Fig. 6C). The root cause of this is the experimentally observed asymmetric retinotopic magnification mapping in cortex (Schuett, Bonhoeffer and Hübener, 2002; Kalatsky and Stryker, 2003), which is implemented in our models (see Methods). Specifically, moving along the horizontal direction in the cortical retinotopic map (azimuth) by 100 μm corresponds to ~7° in the visual space, whereas along the vertical direction (elevation) 100 μm corresponds to ~4°. Consequently, the stripe from rule (2) (Fig. 6B) is wider in cortical space for vertical than for horizontal preferring neurons, thus providing stronger net inputs from presynaptic V1 neurons (Fig. S8). Since such a firing rate bias is not empirically observed (Fig. 6C), some mechanisms must adjust for the horizontal-vertical mismatch of translating retinotopy to connectivity. Multiple mechanisms are plausible, including, e.g., different distance dependence of connectivity rules for vertical- vs. horizontal-preferring neurons, different strengths of LGN inputs, or different strengths of recurrent connections. We implement the latter, in a simple linear fashion where horizontal-preferring target neurons receive synapses scaled by 0.5×(7+4)/4=1.38 and vertical neurons scaled by 0.5×(7+4)/7=0.79, with a linear interpolation in-between (see Methods). This approach fixes the firing rate bias and synaptic weight bias (Figs. 6C, S8).
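
A simple sketch of this compensation, assuming that a preferred angle of 0°/180° corresponds to horizontal motion and interpolating linearly in the preferred angle (both assumptions for illustration), is given below; the two endpoint factors reproduce the 1.38 and 0.79 values from the text.

```python
import numpy as np

def magnification_compensation(theta_pref_deg, az_deg_per_100um=7.0, el_deg_per_100um=4.0):
    """Scale incoming synaptic weights by the target's preferred direction:
    horizontal-preferring targets get 0.5*(7+4)/4 ~ 1.38, vertical-preferring
    targets 0.5*(7+4)/7 ~ 0.79, with linear interpolation in between."""
    mean_mag = 0.5 * (az_deg_per_100um + el_deg_per_100um)
    horizontal_scale = mean_mag / el_deg_per_100um          # ~1.38
    vertical_scale = mean_mag / az_deg_per_100um            # ~0.79
    d = np.abs(theta_pref_deg) % 180.0
    angle_from_horizontal = np.minimum(d, 180.0 - d)        # 0 (horizontal) ... 90 (vertical)
    return horizontal_scale + (vertical_scale - horizontal_scale) * angle_from_horizontal / 90.0

print(magnification_compensation(0.0), magnification_compensation(90.0))   # ~1.38  ~0.79
```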

In the finalized model, horizontal- and vertical-preferring cells receive, on average, equal amounts of excitatory synaptic input, sourced from the same size of strips in retinotopic space, but different sizes in physical, cortical space (the strip for horizontal-preferring cells is almost half the width of the strip for vertical-preferring cells; Fig. 6D). A consequence of this (Fig. 6E) is that the distribution of incoming excitatory weights has a heavier tail for horizontal- than vertical-preferring neurons, an observation that could be tested in future experimental datasets as an indication of the mechanism we implement here.

With the third and final rule set, we carry out simulations to test the models’ responses to drifting gratings (Fig. 7A, B; S scores for firing rate: E-biophysical = 0.71, E-GLIF = 0.69, Pvalb-biophysical = 0.80, Pvalb-GLIF = 0.75). Note the emergence of horizontal patches of excitatory neurons in the raster plot (Fig. 7A) due to pronounced direction selectivity not previously present (Fig. 5A). For excitatory cells, the OSI distributions approximately match experimental recordings (Fig. S9; S scores: E-biophysical = 0.87, E-GLIF = 0.71, Pvalb-biophysical = 0.42, Pvalb-GLIF = 0.44), indicating that the new direction- and phase-based rules are not detrimental to their orientation selectivity. Most importantly, the match of DSI to experimental values (Fig. 7C, D; S scores: E-biophysical = 0.89, E-GLIF = 0.88, Pvalb-biophysical = 0.82, Pvalb-GLIF = 0.83) is much improved for all cell classes, compared to the models with purely orientation-based rules (Fig. 5C, D). The Sst and Htr3a interneurons showed near-zero DSI in Fig. 5D, but now exhibit DSIs equal to or higher than those of Pvalb interneurons, consistent with published observations (Kerlin et al., 2010; Ma et al., 2010). Thus, the new rules at the synaptic strength level successfully enable direction selectivity in distinct populations of neurons, while obeying diverse constraints from experimental data. Furthermore, we note that both model resolutions still maintain strong similarity with one another (see Discussion; S values: E-rates = 0.96, E-OSI = 0.76, E-DSI = 0.80; Table S1).

Figure 7:

Simulated responses to drifting gratings for the final V1 connectivity rules (from Fig. 6). A: Raster plot in response to a drifting grating (note the horizontal stripes corresponding to strong responses of the cells that prefer the direction of the grating; neuron IDs are sorted within each class by the preferred angle). B: Peak firing rate boxplots compared to in vivo recordings. C: Example tuning curves (mean ± s.e.m) for an E4 neuron for the final rules, in comparison to purely orientation-based rules (Fig. 5C), and no recurrent connections (LGN only of Fig. 3D). D: DSI boxplots for the final V1 models and in vivo recordings.

V1 Model Responses to Natural and Global Luminance-Altering Stimuli

With this final synaptic design in place, and having observed good model performance for drifting gratings (Fig. 7), we model responses to drastically different stimuli – flashes and natural movies (Fig. 8).

Figure 8:

Responses of V1 models to full-field flashes and a natural movie. A: Raster plot in response to full-field flashes (ON and OFF) of 250 ms duration. B: Time-to-peak for excitatory and Pvalb neurons in models and Neuropixels experiments in response to flashes. C: Signal correlations of responses to flashes. D: Raster plot in response to a natural movie. E: Correlation between signal and noise correlations (see Methods) for responses to a natural movie in models and experiments. F: Lifetime sparsity, averaged over trials, for responses to a natural movie.

A full-field luminance change is one of the strongest stimuli for testing the stability of the network. Our models remain stable in each of 10 trials with ON and OFF flashes (Fig. 8A). The time-to-peak values are comparable to the Neuropixels experimental recordings (Fig. 8B), an important indication that the dynamics of the initial transformation of the visual signal are well captured. The network, however, shows less variability in time-to-peak compared to the experiments. The full-field flash stimulus affects all points of visual space equally and simultaneously; to quantify the degree to which neurons follow the same time course in response to this global stimulus, we compute the signal correlations (see Methods) between neurons. Despite the highly correlated structure of the input, neurons tend to have low correlations with each other (Fig. 8C). The signal correlations in the models are slightly higher than, but otherwise overlap closely with, the experimental ones; the Pvalb-Pvalb correlations deviate the most from experimental measurements, but are still well below 1. As seen for DSI (Fig. 7C), agreement with the experiment is somewhat better for the biophysical than the GLIF model, for both correlations and time-to-peak.
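
As a reference for how such signal correlations can be computed (a common convention, which may differ in details from the Methods), the sketch below correlates the trial-averaged response time courses of two neurons.

```python
import numpy as np

def signal_correlation(responses_a, responses_b):
    """Pearson correlation of the trial-averaged response time courses of two neurons;
    responses_*: arrays of shape (n_trials, n_time_bins) of binned spike counts."""
    return np.corrcoef(responses_a.mean(axis=0), responses_b.mean(axis=0))[0, 1]

rng = np.random.default_rng(0)
shared = rng.poisson(5.0, size=50)                       # a common stimulus-locked component
a = shared + rng.poisson(1.0, size=(10, 50))
b = shared + rng.poisson(1.0, size=(10, 50))
print(signal_correlation(a, b))                          # high positive value
```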

In comparison with artificial stimuli like gratings or flashes, natural stimuli exhibit distinct statistical features and evoke highly heterogeneous responses. We test our models on a clip from one movie shown to mice in the Allen Brain Observatory, which used Ca2+ imaging to quantify responses of many neuronal populations across most layers of visual cortex (de Vries et al., 2018). An example raster plot (Fig. 8D) highlights more irregular and sparser response patterns than responses to flashes (Fig. 8A) or gratings (Fig. 7A). Following (de Vries et al., 2018), we compute the correlation between the signal correlations and noise correlations for spiking responses of neuron pairs from our models and in vivo electrophysiology recordings (for direct comparison) and find similar, almost all positive, values between the models and experiment (Fig. 8E). Another major characteristic of responses to natural stimuli is the high lifetime and population sparsity of neurons (Vinje and Gallant, 2000; de Vries et al., 2018). This finding is also reproduced by our models (Figs. 8F, S10; see Methods), albeit with higher values than in our electrophysiology data. Interestingly, in the Allen Brain Observatory, the VIP (subclass of Htr3a) neurons in L2/3 and L4 exhibit reduced sparsity compared to excitatory and Sst classes (that survey did not include Pvalb). Htr3a (or VIP) neurons are not readily identifiable in electrophysiological recordings and are arguably less well parameterized in the models (due to lower data availability) than the excitatory and Pvalb classes. Nevertheless, in the biophysical model the Htr3a class does exhibit reduced sparsity in L2/3 and L4. The model shows high sparsity for Htr3a in L5 and L6 (mostly non-VIP in these layers), an observation that has not yet been tested experimentally.
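
For reference, lifetime sparsity is commonly computed with the Vinje and Gallant (2000) formula used by de Vries et al. (2018); a minimal sketch is below (whether the models use exactly this convention is detailed in Methods).

```python
import numpy as np

def lifetime_sparsity(rates):
    """Lifetime sparsity over N stimulus frames/conditions (Vinje & Gallant, 2000):
    0 for a perfectly flat response, approaching 1 for a neuron that responds to
    only a few conditions."""
    r = np.asarray(rates, dtype=float)
    n = r.size
    return (1.0 - (r.mean() ** 2) / np.mean(r ** 2)) / (1.0 - 1.0 / n)

print(lifetime_sparsity([5.0, 5.0, 5.0, 5.0]))    # 0.0 (not sparse)
print(lifetime_sparsity([20.0, 0.0, 0.0, 0.0]))   # 1.0 (maximally sparse)
```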

Discussion

We here present two closely related network models of mouse V1. Both share an identical network graph, i.e., connectivity, with ~230,000 nodes of two different flavors: either biophysically elaborate or highly simplified ones. The models were constrained by a plethora of experimental data: the representation of the individual cells and their firing behavior in response to somatic current injections, LGN filters, thalamocortical connectivity, recurrent connectivity, and activity patterns observed in vivo. This work continues the trend of developing increasingly sophisticated models of cortical circuits in general (e.g., Traub et al., 2005; Zhu, Shelley and Shapley, 2009; Potjans and Diesmann, 2014; Markram et al., 2015; Joglekar et al., 2018; Schmidt et al., 2018) and visual cortex in particular (Wehmeier et al., 1989; Troyer et al., 1998; Zemel and Sejnowski, 1998; Krukowski and Miller, 2001; Arkhipov et al., 2018; Antolík et al., 2019). Our main goal was to integrate existing and, especially, emerging multi-modal experimental datasets describing the structure and in vivo activity of cortical circuits into biologically realistic network models.

Our models are represented in the standardized SONATA data format (Dai et al., 2019, github.com/AllenInstitute/sonata) via the Brain Modeling ToolKit (BMTK, github.com/AllenInstitute/bmtk; (Gratiy et al., 2018)) and the open source software NEURON (Hines and Carnevale, 1997) and NEST (Gewaltig and Diesmann, 2007). The SONATA format is also supported by other modeling tools, including Blue Brain’s Brion (github.com/BlueBrain/Brion), RTNeuron (Hernando et al., 2013), NeuroML (Gleeson, Steuber and Silver, 2007), PyNN (Davison et al., 2009), and NetPyNE (Dura-Bernal et al., 2019).

Recent studies (Rössert et al., 2016; Arkhipov et al., 2018) demonstrated that the conversion of a biophysical network model to a GLIF counterpart can result in good qualitative and quantitative agreement in spiking output. We likewise observed an overall agreement between the biophysical and GLIF models of V1. Although both graphs are identical, the input-output functions of every neuron are different; yet, judging by their firing rate distributions, the two models are very similar at the population level. This reveals, yet again, the perhaps unreasonable effectiveness of point-neuron models given their vastly reduced degrees of freedom (Koch, 1999). This is true for networks built from neuron models with passive dendrites (Arkhipov et al., 2018) as well as with active dendrites (Rössert et al., 2016). A potential reason for this effectiveness in system-level simulations is the effectiveness of point-neuron models at capturing single-cell input/output transformations. In particular, the GLIF (Teeter et al., 2018) and biophysical models (Gouwens et al., 2018) we use here show similar levels of explained variance when mapping a noisy current injection at the soma to an output spike train. These results support broad applicability of the computationally less expensive GLIF network models (here approximately 5,000 times less expensive), although ultimately the level of resolution to use should be based on the scientific question under investigation. For instance, computing the extracellular field potential requires spatially extended neurons (Rall and Shepherd, 1968; Lindén et al., 2011; Einevoll et al., 2013; Reimann et al., 2013; Hagen et al., 2019). On the other hand, for robust in silico perturbation studies, the GLIF network allows for many more rapid iterations and tests. Developing our models at two levels of resolution enables a larger spectrum of possible studies.

In the process of building and testing the models, we made three major predictions about structure-function relationships in V1 circuits. The first addresses observations that non-Pvalb interneurons (Htr3a/VIP and Sst) show direction and orientation tuning (Liu et al., 2009; Kerlin et al., 2010; Ma et al., 2010), but receive connections from other V1 neurons that are distributed uniformly rather than in a like-to-like fashion (Fino and Yuste, 2011), and little to no LGN input (Ji et al., 2015). We thus implemented like-to-like rules for synaptic strengths between excitatory and non-Pvalb inhibitory neurons, which resulted in robust tuning of Htr3a and Sst classes in our models.

Our second prediction extends from experimental work investigating functional connections between excitatory neurons (Bock et al., 2011; Ko et al., 2011; Cossell et al., 2015; Wertz et al., 2015; Lee et al., 2016), thus far primarily in L2/3. Our results suggest that synaptic weights follow rules that are different from the rules that allow two neurons to connect in the first place: whereas the latter are organized in a like-to-like orientation-dependent manner (Ko et al., 2011), the former follow direction-dependent rules (Fig. 6A) with phase dependence (Fig. 6B, E). In our models, these weight rules were implemented among excitatory and inhibitory populations within and across layers (Figs. 6A) to enable realistic levels of orientation and direction tuning (Figs. 7C, D, S9). How can this be reconciled with the report (Cossell et al., 2015) that similarity of preferred direction is not a good predictor of synaptic strength (in L2/3)? Because our models employ additional phase-dependent rules (Fig. 6B, E), where incoming connection weights are close to zero outside of a stripe perpendicular to the target neuron’s preferred direction, many presynaptic neurons that share the target neuron’s direction preference connect very weakly to it (if they are outside of the stripe). Therefore, direction similarity by itself is not a strong determinant of weights in our models either, whereas combined with the phase-related geometric constraints it does determine the weights. Interestingly, as we were finalizing this report, a new experimental study (Rossi, Harris and Carandini, 2019) appeared, showing (in L2/3) the preferential location of presynaptic neurons to be within a stripe, as in our connectivity implementation (Fig. 6B, E), thus supporting our prediction (although the new data suggest this architecture may be realized in connection probabilities rather than in synaptic weights).

Our third prediction concerns the asymmetry in cortical retinotopic mapping between the horizontal and vertical axes (Schuett, Bonhoeffer and Hübener, 2002; Kalatsky and Stryker, 2003). In our models, this asymmetry results in higher firing rates for vertical- than for horizontal-preferring neurons, which is not observed experimentally (Fig. 6C). We thus infer the existence of one or more compensatory mechanisms, which may operate at many levels, including connection probability, LGN projections, etc. Our models address this at the synaptic strength level (Figs. 6C-E).

These three predictions concern important relationships between the circuit structure and in vivo function. The first prediction is significant because mechanisms of tuning of Sst and Htr3a/VIP interneurons are likely to be critical in enabling diverse Sst- and Htr3a-mediated functions (see, e.g., (Liu et al., 2009; Kerlin et al., 2010; Ma et al., 2010; Adesnik et al., 2012; Pfeffer et al., 2013; Fu et al., 2014; Tremblay, Lee and Rudy, 2016; Muñoz et al., 2017)). The second prediction suggests a set of general mechanisms that apply across layers and neuronal classes to shape the essential computations in the visual cortex of orientation and direction selectivity. The third prediction illuminates the potentially widespread wiring and/or homeostatic mechanisms that equalize firing rates between vertical- and horizontal-preferring neurons. All three predictions are amenable to experimental investigation (Bock et al., 2011; Hofer et al., 2011; Ko et al., 2011; Cossell et al., 2015; Wertz et al., 2015; Lee et al., 2016; Znamenskiy et al., 2018; Rossi, Harris and Carandini, 2019).

These predictions are a starting point for further elaboration. The GLIF V1 model, in particular, minimizes the entry barrier to biologically realistic modeling for researchers, due to its low computational demands. Our models, together with all meta-data and code, are freely accessible via the Allen Institute for Brain Science’s web portal at brain-map.org/explore/models/mv1-all-layers. We hope that the community will exploit these resources to investigate more biologically refined models of cortex, the most complex piece of active matter in the known universe.

Methods

Instantiating the network

The V1 neurons were instantiated and distributed through every layer; the raw number estimates are available in the supplemental document V1_structure.xlsx. We used the estimated cell densities measured in every layer based on nuclear stains (Schüz and Palm, 1989), with the assumption of 85% excitatory and 15% inhibitory neurons. The fractions used for the interneuron subclasses were based on expression levels in double in-situ hybridization experiments (Lee et al., 2010). The layer thicknesses were taken from the Allen Mouse Brain Atlas (see Cortical Layer Thickness Measurements). Our model incorporated inhibitory neurons in layers L2/3 through L6 from three broad classes, Parvalbumin- (Pvalb), Somatostatin- (Sst), and Htr3a-positive; excitatory neurons in each layer were composed of one or more cell classes corresponding to major Cre lines labeling these layers (Figure 1A, C). Layer 1 (L1) had only a single inhibitory class of Htr3a neurons (Lee et al., 2010; Tremblay, Lee and Rudy, 2016). L2/3 excitatory neurons (class E2/3) were reconstructed from the Cux2 Cre-line, which is almost pan-excitatory in this layer. L4 excitatory cells were represented by four sub-classes – the Scnn1a, Nr5a1, and Rorb Cre-lines, as well as reconstructions where the Cre-line was not known (“L4 other-exc”) because these came from non-Cre animals. L5 contained two excitatory sub-classes – the cells labeled by the Rbp4 Cre-line and unlabeled neurons (“L5 other-exc”). As described in the Main Text, due to uncertainties regarding distinct properties of subclasses in L4 and L5 in terms of connectivity and in vivo activity, for all practical purposes we combined the L4 and L5 excitatory sub-classes into a single class per layer (E4 and E5). L6 contained one excitatory class (E6), with neurons from the Ntsr1 Cre-line only (due to availability at the time of creating the models). Altogether, we used 112 unique neuron models for the biophysical network and 111 for the GLIF network. At the time of model building, there were no Htr3a reconstructions for L6 neurons, and therefore we re-used the two deepest L5 Htr3a models to populate this cell class in L6. Although the Allen Cell Types Database had more cell models, not all models could fit geometrically in the V1 volume without protruding beyond the pia. This was because Cre-lines do not label specific layers exclusively, resulting in cases where cells from certain Cre-lines resided in adjacent layers (see Somatic Coordinates).

The neuron models were fit to in vitro measurements (Gouwens et al., 2018; Teeter et al., 2018) and are publicly available via the Allen Cell Types Database (celltypes.brain-map.org/). All our biophysical models used passive dendrites, although the Allen Cell Types Database includes neuron models with active dendritic conductances. This was because models with active dendrites were prohibitively computationally expensive for the scale of our simulations. Further, the somatic spike output of the active-dendrite models does not perform much better than that of models with active conductances restricted to the soma (celltypes.brain-map.org/). Therefore, we used the less computationally expensive neuron models.

Cortical Layer Thickness Measurements

Layer thicknesses for the model were taken from the Allen Mouse Brain Atlas (Oh et al., 2014 - atlas.brain-map.org/). They were calculated from a mouse common coordinate framework in which voxels were annotated with cortical areas and layers. In this framework, streamlines were calculated that connected pia to white matter using the shortest paths (Oh et al., 2014 - Documentation in atlas.brain-map.org/). For each voxel on the surface of V1, the thickness of each layer was calculated along the associated streamline, and the median values across all of V1 were used to construct the model.

Somatic Coordinates

With the number of neurons identified (V1_structure.xlsx), we needed to assign somatic coordinates for every cell and select appropriate neuron models. For the biophysically detailed neurons, we also had to assign to each neuron a rotation about the depth axis (white matter to pia). This is because our V1 model re-uses a fixed number of reconstructed neuron models relative to the total number of neurons simulated; hence, when reusing a model, we randomly rotated the individual neurons between 0 and 2π around the depth axis. For the somatic coordinates, cells of each population were uniformly distributed within a cylindrical domain and within the specified layer depth. For the biophysical models, the depth of a neuron affected which neuron model was assigned to it. The first condition was that a model would not be assigned to a particular cell if that model's morphology significantly extended beyond the pia when placed at the cell’s somatic location (with a tolerance of 100 μm). Once all putative cell models that passed this criterion were identified, we randomly selected a model based on a Gaussian probability density function (with standard deviation of 20 μm).
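As an illustration of the placement step, the following minimal sketch samples soma positions uniformly within a cylindrical layer slab and assigns random rotations about the depth axis (Python; function names, layer bounds, and cell counts are illustrative only, not the model code):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_somata(n, radius_um, depth_top_um, depth_bottom_um):
    """Sample n soma positions uniformly within a cylindrical layer slab.
    Radii are drawn as sqrt(uniform) so that density is uniform in area,
    and each cell also gets a random rotation about the depth (y) axis."""
    r = radius_um * np.sqrt(rng.uniform(0.0, 1.0, n))
    phi = rng.uniform(0.0, 2.0 * np.pi, n)
    x, z = r * np.cos(phi), r * np.sin(phi)
    y = rng.uniform(depth_bottom_um, depth_top_um, n)   # depth within the layer
    rotation = rng.uniform(0.0, 2.0 * np.pi, n)         # rotation about the depth axis
    return np.column_stack([x, y, z]), rotation

# Illustrative layer bounds; the actual layer thicknesses come from the Allen Atlas
xyz, rot = sample_somata(1000, radius_um=845.0, depth_top_um=-100.0, depth_bottom_um=-200.0)
print(xyz.shape, rot.shape)
```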

Visual Coordinates

Neurons’ positions are defined in the physical space, whereas visual stimuli (see Visual Stimuli) supplied to the models, as well as the LGN filters converting these stimuli to spike trains impinging on V1 neurons, are defined in the visual space. Thus, a mapping between the two spaces needs to be defined. The cortical plane (plane perpendicular to the depth axis) was mapped to the visual space, with the geometrical center of the model corresponding to the center of the visual space. Retinotopic mapping experiments in the mouse V1 identified how much displacement in visual cortex corresponded to displacements in visual space (Schuett, Bonhoeffer and Hübener, 2002; Kalatsky and Stryker, 2003). Using these results (Figure 3 from (Schuett, Bonhoeffer and Hübener, 2002) and Figure 4 from (Kalatsky and Stryker, 2003)), we approximated that the visual degrees traversed per mm of cortex are 70 degrees/mm in the azimuth and 40 degrees/mm in elevation. Note the asymmetry between the two directions. From this we can convert any translation of azimuth and elevation in cortex to a translation in visual space. For example, consider moving 845 μm in the azimuth (radius of the V1 model): the movement in visual space is then estimated to be 0.845 mm * 70 degrees/mm = 59.15 degrees. The somatic position of every neuron was used, via such translations, to establish the assigned neuron’s position in the visual space, which was then used in algorithms establishing connectivity from the LGN to V1 (see below).
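As an illustration, this conversion can be sketched as follows (Python; the function and variable names are ours, and only the magnification factors and the 845 μm example come from the text):

```python
import numpy as np

# Cortical magnification factors estimated from retinotopic mapping studies
DEG_PER_MM_AZIMUTH = 70.0    # degrees of visual space per mm of cortex (azimuth)
DEG_PER_MM_ELEVATION = 40.0  # degrees of visual space per mm of cortex (elevation)

def cortical_to_visual(x_um, z_um):
    """Convert a cortical-plane position (micrometers, relative to the model
    center) to a position in visual space (degrees, relative to the center
    of the visual field)."""
    azimuth_deg = (x_um / 1000.0) * DEG_PER_MM_AZIMUTH
    elevation_deg = (z_um / 1000.0) * DEG_PER_MM_ELEVATION
    return azimuth_deg, elevation_deg

# Example from the text: 845 um along the azimuth maps to ~59.15 degrees
print(cortical_to_visual(845.0, 0.0))
```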

Thalamocortical Connectivity

Distributing LGN Units

We sought to create an LGN model that roughly captures the entire LGN, with an estimated 18,000 neurons in the mouse. In our model, we do not explicitly model the shell and core of the LGN and simply distribute the LGN units on a 2D plane in visual space covering 240 degrees (horizontal) by 120 degrees (vertical). We imposed a lattice structure on the 2D plane by dividing it into a grid (15 blocks horizontally by 10 blocks vertically, each of size 16×12 degrees). Each block had a total of 116 LGN units (Table 1) distributed uniformly within the block, giving a total of 17,400 LGN units that can process arbitrary visual stimuli.
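A minimal sketch of this tiling, using the block counts and sizes stated above (names are ours; the per-block unit count is the total from Table 1):

```python
import numpy as np

rng = np.random.default_rng(0)

N_BLOCKS_X, N_BLOCKS_Y = 15, 10       # blocks tiling 240 x 120 degrees
BLOCK_W, BLOCK_H = 16.0, 12.0         # degrees per block
UNITS_PER_BLOCK = 116                 # LGN units per block (Table 1)

positions = []
for bx in range(N_BLOCKS_X):
    for by in range(N_BLOCKS_Y):
        # Uniformly scatter this block's units within its extent
        x = rng.uniform(bx * BLOCK_W, (bx + 1) * BLOCK_W, UNITS_PER_BLOCK)
        y = rng.uniform(by * BLOCK_H, (by + 1) * BLOCK_H, UNITS_PER_BLOCK)
        positions.append(np.column_stack([x, y]))

positions = np.vstack(positions)
print(positions.shape)  # (17400, 2) LGN unit positions in visual degrees
```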

Table 1: Distribution of LGN unit numbers in every block and the receptive field sizes per class.

Each LGN unit is represented by a spatio-temporally separable filter, which operates on the movies in the visual space as input and returns a time series of the instantaneous firing rate as output (this rate was then converted to spikes in each individual trial using a Poisson process). The spatial components of the LGN filters are spatially symmetric two-dimensional Gaussian kernels and the temporal components are a sum of weighted raised-cosine bump basis functions (Pillow et al., 2005). The temporal kernel was designed to have a bi-phasic impulse response: Embedded Image where there are six parameters: i) two time constants (t1, t2) for the basis functions, ii) two weights (w1, w2) used to linearly sum the functions, and iii) two offsets (d1, d2). All data and code are available through the BMTK (github.com/AllenInstitute/bmtk). The spatial and temporal filters are combined to form a 3D spatiotemporal kernel that responds to grayscale input signals, represented on a −1 to 1 scale (from black to white), with a time step of 1 ms.

The LGN filters were sampled from 14 classes (Table 1) that approximated the diversity observed in experimental recordings in vivo (Durand et al., 2016) (see Main Text and Fig. 2A). The LGN filter parameters used for every class were obtained by fitting filter responses to the mean experimental responses for every class (the resulting parameter values are available in the BMTK). A ±2.5% jitter was added to every parameter when instantiating individual LGN filters. We observed that receptive field sizes of cells from most of the LGN classes in the experimental recordings (Durand et al., 2016) spanned a large range within each class. We thus assigned every LGN unit a randomly generated spatial size within the recorded ranges, drawn from a triangular distribution defined as follows: zero at the lower bound, peak at the lower bound plus 1 degree, and zero again at the upper bound (to approximate the experimental distributions).
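The triangular sampling can be sketched as follows (Python; the bounds in the example are hypothetical, the actual per-class ranges are in Table 1):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_rf_size(lower_deg, upper_deg, n):
    """Sample receptive-field sizes (degrees) from the triangular distribution
    described in the text: zero density at the lower bound, peak at the lower
    bound plus one degree, zero again at the upper bound."""
    return rng.triangular(left=lower_deg, mode=lower_deg + 1.0, right=upper_deg, size=n)

# Hypothetical bounds for one LGN class
sizes = sample_rf_size(2.0, 10.0, 1000)
print(sizes.min(), sizes.mean(), sizes.max())
```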

Thalamocortical Architecture Impact on Direction Selectivity

The major guiding purpose for creating thalamocortical connections in our V1 models was to enable direction selectivity, which was proposed to arise due to integration of sustained and transient LGN inputs by V1 cells (Lien and Scanziani, 2018). Before instantiating such rules for the full-scale model, we performed a simplified theoretical analysis to investigate how combinations of transient and sustained pools of LGN inputs, using biologically realistic parameters, would create direction-selective responses in target V1 cells. For this analysis we approximated the LGN input to a V1 cell using a sustained ON and a transient OFF subfield.

For the thalamocortical projections to a V1 neuron in our full models (see Forming Thalamocortical Connections), we would first identify all suitable LGN filters that have overlapping retinotopic positions with the V1 cell. This pool of filters was then split into a sustained subfield ellipse in one half of the receptive field and a transient subfield ellipse in the other half (Figure 2C). The orientation of the ellipses would depend on the assigned preferred angle of the V1 neuron. The ellipses’ major axis would be perpendicular to the preferred orientation of the V1 neuron and the sustained subfield would be positioned such that it is activated first in the case of a bar moving in the preferred direction of the V1 neuron (Fig. 2D). We would then randomly select filters from within these ellipses from the population of sustained or transient LGN filters (Figs. 2C, 3A). For the simplified theoretical analysis here, we consider the sustained ON and transient OFF subfields, represented by a single elliptical filter each, approximating contributions from all LGN cells within a subfield.

The synaptic input current from one of the subfields (labeled as F = ON or F = OFF) to the V1 cell in response to a stimulus is then described by Embedded Image where A is the constant determining the magnitude of the current (assumed to be the same for both subfields), Embedded Image is a baseline (spontaneous) firing rate, and ReLU(x) is a rectified linear unit function that is zero below a threshold (here set at zero) and linear above the threshold. The response is dependent on the stimulus S(x, y, t): Embedded Image

We consider the case where the two subfields are offset along the x-axis, so that each subfield is described as: Embedded Image

The assumption used here is that each kernel is spatio-temporally separable.

The temporal kernel used here is a sum of weighted raised-cosine bump basis functions as used above (Pillow et al., 2005; see Distributing LGN units). The spatial kernel is described by an elliptical Gaussian profile: Embedded Image with the standard deviations σx, σy, respectively. We will study a special case of subfields separated by a distance d along the x-axis using lON = d/2 and lOFF = −d/2: Embedded Image

Let us examine the response of a cell to moving grating stimuli having maximum luminance Smax and a contrast c: Embedded Image where k = (kx, ky) defines the direction of the grating wave front: kx = k·cos(θ), ky = k·sin(θ), with k = 2π·SF and ω = 2π·TF, where SF (cpd) and TF (Hz) are the spatial and temporal frequencies of the grating, respectively.

It is more convenient to work in the complex space: Embedded Image

The input current from each subfield is Embedded Image where Embedded Image is independent of stimulus and Embedded Image is a stimulus dependent response: Embedded Image

Here we use a short hand notation Embedded Image and Embedded Image.

Substituting RF (x, y, t) we find: Embedded Image

Since the temporal kernel Embedded Image when τ < 0, we can simply extend the integration to negative infinity over τ.

The temporal integral in Embedded Image is the Fourier transform over time: Embedded Image which can be expressed using the magnitude Embedded Image and phase ψF(ω): Embedded Image

The spatial integral in Embedded Image is the spatial Fourier transform: Embedded Image

Thus, we can express Embedded Image as Embedded Image

Thus, the response to a grating with temporal angular frequency ω is determined by the Fourier component at that frequency only. The Fourier transforms of the temporal components (raised-cosine bumps) can be computed numerically.

We can compute the spatial transform analytically to find: Embedded Image which has an amplitude: Embedded Image so that: Embedded Image

The total input current to a cell is the sum from the two subfields: Embedded Image

Using these equations, we can estimate both the direction selectivity index (DSI) and the orientation selectivity index (OSI) of the F0 and F1 components for a variety of filter parameters: subfield separation d, ellipse aspect ratio or width (determined by σx, σy), and temporal parameters. The F0 response is a commonly used metric that calculates the cycle average mean of the response to a drifting grating while the F1 component computes the modulation response at the input temporal frequency (Movshon, Thompson and Tolhurst, 1978).
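As an illustration, the F0 and F1 components of a simulated response trace can be computed numerically as sketched below (Python; this is our own minimal implementation of the standard definitions, not the analysis code used in the study):

```python
import numpy as np

def f0_f1_components(rate, dt, grating_tf):
    """Compute the F0 (cycle-averaged mean) and F1 (modulation amplitude at the
    stimulus temporal frequency) components of a response trace.

    rate       : 1D array, instantaneous response (e.g., input current or firing rate)
    dt         : time step in seconds
    grating_tf : temporal frequency of the drifting grating in Hz
    """
    t = np.arange(len(rate)) * dt
    f0 = rate.mean()
    # Projection onto the complex exponential at the stimulus frequency;
    # the factor 2 converts it to the peak modulation amplitude
    f1 = 2.0 * np.abs(np.mean(rate * np.exp(-2j * np.pi * grating_tf * t)))
    return f0, f1

# Toy example: a response modulated at 8 Hz around a mean of 10
dt, tf = 0.001, 8.0
t = np.arange(0.0, 2.5, dt)
rate = 10.0 + 4.0 * np.sin(2 * np.pi * tf * t)
print(f0_f1_components(rate, dt, tf))  # ~ (10.0, 4.0)
```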

We used filter parameters from sON-TF8 and tOFF-TF8, as well as the following default values: d = 5 degrees (Lien and Scanziani, 2013), SF = 0.025 cpd, TF = 8 Hz, ellipse aspect ratio = 3.0, ellipse minor axis = 4.0 degrees. For a fixed stimulus (a drifting grating), we changed one parameter at a time and observed the impact on OSI and DSI. For the distance between the elliptical sustained and transient subfields (d; Fig. S1A), we note that the F1 component switches direction preference (i.e., its DSI changes sign) as d grows, due to a shifting phase difference between the subfields. The DSI of the F0 component is always zero, as the net input remains constant for the preferred and null directions, consistent with experimental recordings (Lien and Scanziani, 2013, 2018). On the other hand, the OSI of the F0 component is constant but non-zero, due to the elliptical structure of the subfields that biases the net input per grating cycle for specific orientations (but not directions). The OSI of the F1 component is positive even when d = 0, due to the elliptical shape of the subfields (and their temporal properties). Second, by varying the sustained time-to-peak parameter (starting from the transient subfield’s time-to-peak of 30 ms, Fig. S1A), we observe, as expected, that asymmetry in the temporal properties of the subfields is essential for producing direction selectivity. There is no direction selectivity in the F1 component when both filters are identical temporally; as the time-to-peak of the sustained subfield increases, there is a quick rise in F1 DSI, followed by a reversal in the direction preference for very high (non-biological) time-to-peak values. The F1 OSI shows a sharp monotonic decrease with the sustained time-to-peak, while the F0 OSI is non-monotonic but roughly constant. We also varied the aspect ratio and the size of the ellipses; the F1 DSI remained relatively constant when both ellipses were altered together (Fig. S1A). On the other hand, the OSI values showed a monotonic increase with both parameters, illustrating the contribution of the elongated structure to orientation selectivity. An aspect ratio of one still yielded some orientation selectivity, due to the temporal offsets of the filters (our OSI metric is based on circular variance, see Orientation Selective Index below).

We next investigated the effect of changing the spatial frequency of the drifting grating (Fig. S1B). As before, the F0 DSI always remains zero. As the spatial frequency increases, we again observe a reversal in the preferred direction for the F1 component, as observed experimentally in mouse cortex (Billeh et al., 2019). For orientation selectivity, the F1 OSI shows a sigmoidal increase as spatial frequency increases, while the F0 OSI shows a peak with a fast decay due to reduced responsiveness of the LGN ellipses to high spatial frequencies. As a function of temporal frequency, on the other hand, the F1 OSI is relatively flat while the F0 OSI shows a peak response, albeit with a slower decay, again due to the reduced responsiveness of the LGN subfields to high temporal frequencies (Fig. S1B). For our choice of subfield parameters, the F1 DSI did not switch sign as we varied temporal frequency, but such switching can occur with different filter properties and time constants, as observed experimentally (Billeh et al., 2019).

In summary, these simplified calculations confirm that the overarching model of the integration of sustained and transient LGN responses (Lien and Scanziani, 2018) indeed enables directionally selective input currents into V1 cells when biologically realistic parameters are used. Given this reassuring result, the next step was to create a similar architecture of connections to the V1 model from the thousands of filters representing LGN cells in the visual space.

Forming Thalamocortical Connections

The connections from the LGN to V1 neurons followed an approach similar to previous work (Arkhipov et al., 2018). The first step was to establish shared retinotopy between the V1 neurons and the LGN units. The coordinates of the LGN units were in visual space (degrees) while the V1 neurons’ coordinates were in regular 3D space mapped to the cortical surface and white-matter-to-pia depth (see Somatic Coordinates). By imposing that the center of the V1 model mapped to the center of the visual space, the location of each V1 neuron was converted to visual space using the cortical magnification factor, as described in section Visual Coordinates. This procedure assigned each V1 neuron a position in visual space, which may be expected to correspond approximately to the center of that neuron’s RF in the complete model. We then identified which LGN units would project to every V1 neuron (from the classes to receive LGN inputs; see Main Text and Table 2), as follows.

Table 2: Properties of the subfields in the visual space used to select LGN neurons projecting to V1 neurons (for every cell class receiving LGN inputs; the remaining classes are assumed to receive no LGN input (Ji et al., 2015)). The connection probability refers to the probability that a neuron receives input from the LGN (Ji et al., 2015). The mean LGN input current corresponds to the mean excitatory LGN current the neuron class receives (Lien and Scanziani, 2013; Ji et al., 2015) when voltage-clamped at the inhibitory synapse reversal potential (see Thalamocortical Synaptic Weights). The V1 TF column gives the preferred temporal frequency of the V1 neuron class (Niell and Stryker, 2008; Durand et al., 2016). The sON Ratio is the probability that the sustained component will be ON instead of OFF (Lien and Scanziani, 2013); the transient component was always OFF. The Separation Range is the distance between the sustained and transient subfield ellipses (for E4, estimated from Lien and Scanziani, 2013). The Width Range is the minor-axis width (diameter) of the ellipses. The Aspect Ratio is the length of the major axis relative to the minor axis. Note that the aspect ratio is defined relative to neurons’ visual-space centers; once the sizes of LGN receptive fields are incorporated, the results match experimental measures (Lien and Scanziani, 2013) more accurately, as shown previously (Arkhipov et al., 2018). The final column gives the number of synapses an LGN neuron makes onto a V1 neuron if a connection exists, extrapolated from experimental work (Morgenstern, Bourg and Petreanu, 2016) as discussed in Thalamocortical Synapse Estimate.

Given the directionally selective architecture to be imposed, every V1 neuron was assigned a preferred angle of stimulus motion to determine the placement of the elliptical subfields from which LGN units would be sampled (Figs. 2C, 2D, 3A). There was always a transient OFF subfield and a sustained subfield that was either ON or OFF (this choice was made based on the relative abundance of the different classes of LGN cells in our experimental recordings (Durand et al., 2016), as summarized in Fig. 2A). The two subfields were identically oriented and offset by a certain distance; the offset and the short axes of both ellipses were co-aligned with the assigned preferred direction of the target V1 neuron. The position of the target neuron was at the middle of the line connecting the centers of the two subfields (Fig. 3A). The subfields were positioned along the vector of the preferred direction of the target neuron in such a way that the vector pointed from the sustained subfield to the transient one (Figs. 2C, 2D). Note that the assigned angle was also used for the recurrent connectivity (see below) and was set such that every V1 neuron class represented every angle in the range [0, 360°) with even spacing. The dimensions of the subfields and their separation varied based on the V1 neuron’s class (Table 2); these choices were made according to estimates of the expected metrics – such as the OSIs and DSIs – for the class, based on experimental reports (see details and references in V1_parameter_estimate.pptx). The subfield parameters for the E4 target population were informed by our previous model of L4 (Arkhipov et al., 2018), and parameters for the other populations were chosen following the assumption that V1 cell classes with stronger orientation/direction selectivity would utilize smaller and more elongated LGN subfields. Importantly, we chose these subfield parameters once and did not vary them to tune the model for target OSI/DSI values. The good agreement with experiment observed for the final model (Fig. 7) suggests that our initial choice of these subfield parameters was appropriate (and, to the best of our knowledge, it is consistent with available experimental observations); however, it is possible that the agreement could be further improved by tuning the subfield parameters.

As reported previously, a linear angle approximation was used (Arkhipov et al., 2018). Further, every V1 neuron was assigned a preferred temporal frequency drawn from a Poisson distribution with a mean as measured experimentally (Table 2; Niell and Stryker, 2008; Durand et al., 2016). This determined the probability of selecting LGN units preferring particular temporal frequencies. Given that there was a discrete number of LGN filters for every class (sON, sOFF, tOFF), the probability of selecting a particular subclass (i.e., a particular TF) was based on the distance of the V1 neuron’s temporal frequency from the LGN unit’s preferred temporal frequency, divided by the total possible distance for that class.

Once the subfields were established, the LGN units to be connected to the target cell were selected among the units that had the centers of their spatial kernels within the subfields (and that belonged to the LGN class matching each subfield; see Fig. 3A). From this total pool, LGN units were connected randomly based on the probability of connection (given their temporal frequency, as mentioned above). Thus, not every LGN unit in the subfield formed a connection with the target V1 cell (Figs. 2C, 3A). Finally, for the ON/OFF filters, a restriction was set that required the axis of the ON/OFF subfield to be within 15 degrees of the assigned orientation preference angle of the V1 neuron (Arkhipov et al., 2018). With all these choices, the suitable LGN units were selected probabilistically to project to each target V1 cell. Based on these rules, the average number of LGN units connecting to a V1 cell is 19.3 ± 6.0 (mean ± SD; median = 19, min = 2, max = 47) for excitatory neurons and 15.0 ± 4.4 (mean ± SD; median = 15, min = 2, max = 32) for inhibitory neurons. The mean number of LGN units projecting to V1 neurons is below recently reported estimates (Lien and Scanziani, 2018), although the authors themselves acknowledge their measurements are likely overestimates. Nevertheless, the most important parameter is the total synaptic current that every population receives (see Thalamocortical Synaptic Weights), which was matched to experimental measurements (Lien and Scanziani, 2013, 2018) and could compensate for the differences in this version of the model.
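As an illustration of the geometric selection step described above, the following sketch tests which LGN unit centers fall within the two offset, rotated elliptical subfields of one target neuron (Python; the LGN positions, preferred direction, and subfield dimensions in the example are assumed values, not those of any specific model cell):

```python
import numpy as np

def in_rotated_ellipse(points, center, minor_axis_deg, aspect_ratio, preferred_dir_deg):
    """Boolean mask for points (N x 2, visual degrees) inside an ellipse whose
    minor axis is aligned with the target neuron's preferred direction of motion,
    so the major axis is perpendicular to it, as described in the text."""
    theta = np.deg2rad(preferred_dir_deg)
    u_minor = np.array([np.cos(theta), np.sin(theta)])    # along preferred direction
    u_major = np.array([-np.sin(theta), np.cos(theta)])   # perpendicular to it
    d = points - center
    a = 0.5 * minor_axis_deg * aspect_ratio               # semi-major axis
    b = 0.5 * minor_axis_deg                               # semi-minor axis
    return (d @ u_major / a) ** 2 + (d @ u_minor / b) ** 2 <= 1.0

rng = np.random.default_rng(0)
lgn_xy = rng.uniform(0, 240, (17400, 2)) * np.array([1.0, 0.5])  # toy LGN unit positions
v1_xy = np.array([120.0, 60.0])                                   # V1 cell in visual space
pref_dir, sep = 45.0, 5.0                                         # assumed example values
offset = 0.5 * sep * np.array([np.cos(np.deg2rad(pref_dir)), np.sin(np.deg2rad(pref_dir))])

# The vector from the sustained to the transient subfield points along the preferred direction
sustained_mask = in_rotated_ellipse(lgn_xy, v1_xy - offset, 4.0, 3.0, pref_dir)
transient_mask = in_rotated_ellipse(lgn_xy, v1_xy + offset, 4.0, 3.0, pref_dir)
print(sustained_mask.sum(), transient_mask.sum())
```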

Thalamocortical Synapse Estimate

For the biophysical model we estimated the number of synapses impinging on different V1 neurons. The exact numbers of synapses are only estimates, as the more critical step was ensuring that the total excitatory current received from the LGN matched experimental measurements (see below). Any error in the estimated number of synapses was therefore compensated for by the final synaptic weights.

Our calculation and formalism for the number of thalamocortical synapses per neuron is described below; we also provide a supplementary document (Num_TC_synapses.xlsx) where all the calculations were done. As the field advances, in particular with electron-microscopy technology, fewer assumptions will be needed and the available data can be used directly. In the model, synapses were placed along the dendrites up to 150 μm away from the soma but excluding the soma, as done in a previous model of layer 4 of V1 based on experimental reports (Schoonover et al., 2014; Arkhipov et al., 2018).

One key resource we used was the fluorescence measurements of the density of thalamocortical axons across cortical depth (Morgenstern, Bourg and Petreanu, 2016). We used this work to determine the fraction of fluorescence across cortical layers as an estimate of the fraction of LGN projections to different layers. The full calculation is in the accompanying supplemental document (Num_TC_synapses.xlsx) and here we explain our technique and assumptions. In particular we assume the Fluorescence Signal (FS) is a function of the following factors:

  1. Number of cells in a layer (Schüz and Palm, 1989)

  2. Percentage of cells that actually get innervated in a layer from the LGN (Ji et al., 2015)

  3. At a specific depth (layer), the proportion of dendrites from cells in different layers that extend to other layers

    1. For inhibitory neurons, dendrites were assumed to stay within their layers and not extend to other layers.

  4. The fraction of LGN synapses on a stretch of dendrite is the same whether that dendrite is from an E or Pvalb cell.

    1. Assumption includes that, out of all interneurons, Pvalb cells are the only ones to receive significant innervation except for layer 1 (Ji et al., 2015).

      From here, for a specific layer, the below calculation was used to approximate the fluorescence signal (FS) from labeled thalamocortical axons. This example is for layer 4: Embedded Image

In the accompanying document, this is found by summing the rows for the gray matrix. The different notations mean:

  • FSL4 = Fluorescence signal in layer 4

  • A = Constant factor converting fluorescence signal to biological innervation numbers. We assume fluorescence is a linear function of axon density, and so A is constant for every layer. We will need to solve for A (see below)

  • NE4 = Number of excitatory cells in L4 (Schüz and Palm, 1989)

  • IRE4 = Innervation ratio of LGN onto L4 pyramids (Ji et al., 2015)

  • NTCE4 = Number of thalamocortical synapses for every L4 excitatory cell – the numbers we are seeking for every layer. From (4) above, it is assumed that NTCE4 = NTCi4Pvalb.

  • FracE4L4 = The fraction of excitatory cells’ dendrites in L4 that is contributed from L4 cells (from assumption (3) above). See the light green matrix in the accompanying excel sheet.

  • ∘ Note that FracE4L4 + FracE2/3L4 + FracE5L4 = 1.

  • ∘ Note that we assumed FracE4L6 = 0 and thus that is not included in the above example of L4.

  • ∘ Note that Fraci4pvalbL4 = 1 is assumed for all layers for Pvalbs (assumption (3.a) above).

We note that the supplementary document uses a finer division of every layer (each layer split into upper (A) and lower (B) components); single layers are used here for explanation purposes only.

All these assumptions can be written in matrix form as follows: FS = A · Mp · NTC

where FS is the N×1 vector of the fluorescence signal across layers and NTC is the N×1 vector of the numbers of thalamocortical synapses. Mp holds the properties described above and is a matrix of dimensions N×N (contributions from all layers). We can thus solve for NTC by taking the inverse: NTC = (1/A) · Mp⁻¹ · FS

Since the constant factor A is not known, the values of NTC are not the actual numbers of synapses. To account for this, we use the experimental finding that, in the mouse visual cortex, the number of thalamocortical synapses on L4 excitatory cells is approximately 1200-1500 (Schoonover et al., 2014; Arkhipov et al., 2018). This gives us the scaling factor to account for A and hence allows us to estimate NTC for all layers.

In the supplemental document, which uses the finer layer divisions, 1200 was used as the average across the L4 divisions (see the scaling factor there). The final numbers of synapses are shown in Table 2.
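A minimal numerical sketch of this procedure is shown below (Python). All numbers in the example are illustrative placeholders except the ~1200 L4 anchor from the text; the real fluorescence values, cell counts, innervation ratios, and overlap fractions are in Num_TC_synapses.xlsx:

```python
import numpy as np

# Hypothetical 4-layer example (the supplemental sheet uses finer sub-layer divisions).
# fs : relative fluorescence signal per layer
# mp : matrix encoding cell counts, innervation ratios, and dendritic overlap fractions
fs = np.array([0.10, 0.45, 0.30, 0.15])          # illustrative values only
mp = np.array([[1.0, 0.2, 0.1, 0.0],
               [0.1, 1.0, 0.2, 0.0],
               [0.0, 0.2, 1.0, 0.1],
               [0.0, 0.0, 0.1, 1.0]])             # illustrative values only

ntc_relative = np.linalg.solve(mp, fs)            # relative synapse numbers (unknown factor A)

# Anchor the solution to the measured ~1200 thalamocortical synapses on L4 excitatory cells
L4_INDEX = 1
scale = 1200.0 / ntc_relative[L4_INDEX]
ntc_per_layer = ntc_relative * scale
print(ntc_per_layer)
```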

Thalamocortical Synaptic Weights

Various studies have identified the thalamic innervation pattern into the visual cortex across laminae (Lien and Scanziani, 2013, 2018; Kloc and Maffei, 2014; Schoonover et al., 2014; Ji et al., 2015; Morgenstern, Bourg and Petreanu, 2016; Bopp et al., 2017). We used these results to identify the total current that different cell classes should receive from the LGN. One study, published while the model was being built, measured that the net current into layer 4 excitatory cells responding to drifting gratings at their preferred angle was on average 46 pA (Lien and Scanziani, 2013). Other work using optogenetic stimulation identified the cell classes that are innervated by the thalamus, in terms of both connection probabilities and relative strengths (Ji et al., 2015). Assuming linear scaling relative to layer 4 excitatory neurons, we estimated the target mean current for every cell class in response to a grating at a neuron’s preferred direction (Table 2).

To attain the target currents for the biophysically detailed model, we created networks that had 100 cells of every model, all preferring a single direction, that received LGN innervation as described above (but no other connections). A full-field grating at 2 Hz, full contrast, with a spatial frequency of 0.04 cycles per degree (to match the experimental work precisely (Lien and Scanziani, 2013)) was shown to these networks. Further, the neurons were clamped at the reversal potential of the inhibitory (GABA) synapses in our model (again, as performed experimentally). The net mean current during the stimulus was measured and the synaptic weights iteratively adjusted until the target current was reached within a 2% tolerance. For the surrounding LIF neurons, for the same stimulus, we matched the firing rates observed with purely LGN input in the biophysically detailed core neurons of the same class. As mentioned in the Main Text, during optimization of the full V1 model the weights of synapses from LGN to excitatory layer 4 cells were not adjusted at all, given that the measurements we used as targets in the procedure described here were of high precision and obtained in vivo (which is the condition we were aiming to match in our full model). Weights of all other synapses from LGN were adjusted, but the adjustment was allowed to change the mean input current by no more than a factor of 2 (Table 2).

Finally, the GLIF V1 model used the same strategy to attain the same target mean currents using the same grating LGN stimulus. However, as the GLIF models employed in the V1 model used current-based synapses (see Synaptic Characteristics), the weights were initially set as the target currents and no voltage clamping was required. However, the average rheobase (the minimal current step amplitude that elicits an action potential) of the GLIF models is larger than experimental measurements (Fig. S11), except for Pvalb neurons, which had smaller rheobase values. To match the experimental data more closely, the established LGN-to-V1 weights were scaled by the ratio between the average rheobase of the GLIF models and the experimental data, i.e., 0.81 for the Pvalb population and 1.36 for the other populations.

Background Connectivity

A second source of input to the V1 models was a background input coarsely representing the “rest of the brain”. This was modeled as a single input unit that fired at 1 kHz with Poisson statistics. All neurons received connections from this unit, and the weights were optimized (simultaneously with the optimization of the recurrent connectivity weights) to ensure that the V1 spontaneous firing rates matched target experimental rates (see below).
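A minimal sketch of such a background spike source (Python; the generation method via exponential inter-spike intervals is our choice of implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

def poisson_spike_train(rate_hz, duration_s):
    """Generate spike times (seconds) of a homogeneous Poisson process by drawing
    exponential inter-spike intervals and accumulating them."""
    n_draws = int(rate_hz * duration_s * 2)          # generous margin
    isis = rng.exponential(1.0 / rate_hz, size=n_draws)
    spikes = np.cumsum(isis)
    return spikes[spikes < duration_s]

# The background unit fires at 1 kHz and projects to all V1 neurons
bkg_spikes = poisson_spike_train(rate_hz=1000.0, duration_s=3.0)
print(len(bkg_spikes))  # ~3000 spikes
```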

Recurrent Connectivity

The cortico-cortical connection probabilities for different cell-class pairs were estimated based on an extensive and systematic survey of the existing literature and curated into a resource that we make publicly available (Figure 4, see details and notes regarding assumptions and the literature used in Connection_probabilities.pptx). It is important to note that in many cases the values reported in the literature do not take into account two effects that strongly influence connection probabilities. The first is distance dependence: cells closer to each other typically have a higher chance of being connected than cells further apart. The second is that connection probabilities can be affected strongly by differences or similarities in functional preferences of cells, such as preference for orientation. Pyramidal cells in L2/3 of mouse V1, for instance, have a higher chance of being connected with one another if they prefer similar orientations, compared to orthogonally tuned cells (Ko et al., 2011; Cossell et al., 2015; Wertz et al., 2015; Lee et al., 2016). Based on these two factors, the adjustments described below were made.

It is reasonable to assume, for the mouse visual cortex, that these two factors are independent (given the “salt and pepper” arrangement of orientation-tuned cells in the mouse (Harris and Mrsic-Flogel, 2013; Seabrook et al., 2017)), and thus the total probability of connection for a cell-class pair is a product of the distance-dependent and preferred-angle-dependent factors (functions of r and Δϕ, respectively): P(r, Δϕ) = Pdist(r) · Pangle(Δϕ)

First we will discuss each of the components separately, and the final section will illustrate our approach for combining the two.

Distance dependent adjustment

We noted that the majority of the experimental literature reporting connection probabilities tended to consider inter-somatic distances within approximately 0−50 μm to 0−100 μm. Since we aimed to have a Gaussian profile for the distance dependence (Levy and Reyes, 2012), the probability at the origin had to be adjusted to account for these measurements. Since the upper bound of the measurements was in the approximate range of 50−100 μm, we chose the mid-point of 75 μm as our reference upper bound. Note that the distance is measured only in the cortical plane and is independent of cortical depth in our calculations.

For the Gaussian probability distribution: Embedded Image

Given our assumptions, the integral of this probability from 0 to R0 = 75 μm, divided by the area within the radius R0, should be equal to the reported measured probability, Prep: Embedded Image

Converting to polar coordinates: Embedded Image

This establishes the relationship between the values reported in the literature and our distance-dependent formula for connection probability.
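As an illustration, this calibration can be sketched numerically as follows (Python). The Gaussian form Pdist(r) = A·exp(−(r/σ)²) written in the code is our assumption for the sketch, and the p_rep and σ values in the example are illustrative only (the actual values are in Fig. 4 and Connection_probabilities.pptx):

```python
import numpy as np

R0 = 75.0  # reference radius (micrometers), mid-point of the 50-100 um reported upper bounds

def gaussian_amplitude(p_rep, sigma, r0=R0):
    """Solve for the amplitude A of a distance-dependent connection probability,
    here assumed to have the form P_dist(r) = A * exp(-(r/sigma)^2), such that its
    average over a disk of radius r0 equals the reported probability p_rep.

    Disk average: (1/(pi*r0^2)) * Int_0^r0 A*exp(-(r/sigma)^2) * 2*pi*r dr
                = A * (sigma^2/r0^2) * (1 - exp(-(r0/sigma)^2))
    """
    disk_average_per_unit_A = (sigma ** 2 / r0 ** 2) * (1.0 - np.exp(-(r0 / sigma) ** 2))
    return p_rep / disk_average_per_unit_A

def p_dist(r, p_rep, sigma):
    return gaussian_amplitude(p_rep, sigma) * np.exp(-(r / sigma) ** 2)

# Illustrative values only
print(p_dist(np.array([0.0, 75.0, 150.0]), p_rep=0.2, sigma=125.0))
```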

From work in the mouse cortex (Levy and Reyes, 2012), the standard deviations were estimated to be (Fig. 4): Embedded Image

From internal data at the Allen Institute during model building: Embedded Image

In the absence of data for other connection classes, we assumed that connections between excitatory neurons and Htr3a neurons follow the same dependence as between excitatory and Sst neurons (bidirectionally). Finally, we also assumed that connections among all inhibitory classes have the same distance dependence (i.e., same as σPvalb→Pvalb).

Orientation tuning adjustment for excitatory-to-excitatory connections

For orientation tuning dependence, our system is modeled such that pairs containing cells with similar preferred orientation angles have higher connection probabilities than pairs of orthogonally tuned cells, when the presynaptic neuron is excitatory (like-to-like connectivity) (Ko et al., 2011; Cossell et al., 2015; Wertz et al., 2015; Lee et al., 2016). Here we assume the dependence is linear (Figure 4D) as a function of the orientation tuning difference (Δϕ): Pangle(Δϕ) = G·Δϕ + B1

Since we considered orientation selective tuning for connectivity (not direction selective), the difference of preferred angles of any two cells can be compressed to be between 0° and 90°. For this model, we can see that the intercept occurs at (0, B1). At the other extreme of the model, we set the point to be (90, B2). The relative strength of the dependence can be described by a ratio Q = B2/B1. As can be seen, for like-to-like, Q < 1 (i.e., G < 0).

Our model is developed such that the integral of the function Pangle (Δϕ), normalized by the range of Δϕ, is always equal to 1. This was implemented because this function is used as a multiplier with the distance dependence function Pdist(r), and since we assume that experimentalists measuring in-vitro probability of connections sample equally from cells preferring all possible orientation angles in vivo. This does restrict the ratio Q one can select, based on the distance dependence and measured connection probabilities from experimental literature. As will be discussed below, if the ratio is outside of a suitable range, we rescaled it to reach the correct range.

Because B2 = Q·B1, the gradient can be expressed as: G = (B2 − B1)/90 = B1(Q − 1)/90

The integral of Psrc→trg(Δϕ) (normalized by the angle range) should be set to 1 to determine the scaling factor: (1/90) ∫₀⁹⁰ (G·Δϕ + B1) dΔϕ = 1

Substituting G: (1/90) ∫₀⁹⁰ (B1(Q − 1)/90 · Δϕ + B1) dΔϕ = 1

Solving for B1: B1 = 2/(1 + Q)

And thus: Pangle(Δϕ) = (2/(1 + Q)) · ((Q − 1)/90 · Δϕ + 1)

The value of Q for layers 2/3, 4, and 6 was set to 0.5 given the high orientation selectivity (Niell and Stryker, 2008; Durand et al., 2016). For layer 5, it was set at 0.8 for the excitatory-to-excitatory connections due to lower orientation selectivity in this layer (Niell and Stryker, 2008; Durand et al., 2016).
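A minimal sketch of this orientation-dependence factor (Python; the function names are ours, and the printed means confirm the normalization to 1 over [0, 90°]):

```python
import numpy as np

def p_angle(delta_phi_deg, q):
    """Orientation-dependence factor for connection probability: linear in the
    preferred-orientation difference (0-90 degrees), with P(90)/P(0) = q, and
    normalized so its mean over [0, 90] equals 1 (so it acts as a pure multiplier
    on the distance-dependent factor)."""
    b1 = 2.0 / (1.0 + q)            # value at delta_phi = 0
    g = b1 * (q - 1.0) / 90.0       # slope
    return g * delta_phi_deg + b1

dphi = np.linspace(0.0, 90.0, 91)
for q in (0.5, 0.8):                # Q = 0.5 for L2/3, L4, L6; Q = 0.8 for L5
    vals = p_angle(dphi, q)
    print(q, vals[0], vals[-1], vals.mean())   # mean is 1 by construction
```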

Combining distance-dependent and orientation-dependent adjustments

As can be observed from the above, the scaling can increase the measured connection probability; to ensure that our probabilities never exceeded 1, we enforced the condition: Pdist(r) · Pangle(Δϕ) ≤ 1

Thus, we used the following algorithm:

(Figure: pseudo-code of the algorithm used to combine the distance-dependent and orientation-dependent probability factors.)

In this formalism (pseudo-code above), if one selects a specific value of Q that happens to push the probability values above 1, the worst-case scenario is that Q is rescaled to 1.0 and hence there is no orientation tuning dependence; the trend will never reverse. This scenario will only occur if there already exists a very high connection probability between two cell classes.

With this approach, we have accounted for distance dependence and functional connectivity between the different cell classes in our model. Our next step was to determine the dendritic targeting rules for the biophysically detailed model.
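The combined rule, including the relaxation of Q described above, can be sketched as follows (Python). This follows the behavior described in the text rather than the exact pseudo-code of the figure, and it re-uses the assumed Gaussian distance factor and the normalized linear orientation factor from the sketches above; all parameter values in the example are illustrative:

```python
import numpy as np

def p_dist(r, p_rep, sigma, r0=75.0):
    # Distance factor (assumed Gaussian form), calibrated as in the earlier sketch
    a = p_rep / ((sigma ** 2 / r0 ** 2) * (1.0 - np.exp(-(r0 / sigma) ** 2)))
    return a * np.exp(-(r / sigma) ** 2)

def p_angle(dphi, q):
    # Orientation factor, normalized to a mean of 1 over [0, 90] degrees
    b1 = 2.0 / (1.0 + q)
    return b1 * ((q - 1.0) / 90.0 * dphi + 1.0)

def combined_probability(r, dphi, p_rep, sigma, q, q_step=0.05):
    """Product of the two factors. If the chosen q pushes the worst-case
    probability (r = 0, dphi = 0) above 1, q is relaxed toward 1, i.e. the
    like-to-like bias is weakened but never reversed (see text)."""
    while q < 1.0 and p_dist(0.0, p_rep, sigma) * p_angle(0.0, q) > 1.0:
        q = min(1.0, q + q_step)
    return p_dist(r, p_rep, sigma) * p_angle(dphi, q), q

print(combined_probability(r=30.0, dphi=20.0, p_rep=0.2, sigma=125.0, q=0.5))
```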

Dendritic Targeting for the Biophysical Model

The location of synapses between connected neurons has been demonstrated to have different patterns depending on the neuronal classes (Thomson and Lamy, 2007; Egger et al., 2015; Narayanan et al., 2015). Although, unfortunately, the available information is sparse, it does delineate trends that may be generalizable, and thus we used these data to implement the rules described below.

Excitatory-to-Excitatory Connections

All excitatory-to-excitatory connections avoided the soma and targeted the apical and basal dendrites. For layers 2/3 and 4, synapses were placed within 200 μm of the soma, while for layers 5 and 6, the synapses could form anywhere along the dendrites (Thomson and Lamy, 2007; Egger et al., 2015; Narayanan et al., 2015). Note that the literature sources are mostly measurements from rat somatosensory cortex. The cortical depth in the rat is approximately 2 mm, whereas in our model it is 0.9 mm, and hence we scaled values accordingly.

Excitatory-to-Inhibitory Connections

For excitatory-to-inhibitory synapses, both the soma and dendrites could be targeted with no distance limitations (Thomson and Lamy, 2007). This was implemented for all layers and the values were again approximations from the relevant sources.

Inhibitory-to-Excitatory Connections (Inhibitory-to-Inhibitory Connections)

For inhibitory-to-excitatory connections we again relied on data from rat cortex (Thomson and Lamy, 2007). Synapses from the Pvalb class were placed on the soma and on dendrites within 50 μm of the soma of any target neuron. Synapses from Sst neurons were placed on the dendrites, 50 μm or further from the soma. Finally, synapses from Htr3a neurons were placed on the dendrites, from 50 μm to 300 μm from the soma. These rules also took into account the morphology of neurons in the mouse visual cortex from reconstructions of axons and dendrites (Jiang et al., 2015). We assumed for these purposes that Pvalb neurons correspond to basket cells, Sst neurons to Martinotti cells, and Htr3a neurons to the bitufted and bipolar cells described by Jiang et al. (2015).

Due to the lack of information on inhibitory-to-inhibitory connections, for this class of connections we used rules identical to the inhibitory-to-excitatory connections described above.

Layer 1

Finally, for layer 1 neurons, which are Htr3a-only in our V1 model, we used the rules below, which rely heavily on data from rat neocortex (Jiang et al., 2013) and neuron morphology from mouse V1 (Jiang et al., 2015), and are similar to those for other layers due to the lack of references with explicit measurements. Our original goal for the model was to have i1Htr3a-to-E2/3 connections target apical dendrites (no somatic connections) at 50 μm from the soma and beyond (see below). This is based on distance estimates from the bottom of L1 to upper L2/3, which are approximately 50 μm, and was decided by observing the extent of the axonal arbors of L1 cells (according to the reconstructions of Jiang et al., 2015). Similarly: i1Htr3a-to-E4 projected to apical dendrites that are 200 μm away from the soma; i1Htr3a-to-E5 projected to apical dendrites that are 300 μm away from the soma and greater; i1Htr3a-to-E6 projected to apical dendrites that are 500 μm away from the soma; i1Htr3a-to-i1Htr3a projected everywhere, including the soma; i1Htr3a-to-i2/3 projected to basal dendrites at 50 μm and beyond. For inhibitory neurons in other layers projecting to layer 1, the same rules were used as for within-layer inhibitory-to-Htr3a connections. Finally, excitatory projections to layer 1 were placed on the soma and dendrites with no distance limitations.

Note, however, that during our post-synaptic-potential optimization (see below), we had to change the rules of synaptic placement when L1 was the source onto excitatory cells. Our optimization methodology created 100 target cells of a specific cell model that received 1 spike at 0.5 seconds, and we recorded the generated postsynaptic potential (PSP). The weight was scaled until we were within 1% of the target PSP. We observed that when L1 was the source impinging on excitatory cells, the target sections were so far away that the somatic PSP would reach a maximum and never match the target PSP, regardless of how strongly the weight was scaled. This was because the most distal compartments reached their maximum membrane deviation, which is equal to the reversal potential of the synaptic drive. With these distal compartments at their maximum, and with the attenuation that occurs due to dendritic filtering (recall that dendrites in our model are passive), the soma would reach a maximum PSP that did not match our target values.

Thus, to address this issue, we changed the synaptic placement rules for all L1-to-Excitatory neurons so that synapses were placed on dendrites at 50 μm or further from the soma. This is just a highly simplified approximation, but, in terms of reaching closer to the soma than our original rules, it is reasonable since L1 neurogliaform cells are known to bulk release GABA into large volumes and not form well-targeted synapses with post-synaptic cells (Szabadics, Tamás and Soltesz, 2007; Oláh et al., 2009; Tremblay, Lee and Rudy, 2016). Finally note that in our optimization we always let the cells relax to their baseline. Since the resting potential is lower than the reversal potential of the synapses, the single spike at 0.5 seconds would always cause a depolarization. We still used this depolarization level to optimize weights for excitatory PSPs and inhibitory PSPs.

Orientation Rule for Synaptic Strength

Matching Target Post Synaptic Potentials

The first version of our V1 model (Figs. 4, 5) used an orientation-dependent like-to-like rule for synaptic weights of all connection classes: E-to-E, E-to-I, I-to-E, and I-to-I (see Main Text). Since neurons had pre-assigned preferred angles, the connection strength was a function of the difference between the assigned angles of two connected neurons, defined within 90°. The synaptic strength between two cells was then defined as: Embedded Image where Δθ is the difference between the assigned angles of the two neurons and σW is the standard deviation, set to 50° for all connection classes. Finally, AW is the weight constant that needed to be determined for every connection class to match post-synaptic potential (PSP) targets.

For the biophysical model the units of W are μS (defined as the peak conductance), and for the GLIF model, pA (see Synaptic Characteristics). Since most of the studies used to construct our PSP resource (Connection_strengths.pptx) employed in vitro patch-clamp experiments, the data do not distinguish neurons’ functional preferences, such as preferred angle. Therefore, we assumed the neurons were targeted uniformly and, thus, for optimization we created 100 target cells from every model that were assigned tuning angles with equidistant spacing in the range [0, 360°). We then created a virtual source node for every connection class using the rules described above. The source node emitted 1 spike at 0.5 seconds. We then averaged the post-synaptic responses over all 100 target cells and iteratively updated the weight value (the factor AW in the equation above) until the mean PSP was within 1% of the target value.
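The iterative rescaling can be sketched as follows (Python). The simulate_mean_psp function is a hypothetical stand-in for running the 100-cell NEURON simulation; the toy function and all numbers are illustrative only:

```python
import math

def optimize_weight(simulate_mean_psp, target_psp_mv, w_init=1e-4, tol=0.01, max_iter=50):
    """Iteratively rescale a synaptic weight until the mean somatic PSP over the
    target cells is within `tol` (1%) of the target value. simulate_mean_psp(w)
    stands in for running the simulation with weight constant w and returning the
    mean peak PSP (mV). Because the PSP grows monotonically with w, simple
    proportional rescaling converges."""
    w = w_init
    for _ in range(max_iter):
        psp = simulate_mean_psp(w)
        if abs(psp - target_psp_mv) <= tol * abs(target_psp_mv):
            return w
        w *= target_psp_mv / psp   # proportional update of the weight constant
    return w

# Toy stand-in with a saturating PSP-vs-weight relationship
toy = lambda w: 2.0 * (1.0 - math.exp(-w / 2e-4))
print(optimize_weight(toy, target_psp_mv=0.5))
```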

For scaling the weights when the target was a LIF neuron, 1000 source cells were created, each firing at 1 Hz with Poisson statistics. These cells would first target every biophysical cell model, using the synaptic weights that were already optimized as described above, and the resulting firing rates due to this input would be calculated. The target firing rate for the LIF neurons was then estimated as the weighted average rate (weighted by the proportion of times a model appears as part of a population). The same source cells (with identical spike times) would then be connected to the LIF targets, and the firing rate would be matched to within 5% of the desired rate.

For inhibitory connections onto the target LIFs, we used the same scaling factors as calculated for their excitatory counterparts. Although not ideal, we chose this route after checking our previous Layer 4 model (Arkhipov et al., 2018) and observing that indeed in that previous work the scaling ratios for LIFs for inhibitory input were approximately equal to the scaling ratios of excitatory inputs.

Finally, for the GLIF model, the weights could be calculated analytically based on the connection strengths (i.e., PSPs) between the source and target populations (shown in Connection_strengths.pptx) and the mathematical model of the postsynaptic current (i.e., the alpha function, see Synaptic Characteristics), together with the GLIF model membrane potential dynamics (Teeter et al., 2018). Namely, the weights were computed by solving the following equation describing the dynamics of the GLIF model after one spike injection: C dV(t)/dt = −(V(t) − EL)/R + Isyn(t) where V(t) is the membrane potential, C is the capacitance of the target neuron, Isyn(t) is the alpha-shaped post-synaptic current function with weight WGLIF (definition in Synaptic Characteristics), R is the resistance of the target neuron, and EL is the resting potential. Note that weights in the GLIF model are current-based while they are conductance-based in the biophysical model. The steps for computing the weight WGLIF based on the above GLIF voltage dynamics are:

  1. Solving the above dynamic equation to get the analytical solution of membrane potential V (t);

  2. Computing the derivative of the solution of V (t), i.e., ∂V (t) /∂t;

  3. Setting ∂V (t) /∂t to zero and solving the equation to get the optimal time point tmax at which V (t) reaches its maximum;

  4. Substituting tmax for t and the target PSP for V (t) to the solution of V (t);

  5. Solving the equation generated in 4) to get the weight WGLIF.

The resultant solution for the weight WGLIF is Embedded Image with Vtarget being the target PSP, τsyn being the synapse time constant, and τm being the membrane time constant.
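As a check of this procedure, the same weight can also be obtained numerically, exploiting the fact that the subthreshold response of a current-based GLIF neuron scales linearly with the synaptic weight: simulate the PSP for unit weight and rescale. The sketch below (Python) is our own numerical alternative to the closed-form expression, with illustrative parameter values:

```python
import numpy as np

def glif_psp_peak(w, tau_syn, tau_m, r_m, dt=1e-5, t_end=0.2):
    """Numerically integrate the subthreshold GLIF voltage (relative to E_L) driven
    by one alpha-shaped post-synaptic current of weight w (pA), and return the peak
    deviation in mV. The capacitance is taken as C = tau_m / R."""
    c_m = tau_m / r_m
    t = np.arange(0.0, t_end, dt)
    i_syn = w * (t / tau_syn) * np.exp(1.0 - t / tau_syn)   # alpha function, peak w at t = tau_syn
    v, v_max = 0.0, 0.0
    for i_t in i_syn:
        v += (-v / (r_m * c_m) + i_t / c_m) * dt             # forward-Euler step
        v_max = max(v_max, v)
    return v_max

# Illustrative values: tau_syn and tau_m in seconds, R in GOhm (so pA * GOhm = mV)
tau_syn, tau_m, r_m = 5.5e-3, 20e-3, 0.1
target_psp = 0.5   # mV, hypothetical target from Connection_strengths.pptx
w_glif = target_psp / glif_psp_peak(1.0, tau_syn, tau_m, r_m)
print(w_glif)
```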

Optimization of Full V1 Models

As described in the Main Text, running simulations after the above optimization did not yield suitable network behaviors in either of our V1 models. Thus, we used an iterative grid search method (Arkhipov et al., 2018), in which weights were uniformly scaled for every connection class (e.g., scaling the weights of all excitatory layer 4 to excitatory layer 5 connections by the same amount, as one iteration). We searched weight changes in discrete increments and selected the best result before moving on to the next connection class (although connection classes still needed to be revisited during this process). The optimization employed a small training set consisting of two 0.5-second-long simulations: one of a gray screen, and the other of a single drifting grating. We aimed to satisfy three criteria: (i) match spontaneous firing rates (gray screen stimulus) to experimental observations, (ii) match peak firing rates for the drifting grating, and (iii) avoid epileptic-like activity, where the network ramps up to large global bursts and then enters a period of silence until the next very rapid burst. The weight adjustments were kept in a strict range; for example, the LGN to L4 excitatory weights were not adjusted at all, given that they were fit to direct in vivo experimental measurements (Lien and Scanziani, 2013). Other LGN connections were restricted to be scaled only in the range [0.5, 2] of the target net input current, as those were scaled from optogenetics experiments (Ji et al., 2015). The optimization was performed starting from L4 only and adding successive layers one by one. First, all interlayer connections were set to zero and only the intra-layer connections in L4 were optimized. Once our criteria were met, we added L2/3 to the optimization, including the interactions between the two layers. This procedure simplified the optimization process, even though weights optimized at one step had to be readjusted (typically only slightly) at the next step. This process was continued for layer 5, followed by layer 6, and finally layer 1. During our optimization, the weight scaling was restricted to the range [0.2, 5]. In the deeper layers (layers 5 and 6), this rule had to be expanded to reach a net adjustment range of [0.12, 18] for the biophysical model and [0.17, 6.0] for the GLIF model. Note that adjusting the synaptic weights in the biophysical model did not translate directly into scaling of the PSP (see the Layer 1 description in Matching Target Post Synaptic Potentials).

Optimization with the Direction-Based Rule and Phase Dependence for Synaptic Strength

As described in the Main Text, the next version of our V1 models used a rule for synaptic strengths that was asymmetric with respect to the reversal of direction and included phase dependence, such that strongest synaptic inputs were sourced from a stripe perpendicular to the preferred direction of the target cell (Figs. 6A, 6B). Once this rule was introduced, the weights needed to be optimized further, as the balance in the network was affected. As a first step, we scaled the recurrent synaptic weights so that the net current (area under the curve, Fig. 6A) became the same as in the previous version of the model (Fig. 4D) for every connection class. However, this was not sufficient, and, thus, we further performed another round of optimization as described in the above section. It turned out that because of the scaling to match the area under the curve, the weights were already close to the correct solution, and we found that these new optimizations required only a few iterations before converging to meet our criteria. For the same reason, here it was not necessary to optimize the models layer-by-layer, and instead the optimization was performed with the full recurrent connectivity. The weight scaling was not constrained to tight limits, however, due to the new synaptic strength profiles that deviated substantially and in a non-linear fashion from those used before.

Correcting for Biases between Horizontal- and Vertical-Preferring Neurons

After finalizing the optimization using the rules above, we noticed biased firing rates in our models, in that vertical drifting gratings evoked higher firing rates than horizontal gratings (Fig. 6C). Since this was not observed experimentally and was a result of extra excitatory synaptic drive into vertically preferring neurons (Fig. S8), we adjusted incoming synaptic weights to maintain equal net synaptic drive. The adjustment depends on the cortical magnification factors in the azimuth and elevation dimensions. As described in Visual Coordinates, the physical position of each V1 neuron was converted to visual space using a conversion factor of 70 degrees/mm in the azimuth (x-dimension) and 40 degrees/mm in elevation (z-dimension), estimated from experimental reports (Schuett, Bonhoeffer and Hübener, 2002; Kalatsky and Stryker, 2003). To adjust for this asymmetry, we collapsed every neuron’s preferred angle to the quadrant θ = [0, 90°] and scaled synapses to neurons that preferred horizontal motion (0 degrees) by Embedded Image whereas synapses to neurons preferring vertical motion (90 degrees) were scaled by: Embedded Image

Given these two points, we then fit a linear function to estimate the weight scaling for every intermediate value, resulting in Embedded Image

This weight adjustment fixed the bias (Figs. 6C, S8) and resulted in horizontal-preferring neurons having a heavier tail of the incoming synaptic strength distribution than vertical-preferring neurons (Fig. 6E). Finally, because our V1 models are highly non-linear, this adjustment resulted in deviations from our target optimization firing rates. Thus, a small amount of grid search tuning was needed again to match our target criteria.

Synaptic Characteristics

The synaptic mechanisms used for the biophysical model were as in the L4 model (Arkhipov et al., 2018). The synapses were bi-exponential (using NEURON's Exp2Syn mechanism) with a reversal potential of −70 mV for inhibition and 0 mV for excitation. The weight units are μS (peak conductance). The tau1 and tau2 constants for the mechanism were 2.7 ms and 15 ms for inhibitory-to-excitatory synapses, 0.2 ms and 8 ms for inhibitory-to-inhibitory synapses, 0.1 ms and 0.5 ms for excitatory-to-inhibitory synapses, and 1 ms and 3 ms for excitatory-to-excitatory connections. Note that these are not the somatic temporal characteristics but the time constants at the synaptic location; the PSP shape at the soma depends on the dendritic location of the synapse and the membrane dynamics.
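For illustration, the Exp2Syn parameters above can be set directly through NEURON's Python interface. This is a minimal, hand-built sketch of a single excitatory-to-inhibitory synapse, not the actual model-construction pipeline (which is handled by BioNet):

```python
from neuron import h

# Target section standing in for a postsynaptic dendrite/soma.
soma = h.Section(name='soma')

# Bi-exponential synapse with the excitatory-to-inhibitory constants above.
syn = h.Exp2Syn(soma(0.5))
syn.e = 0.0        # reversal potential, mV (would be -70 mV for inhibition)
syn.tau1 = 0.1     # rise time constant, ms
syn.tau2 = 0.5     # decay time constant, ms

# A single presynaptic spike delivered through a NetCon; weight is in uS.
stim = h.NetStim()
stim.number = 1
stim.start = 5.0   # spike at t = 5 ms
nc = h.NetCon(stim, syn)
nc.weight[0] = 0.001   # peak conductance, uS
```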

For the GLIF model, postsynaptic current-based synaptic mechanisms were used, with dynamics described by an alpha function: $I_{\mathrm{syn}}(t) = W_{\mathrm{GLIF}} \, \frac{t}{\tau_{\mathrm{syn}}} \, \exp\!\left(1 - \frac{t}{\tau_{\mathrm{syn}}}\right)$ for $t \ge 0$,

where $I_{\mathrm{syn}}$ is the postsynaptic current, $\tau_{\mathrm{syn}}$ is the synaptic port time constant, and $W_{\mathrm{GLIF}}$ is the connection weight. The function is normalized such that a synapse with weight $W_{\mathrm{GLIF}} = 1.0$ produces a postsynaptic current with a peak amplitude of 1.0 pA at the time $t = \tau_{\mathrm{syn}}$. The $\tau_{\mathrm{syn}}$ constants were 5.5 ms for excitatory-to-excitatory synapses, 8.5 ms for inhibitory-to-excitatory synapses, 2.8 ms for excitatory-to-inhibitory synapses, and 5.8 ms for inhibitory-to-inhibitory connections, extracted from the LIF models of the L4 model (Fig. S2B of Arkhipov et al., 2018).
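A minimal sketch of this normalized alpha-function current, assuming the reconstructed form above (peak amplitude equal to the weight, reached at t = τ_syn):

```python
import numpy as np

def alpha_psc(t_ms, w_glif, tau_syn_ms):
    """Alpha-function postsynaptic current (pA): peaks at t = tau_syn with
    amplitude w_glif, so w_glif = 1.0 gives a 1.0 pA peak, as stated above."""
    t = np.asarray(t_ms, dtype=float)
    return np.where(t >= 0.0,
                    w_glif * (t / tau_syn_ms) * np.exp(1.0 - t / tau_syn_ms),
                    0.0)

# Example: excitatory-to-excitatory synapse (tau_syn = 5.5 ms), unit weight.
t = np.linspace(0.0, 40.0, 401)                      # ms
i_syn = alpha_psc(t, w_glif=1.0, tau_syn_ms=5.5)     # pA; maximum 1.0 at t = 5.5 ms
```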

Visual Stimuli

The visual stimuli used in our simulations were identical to those used in the experiments we compare to. Each simulation began with a 500 ms interval of gray screen, followed by a single trial of the stimulus.

Drifting Gratings

For the drifting grating stimuli, we used sinusoidal gratings with a spatial frequency of 0.04 cycles per degree and a temporal frequency of 2 Hz, presented for 2.5 seconds after the gray screen. All stimuli were run for 10 trials for every direction of motion (8 sampled directions in increments of 45 degrees) at 80% contrast (for both the experiments and the models). Although the experimental data from mice (see below) included additional temporal and spatial frequencies, we restricted our analysis to match the drifting gratings used in our simulations.

Flashes

The flash stimuli (10 trials) consisted of 500 ms of gray screen, followed by 250 ms of white screen (ON-flash), a return to gray screen for 1000 ms, then 250 ms of black screen (OFF-flash), and a final gray screen for 500 ms. The contrast was 80% (to match the experiments). We also conducted simulations with full-contrast (100%) flashes; the models were stable and produced results very similar to the 80% contrast case.

Natural Movies

We tested our models on a clip (10 trials) from one of the natural movies (Touch of Evil, directed by Orson Welles) used in the Allen Brain Observatory (de Vries et al., 2018). The same 2.5-second clip was shown to the model and in the experiment.

Data Analysis

Firing Rates

Firing rates were estimated from all trials of a simulation. Since all simulations started with a 500 ms gray-screen period followed by the stimulus, the firing rate was estimated over the stimulus duration, excluding these first 500 ms (that is, over 2500 ms for a drifting grating or a natural movie). Thus, the firing rate of a neuron in a trial was calculated by dividing the total number of spikes after the gray screen by the stimulus duration (2500 ms). Some metrics required time-dependent firing rates, which are described below. For the OSI and DSI metrics, to avoid noise from very sparsely firing neurons that could yield spurious OSI/DSI values of 1.0, we required that a neuron's firing rate at its preferred drifting-grating direction exceed 0.5 Hz.
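A minimal sketch of this per-trial firing-rate computation (spike times are assumed to be in milliseconds from trial onset; the example spike times are illustrative):

```python
import numpy as np

def trial_firing_rate(spike_times_ms, gray_ms=500.0, stim_ms=2500.0):
    """Firing rate (Hz) for one neuron in one trial: count spikes after the
    initial gray-screen period and divide by the stimulus duration."""
    spikes = np.asarray(spike_times_ms, dtype=float)
    n_stim_spikes = np.count_nonzero((spikes >= gray_ms) &
                                     (spikes < gray_ms + stim_ms))
    return n_stim_spikes / (stim_ms / 1000.0)

# Example: spikes at 100, 700, 1200 and 2900 ms -> 3 spikes in 2.5 s -> 1.2 Hz.
rate = trial_firing_rate([100.0, 700.0, 1200.0, 2900.0])
```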

Orientation Selectivity Index (OSI)

The OSI metric we computed is also referred to as the global Orientation Selectivity Index, as it takes into account the responses of a neuron to all directions tested (not just the preferred and orthogonal directions). The OSI is calculated as $\mathrm{OSI} = \left| \sum_{\theta} R_{\theta} \, e^{2i\theta} \right| / \sum_{\theta} R_{\theta}$, where $R_{\theta}$ is the mean firing-rate response to a drifting grating of direction $\theta$.

Direction Selectivity Index (DSI)

Similar to the OSI metric, the DSI also considers responses to all drifting-grating directions shown (it is sometimes referred to as the global Direction Selectivity Index). The DSI is calculated as $\mathrm{DSI} = \left| \sum_{\theta} R_{\theta} \, e^{i\theta} \right| / \sum_{\theta} R_{\theta}$, where $R_{\theta}$ is the mean firing-rate response to a drifting grating of direction $\theta$.
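A minimal sketch computing both indices from trial-averaged responses at the 8 sampled directions, assuming the global OSI/DSI forms written above; the example tuning curve is illustrative:

```python
import numpy as np

def osi_dsi(directions_deg, mean_rates):
    """Global OSI and DSI from trial-averaged responses R_theta at each of the
    sampled drifting-grating directions."""
    theta = np.deg2rad(np.asarray(directions_deg, dtype=float))
    r = np.asarray(mean_rates, dtype=float)
    osi = np.abs(np.sum(r * np.exp(2j * theta))) / np.sum(r)
    dsi = np.abs(np.sum(r * np.exp(1j * theta))) / np.sum(r)
    return osi, dsi

directions = np.arange(0, 360, 45)                      # 8 directions, 45 deg apart
rates = np.array([8.0, 4.0, 1.0, 0.5, 2.0, 0.5, 1.0, 4.0])  # illustrative tuning curve
osi, dsi = osi_dsi(directions, rates)
```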

Response at Preferred Direction

The plots quantifying neurons' responses at their preferred direction report, for each neuron, the mean firing rate at the direction that evoked the largest trial-averaged response among the 8 directions tested.

Correlation of Signal and Noise Correlations

To compute the correlation of signal and noise correlations, we computed the signal correlation as the Pearson correlation coefficient between the trial-averaged spike counts of each pair of neurons (Arkhipov et al., 2018). For natural movies, this correlation was computed over spike counts binned in non-overlapping 50 ms windows; for gratings, it was computed over the spike counts for the 8 drifting-grating directions. The noise correlation was computed as the Pearson correlation coefficient between single-trial spike counts for each pair of neurons and then averaged over stimulus conditions (the 8 directions for gratings and the non-overlapping 50 ms windows for natural movies).
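A minimal sketch of both quantities, assuming spike counts are already arranged as an array of shape (neurons, conditions, trials), where "conditions" are the 8 grating directions or the 50 ms movie bins:

```python
import numpy as np

def signal_noise_correlations(counts):
    """counts: array (n_neurons, n_conditions, n_trials) of spike counts.
    Returns (signal_corr, noise_corr), each of shape (n_neurons, n_neurons)."""
    n_neurons, n_conditions, _ = counts.shape
    # Signal correlation: Pearson correlation of trial-averaged responses.
    signal_corr = np.corrcoef(counts.mean(axis=2))
    # Noise correlation: single-trial correlation within each condition,
    # averaged over conditions. (Neurons with constant counts in a condition
    # produce NaN entries for that condition.)
    noise_corr = np.zeros((n_neurons, n_neurons))
    for c in range(n_conditions):
        noise_corr += np.corrcoef(counts[:, c, :])
    noise_corr /= n_conditions
    return signal_corr, noise_corr
```

The "correlation of signal and noise correlations" then follows by correlating the off-diagonal entries of the two matrices across neuron pairs.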

Lifetime and Population Sparsity

Lifetime sparsity for each neuron was computed using the following definition (Vinje and Gallant, 2000): $S = \left(1 - \frac{\left(\sum_{i=1}^{N} r_i / N\right)^2}{\sum_{i=1}^{N} r_i^2 / N}\right) \Big/ \left(1 - \frac{1}{N}\right)$, where N is the number of stimulus conditions and $r_i$ is the trial-averaged spike count for stimulus condition i (de Vries et al., 2018). To compute the population sparsity, we used the same equation, but with N the total number of neurons in the population and $r_i$ the average spike count of neuron i over all stimulus conditions (de Vries et al., 2018).
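A minimal sketch of this measure, assuming the Vinje and Gallant form written above; the same function serves for lifetime and population sparsity depending on what the response vector holds:

```python
import numpy as np

def sparsity(responses):
    """Vinje & Gallant (2000) sparsity of a response vector of length N:
    lifetime sparsity if the vector holds one neuron's trial-averaged responses
    across stimulus conditions; population sparsity if it holds all neurons'
    mean responses. Assumes at least one non-zero response."""
    r = np.asarray(responses, dtype=float)
    n = r.size
    numerator = 1.0 - (r.sum() / n) ** 2 / (np.sum(r ** 2) / n)
    return numerator / (1.0 - 1.0 / n)

# Example: a sparse response vector scores near 1, a uniform one scores 0.
sparse_score = sparsity([0.0, 0.0, 0.0, 10.0])   # close to 1
dense_score = sparsity([5.0, 5.0, 5.0, 5.0])     # exactly 0
```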

Similarity Score

A similarity score was developed to compare the firing-rate distribution of all excitatory neurons in the models with that of all regular-spiking neurons recorded experimentally, and likewise the Pvalb neurons in the models with the fast-spiking neurons from the same Neuropixels recordings. The metric uses the D statistic from a Kolmogorov–Smirnov test, which measures the distance between the cumulative distributions of two samples and is bounded in [0, 1]. Since we are interested here in how well distributions match, this was converted to a similarity score, S = 1 − D. Fig. S4 illustrates how S is close to 0 for two very different distributions, whereas it approaches 1 for two similar distributions.
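A minimal sketch of the score using SciPy's two-sample Kolmogorov–Smirnov test; the input arrays stand for per-neuron firing rates from the model and the recordings:

```python
import numpy as np
from scipy import stats

def similarity_score(model_rates, experimental_rates):
    """Similarity S = 1 - D, where D is the two-sample Kolmogorov-Smirnov
    statistic between the two firing-rate distributions."""
    result = stats.ks_2samp(model_rates, experimental_rates)
    return 1.0 - result.statistic

# Example with illustrative rate samples (Hz).
rng = np.random.default_rng(0)
model = rng.lognormal(mean=0.5, sigma=1.0, size=500)
experiment = rng.lognormal(mean=0.6, sigma=1.0, size=300)
s = similarity_score(model, experiment)   # close to 1 for well-matched distributions
```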

Electrophysiological Recordings

Animal preparation

All experimental procedures were approved by the Allen Institute for Brain Science Institutional Animal Care and Use Committee. Five weeks prior to the experiment, mice were anesthetized with isoflurane, and a metal headframe with a 10-mm circular opening was attached to the skull with Metabond. In the same procedure, a 5-mm-diameter craniotomy and durotomy were performed over the left visual cortex and sealed with a circular glass coverslip. Following a 2-week recovery period, a visual area map was obtained through intrinsic signal imaging (Juavinett et al., 2017). Mice with well-defined visual area maps were gradually acclimated to the experimental rig over the course of 12 habituation sessions. On the day of the experiment, the mouse was placed under light isoflurane anesthesia for ~40 min to remove the glass window, which was replaced with a 0.5-mm-thick plastic window with laser-cut holes (Ponoko, Inc., Oakland, CA). The space beneath the window was filled with agarose to stabilize the brain and provide a conductive path to the silver ground wire attached to the headpost. Any exposed agarose was covered with 10,000 cSt silicone oil to prevent drying.

Following a 1-2 hour recovery period, the mouse was head-fixed on the experimental rig. Up to six Neuropixels probes coated in CM-DiI were independently lowered through the holes in the plastic window and into visual cortex at a rate of 200 μm/min using a piezo-driven microstage (New Scale Technologies, Victor, NY). At their final depths of 2,500–3,500 μm, the probes extended through visual cortex into the hippocampus and thalamus. Only data obtained from V1 were included in this study. In total, data from 37 mice were used for the drifting gratings analysis (one experiment per mouse) and from 7 mice for the natural movie and flash analysis.

Data acquisition system

Recordings were performed in awake, head-fixed mice allowed to run freely on a rotating disk. During the recordings, the mice passively viewed a battery of visual stimuli, including local drifting gratings (for receptive field mapping), full-field flashes, drifting gratings, static gratings, natural images, and natural movies, with the same parameters as those from the Allen Brain Observatory (de Vries et al., 2018). All spike data were acquired with Neuropixels probes (Jun et al., 2017) with a 30-kHz sampling rate and recorded with the Open Ephys GUI (Siegle et al., 2017). A 300-Hz analog high-pass filter was present in the Neuropixels probe, and a digital 300-Hz high-pass filter (3rd-order Butterworth) was applied offline prior to spike sorting.

Data preprocessing

Spike times and waveforms were automatically extracted from the raw data using Kilosort2 (github.com/mouseland/kilosort2). Kilosort2 is a spike-sorting algorithm developed for electrophysiological data recorded from hundreds of channels simultaneously. It implements an integrated template-matching framework for detecting and clustering spikes, rather than clustering based on spike features, as is common in other spike-sorting approaches. After filtering out units with "noise" waveforms using a random forest classifier trained on manually annotated data, all remaining units were packaged into the Neurodata Without Borders format (Teeters et al., 2015) for further analysis.

Neuronal Classification

Regular spiking (RS) and fast spiking (FS) neurons were distinguished by spike duration (the time between the trough and the peak of the waveform). The spike durations showed a bimodal distribution (Hartigan dip test, p = 0.004), with a dip at 0.4 ms. We classified a neuron as RS if its duration was > 0.4 ms, and as FS otherwise (Fig. S3). In total we had 328 L6 RS neurons, 72 L6 FS neurons, 419 L5 RS neurons, 80 L5 FS neurons, 294 L4 RS neurons, 49 L4 FS neurons, 251 L2/3 RS neurons, 49 L2/3 FS neurons, and 81 L1 neurons.
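A minimal sketch of this threshold-based classification; the example durations are illustrative:

```python
def classify_waveform(trough_to_peak_ms, threshold_ms=0.4):
    """Label a unit as regular-spiking (RS) or fast-spiking (FS) from its spike
    duration (trough-to-peak time), using the 0.4 ms dip in the bimodal
    duration distribution as the boundary."""
    return 'RS' if trough_to_peak_ms > threshold_ms else 'FS'

# Example: durations of 0.25, 0.55 and 0.8 ms -> ['FS', 'RS', 'RS'].
labels = [classify_waveform(d) for d in (0.25, 0.55, 0.8)]
```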

Acknowledgements

We thank Marius Pachitariu for providing spike-sorting code and assistance. We thank the Allen Institute founder, Paul G. Allen, for his vision, encouragement, and support.

Footnotes

  • 2 Lead contact

  • https://brain-map.org/explore/models/mv1-all-layers

References

  1. Adesnik, H. et al. (2012) 'A neural circuit for spatial summation in visual cortex', Nature. Nature Publishing Group, 490(7419), p. 226.
  2. Amunts, K. et al. (2016) 'The human brain project: creating a European research infrastructure to decode the human brain', Neuron. Elsevier, 92(3), pp. 574–581.
  3. Antolík, J. et al. (2019) 'A comprehensive data-driven model of cat primary visual cortex', bioRxiv. Cold Spring Harbor Laboratory, p. 416156.
  4. Arenz, A. et al. (2017) 'The temporal tuning of the Drosophila motion detectors is determined by the dynamics of their input elements', Current Biology. Elsevier, 27(7), pp. 929–944.
  5. Arkhipov, A. et al. (2018) 'Visual physiology of the layer 4 cortical circuit in silico', PLoS computational biology. Public Library of Science, 14(11), p. e1006535.
  6. Barlow, H. B. and Levick, W. R. (1965) 'The mechanism of directionally selective units in rabbit's retina', The Journal of physiology. Wiley Online Library, 178(3), pp. 477–504.
  7. Beierlein, M. and Connors, B. W. (2002) 'Short-term dynamics of thalamocortical and intracortical synapses onto layer 6 neurons in neocortex', Journal of neurophysiology. American Physiological Society, 88(4), pp. 1924–1932.
  8. Beierlein, M., Gibson, J. R. and Connors, B. W. (2003) 'Two dynamically distinct inhibitory networks in layer 4 of the neocortex', Journal of neurophysiology.
  9. Billeh, Y. N., Iyer, R., Durand, S., Mihalas, S., Arkhipov, A. and de Vries, S. (2019) 'Motion detection model predicts direction-reversing neurons as observed in the mouse visual cortex', COSYNE.
  10. Bock, D. D. et al. (2011) 'Network anatomy and in vivo physiology of visual cortical neurons', Nature. Nature Publishing Group, 471(7337), p. 177.
  11. Bopp, R. et al. (2017) 'An ultrastructural study of the thalamic input to layer 4 of primary motor and primary somatosensory cortex in the mouse', Journal of Neuroscience. Soc Neuroscience, 37(9), pp. 2435–2448.
  12. Borst, A. and Egelhaaf, M. (1989) 'Principles of visual motion detection', Trends in neurosciences. Elsevier, 12(8), pp. 297–306.
  13. Bortone, D. S., Olsen, S. R. and Scanziani, M. (2014) 'Translaminar inhibitory cells recruited by layer 6 corticothalamic neurons suppress visual cortex', Neuron. Elsevier, 82(2), pp. 474–485.
  14. Cauli, B. et al. (1997) 'Molecular and physiological diversity of cortical nonpyramidal cells', Journal of Neuroscience. Soc Neuroscience, 17(10), pp. 3894–3906.
  15. Chevée, M. and Brown, S. P. (2018) 'The development of local circuits in the neocortex: recent lessons from the mouse visual cortex', Current opinion in neurobiology. Elsevier, 53, pp. 103–109.
  16. Cossell, L. et al. (2015) 'Functional organization of excitatory synaptic strength in primary visual cortex', Nature. Nature Publishing Group, 518(7539), p. 399.
  17. Dai, K. et al. (2019) 'The SONATA Data Format for Efficient Description of Large-Scale Network Models', bioRxiv.
  18. Dantzker, J. L. and Callaway, E. M. (2000) 'Laminar sources of synaptic input to cortical inhibitory interneurons and pyramidal neurons', Nature neuroscience. Nature Publishing Group, 3(7), p. 701.
  19. Davison, A. P. et al. (2009) 'PyNN: a common interface for neuronal network simulators', Frontiers in neuroinformatics. Frontiers, 2, p. 11.
  20. Douglas, R. J. et al. (1995) 'Recurrent excitation in neocortical circuits', Science. American Association for the Advancement of Science, 269(5226), pp. 981–985.
  21. Douglas, R. J. and Martin, K. A. C. (2007) 'Recurrent neuronal circuits in the neocortex', Current biology. Elsevier, 17(13), pp. R496–R500.
  22. Douglas, R. J., Martin, K. A. C. and Whitteridge, D. (1989) 'A canonical microcircuit for neocortex', Neural computation. MIT Press, 1(4), pp. 480–488.
  23. Dura-Bernal, S. et al. (2019) 'NetPyNE, a tool for data-driven multiscale modeling of brain circuits', eLife. eLife Sciences Publications Limited, 8, p. e44494.
  24. Durand, S. et al. (2016) 'A comparison of visual response properties in the lateral geniculate nucleus and primary visual cortex of awake and anesthetized mice', Journal of Neuroscience. Soc Neuroscience, 36(48), pp. 12144–12156.
  25. Egger, R. et al. (2015) 'Robustness of sensory-evoked excitation is increased by inhibitory inputs to distal apical tuft dendrites', Proceedings of the National Academy of Sciences. National Acad Sciences, 112(45), pp. 14072–14077.
  26. Fino, E. and Yuste, R. (2011) 'Dense inhibitory connectivity in neocortex', Neuron. Elsevier, 69(6), pp. 1188–1203.
  27. Fu, Y. et al. (2014) 'A cortical circuit for gain control by behavioral state', Cell. Elsevier, 156(6), pp. 1139–1152.
  28. Gewaltig, M.-O. and Diesmann, M. (2007) 'NEST (NEural Simulation Tool)', Scholarpedia, 2(4), p. 1430.
  29. Gleeson, P., Steuber, V. and Silver, R. A. (2007) 'neuroConstruct: a tool for modeling networks of neurons in 3D space', Neuron. Elsevier, 54(2), pp. 219–235.
  30. Gouwens, N. W. et al. (2018) 'Systematic generation of biophysically detailed models for diverse cortical neuron types', Nature communications. Nature Publishing Group, 9(1), p. 710.
  31. Gouwens, N. W. et al. (2019) 'Classification of electrophysiological and morphological types in mouse visual cortex', Nature neuroscience, in press.
  32. Gratiy, S. L. et al. (2018) 'BioNet: A Python interface to NEURON for modeling large-scale networks', PLoS ONE, 13(8). doi: 10.1371/journal.pone.0201630.
  33. Harris, K. D. and Mrsic-Flogel, T. D. (2013) 'Cortical connectivity and sensory coding', Nature. Nature Publishing Group, 503(7474), p. 51.
  34. Harris, K. D. and Shepherd, G. M. G. (2015) 'The neocortical circuit: themes and variations', Nature neuroscience. Nature Publishing Group, 18(2), p. 170.
  35. Hassenstein, B. and Reichardt, W. (1956) 'Systemtheoretische Analyse der Zeit-, Reihenfolgen- und Vorzeichenauswertung bei der Bewegungsperzeption des Rüsselkäfers Chlorophanus', Zeitschrift für Naturforschung B, 11(9–10), pp. 513–524.
  36. Hernando, J. et al. (2013) 'Practical Parallel Rendering of Detailed Neuron Simulations', in EGPGV, pp. 49–56.
  37. Hines, M. L. and Carnevale, N. T. (1997) 'The NEURON simulation environment', Neural computation. MIT Press, 9(6), pp. 1179–1209.
  38. Hofer, S. B. et al. (2011) 'Differential connectivity and response dynamics of excitatory and inhibitory neurons in visual cortex', Nature neuroscience. Nature Publishing Group, 14(8), p. 1045.
  39. Iyer, R. and Mihalas, S. (2017) 'Cortical circuits implement optimal context integration', bioRxiv. Cold Spring Harbor Laboratory, p. 158360.
  40. Ji, X. et al. (2015) 'Thalamocortical innervation pattern in mouse auditory and visual cortex: laminar and cell-type specificity', Cerebral Cortex. Oxford University Press, 26(6), pp. 2612–2625.
  41. Jiang, X. et al. (2013) 'The organization of two new cortical interneuronal circuits', Nature neuroscience. Nature Publishing Group, 16(2), p. 210.
  42. Jiang, X. et al. (2015) 'Principles of connectivity among morphologically defined cell types in adult neocortex', Science. American Association for the Advancement of Science, 350(6264), p. aac9462.
  43. Joglekar, M. R. et al. (2018) 'Inter-areal balanced amplification enhances signal propagation in a large-scale circuit model of the primate cortex', Neuron. Elsevier, 98(1), pp. 222–234.
  44. Juavinett, A. L. et al. (2017) 'Automated identification of mouse visual areas with intrinsic signal imaging', Nature protocols. Nature Publishing Group, 12(1), p. 32.
  45. Jun, J. J. et al. (2017) 'Fully integrated silicon probes for high-density recording of neural activity', Nature. doi: 10.1038/nature24636.
  46. Kalatsky, V. A. and Stryker, M. P. (2003) 'New paradigm for optical imaging: temporally encoded maps of intrinsic signal', Neuron. Elsevier, 38(4), pp. 529–545.
  47. Kerlin, A. M. et al. (2010) 'Broadly tuned response properties of diverse inhibitory neuron subtypes in mouse visual cortex', Neuron. Elsevier, 67(5), pp. 858–871.
  48. Kloc, M. and Maffei, A. (2014) 'Target-specific properties of thalamocortical synapses onto layer 4 of mouse primary visual cortex', Journal of Neuroscience. Soc Neuroscience, 34(46), pp. 15455–15465.
  49. Ko, H. et al. (2011) 'Functional specificity of local synaptic connections in neocortical networks', Nature. Nature Publishing Group, 473(7345), p. 87.
  50. Koch, C. (1999) Biophysics of computation: information processing in single neurons. Oxford University Press.
  51. Koch, C. and Jones, A. (2016) 'Big Science, Team Science, and Open Science for Neuroscience', Neuron. doi: 10.1016/j.neuron.2016.10.019.
  52. Krukowski, A. E. and Miller, K. D. (2001) 'Thalamocortical NMDA conductances and intracortical inhibition can explain cortical temporal tuning', Nature neuroscience. Nature Publishing Group, 4(4), p. 424.
  53. Lee, S. et al. (2010) 'The largest group of superficial neocortical GABAergic interneurons expresses ionotropic serotonin receptors', Journal of Neuroscience. Soc Neuroscience, 30(50), pp. 16796–16808.
  54. Lee, W.-C. A. et al. (2016) 'Anatomy and function of an excitatory network in the visual cortex', Nature. Nature Publishing Group, 532(7599), p. 370.
  55. Lefort, S. et al. (2009) 'The excitatory neuronal network of the C2 barrel column in mouse primary somatosensory cortex', Neuron. Elsevier, 61(2), pp. 301–316.
  56. Levy, R. B. and Reyes, A. D. (2012) 'Spatial profile of excitatory and inhibitory synaptic connectivity in mouse primary auditory cortex', Journal of Neuroscience. Soc Neuroscience, 32(16), pp. 5609–5619.
  57. Lien, A. D. and Scanziani, M. (2013) 'Tuned thalamic excitation is amplified by visual cortical circuits', Nature neuroscience. Nature Publishing Group, 16(9), p. 1315.
  58. Lien, A. D. and Scanziani, M. (2018) 'Cortical direction selectivity emerges at convergence of thalamic synapses', Nature. Nature Publishing Group, p. 1.
  59. Liu, B. et al. (2009) 'Visual receptive field structure of cortical inhibitory neurons revealed by two-photon imaging guided recording', Journal of Neuroscience. Soc Neuroscience, 29(34), pp. 10520–10532.
  60. Ma, W. et al. (2010) 'Visual representations by cortical somatostatin inhibitory neurons—selective but with weak and delayed responses', Journal of Neuroscience. Soc Neuroscience, 30(43), pp. 14371–14379.
  61. Markram, H. et al. (2015) 'Reconstruction and simulation of neocortical microcircuitry', Cell. Elsevier, 163(2), pp. 456–492.
  62. Marshel, J. H. et al. (2012) 'Anterior-posterior direction opponency in the superficial mouse lateral geniculate nucleus', Neuron. Elsevier, 76(4), pp. 713–720.
  63. Martin, C. L. and Chun, M. (2016) 'The BRAIN initiative: building, strengthening, and sustaining', Neuron. Elsevier, 92(3), pp. 570–573.
  64. Mercer, A. et al. (2005) 'Excitatory connections made by presynaptic cortico-cortical pyramidal cells in layer 6 of the neocortex', Cerebral cortex. Oxford University Press, 15(10), pp. 1485–1496.
  65. Morgenstern, N. A., Bourg, J. and Petreanu, L. (2016) 'Multilaminar networks of cortical neurons integrate common inputs from sensory thalamus', Nature neuroscience. Nature Publishing Group, 19(8), p. 1034.
  66. Movshon, J. A., Thompson, I. D. and Tolhurst, D. J. (1978) 'Spatial summation in the receptive fields of simple cells in the cat's striate cortex', The Journal of physiology. Wiley Online Library, 283(1), pp. 53–77.
  67. Muñoz, W. et al. (2017) 'Layer-specific modulation of neocortical dendritic inhibition during active wakefulness', Science. American Association for the Advancement of Science, 355(6328), pp. 954–959.
  68. Narayanan, R. T. et al. (2015) 'Beyond columnar organization: cell type- and target layer-specific principles of horizontal axon projection patterns in rat vibrissal cortex', Cerebral cortex. Oxford University Press, 25(11), pp. 4450–4468.
  69. Nicola, W. and Clopath, C. (2017) 'Supervised learning in spiking neural networks with FORCE training', Nature communications. Nature Publishing Group, 8(1), p. 2208.
  70. Niell, C. M. and Stryker, M. P. (2008) 'Highly selective receptive fields in mouse visual cortex', Journal of Neuroscience. Soc Neuroscience, 28(30), pp. 7520–7536.
  71. Oh, S. W. et al. (2014) 'A mesoscale connectome of the mouse brain', Nature. doi: 10.1038/nature13186.
  72. Oláh, S. et al. (2009) 'Regulation of cortical microcircuits by unitary GABA-mediated volume transmission', Nature. Nature Publishing Group, 461(7268), p. 1278.
  73. Olsen, S. R. et al. (2012) 'Gain control by layer six in cortical circuits of vision', Nature. Nature Publishing Group, 483(7387), p. 47.
  74. Packer, A. M. and Yuste, R. (2011) 'Dense, unspecific connectivity of neocortical parvalbumin-positive interneurons: a canonical microcircuit for inhibition?', Journal of Neuroscience. Soc Neuroscience, 31(37), pp. 13260–13271.
  75. Pfeffer, C. K. et al. (2013) 'Inhibition of inhibition in visual cortex: the logic of connections between molecularly distinct interneurons', Nature neuroscience. Nature Publishing Group, 16(8), p. 1068.
  76. Pillow, J. W. et al. (2005) 'Prediction and decoding of retinal ganglion cell responses with a probabilistic spiking model', Journal of Neuroscience. Soc Neuroscience, 25(47), pp. 11003–11013.
  77. Piscopo, D. M. et al. (2013) 'Diverse visual features encoded in mouse lateral geniculate nucleus', Journal of Neuroscience. Soc Neuroscience, 33(11), pp. 4642–4656.
  78. Potjans, T. C. and Diesmann, M. (2014) 'The cell-type specific cortical microcircuit: relating structure and activity in a full-scale spiking network model', Cerebral cortex. Oxford University Press, 24(3), pp. 785–806.
  79. Reimann, M. W. et al. (2015) 'An algorithm to predict the connectome of neural microcircuits', Frontiers in computational neuroscience. Frontiers, 9, p. 28.
  80. Rossi, L. F., Harris, K. and Carandini, M. (2019) 'Excitatory and inhibitory intracortical circuits for orientation and direction selectivity', bioRxiv. Cold Spring Harbor Laboratory, p. 556795.
  81. Van Santen, J. P. H. and Sperling, G. (1984) 'Temporal covariance model of human motion perception', JOSA A. Optical Society of America, 1(5), pp. 451–473.
  82. Schaub, M. T. et al. (2015) 'Emergence of Slow-Switching Assemblies in Structured Neuronal Networks', PLoS Computational Biology, 11(7). doi: 10.1371/journal.pcbi.1004196.
  83. Schmidt, M. et al. (2018) 'Multi-scale account of the network structure of macaque visual cortex', Brain Structure and Function. Springer, 223(3), pp. 1409–1435.
  84. Scholl, B. et al. (2013) 'Emergence of orientation selectivity in the mammalian visual pathway', Journal of Neuroscience. Soc Neuroscience, 33(26), pp. 10616–10624.
  85. Schoonover, C. E. et al. (2014) 'Comparative strength and dendritic organization of thalamocortical and corticocortical synapses onto excitatory layer 4 neurons', Journal of Neuroscience. Soc Neuroscience, 34(20), pp. 6746–6758.
  86. Schuett, S., Bonhoeffer, T. and Hübener, M. (2002) 'Mapping retinotopic structure in mouse visual cortex with optical imaging', Journal of Neuroscience. Soc Neuroscience, 22(15), pp. 6549–6559.
  87. Schüz, A. and Palm, G. (1989) 'Density of neurons and synapses in the cerebral cortex of the mouse', Journal of Comparative Neurology. Wiley Online Library, 286(4), pp. 442–455.
  88. Seabrook, T. A. et al. (2017) 'Architecture, function, and assembly of the mouse visual system', Annual review of neuroscience. Annual Reviews, 40, pp. 499–538.
  89. Seeman, S. C. et al. (2018) 'Sparse recurrent excitatory connectivity in the microcircuit of the adult mouse and human cortex', eLife. eLife Sciences Publications Limited, 7, p. e37349.
  90. Serbe, E. et al. (2016) 'Comprehensive characterization of the major presynaptic elements to the Drosophila OFF motion detector', Neuron. Elsevier, 89(4), pp. 829–841.
  91. Siegle, J. H. et al. (2017) 'Open Ephys: an open-source, plugin-based platform for multichannel electrophysiology', Journal of neural engineering. IOP Publishing, 14(4), p. 45003.
  92. Song, S. et al. (2005) 'Highly nonrandom features of synaptic connectivity in local cortical circuits', PLoS biology. Public Library of Science, 3(3), p. e68.
  93. Sun, W. et al. (2016) 'Thalamus provides layer 4 of primary visual cortex with orientation- and direction-tuned inputs', Nature neuroscience. Nature Publishing Group, 19(2), p. 308.
  94. Sussillo, D. and Abbott, L. F. (2009) 'Generating coherent patterns of activity from chaotic neural networks', Neuron. Elsevier, 63(4), pp. 544–557.
  95. Szabadics, J., Tamás, G. and Soltesz, I. (2007) 'Different transmitter transients underlie presynaptic cell type specificity of GABAA,slow and GABAA,fast', Proceedings of the National Academy of Sciences. National Acad Sciences, 104(37), pp. 14831–14836.
  96. Tasic, B. et al. (2018) 'Shared and distinct transcriptomic cell types across neocortical areas', Nature. Nature Publishing Group, 563(7729), p. 72.
  97. Teeter, C. et al. (2018) 'Generalized leaky integrate-and-fire models classify multiple neuron types', Nature communications. Nature Publishing Group, 9(1), p. 709.
  98. Teeters, J. L. et al. (2015) 'Neurodata Without Borders: Creating a Common Data Format for Neurophysiology', Neuron. doi: 10.1016/j.neuron.2015.10.025.
  99. Thomson, A. M. et al. (2002) 'Synaptic connections and small circuits involving excitatory and inhibitory neurons in layers 2–5 of adult rat and cat neocortex: triple intracellular recordings and biocytin labelling in vitro', Cerebral cortex. Oxford University Press, 12(9), pp. 936–953.
  100. Thomson, A. M. and Lamy, C. (2007) 'Functional maps of neocortical local circuitry', Frontiers in neuroscience. Frontiers, 1, p. 2.
  101. Traub, R. D. et al. (2005) 'Single-column thalamocortical network model exhibiting gamma oscillations, sleep spindles, and epileptogenic bursts', Journal of neurophysiology. American Physiological Society, 93(4), pp. 2194–2232.
  102. Tremblay, R., Lee, S. and Rudy, B. (2016) 'GABAergic interneurons in the neocortex: from cellular properties to circuits', Neuron. Elsevier, 91(2), pp. 260–292.
  103. Troyer, T. W. et al. (1998) 'Contrast-invariant orientation tuning in cat visual cortex: thalamocortical input tuning and correlation-based intracortical connectivity', Journal of Neuroscience. Soc Neuroscience, 18(15), pp. 5908–5927.
  104. Vélez-Fort, M. et al. (2014) 'The stimulus selectivity and connectivity of layer six principal cells reveals cortical microcircuits underlying visual processing', Neuron. Elsevier, 83(6), pp. 1431–1443.
  105. Vinje, W. E. and Gallant, J. L. (2000) 'Sparse coding and decorrelation in primary visual cortex during natural vision', Science. American Association for the Advancement of Science, 287(5456), pp. 1273–1276.
  106. de Vries, S. E. J. et al. (2018) 'A large-scale, standardized physiological survey reveals higher order coding throughout the mouse visual cortex', bioRxiv. Cold Spring Harbor Laboratory, p. 359513.
  107. Wehmeier, U. et al. (1989) 'Modeling the mammalian visual system', in Methods in neuronal modeling: From synapses to networks. MIT Press, Cambridge, MA.
  108. Wertz, A. et al. (2015) 'Single-cell–initiated monosynaptic tracing reveals layer-specific cortical network modules', Science. American Association for the Advancement of Science, 349(6243), pp. 70–74.
  109. West, D. C. et al. (2005) 'Layer 6 cortico-thalamic pyramidal cells preferentially innervate interneurons and generate facilitating EPSPs', Cerebral cortex. Oxford University Press, 16(2), pp. 200–211.
  110. Yamins, D. L. K. and DiCarlo, J. J. (2016) 'Using goal-driven deep learning models to understand sensory cortex', Nature neuroscience. Nature Publishing Group, 19(3), p. 356.
  111. Yoshimura, Y., Dantzker, J. L. M. and Callaway, E. M. (2005) 'Excitatory cortical neurons form fine-scale functional networks', Nature. Nature Publishing Group, 433(7028), p. 868.
  112. Zemel, R. S. and Sejnowski, T. J. (1998) 'A model for encoding multiple object motions and self-motion in area MST of primate visual cortex', Journal of Neuroscience. Soc Neuroscience, 18(1), pp. 531–547.
  113. Zhao, X. et al. (2013) 'Orientation-selective responses in the mouse lateral geniculate nucleus', Journal of Neuroscience. Soc Neuroscience, 33(31), pp. 12751–12763.
  114. Zhu, W., Shelley, M. and Shapley, R. (2009) 'A neuronal network model of primary visual cortex explains spatial frequency selectivity', Journal of computational neuroscience. Springer, 26(2), pp. 271–287.
  115. Znamenskiy, P. et al. (2018) 'Functional selectivity and specific connectivity of inhibitory neurons in primary visual cortex', bioRxiv. Cold Spring Harbor Laboratory, p. 294835.