ABSTRACT
Pattern separation is a fundamental brain computation that converts small differences in synaptic input patterns into large differences in action potential (AP) output patterns. Pattern separation plays a key role in the dentate gyrus, enabling the efficient storage and recall of memories in downstream hippocampal CA3 networks. Several mechanisms for pattern separation have been proposed, including expansion of coding space, sparsification of neuronal activity, and simple thresholding mechanisms. Alternatively, a winner-takes-all mechanism, in which the most excited cells inhibit all less-excited cells by lateral inhibition, might be involved. Although such a mechanism is computationally powerful, it remains unclear whether it operates in biological networks. Here, we develop a full-scale network model of the dentate gyrus, comprised of granule cells (GCs) and parvalbumin+ (PV+) inhibitory interneurons, based on experimentally determined biophysical cellular properties and synaptic connectivity rules. Our results demonstrate that a biologically realistic principal neuron–interneuron (PN–IN) network model is a highly efficient pattern separator. Mechanistic dissection in the model revealed that a winner-takes-all mechanism by lateral inhibition plays a crucial role in pattern separation. Furthermore, both fast signaling properties of PV+ interneurons and focal GC–interneuron connectivity are essential for efficient pattern separation. Thus, PV+ interneurons are not only involved in basic microcircuit functions, but also contribute to higher-order computations in neuronal networks, such as pattern separation.
INTRODUCTION
A fundamental question in neuroscience is to understand how higher-order computations in the brain are implemented at the level of synapses, neurons, and neuronal networks. A key computation in the brain is pattern separation, a process that converts slightly different input patterns into highly different action potential (AP) output patterns1–3. Pattern separation is thought to play a particularly important role in the memory circuits of the hippocampus, where separation computations at the input layer, the dentate gyrus4, facilitate reliable storage and recall of memories in the downstream layer, the CA3 region2, 5–7. However, although pattern separation has an important function in memory circuits, the underlying mechanisms remain elusive.
In the cerebellum, a circuit where pattern separation is relevant for precise motor control8, synaptic divergence from a small to a large number of neurons and sparsification of activity are key factors9–11. However, as the connectivity between synaptic input and cerebellar granule cells (GCs) is extremely sparse11, generalization to the dentate gyrus is not straightforward. In the olfactory bulb, a circuit where pattern separation converts broad activation of sensory olfactory neurons into specific activation of mitral cells, a winner-takes-all mechanism mediated by lateral inhibition contributes to pattern separation12–21. However, in olfactory circuits lateral inhibition is mediated by specialized dendro-dendritic synapses, and the number of inhibitory GCs exceeds the number of excitatory mitral cells by more than an order of magnitude22. Whether lateral inhibition contributes to pattern separation in the dentate gyrus, where signaling is mediated by axo-dendritic synapses and excitatory neurons greatly outnumber inhibitory cells23, remains unclear24.
We recently found that in the dentate gyrus lateral inhibition by parvalbumin-expressing (PV+) interneurons is more abundant than in any other studied brain region25, consistent with the idea that lateral inhibition implements a winner-takes-all mechanism underlying pattern separation25. However, principal neuron–interneuron (PN–IN) connectivity in the dentate gyrus is highly focal, which seems incompatible with the central idea of that model, that a winner should be able to globally suppress all non-winners. To clarify the role of lateral inhibition in pattern separation in the dentate gyrus, we constructed a network model of this brain area based on experimentally determined biophysical cellular properties and synaptic connectivity rules. In contrast to several previous studies, the model was implemented in full scale. We quantitatively analyzed pattern separation in the model to address three main questions. First, is a PN–IN network with biological properties able to perform efficient pattern separation? Second, what is the role of lateral inhibition in pattern separation? Third, how do the fast signaling properties of GABAergic interneurons26 and the focal PN–IN connectivity25 affect pattern separation? A preliminary account of this work has been published in abstract form27.
RESULTS
A winner-takes-all mechanism is able to decorrelate patterns
Pattern separation is a network computation that converts highly overlapping synaptic input patterns into minimally overlapping AP output patterns. The basic principle is illustrated in Fig. 1a. When two highly overlapping input patterns (A and B) are applied at the input of a neuronal population (Fig. 1a, top), two largely non-overlapping output patterns (A’ and B’) are generated at the output of the population (Fig. 1a, bottom). Quantitatively, the correlation coefficients for the output patterns (Rout = r(A’, B’)) are smaller than the corresponding correlation coefficients of the input patterns (Rin = r(A, B)). Thus, when Rout is plotted against Rin for all pairs of patterns, data points should be located below the identity line (Fig. 1b).
To test these predictions, we used the simplest possible implementation of a winner-takes-all mechanism: an infinite-size network incorporating a thresholding mechanism (Fig. 1c). Under the assumption that input patterns follow a bivariate Gaussian distribution (Fig. 1c), Rout can be analytically computed for any given Rin and average activity level α using Hoeffding’s lemma28 (see Methods). As expected for a pattern separation mechanism, Rout–Rin curves were consistently located below the identity line (Fig. 1d). To assess whether this mechanism also works in finite-size networks, we performed numerical simulations of input and output patterns (Fig. 1e). Real-valued random input patterns were drawn from a bivariate Gaussian distribution. Interestingly, the parameter dependence was more complex than predicted from the analytical solution for the infinite-size network. For small neuronal populations (nCells = 5,000), reducing the activity level α enhanced decorrelation. However, below a certain activity level, the monotonic relation between Rout and Rin was disrupted (Fig. 1e, top right). In contrast, for larger neuronal populations (nCells = 50,000), the monotonic relation between input and output was maintained over a wider range (Fig. 1e, bottom).
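The finite-size simulation can be sketched as follows (a minimal illustration, assuming a simple top-α thresholding rule as the winner-takes-all mechanism; parameter values are chosen for speed, not to match the figure):

```python
import numpy as np

rng = np.random.default_rng(1)

def winner_takes_all(x, alpha):
    # Only the top alpha fraction of cells fire; all others are silenced
    theta = np.quantile(x, 1.0 - alpha)
    return (x > theta).astype(float)

n_cells, alpha, r_in = 50_000, 0.1, 0.9
# Two real-valued input patterns from a bivariate Gaussian with correlation r_in
cov = [[1.0, r_in], [r_in, 1.0]]
a, b = rng.multivariate_normal([0.0, 0.0], cov, size=n_cells).T

a_out = winner_takes_all(a, alpha)
b_out = winner_takes_all(b, alpha)
r_out = np.corrcoef(a_out, b_out)[0, 1]
# Thresholding decorrelates the patterns: r_out falls below r_in
```

Repeating this across a range of r_in values traces out the Rout–Rin curve below the identity line.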
To characterize these complex phenomena, we introduced three quantitative measures of pattern separation (Fig. 1f; see Methods). First, we measured the efficacy of pattern separation Ψ as the normalized area between the data points and the identity line (Fig. 1f, top). Second, we computed the reliability of pattern separation ρ from the rank correlation coefficient of the Rout–Rin data (Fig. 1f, center). Finally, we determined the maximal gain of pattern separation γ from the slope of the input-output correlation for Rin → 1 (Fig. 1f, bottom). For the infinite-size networks, Ψ approached values as high as 0.75 for low values of α. For the finite-size networks, pattern separation efficacy Ψ approached similar values. However, pattern separation reliability ρ was markedly reduced for low levels of α (ρ = 0.74 for α = 0.001 and nCells = 5,000; ρ = 0.94 for α = 0.001 and nCells = 50,000). In conclusion, these results provide a proof-of-principle that a winner-takes-all mechanism is able to separate patterns. However, the performance of the mechanism depends on activity level and network size.
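The three measures can be computed directly from the Rout–Rin data points. The sketch below uses plausible discretizations (trapezoidal area for Ψ, Spearman rank correlation for ρ, and a finite-difference slope for γ); the paper's exact normalizations in the Methods may differ:

```python
import numpy as np

def _trap(y, x):
    # Trapezoidal integral of y over x
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def _rank(x):
    # Rank transform (continuous data, so no tie handling needed)
    r = np.empty(len(x))
    r[np.argsort(x)] = np.arange(len(x))
    return r

def separation_measures(r_in, r_out):
    order = np.argsort(r_in)
    r_in, r_out = np.asarray(r_in)[order], np.asarray(r_out)[order]
    # psi: area between the identity line and the Rout-Rin data,
    # normalized by the area under the identity line
    psi = _trap(r_in - r_out, r_in) / _trap(r_in, r_in)
    # rho: rank (Spearman) correlation of the Rout-Rin data
    rho = float(np.corrcoef(_rank(r_in), _rank(r_out))[0, 1])
    # gamma: finite-difference slope of the curve as r_in approaches 1
    gamma = (r_out[-1] - r_out[-2]) / (r_in[-1] - r_in[-2])
    return psi, rho, gamma

# Example: a separator whose output correlation is the cube of the input
r_in = np.linspace(0.1, 1.0, 50)
psi, rho, gamma = separation_measures(r_in, r_in**3)
```

For this monotonic example, ρ is 1, Ψ is about 0.5, and γ is close to 3 (the derivative of x³ at x = 1).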
A biologically realistic PN–IN network model is an efficient pattern separator
To explore whether the winner-takes-all mechanism of pattern separation works in biologically realistic networks resembling the dentate gyrus, we developed a model of pattern separation based on empirical experimental data (Fig. 2; Supplementary Figure 1; Table 1). The network was created in full-scale, with 500,000 GCs29, represented as leaky integrate-and-fire neurons, and 2,500 PV+ interneurons, implemented as single-compartment conductance-based models (Fig. 2a). Excitatory GC–PV+ interneuron synapses, inhibitory PV+ interneuron–GC synapses, mutual inhibition, and gap junctions were implemented based on the detailed description of functional connectivity obtained by multi-cell recordings25. At the network input, 50,000 entorhinal cortex cells (ECs) were attached23. The EC–GC connectivity was constrained by the width of the entorhinal cortex neuron axons (20% of the dentate gyrus along the longitudinal axis)30 and the number of spines on the dendrites of GCs (~5,000)31,32. As gamma oscillations may contribute to a winner-takes-all mechanism17, an inhibitory conductance was initiated at the onset of each simulation epoch17,33. Since gamma oscillations show high power in the dentate gyrus34–36, this also contributed to the realism of the model.
We then analyzed pattern separation in the biologically realistic, full-size network model. One hundred correlated binary activity patterns were applied to the ECs, and activity was simulated in GCs and interneurons (Fig. 2b; Supplementary Figure 1; Table 1). Whereas all interneurons generated spikes, the average activity in the GC population was only 0.012, indicating sparse coding (Fig. 2b). Input-output correlation curves were located below the identity line, indicating efficient pattern separation in the model (Fig. 2c–e). For the standard network parameters, Ψ was 0.560, indicating a high efficacy of pattern separation (Fig. 2c). ρ was 0.98, implying a high reliability of the pattern separation process (Fig. 2d). Finally, γ was 11.1, suggesting a high gain of pattern separation, i.e. the ability to convert small differences in input patterns into large differences in output patterns (Fig. 2e). Similar results were obtained when the tonic EC–GC drive was replaced by a random train of fast excitatory synaptic waveforms of comparable strength (Supplementary Figure 2). Likewise, efficient pattern separation was also observed in a network model that incorporated feedforward activation of interneurons (Supplementary Figure 3). Finally, efficient pattern separation was observed in a network model with synaptic amplitude fluctuations, i.e. trial-to-trial (“type 1”) variability and synapse-to-synapse (“type 2”) variability (Supplementary Figure 4). In conclusion, a biologically realistic PN–IN network is able to efficiently and reliably perform pattern separation computations.
Pattern separation in the dentate gyrus may facilitate the storage and recall of information in downstream CA3 networks2,5–7. For example, pattern separation may prevent correlated representations from being confused or erased by catastrophic interference18. To test these predictions, we attached our dentate gyrus network model to a single-layer perceptron decoder endowed with backpropagation learning, intended to represent the CA3 network (Fig. 2f)11,37. We trained the perceptron decoder to divide patterns into 10 randomly assigned classes, and assessed the learning rate by plotting the classification error against the number of iterations. To assess the effects of pattern separation, we compared the learning rates of the perceptron decoder for “unprocessed” EC patterns and “processed” GC patterns. Remarkably, learning of the perceptron decoder was substantially faster for the GC patterns than for the corresponding EC patterns (Fig. 2g, h). These results demonstrate that the decorrelation generated by pattern separation in the dentate gyrus can be beneficial for computations in downstream networks, resulting in an improvement in the storage of information.
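A minimal stand-in for such a decoder is sketched below (hypothetical pattern sizes and learning rate; a single softmax layer trained by full-batch cross-entropy gradient descent substitutes for the paper's perceptron with backpropagation learning):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "patterns": sparse binary activity vectors, each assigned one of 10 classes
n_patterns, n_inputs, n_classes = 100, 200, 10
patterns = (rng.random((n_patterns, n_inputs)) < 0.1).astype(float)
labels = rng.integers(0, n_classes, size=n_patterns)
targets = np.eye(n_classes)[labels]

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)   # numerical stability
    ez = np.exp(z)
    return ez / ez.sum(axis=1, keepdims=True)

# Single-layer decoder trained by gradient descent on the cross-entropy loss
w = np.zeros((n_inputs, n_classes))
lr, errors = 1.0, []
for epoch in range(500):
    out = softmax(patterns @ w)
    w += lr * patterns.T @ (targets - out) / n_patterns
    errors.append(float(np.mean(np.argmax(out, axis=1) != labels)))
# Classification error declines over iterations; feeding decorrelated
# (pattern-separated) inputs would be expected to speed this decline
```

Comparing the error-versus-iteration curves for EC-like versus GC-like input patterns reproduces the learning-rate comparison described above.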
Lateral inhibition is a primary mechanism underlying pattern separation
To identify the key mechanisms underlying pattern separation in the network model, we systematically varied the biologically relevant parameters (Fig. 3). First, we changed the amplitude of the excitatory synaptic drive (Iµ) and the inhibitory gamma input (Jgamma) in the network, parameters expected to affect thresholding properties of input-output conversion (Fig. 3a). Pattern separation was highly dependent on both parameters. Contour plot analysis revealed that the combination of small excitatory synaptic drive with small gamma input provided efficient pattern separation (Fig. 3b). As the excitatory drive was increased, a higher inhibitory gamma input was required to maintain the efficacy of pattern separation. Thus, the balance between excitatory drive and inhibitory gamma input determined the efficacy of pattern separation.
Next, we determined how the properties of the synaptic input from ECs via the perforant path shaped pattern separation9–11,38. To address this, we varied the number of entorhinal cells (nEC), the average EC activity level (αEC), and the peak value and width of EC–GC connectivity (cEC–GC and σEC–GC; Fig. 3c)30,39,40. Increasing the number of ECs decreased Ψ, whereas decreasing the number increased it (Fig. 3c, top left). Likewise, increasing the average EC activity decreased Ψ, whereas decreasing the activity had the reverse effect (Fig. 3c, top right). Furthermore, increasing the EC–GC connection probability and the width mostly decreased Ψ, whereas decreasing probability or width led to opposite changes (Fig. 3c, bottom; Supplementary Figure 5). Effects of connection probability and width were similar when the GC drive values were randomly shuffled, indicating that spatial correlations in the input played only a minor role in pattern separation (Supplementary Figure 6). Interestingly, the effects of the nEC:nGC ratio and cEC–GC remained relatively minor even when the parameters were varied over a much wider range in a simplified system comprised of ECs, GCs, and a winner-takes-all mechanism in which the threshold was set according to the specified activity level (Supplementary Figure 7). Thus, the properties of the excitatory synaptic input influence pattern separation, but quantitatively play a relatively minor role.
Finally, we tested the contribution of lateral inhibition to pattern separation in the network model (Fig. 3d). Complete elimination of both excitatory E–I and inhibitory I–E synapses severely impaired pattern separation. Contour plot analysis of Ψ against Iµ and Jgamma in the absence of lateral inhibition revealed that Ψ values > 0.5 were obtained only in a small part of the parameter space (Fig. 3d, left). Furthermore, reducing the strength of either excitatory E–I or inhibitory I–E connections (JE–I or JI–E) substantially reduced pattern separation efficacy Ψ (Fig. 3d, right). Similarly, reducing the peak connectivity or connectivity width of either excitatory E–I or inhibitory I–E connections (cE–I and σE–I, cI–E and σI–E) markedly affected pattern separation (Supplementary Figure 8). Thus, interfering with disynaptic inhibition at multiple levels uniformly decreased the efficacy of pattern separation. Taken together, these results indicate that lateral inhibition plays an essential role in pattern separation.
Fast signaling and focal connectivity of PV+ interneurons are necessary for efficient pattern separation
If lateral inhibition plays a key role for pattern separation in the network, how do functional properties and connectivity rules affect this process? A hallmark property of PV+ GABAergic interneurons is their fast signaling at the level of synaptic input, input-output conversion, and synaptic output26,41–43. To test whether these fast signaling properties are relevant for pattern separation, we systematically varied the corresponding parameters in the model (Fig. 4a, b). Increasing the synaptic delay at excitatory GC–PV+ interneuron input synapses markedly impaired pattern separation (Fig. 4a, b, top left). Similarly, prolonging the time constants of the synaptic currents at excitatory GC–PV+ interneuron synapses reduced pattern separation performance (Fig. 4a, b, top right). Furthermore, increasing the membrane time constant of the PV+ interneurons reduced pattern separation performance (Fig. 4a, b, bottom left). Finally, increasing the synaptic delay at inhibitory PV+ interneuron–GC output synapses substantially impaired pattern separation (Fig. 4a, b, bottom right; Supplementary Figure 9). Thus, the fast signaling properties of PV+ interneurons are critical for the pattern separation process.
The high pattern separation efficacy observed in the network model was surprising, because the model contains focal connectivity rules for both excitatory E–I and inhibitory I–E synapses in the dentate gyrus25. In contrast, an efficient winner-takes-all mechanism may require lateral inhibition with long-range connectivity to ensure that a winner suppresses all non-winners in the network. To resolve this apparent contradiction, we explored the effects of focal E–I and I–E connectivity in the network model (Fig. 4c–e). To address the effects of focal connectivity in isolation, we maintained the total connectivity (i.e. the area under the connection probability–distance curve) through compensatory changes of maximal connection probability (Fig. 4c). Increasing the width of connectivity for either excitatory E–I or inhibitory I–E synaptic connections reduced Ψ; particularly large changes were observed when focal connectivity was fully replaced by global random connectivity (Fig. 4c; Supplementary Figure 9). Thus, focal PN–IN connectivity supported pattern separation more effectively than global connectivity.
Next, we examined the effects of combined changes in the width of excitatory E–I and inhibitory I–E connectivity (Fig. 4d). As in the previous set of simulations, we maintained the total connectivity. Contour plot analysis confirmed that focal connectivity supported pattern separation more effectively than broad connectivity. However, the effects of changes in the width of excitatory E–I and inhibitory I–E connectivity were asymmetric. Specifically, a high Ψ was obtained in a configuration in which the excitatory E–I connectivity was more focal than the inhibitory I–E connectivity (Fig. 4d). This was consistent with experimental observations that excitatory E–I connectivity is more focal than inhibitory I–E connectivity and that lateral inhibition is highly abundant in the circuit25. In conclusion, focal and asymmetric lateral inhibition effectively supported pattern separation.
Why does focal connectivity support pattern separation better than global connectivity? One possibility is that the effects of focal connectivity are a consequence of changes in average synaptic latency, which is shorter in a focally connected network than in an equivalent random network. To test this hypothesis, we examined the effects of changes in axonal AP propagation velocity at excitatory E–I and inhibitory I–E synapses on pattern separation. Slowing AP propagation reduced Ψ, whereas accelerating propagation increased it (Fig. 4e, left). To test whether changes in synaptic latency fully account for the functional differences between focal and random networks, we changed the connectivity width while maintaining the kinetic properties of disynaptic inhibition through compensatory changes of AP propagation velocity (Fig. 4e, right). Notably, changes in propagation velocity almost completely compensated for the effects of changes in connectivity. Thus, focal connectivity and fast biophysical signaling in GC–PV+ interneuron microcircuits play synergistic roles in providing rapid lateral inhibition, an essential requirement for efficient pattern separation.
DISCUSSION
A fundamental question in neuroscience is how the properties of synapses and microcircuits contribute to higher-order computations in the brain. Our network model provides some answers to this central question, for a specific network function (pattern separation) and a specific circuit (dentate gyrus). First, our results provide a proof-of-principle that a biologically realistic network model is a highly efficient pattern separator. Second, our results show that lateral inhibition plays a critical role in the pattern separation process. Finally, they indicate that fast biophysical signaling properties of PV+ interneurons and focal connectivity are essential for efficient pattern separation.
Previous work in the cerebellum suggested that expansion of coding space is a key mechanism underlying pattern separation9–11. Our computational analysis confirms that the connectivity rules between ECs and GCs play an important role in pattern separation. First, the number of ECs is relevant, with a smaller number of neurons resulting in more efficient pattern separation (Fig. 3c). This is consistent with previous models, which emphasized the role of code expansion9–11,38. Second, the average EC–GC connectivity is important, with sparse connectivity enhancing pattern separation performance (Fig. 3c). Although this is also true for the cerebellum11, the mechanisms may be different in the hippocampus, because hippocampal GCs receive a much higher number of synaptic inputs (> 1,000)31,32 compared to GCs in the cerebellum (~5)11. Finally, a mix of structured and random EC–GC connectivity is optimal for the pattern separation mechanism (Supplementary Figure 6). However, the effects of these parameters on pattern separation efficacy are moderate. Thus, the rules of EC–GC connectivity, although clearly important, are not the main determinants of pattern separation in the dentate gyrus.
Previous studies suggested a major role of inhibition in pattern separation in the olfactory bulb of mammals and zebrafish and in the equivalent mushroom body of Drosophila18–20. Furthermore, a role of inhibition has been suggested in the hippocampus24,44. Recent functional connectivity analysis between GCs and interneurons revealed that lateral inhibition is uniquely abundant in the dentate gyrus25. Here, we show that lateral inhibition inserted into a biologically inspired network model generates a powerful winner-takes-all mechanism. Both excitatory E–I synapses and inhibitory I–E synapses are necessary for pattern separation (Fig. 3d). Remarkably, the winner-takes-all mechanism based on lateral inhibition works in a network comprised of a relatively small number of neurons. Winner-takes-all computations are also performed by networks of perceptrons16. However, in such implementations, pattern separation requires a multi-layer structure with a much larger number of neuron-like elements and synaptic connections16. Thus, lateral inhibition represents a compact, resource-efficient implementation of a winner-takes-all computation.
Our results reveal two novel determinants of the efficacy of pattern separation. The first key factor is fast signaling in GABAergic cells. This may have been expected, because sufficient speed is required to ensure that a small number of winners suppresses a large number of non-winners (Fig. 4a, b). Lateral inhibition in the dentate gyrus is primarily mediated by PV+ interneurons, since these interneurons are more densely connected than other interneuron types (such as somatostatin+ or cholecystokinin+ interneurons)25,45. Furthermore, PV+ interneurons express an extensive repertoire of fast biophysical signaling mechanisms at the level of synaptic input, AP initiation, and synaptic output26,41,46. Thus, PV+ interneurons are prime candidates for the neuronal implementation of a winner-takes-all mechanism by lateral inhibition. However, the contribution of other interneuron subtypes cannot be excluded.
The second key factor is focal connectivity between principal neurons and interneurons, which substantially enhances pattern separation. This is counter-intuitive, because a long-range divergent output may be useful to suppress all non-winners15,16. However, our simulations show that networks with focal connectivity are more effective than networks with wide connectivity (Fig. 4c). Furthermore, the pattern separation mechanism works well if the connectivity is asymmetric, with excitatory E–I synapses showing narrower connectivity and inhibitory I–E synapses wider connectivity, as observed experimentally (Fig. 4d)25.
Our results demonstrate that pattern separation can accelerate learning by a downstream perceptron decoder (Fig. 2f–h). Does this also happen in the biological network? Hippocampal GCs connect to CA3 pyramidal neurons via hippocampal mossy fiber synapses5. Because of their large size, these synapses are often viewed as “detonator” synapses47. If the mossy fiber synapses are detonators, one would expect that the decorrelated signals are relayed to the CA3 network, and trigger efficient storage of information in CA3–CA3 synapses, similar to the perceptron6,7. However, recent work suggests that the signaling properties of mossy fiber synapses are more complex, since subdetonation and conditional detonation can coexist with plasticity-dependent full detonation48. If the mossy fiber output were slightly below the detonation threshold, this could introduce another mechanism of synaptic integration and thresholding into the network, which could amplify the degree of pattern separation. Large-scale network simulations including both dentate gyrus and CA3 will be needed to further address this possibility.
Taken together, the present results add to the emerging view that PV+ interneurons are not only involved in basic microcircuit functions, such as feedforward and feedback inhibition, but also contribute to higher-order computations in neuronal networks26. Consistent with this idea, pharmacological analysis revealed that inhibition plays a role in pattern separation in behavioral experiments44. More specific optogenetic and pharmacogenetic strategies will be needed to further delineate the contribution of PV+ interneurons and other interneurons to these processes. Finally, since accumulating evidence suggests that PV+ interneuron dysfunction is associated with brain disorders, including schizophrenia26, it will be important to evaluate whether pattern separation is impaired and how exactly inhibition contributes to circuit dysfunction in these diseases49.
METHODS
Topology of a full-size dentate gyrus network model
The pattern separation network model consists of two layers, the first layer representing the entorhinal cortex, with 50,000 ECs, and the second layer representing the dentate gyrus, with 500,000 GCs and 2,500 PV+ interneurons (INs). The first and second layers were connected by EC–GC synapses, representing the perforant path input to the dentate gyrus. A winner-takes-all mechanism mediated by lateral inhibition was implemented by connecting GCs and INs by excitatory E–I synapses in one direction and by inhibitory I–E synapses in the other direction.
Unlike other models of dentate gyrus circuits24,50, the model was implemented in full size. The number of GCs was chosen to represent the dentate gyrus of one hemisphere in adult laboratory mice29. Full-scale implementation was necessary: (1) to increase the realism of the simulations, (2) to be able to implement measured macroscopic connectivity rules without scaling51, and (3) to simulate sparse coding regimes, which were unstable in smaller networks (Fig. 1e).
The model was designed to incorporate the connectivity rules of PV+ interneurons and GCs in the dentate gyrus25. Other types of interneurons, such as SST+ hilar interneurons with axons associated with the perforant path or CCK+ hilar interneurons with axons associated with the commissural / associational pathway45,52–54, were not explicitly included because of their low connectivity25 and their slower signaling speed26. While the first property of SST+ or CCK+ interneurons would make them less likely to be activated by GC activity, the second property would make them less suitable for the neuronal implementation of a winner-takes-all mechanism17. In total, the conclusions of the present paper were based on 594 full-scale simulations.
Implementation of inhibitory interneurons
Interneurons were implemented as single-compartment, conductance-based neurons to capture the electrical properties of PV+ interneurons. Membrane potential was simulated by solving the equation

Cm dV/dt = Idrive − INa − IK − IL,

where V is membrane potential, t is time, Cm is membrane capacitance, Idrive is the driving current, and INa, IK, and IL represent the sodium, potassium, and leakage currents, respectively. INa was modeled as

INa = gNa m^3 h (V − VNa),

where gNa is the maximal sodium conductance, m is the activation parameter, h is the inactivation parameter, and VNa represents the sodium ion equilibrium potential.
Similarly, IK was modeled according to the equation

IK = gK n^4 (V − VK),

where gK is the maximal potassium conductance, n is the activation parameter, and VK represents the potassium ion equilibrium potential.
Finally, IL was given as

IL = gL (V − VL),

where gL is the leakage conductance and VL is the corresponding reversal potential.
State parameters m, h, and n were computed according to the differential equation

dm/dt = αm (1 − m) − βm m

and equivalent equations for h and n.
αm, αh, αn values and βm, βh, βn values were calculated according to the equations αm = 0.1 ms−1 × −(V+35 mV) / {Exp[−(V+35 mV)/10 mV] − 1}, βm = 4 ms−1 × Exp[−(V+60 mV)/18 mV], αh = 0.35 ms−1 × Exp[−(V+58 mV)/20 mV], βh = 5 ms−1 / {Exp[−(V+28 mV)/10 mV] + 1}, αn = 0.05 ms−1 × −(V+34 mV) / {Exp[−(V+34 mV)/10 mV] − 1}, and βn = 0.625 ms−1 × Exp[−(V+44 mV)/80 mV]55. Single neurons were assumed to be cylinders with diameter and length of 70 µm, giving a surface area of 15,394 µm2 and an input resistance of 65 MΩ42. Neurons showed a rheobase of 39 pA and a fast-spiking, type I AP phenotype56, as characteristic for PV+ interneurons26. Maximal conductance values gNa, gK, and gL were set to 35 mS cm−2, 9 mS cm−2, and 0.1 mS cm−2, respectively55. VNa and VK equilibrium potentials were assumed as 55 mV and −90 mV, respectively. Finally, VL was set to −65 mV.
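The rate functions can be transcribed directly for checking (V in mV, rates in ms−1); the steady-state helper x_inf = α/(α + β) is an added convenience for illustrating the voltage dependence, not part of the model code:

```python
import math

# Voltage-dependent gating rate functions (1/ms); V in mV.
# Note: alpha_m is singular at V = -35 mV, alpha_n at V = -34 mV.
def alpha_m(V): return 0.1 * -(V + 35) / (math.exp(-(V + 35) / 10) - 1)
def beta_m(V):  return 4.0 * math.exp(-(V + 60) / 18)
def alpha_h(V): return 0.35 * math.exp(-(V + 58) / 20)
def beta_h(V):  return 5.0 / (math.exp(-(V + 28) / 10) + 1)
def alpha_n(V): return 0.05 * -(V + 34) / (math.exp(-(V + 34) / 10) - 1)
def beta_n(V):  return 0.625 * math.exp(-(V + 44) / 80)

def x_inf(alpha, beta, V):
    # Steady-state value of a gating variable with rates alpha(V), beta(V)
    return alpha(V) / (alpha(V) + beta(V))
```

As expected, activation m∞ increases and inactivation h∞ decreases with depolarization.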
Implementation of GCs
GCs were implemented as leaky integrate-and-fire (IF) spiking neurons. To enable the integration of excitatory and inhibitory synaptic events with different kinetics, the standard IF model was extended as follows57:
The time course of synaptic excitation was described by the differential equation

de/dt = −ke e,

where ke is the synaptic excitation rate constant, i.e. the inverse of the excitation time constant.
Likewise, the time course of synaptic inhibition was described by the differential equation

di/dt = −ki i,

where ki is the synaptic inhibition rate constant.
Finally, the firing of the neuron was controlled by a membrane state variable v; when v reaches one, the cell fires, which resets the membrane by returning v to 0. The time course of v was determined by the differential equation

dv/dt = −km v + ae e − ai i + idrive,

where km is the inverse of the membrane time constant, ae and ai are the amplitudes of synaptic events, and idrive represents the excitatory drive any given neuron receives57. Excitation time constant, inhibition time constant, and membrane time constant were set to 3, 10, and 15 ms, respectively25,32,43. The refractory period was assumed as 5 ms. Note that in the IF model v, e, i, and idrive are unitless.
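A forward-Euler sketch of these model equations (drive and event amplitudes are illustrative, and the 5-ms refractory period is omitted for brevity):

```python
# Forward-Euler integration of the extended integrate-and-fire GC model
dt = 0.1                               # time step (ms)
k_e, k_i, k_m = 1 / 3, 1 / 10, 1 / 15  # inverse time constants (1/ms)
a_e, a_i = 0.3, 0.5                    # synaptic event amplitudes (illustrative)
i_drive = 0.08                         # tonic excitatory drive (illustrative)

v = e = i = 0.0
spike_times = []
for step in range(int(200 / dt)):      # simulate 200 ms
    t = step * dt
    if step % int(20 / dt) == 0:       # an excitatory synaptic event every 20 ms
        e += 1.0
    e += dt * (-k_e * e)               # de/dt = -ke * e
    i += dt * (-k_i * i)               # di/dt = -ki * i
    v += dt * (-k_m * v + a_e * e - a_i * i + i_drive)
    if v >= 1.0:                       # threshold crossing: spike and reset
        spike_times.append(t)
        v = 0.0
```

With these parameters the steady-state drive alone (idrive/km = 1.2) exceeds threshold, so the cell fires repetitively.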
Implementation of synaptic interconnectivity
Synapses between neurons were placed with distance-dependent probability. Normalized distance was cyclically measured as

d(i, j) = min[abs(i/imax − j/jmax), 1 − abs(i/imax − j/jmax)],

where i and j are indices of pre- and postsynaptic neurons, imax and jmax are the corresponding maximum index values, and abs(r) is the absolute value of a real number r. Connection probability was then computed with a Gaussian function as

p(d) = c × Exp[−d^2/(2 σ^2)],

where c is the maximal connection probability (cE–I, cI–E, cI–I, and cgap, respectively) and σ is the standard deviation representing the width of the distribution (σE–I, σI–E, σI–I, and σgap; Table 1).
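These placement rules can be sketched as follows (hypothetical population sizes; connection probability falls off as a Gaussian of the cyclic normalized distance):

```python
import numpy as np

rng = np.random.default_rng(3)

def connect(n_pre, n_post, c, sigma):
    """Binary connectivity matrix with Gaussian fall-off of connection
    probability over cyclic normalized distance; c is the peak probability,
    sigma the width (both in normalized-distance units)."""
    pre = np.arange(n_pre)[:, None] / n_pre
    post = np.arange(n_post)[None, :] / n_post
    d = np.abs(pre - post)
    d = np.minimum(d, 1.0 - d)          # cyclic (ring) distance
    p = c * np.exp(-d**2 / (2.0 * sigma**2))
    return rng.random((n_pre, n_post)) < p

conn = connect(2000, 100, 0.5, 0.05)
mean_conn = conn.mean()   # ~ c * sigma * sqrt(2*pi) for small sigma
```

Because the total connectivity scales with c × σ, compensatory changes of c (as used in the width-manipulation simulations above) keep the mean number of connections constant.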
Connection probability between ECs and GCs was computed from a Gaussian function with peak connection probability of 0.2 and a standard deviation of 500 µm, to represent the divergent connectivity from the entorhinal cortex to the dentate gyrus30,39,40. Binary activity patterns in upstream ECs were converted into patterns of excitatory drive of GCs. Although this drive was primarily intended to represent input from entorhinal cortex neurons, it may include contributions from other types of excitatory neurons (e.g. mossy cells or CA3 pyramidal cells)50.
Excitatory GC–interneuron synapses, inhibitory interneuron–GC synapses, and inhibitory interneuron–interneuron synapses were incorporated by random placement of NetCon objects in NEURON57; gap junctions were implemented by random placement of pairs of point processes. For excitatory GC–interneuron synapses and inhibitory interneuron–interneuron synapses, synaptic events were simulated using the Exp2Syn class of NEURON. For excitatory GC–interneuron synapses, we assumed τrise,E = 0.1 ms, τdecay,E = 1 ms, and a peak conductance of 8 nS25,41. For inhibitory interneuron–interneuron synapses, we chose τrise,I = 0.1 ms, τdecay,I = 2.5 ms, and a peak conductance of 16 nS25,58,59. For inhibitory interneuron–GC synapses, the synaptic weight was chosen as 0.025 (unitless, because GCs were modelled as IF neurons). For all chemical synapses, synaptic latency varied between 0 and 25 ms according to the distance between pre- and postsynaptic neurons. Gap junction resistance was assumed as 300 MΩ, approximately five times the input resistance of the cell25,58,59. Synaptic reversal potentials were 0 mV for excitation and −65 mV for inhibition. The maximal length of the hippocampal network was assumed as 5 mm, consistent with anatomical descriptions in mice60.
Detailed implementation and simulations
Simulations of network activity were performed using NEURON version 7.6.257 in combination with Mathematica version 11.3.0.0 (Wolfram Research). Simulations were tested on reduced-size networks running on a PC using Windows 10. Full-size simulations were run on x86_64-based shared memory systems (Supermicro or SGI UV 3000 systems) using GNU/Linux (Debian, SLES).
Simulations were performed in four steps (Supplementary Figure 1). First, we computed random binary activity patterns in ECs. To generate input patterns with defined correlations over a wide range, 100 uncorrelated random vectors ai of size nEC were computed, where individual elements are pseudorandom real numbers in the range 0 to 1 and nEC is the number of ECs. Vectors were transformed into correlated vectors as r × a1 + (1 − r) × ai, where a1 is the first random vector and r corresponds to the correlation coefficient. r was varied between 0.1 and 1. Finally, a threshold function f(x) = H(x − θ) was applied to the vectors, where H is the Heaviside function and θ is the threshold that determines the activity level in the pattern. Empirically, 100 input patterns were sufficient to continuously cover the chosen range of input correlations. Unless stated otherwise, the average activity in EC neurons (αEC), i.e. the proportion of spiking cells, was assumed to be 0.1.
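This first step can be sketched as follows; a minimal sketch in which the mixing and thresholding follow the text, while the vector size, the seed, and the exact spacing of r values are illustrative assumptions.

```python
import numpy as np

def make_patterns(n_ec=5000, n_patterns=100, alpha=0.1,
                  rng=np.random.default_rng(7)):
    """Generate binary EC patterns with graded pairwise correlations."""
    a = rng.random((n_patterns, n_ec))           # uncorrelated real vectors
    r_values = np.linspace(0.1, 1.0, n_patterns) # r varied between 0.1 and 1
    patterns = np.empty((n_patterns, n_ec), dtype=int)
    for k, r in enumerate(r_values):
        mixed = r * a[0] + (1.0 - r) * a[k]      # mix in the common vector a1
        theta = np.quantile(mixed, 1.0 - alpha)  # threshold sets activity level
        patterns[k] = mixed > theta              # Heaviside step H(x - theta)
    return patterns
```

Patterns generated with large r are highly correlated with the binarized a1, whereas patterns with small r are nearly uncorrelated with it, covering the desired range of input correlations.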
Second, the patterns in the upstream neurons were converted into patterns of excitatory drive in GCs, by multiplying the activity vectors with the previously computed connectivity matrix between ECs and GCs. Unless otherwise indicated, the mean tonic current value was set to 1.8 times the threshold value of the GCs (i.e. Iµ = 1.8; unitless, since GCs were implemented as IF units; Table 1). In a subset of simulations (Supplementary Figure 2), the tonic current was replaced by Poisson trains of excitatory postsynaptic currents (EPSCs) to convey a higher degree of realism. In these simulations, events were simulated by NetStim processes. In another subset of simulations (Supplementary Figure 3), the tonic excitatory drive computed from the EC activity and the EC–GC connectivity was applied in parallel to GCs and INs after appropriate scaling to represent feedforward inhibition.
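The conversion in this second step amounts to a matrix multiplication followed by scaling of the mean drive to Iµ. A toy sketch (the sizes, seed, and uniform stand-in for the Gaussian connectivity rule are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)
n_ec, n_gc = 200, 1000                      # toy sizes, not the full network
conn = rng.random((n_ec, n_gc)) < 0.2       # stand-in for the Gaussian EC-GC rule
ec_pattern = (rng.random(n_ec) < 0.1).astype(float)  # binary EC activity

drive = ec_pattern @ conn                   # summed excitatory input per GC
drive *= 1.8 / drive.mean()                 # normalize mean drive to I_mu = 1.8
```

In the full model, `conn` is the previously computed EC–GC connectivity matrix rather than a uniform random one.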
Third, we computed the activity of the network for all 100 patterns. Simulations were run with a fixed time step of 5 µs over a total duration of 50 ms. At the beginning of each simulation, random number generators were initialized with defined seeds to ensure reproducibility, and an inhibitory synaptic event of weight 1 (relative to threshold) was simulated in all GCs to mimic recovery from a preceding gamma cycle17. Spikes were detected when the membrane potential reached a value of 1 in the GCs and 0 mV in the interneurons. Subsequently, spike times were displayed in raster plot representations. Furthermore, 100 binary output vectors were computed, by setting the value to 1 if a cell generated ≥1 spike in the time interval 0 ≤ t ≤ 50 ms, and to 0 otherwise.
Finally, Pearson’s correlation coefficients were computed for all pairs of patterns, at both the input (tonic excitatory drive vector) and output level (spike vector) in parallel, and output correlation coefficients (Rout) were plotted against input correlation coefficients (Rin). Pattern separation was quantitatively characterized by three parameters: (1) The efficacy of pattern separation (Ψ) was quantified by an integral-based index, defined as the area between the identity line and the Rout versus Rin curve, normalized by the area under the identity line. Thus, Ψ = 2 ∫₀¹ (x − f(x)) dx, where f(x) represents the input–output correlation function. In practice, f(x) was determined by linear interpolation of data points after sorting by Rin values, averaging of points with same Rin, and including points (0|0) and (1|1). Based on these definitions, a Ψ value close to 1 would correspond to an ideal pattern separator. In contrast, Ψ = 0 would represent pattern identity, whereas Ψ < 0 would indicate pattern completion7. (2) The reliability of pattern separation (ρ) was quantified by the Pearson’s correlation coefficient of the ranks of all Rout versus the ranks of all Rin data points. An ideal pattern separator will maintain the order of pairwise correlations: If a pair of patterns is more similar than another pair at the input level, it will be also more similar at the output level. Thus, for an ideal pattern separator, ρ will be close to 1. (3) Finally, the gain of pattern separation (γ) was quantified from the maximal slope of the Rout versus Rin curve. In practice, this value was determined from the first derivative of a 5th- or 10th-order polynomial function f(x) fit to the Rout versus Rin data points as γ = max f′(x); f(x) was constrained to pass through points (0|0) and (1|1). A γ value >> 1 would correspond to an ideal pattern separator. In contrast, γ = 1 would represent pattern identity, whereas γ < 1 would indicate pattern completion7.
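The three indices can be sketched as follows. A minimal sketch: the averaging of duplicate Rin values is omitted, ties in the rank computation are broken arbitrarily, and the polynomial fit for γ omits the pass-through-(0|0)/(1|1) constraint of the text; the function names are ours.

```python
import numpy as np

def _area(x, d):
    """Trapezoid-rule integral of d over the grid x."""
    return float(np.sum(0.5 * (d[1:] + d[:-1]) * np.diff(x)))

def psi(rin, rout):
    """Efficacy: area between identity line and Rout(Rin), normalized by 1/2."""
    order = np.argsort(rin)
    x = np.concatenate(([0.0], np.asarray(rin, float)[order], [1.0]))
    y = np.concatenate(([0.0], np.asarray(rout, float)[order], [1.0]))
    return _area(x, x - y) / 0.5

def rho(rin, rout):
    """Reliability: Pearson correlation of the ranks (Spearman correlation)."""
    rank = lambda v: np.argsort(np.argsort(v)).astype(float)
    return float(np.corrcoef(rank(rin), rank(rout))[0, 1])

def gamma(rin, rout, deg=5):
    """Gain: maximal slope of a polynomial fit to the Rout vs Rin points."""
    coeff = np.polyfit(rin, rout, deg)
    xs = np.linspace(0.0, 1.0, 1001)
    return float(np.polyval(np.polyder(coeff), xs).max())
```

For identity (Rout = Rin), Ψ = 0 and ρ = 1; a separating curve that lies below the identity line gives Ψ > 0 and a maximal slope γ > 1.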
Analytical analysis of pattern separation
To describe the pattern separation process in a simple mathematical form (Fig. 1c, d), we obtained an analytical solution for the correlation coefficient of a bivariate Gaussian after dichotomization using Hoeffding’s lemma cov(X, Y) = ∫∫ [FX,Y(x, y) − FX(x) FY(y)] dx dy, where cov is the covariance, X and Y are random variables, FX,Y denotes the joint probability function, and FX, FY represent the marginal probability functions28,61,62. To simulate finite-size effects (Fig. 1e, f), vectors of real random numbers were drawn from a bivariate Gaussian distribution with defined correlation Rin, converted into vectors of binary numbers by applying a threshold, and subjected to correlation analysis, resulting in the correlation coefficient Rout. The threshold was chosen to reach a previously specified average activity level α, and the size of the vector varied in the range 5,000 to 50,000. Furthermore, in a subset of simulations (Supplementary Figure 7), activity was simulated in ECs, computed into drive patterns in GCs by multiplication with the EC–GC connectivity matrix, and directly converted into binary activity values in GCs by applying a threshold corresponding to the desired activity level α. This simplified approach permitted systematic variation of model parameters (e.g. cell numbers and connection probabilities) over a wide range.
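The finite-size simulation can be sketched as follows; a minimal sketch in which the seed and default sizes are illustrative assumptions.

```python
import numpy as np

def dichotomized_correlation(r_in, n=50000, alpha=0.1,
                             rng=np.random.default_rng(11)):
    """Draw correlated Gaussian pairs, binarize at thresholds giving
    average activity alpha, and return the binary output correlation."""
    cov = [[1.0, r_in], [r_in, 1.0]]
    x, y = rng.multivariate_normal([0.0, 0.0], cov, size=n).T
    thx = np.quantile(x, 1.0 - alpha)     # per-vector thresholds set the
    thy = np.quantile(y, 1.0 - alpha)     # target activity level alpha
    bx, by = (x > thx).astype(float), (y > thy).astype(float)
    return float(np.corrcoef(bx, by)[0, 1])
```

Dichotomization at a sparse activity level systematically reduces the correlation, so Rout falls below Rin; this is the thresholding contribution to pattern separation analyzed in Fig. 1.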
Analysis of input and output patterns by a perceptron decoder
To test whether the pattern separation process resulted in a gain of function that could be exploited by downstream networks, we analyzed input and output patterns with a perceptron decoder (Fig. 2f–h)11,37. The perceptron decoder was trained to categorize 100 input and output patterns into 10 random classes. The decoder consisted of a single layer, and a backpropagation learning algorithm was used to iteratively adjust the weights. Initially, all weights were arbitrarily set to 0.1. The learning rate was assumed as 5 × 10⁻⁴. In each learning iteration, weights were adjusted according to the deviations between predicted and observed classifications. In total, 5,000 learning iterations were run, and the learning speed was quantified as the number of iterations at which the root mean square error reached a value of 0.1 or 0.05.
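A toy sketch of the decoder training loop follows: a single-layer linear readout trained by delta-rule (gradient) updates, which is the single-layer instance of the backpropagation update described above. The data sizes and function name are illustrative assumptions, not the authors' code.

```python
import numpy as np

def train_decoder(patterns, labels, n_classes=10, lr=5e-4, n_iter=5000):
    """Single-layer decoder; returns trained weights and per-iteration RMSE."""
    n_pat, n_cells = patterns.shape
    targets = np.eye(n_classes)[labels]        # one-hot class targets
    w = np.full((n_cells, n_classes), 0.1)     # all weights start at 0.1
    rmse = []
    for _ in range(n_iter):
        err = targets - patterns @ w           # deviation predicted vs observed
        w += lr * patterns.T @ err / n_pat     # adjust weights accordingly
        rmse.append(float(np.sqrt(np.mean(err ** 2))))
    return w, rmse
```

The learning speed would then be read off as the first iteration at which the recorded RMSE falls below 0.1 or 0.05.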
Conventions
Throughout the paper, model parameters given in Table 1 are referred to as standard parameters. In summary bar graphs, black bars indicate these standard values, light blue bars reduced values, and light red bars increased values in comparison to the default parameter set. Throughout the paper, the term “pattern” is defined as a vector of real numbers (for excitatory drive) or a vector of binary values (for activity, 1 if the cell fires, 0 otherwise). In both cases, the vector length corresponds to the number of cells.
Data and code availability
Original data, analysis programs, and computer code for network simulations will be provided by the corresponding author (P.J.) upon request. Simulation code will be updated according to new experimental information about connectivity (e.g. EC–GC connectivity rules). Furthermore, IF models of GCs and single-compartment models of interneurons will be gradually replaced by more detailed models (conductance-based models and multi-compartmental models, respectively).
Competing interest
The authors declare no conflict of interest.
Author contributions
S.J.G. and P.J. designed the model and the layout of the simulations, A.S. performed large-scale simulations on computer clusters, C.E., X.Z., and B.A.S. provided experimental data, S.J.G. and P.J. analyzed data, and P.J. wrote the paper. All authors jointly revised the paper.
ACKNOWLEDGMENTS
We thank Drs. Ad Aertsen, Arnd Roth, and Federico Stella for critically reading earlier versions of the manuscript. We are grateful to Florian Marr and Christina Altmutter for excellent technical assistance, Eleftheria Kralli-Beller for manuscript editing, and the Scientific Service Units of IST Austria for efficient support. Finally, we thank Drs. Ted Carnevale, Laszlo Erdös, Michael Hines, Nancy Kopell, Duane Nykamp, and Dominik Schröder for useful discussions, and Rainer Friedrich and Simon Wiechert for sharing unpublished data. Parts of the results presented were obtained using the Mach2 Interuniversity Shared Memory Supercomputer (Linz, Austria). This project received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 692692) and the Fonds zur Förderung der wissenschaftlichen Forschung (Z 312-B27, Wittgenstein award), both to P.J.