## Abstract

Cortical neurons are characterized by irregular firing and a broad distribution of rates. The balanced state model explains these observations with a cancellation of mean excitatory and inhibitory currents, which makes fluctuations drive firing. In networks of neurons with current-based synapses, the balanced state emerges dynamically if coupling is strong, i.e. if the mean number of synapses per neuron *K* is large and synaptic efficacy is of order 1/√*K*. When synapses are conductance-based, current fluctuations are suppressed when coupling is strong, questioning the applicability of the balanced state idea to biological neural networks. We analyze networks of strongly coupled conductance-based neurons and show that asynchronous irregular activity and broad distributions of rates emerge if synaptic efficacy is of order 1/log(*K*). In such networks, unlike in the standard balanced state model, current fluctuations are small and firing is maintained by a drift-diffusion balance. This balance emerges dynamically, without fine tuning, if inputs are smaller than a critical value, which depends on synaptic time constants and coupling strength, and is significantly more robust to connection heterogeneities than the classical balanced state model. Our analysis makes experimentally testable predictions of how the network response properties should evolve as input increases.

## I. INTRODUCTION

Each neuron in cortex receives inputs from hundreds to thousands of pre-synaptic neurons. If these inputs were to sum to produce a large net current, the central limit theorem implies that fluctuations should be small compared to the mean, leading to regular firing, as observed during *in vitro* experiments under constant current injection [1, 2]. Cortical activity, however, is highly irregular, with a coefficient of variation of interspike intervals (CV of ISI) close to one [3, 4]. To explain the observed irregularity, it has been proposed that neural networks operate in a balanced state, where strong feed-forward and recurrent excitatory inputs are canceled by recurrent inhibition and firing is driven by fluctuations [5, 6]. At the single neuron level, in order for this state to emerge, input currents must satisfy two constraints. First, excitatory and inhibitory currents must be fine tuned so as to produce an average input below threshold. Specifically, if *K* and *J* represent the average number of input connections per neuron and synaptic efficacy, respectively, the difference between excitatory and inhibitory presynaptic inputs must be of order 1/(*KJ*). Second, input fluctuations should be large enough to drive firing.

It has been shown that the balanced state emerges dynamically (without fine tuning) in randomly connected networks of binary units [7, 8] and networks of current-based spiking neurons [9, 10], provided that coupling is strong, and recurrent inhibition is powerful enough to counterbalance instabilities due to recurrent excitation. However, these results have all been derived assuming that the firing of a presynaptic neuron produces a fixed amount of synaptic current, hence neglecting the dependence of synaptic current on the membrane potential, a key aspect of neuronal biophysics. In real synapses, synaptic inputs are mediated by changes in conductance, due to opening of synaptic receptor-channels on the membrane, and synaptic currents are proportional to the product of synaptic conductance and a driving force which depends on the membrane potential. Models that incorporate this description are referred to as ‘conductance-based synapses’.

Large synaptic conductances have been shown to have major effects on the stationary [11] and dynamical [12] response of single cells, and form the basis of the ‘high-conductance state’ [13–19] that has been argued to describe well *in vivo* data [20–22] (but see [23] and Discussion). At the network level, conductance modulation plays a role in controlling signal propagation [24], input summation [25], and firing statistics [26]. However, most of the previously mentioned studies rely exclusively on numerical simulations, and in spite of a few attempts at analytical descriptions of networks of conductance-based neurons [17, 27–31], an understanding of the behavior of such networks when coupling is strong is still lacking.

Here, we investigate networks of strongly coupled conductance-based neurons. We find that, for synapses of order 1/√*K*, fluctuations are too weak to sustain firing, questioning the relevance of the balanced state idea to cortical dynamics. Our analysis, on the other hand, shows that stronger synapses (of order 1/log(*K*)) generate irregular firing when coupling is strong. We characterize the properties of networks with such a scaling, showing that they match properties observed in cortex, and discuss constraints induced by synaptic time constants. The model generates qualitatively different predictions compared to the current-based model, which could be tested experimentally.

## II. MODELS OF SINGLE NEURON AND NETWORK DYNAMICS

### Membrane potential dynamics

We study the dynamics of networks of leaky integrate-and-fire (LIF) neurons with conductance-based synaptic inputs. The membrane potential *V*_{j} of the *j*-th neuron in the network follows the equation

$$C\,\frac{dV_j}{dt} = g_L\,(E_L - V_j) + g_E^j(t)\,(E_E - V_j) + g_I^j(t)\,(E_I - V_j), \tag{1}$$

where *C* is the neuronal capacitance; *E*_{L}, *E*_{E} and *E*_{I} are the reversal potentials of the leak, excitatory and inhibitory currents; while *g*_{L}, *g*_{E}^{j} and *g*_{I}^{j} are the leak, excitatory and inhibitory conductances. Assuming instantaneous synapses (the case of finite synaptic time constants is discussed at the end of the results section), excitatory and inhibitory conductances are given by

$$g_{E,I}^j(t) = C \sum_{m \in E,I} a_{jm}\, S_m(t), \qquad S_m(t) = \sum_k \delta(t - t_k^m). \tag{2}$$

In Eq. (2), *a*_{jm} are dimensionless measures of synaptic strength between neuron *j* and neuron *m*, and *S*_{m}(*t*) represents the sum of all the spikes generated at times *t*_{k}^{m} by neuron *m*; *τ*_{j} = *C*/*g*_{L} is the single neuron membrane time constant. Every time the membrane potential *V*_{j} reaches the firing threshold *θ*, the *j*-th neuron emits a spike, its membrane potential is set to a reset *V*_{r}, and stays at that value for a refractory period *τ*_{rp}; after this time the dynamics resumes, following Eq. (1).

We use *a*_{jm} = *a* (*a g*) for all excitatory (inhibitory) synapses. In the homogeneous case, each neuron receives synaptic inputs from *K*_{E} = *K* (*K*_{I} = *γK*) excitatory (inhibitory) cells. In the network case, each neuron receives additional *K*_{X} = *K* excitatory inputs from an external population firing with Poisson statistics with rate *ν*_{X}. We use excitatory and inhibitory neurons with the same biophysical properties, hence the above assumptions imply that the firing rates of excitatory and inhibitory neurons are equal, *ν* = *ν*_{E} = *ν*_{I}. Models taking into account the biophysical diversity between the excitatory and inhibitory populations are discussed in Appendix D. When heterogeneity is taken into account, the above defined values of *K*_{E,I,X} represent the means of Gaussian distributions. We use the following single neuron parameters: *τ*_{rp} = 2ms, *θ* = −55mV, *V*_{r} = −65mV, *E*_{E} = 0mV, *E*_{I} = −75mV, *E*_{L} = −80mV, *τ*_{j} = *τ*_{L} = 20ms. We explore various scalings of *a* with *K* and, in all cases, we assume that *a* ≪ 1. When *a* ≪ 1, an incoming spike from an excitatory presynaptic neuron produces a jump in the membrane potential of amplitude *a*(*E*_{E} - *V*), where *V* is the voltage just before spike arrival. In cortex, *V* ~ −60mV and average amplitudes of post-synaptic potentials are of the order of 0.5 − 1.0mV [32–38]. Thus, we expect realistic values of *a* to be of the order of 0.01.
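As an illustration of the model just defined, the following is a minimal simulation sketch of Eq. (1) with instantaneous synapses. The single neuron parameters are those given in the text; *K*, *ν*_{X}, *g* and the pooled input rates are example choices, not values fitted in the paper.

```python
import numpy as np

# Minimal sketch of the conductance-based LIF of Eq. (1) with instantaneous
# synapses. An excitatory spike moves V by a*(E_E - V); an inhibitory spike by
# a*g*(E_I - V), as described in the text. K, nu_X, g are example values.

def simulate_lif_cond(a=0.01, g=12.0, K=1000, gamma=0.25, nu_X=5.0,
                      T=2.0, dt=1e-4, seed=0):
    rng = np.random.default_rng(seed)
    tau_L, theta, V_r = 20e-3, -55.0, -65.0
    E_E, E_I, E_L = 0.0, -75.0, -80.0
    tau_rp = 2e-3
    V, t_last, spikes = E_L, -np.inf, []
    for i in range(int(T / dt)):
        t = i * dt
        if t - t_last < tau_rp:                 # absolute refractory period
            continue
        V += dt / tau_L * (E_L - V)             # leak term of Eq. (1)
        n_E = rng.poisson(K * nu_X * dt)        # pooled excitatory input spikes
        n_I = rng.poisson(gamma * K * nu_X * dt)
        for _ in range(n_E):
            V += a * (E_E - V)                  # conductance jump toward E_E
        for _ in range(n_I):
            V += a * g * (E_I - V)              # conductance jump toward E_I
        if V >= theta:                          # threshold crossing: spike, reset
            spikes.append(t)
            V, t_last = V_r, t
    return np.array(spikes), V

spikes, V_end = simulate_lif_cond()
```

Because each jump moves *V* a fraction of the distance toward a reversal potential, the voltage is automatically confined between the inhibitory/leak and excitatory reversal potentials, a qualitative difference from current-based synapses.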

### Diffusion and effective time constant approximations

We assume that each cell receives projections from a large number of cells (*K* ≫ 1), neurons are sparsely connected and fire approximately as Poisson processes, each incoming spike provides a small change in conductance (*a* ≪ 1), and that temporal correlations in synaptic inputs can be neglected. Under these assumptions, we can use the diffusion approximation, and approximate the conductances as

$$\frac{g_E^j(t)}{C} = a K r_E + a \sqrt{K r_E}\,\zeta_E(t), \qquad \frac{g_I^j(t)}{C} = a g \gamma K r_I + a g \sqrt{\gamma K r_I}\,\zeta_I(t), \tag{3}$$
where *r*_{E} and *r*_{I} are the firing rates of pre-synaptic E and I neurons, respectively, and *ζ*_{E} and *ζ*_{I} are independent Gaussian white noise terms with zero mean and unit variance density. In the single neuron case, we take *r*_{E} = *ν*_{X}, *r*_{I} = *ην*_{X} where *η* represents the ratio of I/E input rate. In the network case, *r*_{E} = *ν*_{X} + *ν*, *r*_{I} = *ν* where *ν*_{X} is the external rate, while *ν* is the firing rate of excitatory and inhibitory neurons in the network, determined self-consistently (see below). We point out that, for some activity levels, the assumption of Poisson pre-synaptic firing made in the derivation of Eq. (3) breaks down, as neurons in the network show interspike intervals with CV significantly different from one (e.g. see Fig. 3C). However, comparisons between mean field results and numerical simulations (see Appendix E) show that neglecting non-Poissonianity (as well as other contributions discussed above Eq. (3)) generates quantitative but not qualitative discrepancies, with magnitude that decreases with coupling strength. Moreover, in Appendix B, we show that if *a* ≪ 1 the firing of neurons in the network matches that of a Poisson process with refractory period and hence, when *ν* ≪ 1/*τ*_{rp}, deviations from Poissonianity become negligible.
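The first two moments underlying the diffusion approximation can be checked directly: pooling *K* Poisson trains at rate *r*_{E}, the integral of *g*_{E}/*C* over a bin of width *dt* has mean *aKr*_{E}*dt* and standard deviation *a*√(*Kr*_{E}*dt*). In the sketch below, *a*, *K* and *r*_{E} are example values.

```python
import numpy as np

# Check of the binned statistics behind the diffusion approximation: the pooled
# presynaptic spike count per bin is Poisson(K*r_E*dt), so the per-bin
# conductance increment a*count has mean a*K*r_E*dt and std a*sqrt(K*r_E*dt).

rng = np.random.default_rng(1)
a, K, r_E = 0.01, 2000, 5.0
dt, n_bins = 1e-3, 200_000
counts = rng.poisson(K * r_E * dt, size=n_bins)  # pooled spikes per bin
increments = a * counts                          # conductance increment (units of C)

emp_mean, emp_std = increments.mean(), increments.std()
th_mean, th_std = a * K * r_E * dt, a * np.sqrt(K * r_E * dt)
```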

Using the diffusion approximation, Eq. (1) reduces to

$$\tau \frac{dV}{dt} = -(V - \mu) + \sigma(V)\,\sqrt{\tau}\,\zeta(t), \tag{4}$$

where *ζ* is a white noise term, with zero mean and unit variance density, while

$$\frac{1}{\tau} = \frac{1}{\tau_L} + aK\,(r_E + g\gamma\, r_I), \qquad \mu = \tau\left[\frac{E_L}{\tau_L} + aK\,(r_E E_E + g\gamma\, r_I E_I)\right], \qquad \sigma^2(V) = \tau\, a^2 K \left[r_E\,(E_E - V)^2 + g^2\gamma\, r_I\,(E_I - V)^2\right]. \tag{5}$$

In Eq. (4), *τ* is an effective membrane time constant, while *µ* and *σ*^{2}(*V*) represent the average and the variance of the synaptic current generated by incoming spikes, respectively.

The noise term in Eq. (4) can be decomposed into an additive and a multiplicative component. The latter has an effect on membrane voltage statistics that is of the same order as the contribution coming from synaptic shot noise [39], a factor which has been neglected in deriving Eq. (3). Therefore, for a consistent analysis, we neglect the multiplicative component of the noise in the above derivation; this leads to an equation of the form of Eq. (4) with the substitution

$$\sigma^2(V) \;\to\; \sigma^2 \equiv \sigma^2(\mu). \tag{6}$$

This approach has been termed the effective time constant approximation [39]. Note that the substitution of Eq. (6) greatly simplifies mathematical expressions but it is not a necessary ingredient for the results presented in this paper. In fact, all our results can be obtained without having to resort to this approximation (see Appendix A, B and D).
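A short sketch of the mean-field quantities entering Eq. (4) under the effective time constant approximation (*σ* evaluated at *V* = *µ*). The expressions below follow from the mean and variance of the conductance-based input described above; *g*, *γ*, *η* and the input rates are example values.

```python
import numpy as np

# Mean-field quantities entering Eq. (4) in the effective time constant
# approximation (sigma evaluated at V = mu). Single neuron case, with
# r_E = nu_X and r_I = eta*nu_X; parameter values are examples.

def eff_params(a, K, nu_X, g=12.0, gamma=0.25, eta=1.0,
               tau_L=20e-3, E_L=-80.0, E_E=0.0, E_I=-75.0):
    r_E, r_I = nu_X, eta * nu_X
    inv_tau = 1.0 / tau_L + a * K * (r_E + g * gamma * r_I)
    tau = 1.0 / inv_tau                          # effective time constant
    mu = tau * (E_L / tau_L + a * K * (r_E * E_E + g * gamma * r_I * E_I))
    var = tau * a**2 * K * (r_E * (E_E - mu)**2
                            + g**2 * gamma * r_I * (E_I - mu)**2)
    return tau, mu, np.sqrt(var)

tau1, mu1, s1 = eff_params(a=0.01, K=1_000, nu_X=5.0)
tau2, mu2, s2 = eff_params(a=0.01, K=10_000, nu_X=5.0)    # stronger coupling
tau3, mu3, s3 = eff_params(a=0.005, K=10_000, nu_X=5.0)   # smaller a
```

At strong coupling, *τ* shrinks like 1/(*aK*) while *µ* stays pinned between the reversal potentials and *σ*² becomes proportional to *a*, which is the behavior exploited in the following sections.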

### Current-based model

The previous definitions and results translate directly to current-based models, with the only exception that the dependency of excitatory and inhibitory synaptic currents on the membrane potential is neglected (see [10] for more details). Therefore, Eq. (1) becomes

$$C \frac{dV_j}{dt} = g_L\,(E_L - V_j) + I_E^j(t) + I_I^j(t), \tag{7}$$

where *I*_{E}^{j} and *I*_{I}^{j} represent the excitatory and inhibitory input currents. Starting from Eq. (7), making assumptions analogous to those discussed above and using the diffusion approximation [10], the dynamics of current-based neurons is given by an equation of the form of Eq. (4) with

$$\tau = \tau_L, \qquad \mu = E_L + K J \tau_L\,(r_E - g\gamma\, r_I), \qquad \sigma^2 = K J^2 \tau_L\,(r_E + g^2\gamma\, r_I). \tag{8}$$

Note that, unlike what happens in conductance-based models, *τ* is a fixed parameter and does not depend on network firing rate or external drive. Another difference between the current-based and conductance-based models is that in the latter, but not in the former, *σ* depends on *V*; as we discussed above, this difference is neglected in the main text, where we use the effective time constant approximation.

## III. BEHAVIOR OF SINGLE NEURON RESPONSE FOR LARGE *K*

We start our analysis by investigating the effects of synaptic conductance on single neuron response. We consider a neuron receiving *K* (*γK*) excitatory (inhibitory) inputs, each with synaptic efficacy *J* (*gJ*), from cells firing with Poisson statistics at rates *r*_{E} = *ν*_{X} and *r*_{I} = *ην*_{X}, and analyze its membrane potential dynamics in the frameworks of current-based and conductance-based models. In both models, the membrane potential *V* follows a stochastic differential equation of the form of Eq. (4); differences emerge in the dependency of *τ*, *µ* and *σ* on the parameters characterizing the connectivity, *K* and *J*. In particular, in the current-based model, the different terms in Eq. (8) can be written as

$$\tau = \tau_L, \qquad \mu = E_L + K J \nu_X \tau_L\,(1 - g\gamma\eta), \qquad \sigma^2 = K J^2 \nu_X \tau_L\,(1 + g^2\gamma\eta), \tag{9}$$

where *τ*_{L}, *E*_{L} and the factors multiplying *KJ* and *KJ*^{2} are independent of *J* and *K*. In the conductance-based model, the efficacy of excitatory and inhibitory synapses depends on the membrane potential as *J*_{E,I} = *a*(*E*_{E,I} − *V*); the different terms in Eq. (4), under the assumption that *Ka* ≫ 1, become of order

$$\tau \sim \frac{1}{aK\nu_X}, \qquad \mu \sim O(1), \qquad \sigma^2 \sim a.$$

Here, all these terms depend on parameters in a completely different way than in the current-based case. As we will show below, these differences drastically modify how the neural response changes as *K* and *J* are varied and hence the size of *J* ensuring finite response for a given value of *K*.

The dynamics of a current-based neuron is shown in Fig. 1Ai, with parameters leading to irregular firing. Because of the chosen parameter values, the mean excitatory and inhibitory inputs approximately cancel each other, generating subthreshold average input and fluctuation-driven spikes, which leads to irregularity of firing. If all parameters are fixed while *K* is increased (*J* ~ *K*^{0}), the response changes drastically (Fig. 1Aii), since the mean input becomes much larger than threshold and firing becomes regular. To understand this effect, we analyze how terms in Eq. (4) are modified as *K* increases. The evolution of the membrane potential in time is determined by two terms: a drift term −(*V* − *µ*)/*τ*, which drives the membrane potential toward its mean value *µ*, and a noise term *σζ*(*t*)/√*τ*, which leads to fluctuations around this mean value. Increasing *K* modifies the equilibrium value *µ* of the drift force and the input noise, which increase proportionally to *KJ*(1 − *γgη*) and *KJ*^{2}(*γg*^{2}*η* + 1), respectively (Fig. 1B,C).

This observation suggests that, to preserve irregular firing as *K* is increased, two ingredients are needed. First, the rates of excitatory and inhibitory inputs must be fine tuned to maintain a mean input below threshold; this can be achieved by choosing *γgη* − 1 ~ 1/*KJ*. Second, the amplitude of input fluctuations should be preserved; this can be achieved by scaling synaptic efficacy as *J* ~ 1/√*K*. Once these two conditions are met, irregular firing is restored (Fig. 1Aiii). Importantly, in a network with *J* ~ 1/√*K*, irregular firing emerges without fine tuning, since rates dynamically adjust to balance excitatory and inhibitory inputs and maintain mean inputs below threshold [7, 8].
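The two ingredients above can be verified with a few lines of arithmetic; *J*_{0}, *ν*, *g* and *γ* below are example values.

```python
import numpy as np

# Current-based input statistics: mean ~ K*J*nu*(1 - gamma*g*eta) and
# variance ~ K*J^2*nu*(1 + gamma*g^2*eta). With J = J0/sqrt(K) the variance is
# K-independent, and tuning gamma*g*eta = 1 + 1/(K*J) keeps the mean O(1).

def input_stats(K, J, nu=5.0, g=4.0, gamma=0.25, eta=None):
    if eta is None:
        # balanced tuning: gamma*g*eta - 1 ~ 1/(K*J)  =>  mean input of order one
        eta = (1.0 + 1.0 / (K * J)) / (gamma * g)
    mean = K * J * nu * (1.0 - gamma * g * eta)
    var = K * J**2 * nu * (1.0 + gamma * g**2 * eta)
    return mean, var

J0 = 1.0
m1, v1 = input_stats(K=1_000, J=J0 / np.sqrt(1_000))
m2, v2 = input_stats(K=100_000, J=J0 / np.sqrt(100_000))
# for comparison: J held fixed while K grows 100-fold
m_fix, v_fix = input_stats(K=100_000, J=J0 / np.sqrt(1_000))
```

With the 1/√*K* scaling, both the mean and the variance are essentially unchanged across a 100-fold increase in *K*, while at fixed *J* the variance grows by the same factor of 100.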

We now show that this solution does not work once synaptic conductance is taken into account. The dynamics of a conductance-based neuron in response to the inputs described above is shown in Fig. 1Di. As in the current-based neuron, it features irregular firing, with mean input below threshold and spiking driven by fluctuations, and firing becomes regular for larger *K*, leaving all other parameters unchanged (Fig. 1Dii). However, unlike the current-based neuron, input remains below threshold at large *K*; regular firing is produced by large fluctuations, which saturate response and produce spikes that are regularly spaced because of the refractory period. These observations can be understood looking at the equation for the membrane potential dynamics: increasing *K* leaves invariant the equilibrium value of the membrane potential *µ* but increases the drift force and the input noise amplitude as *Ka* and *a*√*K*, respectively (Fig. 1E,F). Since the equilibrium membrane potential is fixed below threshold, response properties are determined by the interplay between drift force and input noise, which have opposite effects on the probability of spike generation. The response saturation observed in Fig. 1Dii shows that, as *K* increases at fixed *a*, fluctuations dominate over drift force. On the other hand, the scaling *a* ~ 1/√*K* leaves the amplitude of fluctuations unchanged, but generates a restoring force of order √*K* (Fig. 1E) which dominates and completely abolishes firing at strong coupling (Fig. 1Diii).

Results in Fig. 1 show that the response of a conductance-based neuron when *K* is large depends on the balance between drift force and input noise. The scalings *a* ~ *O*(1) and *a* ~ 1/√*K* each let one of the two contributions dominate, suggesting that an intermediate scaling could keep a balance between them. Below we derive such a scaling, showing that it preserves firing rate and CV of ISI when *K* becomes large.

## IV. A SCALING RELATION THAT PRESERVES SINGLE NEURON RESPONSE FOR LARGE *K*

We analyze under what conditions the response of a single conductance-based neuron is preserved when *K* is large. For a LIF neuron described by Eqs. (4, 5, 6), the single cell transfer function, i.e. the dependency of the firing rate *ν* on the external drive *ν*_{X}, is given by [40, 41]

$$\nu = \left[\tau_{rp} + \tau \sqrt{\pi} \int_{u_{min}}^{u_{max}} e^{u^2} \left(1 + \mathrm{erf}(u)\right) du\right]^{-1}, \tag{10}$$

with

$$u_{max} = \frac{\theta - \mu}{\sigma}, \qquad u_{min} = \frac{V_r - \mu}{\sigma}. \tag{11}$$

In the biologically relevant case of *a* ≪ 1, Eq. (10) simplifies significantly. In fact, the distance between the equilibrium membrane potential and threshold measured in units of noise, *u*_{max}, is of order 1/(*a*√(*Kν*_{X}*τ*_{L})) (except for inputs *ν*_{X} ≫ 1/(*aKτ*_{L}), where it is of order 1/√*a*). Therefore *u*_{max} is large when *a* is small; in this limit, the firing rate is given by Kramers escape rate [42], and Eq. (10) becomes:

$$\nu = \frac{1}{\tau_{rp}}\,\frac{1}{1 + Q}, \qquad Q = \frac{\tau}{\tau_{rp}}\,\frac{\sqrt{\pi a}\,\hat{\sigma}}{\theta - \hat{\mu}}\, \exp\!\left[\frac{(\theta - \hat{\mu})^2}{a\,\hat{\sigma}^2}\right], \tag{12}$$

where we have defined $\hat{\mu} \equiv \mu$ and $\hat{\sigma}^2 \equiv \sigma^2/a$. The motivation to introduce $\hat{\mu}$ and $\hat{\sigma}$ is that they remain of order 1 in the small *a* limit, provided the external inputs *ν*_{X} are at least of order 1/(*aKτ*_{L}). When the external inputs are such that *ν*_{X} ≫ 1/(*aKτ*_{L}), these quantities become independent of *ν*_{X}, *a* and *K* and are given by

$$\hat{\mu} = \frac{E_E + g\gamma\eta\, E_I}{1 + g\gamma\eta}, \qquad \hat{\sigma}^2 = \frac{(E_E - \hat{\mu})^2 + g^2\gamma\eta\,(E_I - \hat{\mu})^2}{1 + g\gamma\eta}. \tag{13}$$

The firing rate given by Eq. (12) remains finite when *a* is small and/or *K* is large if Q remains of order one; this condition leads to the following scaling relationship

$$a \simeq \frac{(\theta - \hat{\mu})^2}{\hat{\sigma}^2\, \log\!\left[a K \nu_X \tau_{rp}\,(1 + g\gamma\eta)\right]} \;\sim\; \frac{(\theta - \hat{\mu})^2}{\hat{\sigma}^2\, \log(K)}, \tag{14}$$

i.e. *a* should be of order 1/log(*K*).
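The origin of the logarithmic scaling can be illustrated with a schematic escape-rate calculation. Below, the constant *c* stands in for the O(1) factor multiplying 1/*a* in the exponent, and the prefactor *aK* stands in for the drift; all values are illustrative.

```python
import numpy as np

# Schematic escape rate: prefactor grows like the drift, ~ a*K, while the
# Kramers factor is exp(-c/a). Only a ~ c/log(K) keeps the product of order
# one as K grows; a ~ 1/sqrt(K) kills it, a ~ const makes it blow up.

def schematic_rate(a, K, c=1.0):
    return a * K * np.exp(-c / a)

Ks = [1e3, 1e6, 1e9]
log_rates = [schematic_rate(1.0 / np.log(K), K) for K in Ks]    # a ~ 1/log(K)
sqrt_rates = [schematic_rate(K ** -0.5, K) for K in Ks]         # a ~ 1/sqrt(K)
const_rates = [schematic_rate(0.1, K) for K in Ks]              # a fixed
```

Across six orders of magnitude in *K*, the 1/log(*K*) choice keeps the schematic rate within a small factor of its initial value, while the other two scalings drive it to zero or to saturation.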

In Appendix C, we show that expressions analogous to Eq. (12) can be derived in integrate-and-fire neuron models which feature additional intrinsic voltage-dependent currents, as long as synapses are conductance-based and input noise is small (*a* ≪ 1). Examples of such models include the exponential integrate-and-fire neuron with its spike-generating exponential current [43], and models with voltage-gated subthreshold currents [23]. Moreover, we show that, in these models, firing remains finite if *a* ~ 1/log(*K*), and voltage-dependent currents generate corrections to the logarithmic scaling which are negligible when coupling is strong.

Since $\hat{\mu}$ and $\hat{\sigma}$ vary with *ν*_{X}, Eq. (14) can be satisfied, and hence firing can be supported, only if the inputs span a small range of values, such that $\hat{\mu}$ and $\hat{\sigma}$ are approximately constant, or if *ν*_{X} ≫ 1/(*aKτ*_{L}). Note that, while in the strong coupling limit (i.e. when *K* goes to infinity), only the second of these two possibilities can be implemented with input rates spanning physiologically relevant values, both are admissible when coupling is moderate (i.e. when *K* is large but finite, a condition consistent with experimental data on cortical networks [44, 45]). In what follows, with the exception of the section on finite synaptic time constants, we focus on the case *ν*_{X} ≫ 1/(*aKτ*_{L}), and investigate how different properties evolve with *K* using the scaling defined by Eq. (14) with $\hat{\mu}$ and $\hat{\sigma}$ given by Eq. (13). Importantly, all the results discussed below hold for inputs outside the region *ν*_{X} ≫ 1/(*aKτ*_{L}), as long as *ν*_{X} is at least of order 1/(*aKτ*_{L}) (a necessary condition for the derivation of Eq. (12) to be valid), and inputs span a region small enough for the variations of $\hat{\mu}$ and $\hat{\sigma}$ to be negligible.

In Fig. 2A, we compare the scaling defined by Eq. (14) with the 1/√*K* scaling of current-based neurons. At low values of *K*, the values of *a* obtained with the two scalings are similar; at larger values of *K*, synaptic strength defined by Eq. (14) decays as *a* ~ 1/log(*K*), i.e. synapses are stronger in the conductance-based model than in the current-based model. Examples of single neuron transfer functions computed from Eq. (10) for different coupling strengths are shown in Fig. 2B,C. Responses are nonlinear at onset and close to saturation. As predicted by the theory, scaling *a* with *K* according to Eq. (14) preserves the firing rate over a region of inputs that increases with coupling strength (Fig. 2C,D), while the average membrane potential remains below threshold (Fig. 2D). The quantity (*θ* − *µ*)/*σ* represents the distance from threshold of the equilibrium membrane potential in units of input fluctuations; Eq. (14) implies that this distance increases with coupling strength. When *K* is very large, the effective membrane time constant, which is of order *τ* ~ 1/(*aKν*_{X}), becomes small and firing is driven by fluctuations that, on the time scale of this effective membrane time constant, are rare.

We next investigated whether the above scaling preserves irregular firing by analyzing the CV of interspike intervals. This quantity is given by [10]

$$CV^2 = 2\pi \nu^2 \tau^2 \int_{u_{min}}^{u_{max}} e^{x^2} \int_{-\infty}^{x} e^{y^2} \left(1 + \mathrm{erf}(y)\right)^2 dy\, dx, \tag{15}$$

and, for the biologically relevant case of *a* ≪ 1 and *µ* < *θ*, reduces to (see Appendix B for details)

$$CV = 1 - \nu\,\tau_{rp}, \tag{16}$$

i.e. the CV is close to one at low rates and it decays monotonically as the neuron approaches saturation. Critically, Eq. (16) depends on coupling strength only through *ν*, hence any scaling relation preserving firing rate will also produce a CV of order one at low rates. We validated this result numerically in Fig. 2E.
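The low-rate behavior can be checked against the ISI statistics of a Poisson process with an absolute refractory period, for which CV = 1 − *ντ*_{rp} holds exactly; *λ* below is an example rate parameter.

```python
import numpy as np

# ISIs of a Poisson process with absolute refractory period:
# isi = tau_rp + Exp(lambda), so mean = tau_rp + 1/lambda = 1/nu and
# std = 1/lambda, giving CV = std/mean = 1 - nu*tau_rp: close to one at
# low rates, vanishing as nu approaches saturation at 1/tau_rp.

rng = np.random.default_rng(2)
tau_rp, lam = 2e-3, 100.0
isi = tau_rp + rng.exponential(1.0 / lam, size=1_000_000)
nu = 1.0 / isi.mean()
cv = isi.std() / isi.mean()
```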

We now investigate how Eq. (14) preserves irregular firing in conductance-based neurons. We have shown that increasing *K* at fixed *a* produces large input and membrane fluctuations, which saturate firing; the scaling *a* ~ 1/√*K* preserves input fluctuations but, because of the strong drift force, suppresses membrane potential fluctuations, and hence firing. The scaling of Eq. (14), at every value of *K*, yields the value of *a* that balances the contribution of drift and input fluctuations, so that membrane fluctuations are of the right size to preserve the rate of threshold crossing. Note that, unlike what happens in the current-based model, both input fluctuations and drift force increase with *K* (Fig. 2F,G) while the membrane potential distribution, which is given by [46]

$$P(V) = \frac{2\nu\tau}{\sigma}\, e^{-\frac{(V-\mu)^2}{\sigma^2}} \int_{\frac{\max(V,\, V_r) - \mu}{\sigma}}^{\frac{\theta - \mu}{\sigma}} e^{u^2}\, du, \tag{17}$$

slowly becomes narrower (Fig. 2H). This result can be understood by noticing that, when *a* ≪ 1 and neglecting the contribution due to the refractory period, Eq. (17) reduces to

$$P(V) = \frac{1}{\sqrt{\pi}\,\sigma}\, e^{-\frac{(V-\mu)^2}{\sigma^2}}. \tag{18}$$

Hence, the probability distribution becomes Gaussian when coupling is strong, with a variance proportional to *σ*^{2} ~ *a*. We note that, since *a* is of order 1/log(*K*), the width of the distribution becomes small only for unrealistically large values of *K*.
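The subthreshold Gaussian limit can be illustrated by simulating Eq. (4) without a threshold, i.e. an Ornstein-Uhlenbeck process, whose stationary distribution has standard deviation *σ*/√2; *τ*, *µ* and *σ* below are example values.

```python
import numpy as np

# Subthreshold limit of Eq. (4): an Ornstein-Uhlenbeck process with stationary
# Gaussian distribution of standard deviation sigma/sqrt(2). Since sigma^2 is
# proportional to a ~ 1/log(K), the width of P(V) shrinks only logarithmically
# with K. Euler-Maruyama integration; parameter values are examples.

rng = np.random.default_rng(3)
tau, mu, sigma = 2e-3, -60.0, 3.0
dt, n = 2e-5, 500_000
amp = sigma * np.sqrt(dt / tau)      # per-step noise amplitude
V = np.empty(n)
V[0] = mu
xi = rng.standard_normal(n - 1)
for i in range(n - 1):
    V[i + 1] = V[i] + dt / tau * (mu - V[i]) + amp * xi[i]
std_V = V[n // 10:].std()            # discard transient, measure stationary width
```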

## V. ASYNCHRONOUS IRREGULAR ACTIVITY IN NETWORK RESPONSE AT STRONG COUPLING

We have so far considered the case of a single neuron subjected to stochastic inputs. We now show how the above results generalize to the network case, where inputs to a neuron are produced by a combination of external and recurrent inputs.

We consider networks of recurrently connected excitatory and inhibitory neurons, firing at rate *ν*, stimulated by an external population firing with Poisson statistics with firing rate *ν*_{X}. Using again the diffusion approximation, the response of a single neuron in the network is given by Eq. (10) (and hence Eq. (12)) with

$$r_E = \nu_X + \nu, \qquad r_I = \nu \tag{19}$$

in Eqs. (5) and (6).

If all neurons in a given population are described by the same single cell parameters and the network is in an asynchronous state in which cells fire at a constant firing rate, Eq. (10) provides an implicit equation whose solution is the network transfer function. Example solutions are shown in Fig. 3B (numerical validation of the mean field results is provided in Appendix E). In Appendix D, we prove that firing in the network is preserved when coupling is strong if parameters are rescaled according to Eq. (14). Moreover, we show that response nonlinearities are suppressed and the network response in the strong coupling limit (i.e. when *K* goes to infinity) is given, up to saturation, by

$$\nu = \rho\, \nu_X. \tag{20}$$

The parameter *ρ*, which is obtained by solving Eq. (12) self-consistently (see Appendix D for details), is the response gain in the strong coupling limit. Finally, our derivation implies that Eq. (14) preserves irregular firing and creates a probability distribution of membrane potential whose width decreases only logarithmically as *K* increases (Fig. 3C,D and numerical validation in Appendix E), as in the single neuron case. While this logarithmic decrease is a qualitative difference from the current-based balanced state, in which the width remains finite in the large *K* limit, in practice, for realistic values of *K*, realistic fluctuations of membrane potential (a few mV) can be observed in both cases.

We now turn to the question of what happens in networks with different scalings between *a* and *K*. Our analysis of single neuron response described above shows that scalings different from that of Eq. (14) fail to preserve firing for large *K*, as they let either input noise or drift dominate. However, the situation in networks might be different, since recurrent interactions could in principle adjust the statistics of input currents such that irregular firing at low rates is preserved when coupling becomes strong. Thus, we turn to the analysis of the network behavior when a scaling *a* ~ *K*^{−α} is assumed. For *α* ≤ 0, the dominant contribution of input noise at the single neuron level (Figs. 1 and 2) generates saturation of response and regular firing in the network (Fig. 3). This can be understood by noticing that, for large *K*, the factor Q in Eq. (12) becomes negligible and the self-consistency condition defining the network rate is solved by *ν* = 1/*τ*_{rp}. For *α* > 0, the network response for large *K* is determined by two competing elements. On the one hand, input drift dominates and tends to suppress firing (Figs. 1 and 2). On the other hand, for the network to be stable, inhibition must dominate recurrent interactions [9]. Hence, any suppression in network activity reduces recurrent inhibition and tends to increase neural activity. When these two elements conspire to generate a finite network response, the factor Q in Eq. (12) must be of order one and, since *a* vanishes for large *K*, the mean input *µ* must approach the threshold *θ*. In this scenario, the network activity exhibits the following features (Fig. 3): (i) the mean inputs drive neurons very close to threshold (*θ* − *µ* ~ *K*^{−α/2}); (ii) the response of the network to external inputs is linear and, up to corrections of order *K*^{−α}, given by

$$\nu = \frac{(E_E - \theta)\,\nu_X}{g\gamma\,(\theta - E_I) - (E_E - \theta)}; \tag{21}$$

(iii) firing is irregular (because of Eq. (16)); (iv) the width of the membrane potential distribution is of order *σ* ~ *K*^{−α/2} (because of Eq. (18)). Therefore, scalings different from that of Eq. (14) can produce asynchronous irregular activity in networks of conductance-based neurons, but this leads to networks with membrane potentials narrowly distributed close to threshold, a property which seems at odds with what is observed in cortex [47–52].
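The linear response in this regime follows from the mean input being pinned at threshold; the sketch below solves that balance condition with the leak neglected (valid when *aK* is large) and the network rates *r*_{E} = *ν*_{X} + *ν*, *r*_{I} = *ν*. The values of *g* and *γ* are examples, and the closed form is an illustration of the reasoning, not necessarily the exact expression derived in the appendix.

```python
# Drift-dominated regime: solve mu(nu) = theta with the leak neglected.
# mu is then the conductance-weighted average of the reversal potentials:
#   mu = [(nu_X + nu)*E_E + g*gamma*nu*E_I] / [(nu_X + nu) + g*gamma*nu]
# Setting mu = theta gives (nu_X + nu)*(E_E - theta) = g*gamma*nu*(theta - E_I),
# hence a rate linear in nu_X. g, gamma are example values.

theta, E_E, E_I = -55.0, 0.0, -75.0
g, gamma = 15.0, 0.25

def balanced_rate(nu_X):
    return nu_X * (E_E - theta) / (g * gamma * (theta - E_I) - (E_E - theta))

def mean_input(nu, nu_X):
    num = (nu_X + nu) * E_E + g * gamma * nu * E_I
    den = (nu_X + nu) + g * gamma * nu
    return num / den

nu = balanced_rate(4.0)
```

Note that the solution is positive only when inhibition is strong enough that *gγ*(*θ* − *E*_{I}) exceeds *E*_{E} − *θ*, the conductance-based analogue of the inhibition-dominance requirement of the balanced state.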

## VI. ROBUST LOGNORMAL DISTRIBUTION OF FIRING RATES IN NETWORKS WITH HETEROGENEOUS CONNECTIVITY

Up to this point, we have assumed a number of connections equal for all neurons. In real networks, however, this number fluctuates from cell to cell. The goal of this section is to analyze the effects of heterogeneous connectivity in networks of conductance-based neurons.

We investigated numerically the effects of connection heterogeneity as follows. We chose a Gaussian distribution of the number of connections per neuron, with mean *K* and variance Δ*K*^{2} for excitatory connections, and mean *γK* and variance *γ*^{2}Δ*K*^{2} for inhibitory connections. The connectivity matrix was constructed by drawing first randomly E and I in-degrees from these Gaussian distributions for each neuron, and then selecting at random pre-synaptic neurons. We then simulated network dynamics and measured the distribution of rates and CV of the ISI in the population. Results for different values of *CV*_{K} = Δ*K*/*K* are shown in Fig. 4A-C. For small and moderate values of connection heterogeneity, increasing *CV*_{K} broadens the distribution of rates and CV of the ISI, but both distributions remain peaked around a mean rate that is close to that of homogeneous networks (Fig. 4A,B). For larger *CV*_{K}, on the other hand, the distribution of rates changes its shape, with a large fraction of neurons moving to very low rates, while others increase their rates (Fig. 4A) and the distribution of the CV of ISI becomes bimodal, with a peak at low CV corresponding to the high rate neurons, while the peak at a CV close to 1 corresponds to neurons with very low firing rates (Fig. 4B).

To characterize more systematically the change in the distribution of rates with *CV*_{K}, we measured, for each value of *CV*_{K}, the fraction of quiescent cells, defined as the number of cells that did not spike during 20 seconds of the simulated dynamics (Fig. 4C). This analysis shows that the number of quiescent cells, and hence the distribution of rates, changes abruptly as *CV*_{K} increases above a critical value $CV_K^*$. Importantly, unlike our definition of the fraction of quiescent cells, this abrupt change is a property of the network that is independent of the duration of the simulation.

To understand these numerical results, we performed a mean field analysis of the effects of connection heterogeneity on the distribution of rates (Appendix F). This analysis captures quantitatively numerical simulations (Fig. 4A) and shows that, in the limit of small *CV*_{K} and *a*, rates in the network are given by

$$\nu^i = \nu_0\, \exp\!\left[\Omega\, \frac{CV_K}{a}\, z^i\right], \tag{22}$$

where *ν*_{0} is the population average in the absence of heterogeneity, *z*^{i} is a Gaussian random variable, and the prefactor Ω is independent of *a*, *K* and *ν*_{X}. The exponent in Eq. (22) represents a quenched disorder in the value of *u*^{i}, i.e. in the distance from threshold of the single cell mean input *µ*^{i} in units of input noise. As shown in Appendix F, Eq. (22) implies that the distribution of rates is lognormal, a feature consistent with experimental observations [53–55] and distributions of rates in networks of current-based LIF neurons [56]. It also implies that the relative width of the distribution Δ*ν*/*ν* should increase linearly with *CV*_{K}, a prediction which is confirmed by numerical simulations (Fig. 4C). The derivation in Appendix F also provides an explanation for the change in the shape of the distribution for larger *CV*_{K}. In fact, for larger heterogeneity, the small *CV*_{K} approximation is not valid and fluctuations in input connectivity produce cells for which *µ*^{i} is far from *θ*, that are either firing at extremely low rate (*µ*^{i} < *θ*) or regularly (*µ*^{i} > *θ*). The latter generate the peak at low values of the CV of the ISI seen for large values of *CV*_{K}.

The quantity $CV_K^*$ represents the level of connection heterogeneity above which significant deviations from the asynchronous irregular state emerge, i.e. large fractions of neurons show extremely low or regular firing. Eq. (22) suggests that $CV_K^*$ should increase linearly with *a*. We validated this prediction with our mean field model, by computing the minimal value of *CV*_{K} at which 1% of the cells fire at a rate of 10^{−3} spk/s (Fig. 4D). Note that the derivation of Eq. (22) only assumes *a* to be small and does not depend on the scaling relation between *a* and *K*. On the other hand, the fact that $CV_K^*$ increases linearly with *a* makes the state emerging in networks of conductance-based neurons with *a* ~ 1/log(*K*) significantly more robust to connection fluctuations than that emerging with *a* ~ *K*^{−α}, for which $CV_K^* \sim K^{-\alpha}$, and than that of current-based neurons, where $CV_K^* \sim 1/\sqrt{K}$ [57]. Note that, while in randomly connected networks *CV*_{K} ~ 1/√*K*, a larger degree of heterogeneity has been observed in cortical networks [51, 57–61]. Our results show that networks of conductance-based neurons could potentially be much more robust to such heterogeneities than networks of current-based neurons.
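The linear dependence of the rate-distribution width on *CV*_{K}, with slope inversely proportional to *a*, can be illustrated by sampling rates from a quenched-disorder expression of the form described above; Ω, *ν*_{0} and *a* below are example values.

```python
import numpy as np

# Quenched-disorder sketch: nu_i = nu_0 * exp(Omega * CV_K * z_i / a), z_i
# standard normal. Rates are lognormally distributed; for small CV_K the
# relative width Delta_nu/nu grows linearly with CV_K. Example values only.

rng = np.random.default_rng(4)
nu_0, Omega, a = 5.0, 0.2, 0.05
z = rng.standard_normal(500_000)

def rel_width(cv_k):
    rates = nu_0 * np.exp(Omega * cv_k * z / a)
    return rates.std() / rates.mean()

w1, w2 = rel_width(0.01), rel_width(0.02)   # doubling CV_K doubles the width
```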

## VII. COMPARISON WITH EXPERIMENTAL DATA

The relation between synaptic efficacy and number of connections per neuron has been recently studied experimentally using a culture preparation [62]. In that study, it was found that cultures in which *K* was larger had weaker synapses than cultures with smaller *K* (Fig. 5). In what follows we compare this data with the scalings expected in networks of current-based and conductance-based neurons, and discuss implications for *in vivo* networks.

In the current-based model, the strength of excitatory and inhibitory post-synaptic potentials as a function of *K* can be written as *J*_{E} = *J*_{0}/√*K* and *J*_{I} = *gJ*_{E}. In the conductance-based model, these quantities become *J*_{E} = (*V* − *E*_{E})*a* and *J*_{I} = *g*(*V* − *E*_{I})*a*, where *a* is given by Eq. (14) while, for the dataset of [62], *V* ~ −60mV, *J*_{E} ~ *J*_{I}, *E*_{E} ~ 0mV and *E*_{I} ~ −80mV. For each model, we inferred free parameters from the data with a least-squares optimization in logarithmic scale (best fit: *g* = 1.1 and *J*_{0} = 20mV in the current-based model; *g* = 3.4 in the conductance-based model) and computed the expected synaptic strength as a function of *K* (lines in Fig. 5A). Our analysis shows that the performances of the current-based and the conductance-based model in describing the data, over the range of *K* explored in the experiment, are similar, with the former being slightly better than the latter (root mean square error: 2.2mV vs 2.4mV). This result is consistent with the observation made in [62] that, when fitted with a power law *J* ~ *K*^{−β}, data are best described by *β* = 0.59 but are compatible with a broad range of values (95% confidence interval: [0.47:0.70]). Note that even though both models give similar results for PSP amplitudes in the range of values of *K* present in cultures (~50-1000), they give significantly different predictions for larger values of *K*. For instance, for *K* = 10,000, *J*_{E} is expected to be ~ 0.2 mV in the current-based model and ~ 0.7 mV in the conductance-based model.
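The log-scale least-squares fits can be reproduced with a short script. The `(K, J)` pairs below are synthetic stand-ins for the culture data of [62] (generated here to follow a 1/√*K* trend), and the fit of the conductance-based model uses only the first-order form *a* = *c*/log(*K*) of Eq. (14); both are illustrative assumptions:

```python
import numpy as np

# Synthetic (K, EPSP amplitude in mV) pairs standing in for the data of [62]
K = np.array([50.0, 100.0, 200.0, 400.0, 800.0])
J = np.array([3.2, 2.1, 1.5, 1.05, 0.75])
logJ = np.log(J)

# Current-based model: J_E = J0 / sqrt(K) is linear in log scale,
# log J = log J0 - 0.5 log K, so the least-squares solution is closed-form.
J0 = np.exp(np.mean(logJ + 0.5 * np.log(K)))

# Conductance-based model (first order): J_E = |V - E_E| * c / log(K)
dF = abs(-60.0 - 0.0)                      # driving force |V - E_E| in mV
c = np.exp(np.mean(logJ - np.log(dF) + np.log(np.log(K))))

rms_cur = np.sqrt(np.mean((J0 / np.sqrt(K) - J) ** 2))
rms_con = np.sqrt(np.mean((dF * c / np.log(K) - J) ** 2))
print(f"J0 = {J0:.1f} mV, c = {c:.3f}, RMS: {rms_cur:.2f} vs {rms_con:.2f} mV")
```

On real data the two RMS errors come out close (2.2 vs 2.4 mV in the text); on this synthetic 1/√*K* data the current-based fit wins by construction.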

In Fig. 5B, we plot the distance between the equilibrium membrane potential *µ* and threshold *θ* in units of input fluctuations, as a function of *K*, using the parameter values obtained above, and find that the expected value in vivo, where *K* ~ 10^{3} − 10^{4}, is in the range 2-3. In Fig. 5C,D, we plot how the total synaptic excitatory conductance and the effective membrane time constant change as a function of *K*. Both quantities change significantly faster under the conductance-based scaling (*g*_{E}/*g*_{L} ~ *K*/log(*K*); *τ*/*τ*_{L} ~ log(*K*)/*K*) than expected under the scaling of the current-based model (*g*_{E}/*g*_{L} ~ √*K*; *τ*/*τ*_{L} ~ 1/√*K*). For *K* in the range 10^{3} − 10^{4} and mean firing rates in the range 1-5 spk/s, the total synaptic conductance is found to be in a range from about 2 to 50 times the leak conductance, while the effective membrane time constant is found to be smaller than the membrane time constant by a factor 2 to 50. We compare these values with available experimental data in the Discussion.

## VIII. EFFECT OF FINITE SYNAPTIC TIME CONSTANTS

Results shown in Fig. 5 raise the question of whether the assumption of negligible synaptic time constants made in our analysis is reasonable. In fact, synaptic decay time constants of experimentally recorded postsynaptic currents range from a few ms (for AMPA and GABA_{A} receptor-mediated currents) to tens of ms (for GABA_{B} and NMDA receptor-mediated currents, see e.g. [63]); i.e., they are comparable to the membrane time constant already at weak coupling, where *τ* ~ *τ*_{L} is typically in the range 10-30ms [64, 65]. In the strong coupling limit, the effective membrane time constant goes to zero, and the assumption clearly breaks down. In this section, we investigate the range of validity of this assumption, and what happens once it is no longer valid.

With finite synaptic time constants, the temporal evolution of conductances of Eq. (2) is replaced by

It follows that the single-neuron membrane potential dynamics is described by Eqs. (1,23). Here, for simplicity, we take excitatory and inhibitory synaptic currents to have the same decay time constant *τ*_{S}. Fig. 6A shows how increasing the synaptic time constant modifies the mean firing rate of single integrate-and-fire neurons in response to *K* (*γK*) excitatory (inhibitory) inputs with synaptic strength *a* (*ga*) and frequency *ν*_{X} (*ην*_{X}). The figure shows that, though the mean firing rate is close to predictions obtained with instantaneous synapses for small *τ*_{S}/*τ*, deviations emerge for *τ*_{S}/*τ* ∼ 1, and firing is strongly suppressed as *τ*_{S}/*τ* becomes larger. To understand these numerical results, we resort again to the diffusion approximation [67], together with the effective time constant approximation [11, 68], to derive a simplified expression of the single neuron membrane potential dynamics with finite synaptic time constant (details in Appendix G); this is given by
where *τ*, *µ* and *σ* are as in the case of negligible synaptic time constant (Eq. (5)), whilst *z* is an Ornstein-Uhlenbeck process with correlation time *τ*_{S}. It follows that, with respect to Eq. (4), input fluctuations with frequency larger than 1*/τ*_{S} are suppressed and, for large *τ*_{S}/*τ*, the membrane potential dynamics is given by
i.e. the membrane potential is essentially slaved to a time dependent effective reversal potential corresponding to the r.h.s. of Eq. (25) [14]. Note that Eq. (25) is valid only in the subthreshold regime. When the r.h.s. of Eq. (25) exceeds the threshold, the neuron fires a burst of action potentials whose frequency, in the strong coupling limit, is close to the inverse of the refractory period [69]. As *ν*_{X} increases, the equilibrium value *µ* remains constant while *τ* decreases, leading to a suppression of membrane fluctuations (Fig. 6D), and in turn to the suppression of response observed in Fig. 6A.
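The suppression of membrane fluctuations for large *τ*_{S}/*τ* can be checked with a direct Euler–Maruyama simulation of the simplified OU-driven dynamics described above. Units and parameter values here are arbitrary, and the noise normalization (chosen so that the white-noise case is recovered as *τ*_{S} → 0, giving Var(*V*) = *σ*²/(1 + *τ*_{S}/*τ*)) is an assumption of this sketch:

```python
import numpy as np

def v_std(tau, tau_s, mu=0.0, sigma=1.0, dt=2e-3, T=1000.0, seed=1):
    """Std of V for  tau dV/dt = -(V - mu) + sigma*sqrt(2*tau)*z(t),
    where z is an Ornstein-Uhlenbeck process with correlation time tau_s,
    normalized to recover white noise (and Var(V) = sigma^2) as tau_s -> 0."""
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    xi = rng.standard_normal(n) * np.sqrt(dt) / tau_s   # OU noise increments
    v, z = mu, 0.0
    acc = acc2 = 0.0
    for i in range(n):
        z += -z * dt / tau_s + xi[i]
        v += (-(v - mu) + sigma * np.sqrt(2.0 * tau) * z) * dt / tau
        acc += v
        acc2 += v * v
    m = acc / n
    return np.sqrt(acc2 / n - m * m)

# Fluctuations are close to sigma for tau_s << tau, suppressed for tau_s >> tau
print(v_std(1.0, 0.1), v_std(1.0, 10.0))
```

The measured standard deviations track the σ/√(1 + *τ*_{S}/*τ*) prediction, illustrating how input fluctuations faster than 1/*τ*_{S} are filtered out.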

In Appendix G, we use existing analytical expansions [67, 69, 70], as well as numerical simulations, to show that neural responses obtained with finite *τ*_{S} are in good agreement with predictions obtained with instantaneous synapses as long as *τ*_{S}/*τ* is small. It follows that the single neuron properties we discussed in the case of instantaneous synapses hold in the region of inputs for which *τ*_{S}/*τ* ≲ 0.1 (i.e. *ν*_{X} ≲ 0.1/*aKτ*_{S}) and the derivation of Eq. (14) is valid (i.e. *ν*_{X} is at least of order 1/*aKτ*_{L}). Thus, there is at best a narrow range of inputs for which these properties carry over to the finite synaptic time constant case. Interestingly, when biologically relevant parameters are considered (e.g. Fig. 6), inputs within this region generate firing rates that are in the range observed experimentally in cortical networks [23, 47–55]. The analysis of Appendix G also shows that, when *τ*_{S}/*τ* ~ 1, i.e. once the input rate *ν*_{X} is of order 1/*aKτ*_{S}, firing is suppressed exponentially. The scaling relation of Eq. (14) does not prevent this suppression, which emerges for external rates of order 1/*aKτ*_{S} ~ log(*K*)/*Kτ*_{S}. Scalings of the form *a* ~ *K*^{−α}, with *α* > 0, on the other hand, create a larger region of inputs for which *τ*_{S}/*τ* ≪ 1 but, as we showed when studying the neural dynamics with instantaneous synapses, fail to generate a response for large *K*. We next asked whether another scaling relation between *a* and *K* could prevent suppression of the neural response when *τ*_{S} is finite. The single neuron response computed in Appendix G is a nonlinear function of the input *ν*_{X}, which depends parametrically on *a* and *K*. It follows that, in order to preserve the single neuron response, *a* should scale differently with *K* for different values of *ν*_{X}. Since in cortical networks input rates, i.e. *ν*_{X}, change dynamically on a time scale much shorter than that over which plasticity can modify synaptic connections, we conclude that a biologically realistic scaling between *a* and *K*, which prevents suppression of the neural response when *τ*_{S} is finite over a broad range of external inputs, does not exist. Moreover, for large *K* and *τ*_{S}/*τ*, the membrane potential dynamics (Eq. (25)) becomes independent of *a*. This shows that rescaling synaptic efficacy with *K* cannot prevent suppression of the response.
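The two bounds quoted above can be made concrete with illustrative numbers. The parameter values here (*a*, *K*, *τ*_{L}, *τ*_{S}) are hypothetical but biologically plausible; note that the resulting window can be narrow or even empty, consistent with the argument in the text:

```python
# Hypothetical parameters: synaptic efficacy a, in-degree K,
# membrane and synaptic time constants (seconds)
a, K, tau_L, tau_S = 0.02, 1000, 20e-3, 5e-3

nu_lo = 1.0 / (a * K * tau_L)    # below this, the derivation of Eq. (14) fails
nu_hi = 0.1 / (a * K * tau_S)    # above this, tau_S/tau > ~0.1: suppression

print(f"instantaneous-synapse regime: {nu_lo:.2f} to {nu_hi:.2f} spk/s")
```

With these numbers the window is empty (*ν*_lo > *ν*_hi); a smaller *τ*_{S} or weaker coupling opens it.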

We next examined the effect of the synaptic time constant on the network response. Numerically computed responses in networks of neurons with finite synaptic time constant are shown in Fig. 6B,C. The network response is close to the prediction obtained with instantaneous synapses for small *τ*_{S}/*τ*, and deviations emerge for *τ*_{S}/*τ* ~ 1. Hence, analogously to the single neuron case, the network properties discussed in the case of instantaneous synapses remain valid for low inputs. However, unlike in the single neuron case, no suppression appears for larger *τ*_{S}/*τ*. This lack of suppression in the network response, analogously to the one we discussed in networks with instantaneous synapses and *a* ~ *K*^{−α}, is a consequence of the fact that, to have stable dynamics when *K* is large, inhibition must dominate recurrent interactions [9]. In this regime, any change which would produce suppression of the single neuron response (e.g. an increase of *ν*_{X} or *τ*_{S}) lowers recurrent inhibition and increases the equilibrium value of the membrane potential *µ* (Fig. 6B,C,E). The balance between these two effects determines the network firing rate and, when *τ*_{S}/*τ* ≥ 1, generates a response which, up to small corrections (see derivation in Appendix G), is given by Eq. (21) (dashed line in Fig. 6B). Similarly to what happens in networks with instantaneous synapses and *a* ~ *K*^{−α}, this finite response emerges because recurrent interactions set *µ* very close to threshold, at a distance that matches the size of the membrane potential fluctuations (Eq. (25)). In addition, the network becomes much more sensitive to connection heterogeneity. However, here the dynamics of the single neuron membrane potential is correlated over a timescale *τ*_{S} (Fig. 6E) and firing is bursty, with periods of regular spiking randomly interspersed in time.
Moreover, the properties discussed here are independent of the scaling of *a* with *K*, since they always emerge once *τ*_{S}/*τ* ≫ 1, a condition that is met for any scaling once *ν*_{X} ≫ 1/*aKτ*_{S}. The specific scaling relation, on the other hand, is important to determine the input strength at which *τ*_{S}/*τ* ∼ 1.

In the previous sections, we have shown that networks of conductance-based neurons with instantaneous synapses present features similar to those observed in cortex if synaptic efficacy is of order *a* ~ 1/log(*K*), while other scalings generate network properties that are at odds with experimental data (see Tab. I for a summary). In this section, we have found that, when the synaptic time constant is taken into account, these properties are preserved in the model for low inputs. As the input increases, the structure of the network response evolves gradually and, for large inputs *ν*_{X} ≫ 1/*aKτ*_{S}, significant deviations from the case of instantaneous synapses emerge. In particular, as the input to the network increases, our analysis shows that: the membrane potential approaches threshold while its fluctuations become smaller and temporally correlated; firing becomes more bursty; the network becomes more sensitive to heterogeneity in the in-degree and, if the heterogeneity is larger than that of random networks, significant fractions of neurons become quiescent or fire regularly. These features of the model provide a list of predictions which could be tested experimentally by measuring how the membrane potential dynamics of cortical cells evolves with the intensity of inputs to the network.

## IX. DISCUSSION

In this work, we analyzed networks of strongly coupled conductance-based neurons. The study of this regime is motivated by the experimental observation that in cortex *K* is large, with single neurons receiving inputs from hundreds or thousands of pre-synaptic cells. We showed that the classical balanced state idea [5, 6], which was developed in the context of current-based models and features synaptic strengths of order 1/√*K* [7, 8], results in current fluctuations of very small amplitude, which can generate firing in networks only if the mean membrane potential is extremely close to threshold. This seems problematic, since intracellular recordings in cortex show large membrane potential fluctuations (see e.g. [47–52]). To overcome this problem, we introduced a new scaling relation which, in the case of instantaneous synaptic currents, maintains firing by preserving the balance of input drift and diffusion at the single neuron level. Assuming this scaling, the network response automatically shows multiple features that are observed in cortex in vivo: irregular firing, a wide distribution of rates, and a membrane potential with non-negligible distance from threshold and non-negligible fluctuation size. When finite synaptic time constants are included in the model, we showed that these properties are preserved for low inputs, but are gradually modified as inputs increase: the mean membrane potential approaches threshold, while its fluctuations decrease in size and develop non-negligible temporal correlations. These properties, which are summarized in Tab. I, provide a list of predictions that could be tested experimentally by analyzing the membrane potential dynamics as a function of input strength in cortical neurons.

When synaptic time constants are negligible with respect to the membrane time constant, our theory shows properties that are analogous to those of the classical balanced state model: linear transfer function, CV of order one, and distribution of membrane potentials with finite width. However, these properties emerge from a different underlying dynamics than in the current-based model. In current-based models, the mean input current lies at a distance of order one from threshold in units of input fluctuations. In conductance-based models, this distance increases with coupling strength, and firing is generated by large fluctuations at strong coupling. The different operating mechanism manifests itself in two ways: the strength of synapses needed to sustain firing and the robustness to connection heterogeneity, as we discuss in the next paragraphs.

The scaling relation determines how strong synapses should be to allow firing at a given rate, for a given value of *K*. In current-based neurons, irregular firing is produced as long as synaptic strengths are of order 1/√*K*. In conductance-based neurons, stronger synapses are needed, with a scaling which approaches 1/log(*K*) for large *K*. We showed that both scaling relations are in agreement with data obtained from culture preparations [62], which are limited to relatively small networks, and argued that the differences might be important *in vivo*, where *K* should be larger.

In current-based models, the mean input current must be set at an appropriate level to produce irregular firing; this constraint is realized by recurrent dynamics in networks with random connectivity and strong enough inhibition [7–9]. However, in networks with structural heterogeneity, with connection heterogeneity larger than 1/√*K*, the variability in mean input currents produces significant departures from the asynchronous irregular state, with large fractions of neurons that become silent or fire regularly [57]. This problem is relevant in cortical networks [57], where significant heterogeneity of in-degrees has been reported [51, 58–61], and different mechanisms have been proposed to solve it [57]. Here we showed that networks of conductance-based neurons also generate irregular activity without any need for fine tuning, and furthermore can support irregular activity with substantial structural heterogeneity, up to order 1/log(*K*). Therefore, these networks are more robust to connection heterogeneity than the current-based model, and do not need additional mechanisms to sustain the asynchronous irregular state.

The strength of coupling in a network, both in the current-based model [71, 72] and in the conductance-based model (e.g. Fig. 3), determines the structure of its response and hence the computations it can implement. Recent theoretical work, analyzing experimental data in the framework of current-based models, has suggested that cortex operates in a regime of moderate coupling [44, 45], where response nonlinearities are prominent. In conductance-based models, the effective membrane time constant can be informative about the strength of coupling in a network, as it decreases with coupling strength. Results from *in vivo* recordings in cat parietal cortex [21] showed evidence that single neuron response is sped up by network interactions. In particular, the measurements are compatible with an inhibitory conductance approximately 3 times larger than the leak conductance and support the idea that cortex operates in a “high-conductance state” [22] (but see [23] and the discussion below). This limited increase in conductance supports the idea of moderate coupling in cortical networks, in agreement with previous work [44, 45].

When the synaptic time constant is much larger than the membrane time constant, we showed that, regardless of synaptic strength, the size of membrane potential fluctuations decreases and firing in the network is preserved by a reduction of the distance from threshold of the mean membrane potential. Moreover, the robustness to heterogeneity in connectivity decreases substantially, and the membrane potential dynamics becomes correlated over a time scale fixed by the synaptic time constant. For very strong coupling, the regime of large synaptic time constant is reached already at low input rates. In the case of moderate coupling, which is consistent with experimental data on cortical networks [44, 45], the network response at low rates is well approximated by that of networks with instantaneous synapses, and the regime of large synaptic time constant is reached gradually, as the input to the network increases (Fig. 6). This observation provides a list of predictions on how the properties of cortical networks should evolve with input strength (summary in Tab. I) that are testable experimentally.

Experimental evidence suggests that the response to multiple inputs in cortex is non-linear (for an overview, see [73]). Such nonlinearities, which are thought to be fundamental to perform complex computations, cannot be captured by the classical balanced state model, as it features a linear transfer function [7, 8]. Alternative mechanisms have been proposed [71, 73, 74], but their biophysical foundations [71, 73] or their ability to capture experimentally observed nonlinearities [74] are still not fully understood. We have recently shown [72] that, in networks of current-based spiking neurons, nonlinearities compatible with those used in [71, 73] to explain the phenomenology of input summation in cortex emerge at moderate coupling. Here we have shown that, as in the case of networks of current-based neurons [72], nonlinear responses appear in networks of conductance-based neurons at moderate coupling, both at response onset and close to single neuron saturation. In addition, we have found that synaptic time constants provide an additional source of nonlinearity, with nonlinear responses emerging as the network transitions between the two regimes described above. A full classification of the nonlinearities generated in these networks is outside the scope of this work, but could be performed by generalizing the approach developed in [72].

Recent whole-cell recordings have reported that an intrinsic voltage-gated conductance, whose strength decreases with membrane potential, contributes to the modulation of the neuronal conductance of cells in primary visual cortex of awake macaques and anesthetized mice [23]. During spontaneous activity, this intrinsic conductance is the dominant contribution to the cell conductance and drives its (unexpected) decrease with increased depolarization. During activity driven by sensory stimuli, on the other hand, modulations coming from synaptic interactions overcome the effect of the intrinsic conductance, and the neuronal conductance increases with increased depolarization. Our analysis shows that voltage-dependent currents, such as those produced by voltage-gated channels [23] or during spike generation [43], affect quantitatively, but not qualitatively, the single neuron response and the scaling relation allowing firing. Therefore, the results we described in this contribution appear to be a general property of networks of strongly coupled integrate-and-fire neurons with conductance-based synapses.

Understanding the dynamical regime of operation of the cortex is an important open question in neuroscience, as it constrains which computations can be performed by a network [71]. Most theories of neural networks have been derived using rate models or current-based spiking neurons. Our work provides the first theory of the dynamics of strongly coupled conductance-based neurons; thanks to its biological detail, it can easily be related to measurable quantities, and it suggests predictions that could be tested experimentally.

## ACKNOWLEDGMENTS

Supported by the NIMH Intramural Research Program and by NIH BRAIN U01 NS108683 (to M.H. and N.B.). We thank Alex Reyes for providing his data, and Vincent Hakim and Magnus Richardson for comments on the manuscript. This work used the computational resources of the NIH HPC Biowulf cluster (http://hpc.nih.gov) and the Duke Compute Cluster (https://rc.duke.edu/dcc).

## Appendix A: Calculations in the multiplicative noise case

In the main text, we analyze the distribution of membrane potential, firing rate and CV using the effective time constant approximation, which neglects the dependence of the noise amplitude on the membrane potential. This approximation is motivated by the fact that corrections to it are of the same order as shot noise corrections to the diffusion approximation used to describe synaptic inputs [75]. In this section, we derive results without resorting to the effective time constant approximation (i.e. keeping the voltage dependence of the noise term), and show that, even though the calculations become more involved, the results derived in the main text remain valid. The inclusion of shot noise corrections is outside the scope of this contribution.

### 1. Equations for arbitrary drift and diffusion terms

In this section, we compute the probability distribution of the membrane potential, the firing rate, and the CV of ISI of a neuron whose membrane potential follows the equation

Eq. (4) of the main text is a special form of Eq. (A1) with

The Fokker-Planck equation associated with Eq. (A1), in the Stratonovich regularization scheme, is given by
where *P* is the probability of finding a neuron with membrane potential *V* and *J* is the corresponding probability current given by

We are interested in the stationary behavior of the system, in which *P* does not depend on time and the current *J* is piecewise constant. In particular, for *V* between the threshold *θ* and the reset potential *V*_{r}, *J* is equal to the neuron firing rate *ν*, and the normalization condition implies
where *τ*_{rp} is the refractory period.
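For concreteness, writing the drift of Eq. (A1) as $A(V)$ and its diffusion coefficient as $D(V)$ (symbols chosen here for illustration; the Stratonovich scheme used in the text adds a noise-induced shift to the drift), the stationarity and normalization conditions just stated take the standard form

```latex
\partial_t P = -\partial_V J = 0, \qquad
J(V) = A(V)\,P(V) - \partial_V\!\left[ D(V)\,P(V) \right],
\qquad
\int_{-\infty}^{\theta} P(V)\,\mathrm{d}V + \nu\,\tau_{rp} = 1 ,
```

where the last equality accounts for the fraction of neurons in the refractory state.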

To derive the probability distribution of the neuron potential, we introduce in Eq. (A3) the integrating factor and obtain

Using the boundary condition *P* (*θ*) = 0, we find
and

Integrating by parts, we obtain

This solution has been obtained in general form in [40] and for the specific form of Eq. (A2) in [11].

We now compute the coefficient of variation of the interspike interval. The moments *T*_{k} of the interspike intervals of the stochastic process defined by Eq. (A1) are given by (see [42])
with boundary conditions
i.e. *θ* is an absorbing boundary and *b* is a reflective boundary (we will eventually consider the limit *b* → −∞). The general solution of an equation of the form
is

For *T*_{1}(*x*) we have

For *T*_{2}(*x*) we look for a solution of the form
and find that *R* obeys an equation of the form of Eq. (A6) with

Combining the previous results, the CV of ISI is obtained as
the explicit expression of the *CV* is given in the following section.
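As a reference for the steps above: for a generic diffusion with drift $A(x)$ and diffusion coefficient $D(x)$ (notation assumed here for illustration), the ISI moments obey the standard backward-equation hierarchy

```latex
A(x)\,T_k'(x) + D(x)\,T_k''(x) = -k\,T_{k-1}(x), \qquad T_0 \equiv 1,
\qquad T_k(\theta) = 0, \quad T_k'(b) = 0,
\qquad CV^2 = \frac{T_2(V_r) - T_1(V_r)^2}{T_1(V_r)^2},
```

with the absorbing condition at the threshold $\theta$, the reflecting condition at $b$, and the CV evaluated at the reset $V_r$.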

Eqs. (17), (10) and (15) of the main text have been obtained from Eqs. (A4), (A5) and (A7) using Eq. (A2).

### 2. Equations for conductance-based LIF neurons

Starting from Eqs. (4,5) of the main text, we write the different terms as follows, where, to shorten the expressions, we have introduced two auxiliary variables with dimensions of time, as well as two variables with dimensions of voltage,

The term − (*V* − *µ*)/*τ* and the noise term of Eq. (4) represent the input drift and noise to the membrane dynamics, respectively. The voltage dependence of these terms is sketched in Fig. 7.

In the large *K* limit, the different terms in Eq. (4) scale as
while the values of *ω*, *µ*, and the auxiliary voltage variables are independent of *K*. It follows that the noise term and the time constant *τ* in Eq. (4) become small in the strong coupling limit. This result is analogous to the one obtained in the main text with the effective time constant approximation, since this approximation does not change how these terms scale with *a* and *K*.

We now insert the drift and diffusion terms of the conductance-based LIF neuron in Eqs. (A4), (A5), and (A7), and obtain the expressions below, where

Eqs. (A12) and (A13) are analogous to those derived in [11]. To simplify the following analysis, we will neglect the contribution of the term *a*^{2}*Kτ*/2*χ*, which derives from the regularization scheme. This assumption is justified by the fact that, for large *K*, *τ* ∼ 1/*aK*, so that the factor *a*^{2}*Kτ*/2*χ* is of order *a* ≪ 1.

## Appendix B: Calculations in the strong coupling regime - Single neurons

In the main text, we derived a simplified expression for the single neuron response by neglecting the dependence of the noise on the membrane potential. In this section we generalize this result to the case in which the full noise expression is considered. We compute simplified expressions for the single neuron transfer function and CV, both in the subthreshold regime *µ* < *θ* and in the supra-threshold regime *µ* > *θ*. These expressions are validated numerically in Fig. 8 and used in the last part of this section to define a scaling relation between *a* and *K* which preserves single neuron firing in the strong coupling limit.

### 1. Single neuron transfer function at strong coupling

The starting point of our analysis is the observation that the integrand in Eq. (A13) depends exponentially on 1/*a* » 1. This suggests performing the integration with a perturbative expansion of the exponent. We will show below that, since the exponent has a stationary point at *x* = *v* = *α* (see Fig. 9), the integration gives two qualitatively different results depending on whether *α* is larger or smaller than the upper bound of the integral *v*_{max}. Moreover, since the condition *α* ≶ *v*_{max} corresponds to *θ* ≶ *μ*, the two behaviors correspond to the supra- and sub-threshold regimes, respectively.
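The expansion used here is the standard Laplace (saddle-point) method, with $a$ playing the role of the small parameter: a maximum of the exponent at an interior point gives a Gaussian contribution, while a maximum at the boundary of the integration range gives a linear (boundary-layer) contribution,

```latex
\int g(x)\, e^{h(x)/a}\, \mathrm{d}x \;\simeq\;
g(x^*)\, e^{h(x^*)/a} \sqrt{\frac{2\pi a}{|h''(x^*)|}}
\quad (x^* \text{ interior maximum}),
\qquad
\int^{b} g(x)\, e^{h(x)/a}\, \mathrm{d}x \;\simeq\;
\frac{a\, g(b)\, e^{h(b)/a}}{h'(b)}
\quad (h' > 0 \text{ at the boundary } b).
```

Here $g$ and $h$ are generic stand-ins for the prefactor and exponent of Eq. (A13).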

#### Supra-threshold regime v_{max} < *α* (*θ* < *µ*)

The exponent in Eq. (A13) is negative for every value of *x*, except at *x* = *v*, where it is zero. The integral in *x* can be written as

With a change of variable *z* = (*x* − *v*)/*a* we obtain

Neglecting all the terms of order *a* we get

Performing the integration in *v* we obtain

Eq. (B4) is the transfer function of a deterministic conductance-based neuron with the addition of the refractory period. This is not surprising since the noise term becomes negligible compared to mean inputs in the small *a* limit. In Fig. 8B we show that Eq. (B4) gives a good description of the transfer function predicted by the mean field theory in the supra-threshold regime.

#### Sub-threshold regime v_{max} > *α* (*θ* > *µ*)

First, we consider *α* < *v*_{min} (*µ* < *V*_{r}). For every value of *v*, the integral in *x* in Eq. (A13) has a maximum in the integration interval; hence it can be performed through the saddle-point method and gives

In the last equation, the exponent in the integrand has a minimum at *v* = *α* and is maximal at *v* = *v*_{max}; we expand the exponent around *v* = *v*_{max} and, keeping terms up to first order, obtain

In the regime *v*_{min} < *α* < *v*_{max}, the integral in *v* of Eq. (A13) can be divided into three parts
the third integral is analogous to the case *α* < *v*_{min}; hence it has an exponential dependence on the parameters and dominates the other terms. In Fig. 8A we show that Eq. (B6) gives a good description of the transfer function predicted by the mean field theory for *µ* < *θ*.

### 2. Single neuron *CV* of ISI at strong coupling

In this section we provide details of the derivation of the approximate expressions for the *CV* of the response. Starting from the mean field result of Eq. (A14), we compute the integrals using the approach discussed above.

#### Suprathreshold regime *v*_{max} < *α* (*θ* < *µ*)

The inner integral in Eq. (A14) yields in the small *a* limit
from which we obtain
hence the rescaling needed to preserve the deterministic component, *a* ~ 1/*K*, produces *CV*^{2} ~ *a* ≪ 1. We validated this result numerically in figs. 8H and 10F.

#### Subthreshold regime *v*_{max} > *α* (*θ* > *µ*)

The integral defining the *CV*, Eq. (A14), can be expressed as
with

The first integral gives

For the second integral, we obtain

Integrating in *z* we obtain

Integrating in *v* we obtain

Using Eq. (B6) we obtain
which corresponds to the CV of the ISIs of a Poisson process with dead time, with rate *ν* and refractory period *τ*_{rp}. We validated this result numerically in figs. 8D and 10C.

### 3. Scaling relations preserving firing in the strong coupling limit

In this section we use the simplified expressions derived above to define scaling relations between *a* and *K* which preserve the neural response in the strong coupling limit. Importantly, the scaling defined here depends on the operating regime of the neuron, i.e. on the asymptotic value of *µ*.

In the limit of large *K*, terms in Eq. (A8) can be written as
while *µ* and the remaining functions are independent of *K*, *a* and *ν*_{E}. We have shown in the previous section that the single neuron transfer function is given by
with

For *µ* > *θ*, the parameters *a* and *K* in Eq. (B20) appear only in the combination *aK*. It follows that a rescaling
leaves the neural response invariant for large *K*. For *µ* < *θ*, Eq. (B20), and hence the transfer function, is invariant under the rescaling

In Fig. 10A,D we show neural responses computed for different values of *K* with *a* rescaled according to Eqs. (B21) or (B22); as predicted, the single neuron transfer function remains invariant as *K* increases. Note that the response remains nonlinear in the limit of large *K*; we will show in the next section that, in the network case, because of the self-consistency relation, nonlinearities are suppressed by the scaling relation.

Finally, from Fig. 10C,F, we see that the rescaling preserves the *CV* for *µ* < *θ* and suppresses it for *µ* > *θ*. In the case *µ* < *θ*, the *CV* is given by Eq. (B17). This expression shows that the scaling relation of Eq. (B22) also leaves the *CV* invariant. Interestingly, in some parameter regimes, the *CV* in figs. 8D and 10C shows a non-monotonic behavior with *ν*_{X} which is not captured by Eq. (B17). In particular, a *CV* above one is observed when *µ* is below the reset *V*_{r}. As pointed out in [76], this supra-Poissonian firing is explained by the fact that, when *µ* < *V*_{r}, the spiking probability is higher just after firing than it is afterwards. In agreement with this interpretation, we find that the non-monotonic behavior of the *CV* disappears in the large *K* limit, where the region of inputs for which *µ* < *V*_{r} becomes negligible. Thus, our analysis shows that the irregularity of firing is preserved in the strong coupling limit of a single neuron with *µ* < *θ*.

In the case *µ* > *θ*, the *CV* is given by Eq. (B9). This expression shows that the scaling relation of Eq. (B21) produces a *CV* which decreases as 1*/K* in the strong coupling limit. It follows that, in a single neuron with *µ* > *θ*, the strong coupling limit produces finite firing that is regular.

Starting from the next section, we will focus our attention on networks of conductance-based neurons. Since we are interested in describing the irregular firing observed in cortex, we will focus our study on networks with *µ* < *θ*.

## Appendix C: Firing rate and scaling relation in leaky integrate-and-fire neuron models with voltage-dependent currents

In the main text, we have shown that, when coupling is strong and *a* ≪ 1, the response of a single LIF neuron with conductance-based synapses is well approximated by Eq. (12), i.e. Kramers escape rate. Using this expression, we have shown that the scaling relation of Eq. (14) allows finite firing in single neurons and in networks of neurons. Here, we show that the first order approximation of this scaling, i.e. *a* ~ 1/log(*K*), also appears in neuron models with additional biophysical details, such as spike generating currents [43] and voltage-gated subthreshold currents [23], as long as coupling is strong, *a* is small, and synapses are conductance-based.
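The role of the 1/log(*K*) scaling can be illustrated with a toy calculation. The sketch below assumes a caricature of a Kramers-type rate, *ν* ∝ *ν*_{X} *K* exp(−*c*/*a*), in which the prefactor grows with *K* and the escape factor is exponentially small in 1/*a*; this functional form and all parameter values are illustrative assumptions, not the paper's Eq. (12). Choosing *a* = *c*/log(*K*) cancels the growth of the prefactor, so the rate stays finite as *K* increases, while a fixed *a* makes the rate diverge.

```python
import numpy as np

# Caricature of a Kramers-type rate: prefactor grows with K, escape factor
# is exponentially small in 1/a. This form is illustrative, not Eq. (12).
c = 2.0        # effective barrier constant (hypothetical)
nu_x = 5.0     # input rate (arbitrary units)

def rate(K, a):
    return nu_x * K * np.exp(-c / a)

Ks = [10**3, 10**4, 10**5]
diverging = [rate(K, 0.1) for K in Ks]            # fixed efficacy: grows with K
balanced = [rate(K, c / np.log(K)) for K in Ks]   # a = c/log(K): K-growth cancels, rate stays at nu_x
```

With the logarithmic rescaling, exp(−*c*/*a*) = 1/*K* exactly, so the toy rate equals *ν*_{X} for every *K*.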

We consider integrate-and-fire models featuring voltage-dependent currents, indicated here as *ψ*(*V*), and conductance-based synapses. In these models, the membrane potential dynamics can be written as

In the leaky integrate-and-fire model (LIF), *ψ*(*V*) = 0 and Eq. (C1) reduces to Eq. (1) analyzed in the main text. In the exponential integrate-and-fire model (EIF) [43], the function *ψ*(*V*) = Δ*T* *g*_{L} exp[(*V* − *θ*)/Δ*T*] describes the spike generation current; in this model, once the membrane potential crosses the threshold *θ*, it diverges to infinity in finite time. The current generated by inward rectifier voltage-gated channels, such as the one recently reported in [23], is captured by an expression of the form *ψ*(*V*) = −*g*_{in}(*V*)(*V* − *E*_{in}), where *g*_{in}(*V*) and *E*_{in} represent the conductance and the reversal potential of the channels, respectively; in the case of [23], 1/*g*_{in}(*V*) was shown to be well approximated by a linearly increasing function of *V*.

Following an approach analogous to the one used for the derivation of Eq. (4), the dynamics of Eq. (C1) can be approximated by
where *ζ* is a white noise term with zero mean and unit variance density, while *τ*, *µ* and *σ*(*V*) are as in Eq. (5). In what follows, as in the main text, we use the effective time constant approximation [39], i.e. we neglect the multiplicative component of the noise term in Eq. (C2) and make the substitution *σ*(*V*) → *σ*(*µ*^{∗}), where *µ*^{∗} is the mean value of the membrane potential dynamics.

The firing rate of a neuron following Eq. (C2) can be computed exactly using Eq. (A5) and is given by

In what follows, we provide a more intuitive derivation of the single neuron response, which is valid in the biologically relevant case of *a* ≪ 1. The function in Eq. (C2) can be thought of as an energy function which drives the dynamics of the membrane potential. In the case of LIF neurons, it is a quadratic function with a minimum at *V* = *µ*. In neuron models with a spike generation current, such as the EIF model [43], the shape of the function far from threshold is qualitatively similar to that of the LIF model (with a minimum at *V* = *µ*^{∗}), but becomes markedly different close to threshold, where the potential energy has a maximum at *V* = *θ*^{∗} and goes to −∞ for *V* > *θ*^{∗}. Here, we focus on the case in which additional subthreshold voltage-gated currents do not lead to additional minima of the energy function, a scenario that can happen with potassium inward-rectifier currents (e.g. see [77], chapter 4.4.3). In models in which the energy function has a single minimum in the subthreshold range at *µ*^{∗} and a maximum at *θ*^{∗}, the firing rate of a neuron when input noise is small (i.e. when *a* ≪ 1) can again be computed using Kramers escape rate, which gives the average time it takes for the membrane potential to go from *µ*^{∗} to *θ*^{∗} (see [42], section 5.5.3)
where
while indicates quantities that remain of order 1 in the small *a* limit, provided the external inputs *ν*_{X} are at least of order 1/(*aKτ*_{L}). Eq. (C4) is the generalization of Eq. (12) to the case of integrate-and-fire neuron models with voltage-dependent currents; it shows that, at the dominant order, finite firing emerges if *a* ~ 1/log(*K*). Moreover, Eq. (C4) shows that corrections to the logarithmic scaling depend on the specific type of voltage-dependent currents used in the model.
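The escape-rate computation above can be sketched numerically. The snippet below evaluates the standard Kramers formula for overdamped dynamics d*V* = −*U*′(*V*)d*t* + √(2*D*)d*W* in a generic potential with one minimum and one maximum; the cubic potential and the noise intensity *D* are placeholders chosen for illustration, not the energy function of Eq. (C2).

```python
import numpy as np

# Kramers escape rate for overdamped dynamics dV = -U'(V) dt + sqrt(2 D) dW,
# for a potential with a single minimum (v_min) and maximum (v_max).
# The cubic potential and D are placeholders, not the energy of Eq. (C2).
def U(v):
    return v**2 / 2.0 - v**3 / 3.0   # minimum at v = 0, maximum at v = 1

def kramers_rate(U, v_min, v_max, D, h=1e-4):
    # curvature of the potential at the two extrema, by finite differences
    U2 = lambda v: (U(v + h) - 2.0 * U(v) + U(v - h)) / h**2
    barrier = U(v_max) - U(v_min)
    return np.sqrt(U2(v_min) * abs(U2(v_max))) / (2.0 * np.pi) * np.exp(-barrier / D)

r_small_noise = kramers_rate(U, 0.0, 1.0, D=0.02)
r_large_noise = kramers_rate(U, 0.0, 1.0, D=0.05)
```

The exponential suppression of the rate as the noise intensity decreases is the mechanism that, in the text, makes the 1/log(*K*) scaling of *a* necessary for finite firing.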

## Appendix D: Calculations in the strong coupling regime - Networks

In this section, we show how the results on the strong coupling limit of the single neuron response can be generalized to the network case. First, we analyze the problem in the case in which excitatory and inhibitory neurons have the same biophysical properties (model A). For this model, we start by discussing the results obtained using the effective time constant approximation, and then discuss the full results. Then we study the case in which excitatory and inhibitory neurons have different biophysical properties (model B).

### 1. Model A, effective time constant approximation

As discussed in the main text, the network response in model A with the effective time constant approximation is obtained by solving the self-consistency condition given by Eqs. (19) and (10). At strong coupling, this condition can be simplified to the form of Eq. (12). In the strong coupling limit, when *ν* ≪ 1/*aKτ*_{L} and *ν* ≪ 1/*τ*_{rp}, the right hand side of Eq. (10) depends on *ν* and *ν*_{X} only through their ratio. Therefore, we look for solutions of the simplified self-consistency condition with a Taylor expansion

Keeping only terms up to first order in *x*, the self-consistency condition becomes
from which we find

The solution of Eq. (D2) provides the linear component of the network response; this is preserved in the strong coupling limit with an expression analogous to Eq. (14) but with

This uniquely defines a scaling between *a* and *K* (see Fig. 3A for an example of the scaling function). We test the validity of our result in Fig. 3B. The numerical analysis shows that, as *K* increases, the scaling relation prevents saturation and suppression of the network response. However, unlike what happens in the single neuron case, the shape of the transfer function is not preserved and becomes increasingly linear as *K* becomes larger. This is analogous to what happens in the balanced state model [7, 8, 10, 72], where the network transfer function becomes linear in the strong coupling limit. In the case under investigation here, we can understand this suppression of nonlinearities by looking at the second order terms in the expansion of Eq. (D1). Keeping the dominant contribution in *a*, we find

Hence *ρ*_{2} goes to zero as *a* decreases, producing a linear transfer function. This follows directly from the self-consistency relation and is not present in the single neuron case, where in fact a nonlinear transfer function is observed in the large *K* limit. Fig. 3B shows that linearity is reached very slowly with *K*; this follows directly from Eq. (D3), where the suppression of nonlinear terms is controlled by *a*, which goes to zero slowly with *K* (approximately as 1/log(*K*)).
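The structure of the self-consistency problem can be illustrated with a minimal numerical sketch. Below, a made-up transfer function whose value depends on *ν* and *ν*_{X} only through the ratio *x* = *ν*/*ν*_{X} is solved by bisection; as in Eq. (D2), this ratio dependence forces a linear solution *ν* = *ρ*_{1}*ν*_{X}, with *ρ*_{1} fixed by a scalar equation. The transfer function itself is hypothetical, not the one derived in the text.

```python
# Toy self-consistency: nu = F(nu, nu_x). The transfer function F below is
# made up; it only mimics the structure of Eq. (10), whose right-hand side
# depends on nu and nu_x only through the ratio x = nu / nu_x.
def F(nu, nu_x):
    return nu_x / (1.0 + 2.0 * nu / nu_x)

def solve(nu_x, lo=1e-9, hi=1e3):
    # bisection on g(nu) = F(nu, nu_x) - nu: positive at lo, negative at hi
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if F(mid, nu_x) - mid > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

rates = [solve(nu_x) for nu_x in (1.0, 2.0, 4.0)]
# ratio dependence forces a linear solution: nu = rho1 * nu_x, with
# rho1 = 1/(1 + 2 rho1), i.e. rho1 = 0.5 for this toy F
```

Doubling *ν*_{X} exactly doubles the solution *ν*, which is the linearity the scaling analysis predicts for the network (but not for the single neuron).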

### 2. Model A, multiplicative noise

In this section, we generalize the approach used above, relaxing the effective time constant approximation. As discussed in Appendix B, Eq. (A13) in the strong coupling limit becomes with and

Here we assumed *ν* ≪ 1/*τ*_{L}*ν*_{X}, so that the function depends on *ν* and *ν*_{X} only through the combination *ν*/*ν*_{X}. We will show below that a scaling relation analogous to that of single neurons holds; hence, for *K* large enough, *ν* ≪ 1/*τ*_{L}*ν*_{X} is automatically satisfied. To solve the self-consistency condition, we express the firing rate *ν* with a Taylor expansion

Note that in Eq. (D7) we assumed *ρ*_{0} = 0; we will come back to this point at the end of the section. Under this assumption, the function Q depends only on powers of the dimensionless variable *x*. Keeping only terms up to first order in *x*, Eq. (D4) becomes
from which we find

The solution of Eq. (D9) provides the linear component of the network response, i.e. its gain; we will discuss this function in more detail at the end of this section.

From Eq. (D9) we find that the network gain *ρ*_{1} is preserved in the strong coupling limit if the factor
is constant. Eq. (D10) uniquely defines a scaling between *a* and *K* (see Fig. 11C for an example of the scaling function). We test the validity of the scaling in Fig. 11 as follows: given a set of parameters *a*, *K* and *ρ*_{1}, we compute numerically the transfer function from Eq. (A13); then we increase *K*, determine the corresponding change in *a* using Eq. (D10), and compute the transfer function again. Results of this procedure are shown in Fig. 11A. The numerical analysis shows that, as *K* increases, our scaling relation prevents saturation and the network response remains finite.
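The root-finding step of this procedure can be sketched as follows. We solve for the *a* that keeps a gain factor constant as *K* grows; the factor *Q*(*a*, *K*) = *K* exp(−*c*/*a*) below is a hypothetical stand-in for the combination appearing in Eq. (D10), used only to show the numerical step and the resulting slow, roughly 1/log(*K*), decrease of *a*.

```python
import numpy as np

# Find a(K) that keeps a gain factor Q(a, K) constant. Q below is a
# hypothetical stand-in for the combination in Eq. (D10); it is
# monotonically increasing in a for fixed K, so bisection applies.
c, Q0 = 2.0, 10.0

def Q(a, K):
    return K * np.exp(-c / a)

def a_of_K(K, lo=1e-3, hi=10.0):
    # bisection, valid while Q(lo, K) < Q0 < Q(hi, K)
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if Q(mid, K) < Q0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

Ks = np.array([1e3, 1e4, 1e5, 1e6])
a_vals = np.array([a_of_K(K) for K in Ks])
# closed form for this Q: a = c / log(K / Q0), decreasing roughly as 1/log(K)
```

The same bisection applies to the true constraint of Eq. (D10), for which no closed form is available and the scaling function must be obtained numerically, as in Fig. 11C.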

As in the case with the diffusion approximation, the shape of the transfer function is not preserved by the scaling and an increasingly linear response is observed. We can understand this suppression of nonlinearities by looking at the second order terms in the expansion of Eq. (D4); we find
and, keeping the dominant contribution in 1*/a* at the denominator,

Hence *ρ*_{2} goes to zero as *a* decreases, producing a linear transfer function. The nonlinearities at low rate in Fig. 11A (e.g. see red and yellow lines) show that our assumption *ρ*_{0} = 0 is not valid in general. However, it turns out that the scaling relation defined above also suppresses these nonlinearities in the limit of strong coupling (e.g. blue and cyan lines).

We now characterize the dependency of the transfer function gain, i.e. its slope, on network parameters. For fixed network parameters, the network gain *ρ*_{1} is defined as the solution of Eq. (D9); solutions as a function of *a* and *g* are shown in Fig. 11E. At fixed values of *a*, the gain initially decreases as *g* increases and, for *g* large enough, the opposite trend appears. This behavior is due to two different effects produced by the increase of *g*: on one hand, it increases the strength of recurrent inhibition; on the other hand, it decreases the equilibrium membrane potential *µ* and brings it closer to the inhibitory reversal potential *E*_{i}, which in turn weakens inhibition (see Fig. 11F). Fig. 11E shows that the gain is finite only for a finite range of the parameter *g*; divergences appear because recurrent inhibition is not sufficiently strong to balance excitation. At small *g*, the imbalance is produced by the weak efficacy of inhibitory synapses; at large *g*, inhibition is suppressed by the approach of the membrane potential to the reversal potential of inhibitory synapses. Increasing the value of *a* produces an upward shift in the curve and, at the same time, decreases the range of values in which the gain is finite. The decrease in gain at low values of *g* is also observed in networks of current-based neurons [10], where the gain is found to be 1/(*gγ* − 1). Finally, we note that the difference between conductance-based and current-based models decreases with *a*.

To conclude this analysis, we give an approximate expression for the probability distribution of the membrane potential of Eq. (A12) which, in the strong coupling limit, becomes
where *V*_{max} is the value of the membrane potential *V* which maximizes the integrand of Eq. (A12), while the function *u*(·) has been defined in Eq. (A15). Examples of the probability distribution and the corresponding approximate expressions are given in Fig. 11D.

### 3. Model B, multiplicative noise

In this section we generalize the results obtained so far to the case of networks with excitatory and inhibitory neurons with different biophysical properties.

#### a. Model definition

Here we take into account the diversity of the two types of neurons with for excitatory neurons and for inhibitory neurons. We use the parametrization and

Eq. (1) becomes

The expressions above refer to excitatory neurons; analogous expressions hold for inhibitory neurons.

The firing rate is given by solving a system of two equations
with
and analogous expressions for the inhibitory population. The probability distribution of the membrane potential and the *CV* are straightforward generalizations of Eqs. (A12) and (A14).

#### b. Scaling analysis

We parametrize inputs to the two populations as *ν*_{EX} and *ν*_{IX} = *ην*_{EX}. Using an analysis analogous to the one described above, we obtain a simplified expression for the self-consistency Eq. (D20), which reads
where
and

We investigate the solution in the strong coupling limit using an expansion and obtain

Eq. (D26) defines the gain of the excitatory and inhibitory populations. As for model A, requiring that network gain is preserved in the large *K* limit is equivalent to assuming the products
remain constant; these constraints define how synaptic strength should scale with *K* to preserve the response gain. We note that, since this quantity is different for the two populations, in the general case there are two different scalings for the two populations; in Fig. 12 we verify this prediction.

## Appendix E: Simulations vs theory

All the results shown in the main text are based on the mean field analysis of the network dynamics. In this section, we investigate how the predictions of the mean field theory compare to numerical simulations of networks of conductance-based neurons.

Using the simulator Brian2 [66], we simulated the dynamics of networks of spiking neurons defined by Eq. (1). We investigated networks of *N*_{E} excitatory and *N*_{I} inhibitory neurons; the two groups were driven by two populations of Poisson units of size *N*_{EX} and *N*_{IX}, respectively. Simulations were performed for *N*_{E} = *N*_{I} = *N*_{EX} = *N*_{IX} = 10*K* and 100*K*, with no significant differences between the two. We used uniformly distributed delays of excitatory and inhibitory synapses: delays were drawn randomly and independently at each existing synapse from uniform distributions in the range [0, 10] ms (E synapses) and [0, 1] ms (I synapses). For fixed network parameters, the dynamics was simulated for 10 seconds with a time step of 10 *µ*s. We performed simulations for different values of *K*; the value of *a* was rescaled according to the scaling relation of Eq. (D10). From the resulting activity we measured the firing rate, the *CV*, and the probability distribution of the membrane potential; results are shown in Fig. 13. Mean field predictions are in qualitative agreement with numerical simulations, and the agreement improves as *a* decreases. Deviations from mean field theory are expected to arise from three factors: (1) the finite size of conductance jumps due to pre-synaptic action potentials; (2) correlations in synaptic inputs to different neurons in the network due to recurrent connectivity; (3) temporal correlations in synaptic inputs due to non-Poissonian firing. In our simulations, deviations due to (1) and (2) become small when both *a* and the connection probability are small. Deviations due to (3) become small when *ν* ≪ 1/*τ*_{rp} since, as shown in Eq. (B17) of Appendix B, the statistics of presynaptic firing tend to those of a Poisson process.
As predicted by the mean field analysis, with increasing *K* (and decreasing *a*) the network response becomes linear and approaches the asymptotic scaling; the firing remains irregular, as shown by the *CV*, and the membrane potential becomes Gaussian distributed.
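The connectivity and delay setup described above can be sketched without the simulator. The snippet below builds one excitatory-to-excitatory block with mean in-degree *K* (downscaled for illustration) and per-synapse uniform delays as in the simulations; the Erdős–Rényi connection rule and the seed are assumptions (the paper specifies only the mean number of synapses per neuron), and the Brian2 setup itself is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

K = 100                # mean in-degree (downscaled from the paper's values)
N_E = 10 * K           # population size, N = 10 K as in the simulations
p = K / N_E            # connection probability giving mean in-degree K

# Erdos-Renyi-like adjacency (an assumption; the paper fixes only the mean K;
# self-connections are not excluded in this sketch)
conn_EE = rng.random((N_E, N_E)) < p

# delays drawn independently at each existing synapse: uniform in [0, 10] ms
# for E synapses (I synapses would use [0, 1] ms)
delays_ms = rng.uniform(0.0, 10.0, size=int(conn_EE.sum()))

mean_indegree = conn_EE.sum(axis=1).mean()
```

The realized in-degrees fluctuate around *K* with standard deviation of order √*K*, which is the source of the rate heterogeneity analyzed in Appendix F.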

## Appendix F: Effects of heterogeneity in the connectivity between neurons

In this section, we describe how fluctuations in single cell properties modify the expressions derived above; in particular, we investigate the effect of heterogeneities in the number of connections per neuron in the simplified framework of model A. The formalism described here is a generalization to networks of conductance-based neurons of the analysis performed in refs. [56, 78] for networks of current-based neurons.

We assume that the *i*-th neuron in the network receives projections from external, excitatory and inhibitory neurons whose numbers are drawn randomly from Gaussian distributions with mean *K* (*γK*) and variance ∆*K*^{2} (*γ*^{2}∆*K*^{2}) for excitatory (inhibitory) synapses. Note that ∆*K*^{2} is assumed to be sufficiently small that the probability of generating a negative number can be neglected. Fluctuations in the number of connections are expected to produce a distribution of rates in the population, characterized by mean *ν* and variance ∆*ν*^{2}. As a result, the rates of incoming excitatory and inhibitory spikes differ from cell to cell and become
where *r*_{E,I} are the average presynaptic rates and the remaining factors are realizations of a quenched normal noise with zero mean and unit variance, fixed in a given realization of the network connectivity. Starting from Eq. (F1), the rate *ν*^{i} of the cell is derived as in the case without heterogeneities; the main difference is that it is now a function of the particular realizations of the quenched noise. The quantities *ν* and ∆*ν*^{2} are obtained from population averages through the self-consistency relations
where 〈·〉 represents the Gaussian average over the variables *z*_{E} and *z*_{I}. Once *ν* and ∆*ν*^{2} are known, the probability distribution of firing rates in the population is given by

As shown in the main text (Fig. 4A), Eq. (F3) captures quantitatively the heterogeneity in rates observed in numerical simulations.
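The self-consistent computation of *ν* and ∆*ν*^{2} can be sketched with Gauss–Hermite quadrature over the quenched variables. In the snippet below, a single Gaussian variable *z* stands in for the pair (*z*_{E}, *z*_{I}), and a made-up square-root transfer function stands in for the true single-neuron rate; every functional form and parameter is an illustrative assumption, not the paper's Eq. (12).

```python
import numpy as np

# Quenched-disorder mean field sketch: one Gaussian variable z stands in
# for (z_E, z_I); the transfer function phi is made up.
z, w = np.polynomial.hermite_e.hermegauss(41)   # probabilists' Gauss-Hermite
w = w / w.sum()                                  # weights of a standard-normal average

cv_k = 0.1                      # relative spread of in-degrees, Delta K / K
nu_x = 5.0                      # external drive (arbitrary units)

def phi(inp):
    return 2.0 * np.sqrt(np.maximum(inp, 0.0))  # hypothetical transfer function

nu = 1.0
for _ in range(200):            # fixed-point iteration of the population average
    rates = phi(nu_x + nu * (1.0 + cv_k * z))   # rate of a neuron with quenched z
    nu = float(np.sum(w * rates))               # self-consistent mean rate
dnu2 = float(np.sum(w * (rates - nu) ** 2))     # population rate variance
```

The same loop, with the actual transfer function in place of `phi` and a two-dimensional quadrature over (*z*_{E}, *z*_{I}), implements the self-consistency relations of this appendix.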

In the large *K* (small *a*) limit, the mathematical expressions derived above simplify significantly. First, as long as the parameter *µ*^{i} of the i-th neuron is below threshold, its rate is given by an expression analogous to Eq. (12) which, for small ∆_{E,I}, can be written
where *z*^{i} is generated from a Gaussian random variable with zero mean and unit variance. Moreover, if responses are far from saturation, the single rate can be written as
where *ν*_{0} is the rate in the absence of quenched noise (i.e. Eq. (20) of the main text). It is easy to show that, in Eq. (F5), Ω^{2} is independent of *a*, *K* and *ν*_{X} in the large *K* (small *a*) limit. Finally, as noted in [56], if the single neuron rate can be expressed as an exponential function of a quenched variable *z*, Eq. (F3) can be integrated exactly and the distribution of rates is lognormal, given by

Therefore, when the derivation of Eq. (F5) is valid, rates in the network should follow a lognormal distribution, with parameters given by

For Γ^{2} ≪ 1, we find ∆*ν*/*ν* ≈ Γ/2 which scales linearly with *CV*_{K}, consistent with numerical results shown in Fig. 4C.
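The relation ∆*ν*/*ν* ≈ Γ/2 can be checked with a quick Monte Carlo. The snippet assumes rates of the lognormal form *ν* = *ν*_{0} exp(Γ*z*/2) with *z* standard normal; this explicit form and the values of *ν*_{0} and Γ are assumptions for illustration, consistent with the exponential dependence on the quenched variable but not taken verbatim from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

nu0, Gamma = 5.0, 0.2                 # Gamma^2 << 1; values are illustrative
z = rng.standard_normal(2_000_000)
nu = nu0 * np.exp(0.5 * Gamma * z)    # assumed lognormal form of the rates

rel_spread = nu.std() / nu.mean()
# small-Gamma prediction: Delta nu / nu ~ Gamma / 2
```

For a lognormal variable the exact relative spread is √(exp(Γ²/4) − 1), which reduces to Γ/2 at leading order, consistent with the linear scaling with *CV*_{K} seen in Fig. 4C.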

## Appendix G: Finite synaptic time constants

In this section, we discuss the effect of synaptic time constants on single neuron and network responses. First, we derive an approximate expression for the single neuron membrane time constant; we then compute approximate expressions for the response which are valid for different values of the ratio *τ*_{S}/*τ*; at the end of the section, we discuss the response of networks of neurons with large *τ*_{S}/*τ*.

The single neuron membrane potential dynamics is given by

Using the effective time constant approximation [39], we have
where *g*_{AF} represents the fluctuating component of the conductance *g*_{A}, i.e.
and

Since we are interested in the stationary response, we introduce the term with derivative

Since we are interested in understanding the effect of an additional time scale, we simplify the analysis by assuming a single synaptic time scale *τ*_{E} = *τ*_{I} = *τ*_{S}, and obtain

To obtain the correct limit for *τ*_{S} → 0, we impose *a*_{A} = *a*_{A0}*τ*_{L}/*τ*_{S}, where *a*_{A0} is the value of the synaptic efficacy in the limit of instantaneous synapses. With these assumptions, the system of equations becomes

One can check that, in the limit *τ*_{S} → 0, the equations become analogous to those of the main text with . In what follows, we provide approximate expressions for the single neuron transfer function in three regimes: small time constant [67], large time constant [69], and intermediate values [70]. We also note that a numerical procedure to compute the firing rate exactly, for any value of the synaptic time constant, was recently introduced using Fredholm theory [79].
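A direct way to explore the effect of a finite *τ*_{S} is to integrate a LIF neuron driven by Ornstein–Uhlenbeck (synaptically filtered) noise. The Euler–Maruyama sketch below is a generic stand-in for the dynamics of Eq. (G8), with dimensionless voltage; all parameter values (threshold, reset, *µ*, *σ*, time constants) are illustrative assumptions.

```python
import numpy as np

# Euler-Maruyama simulation of a LIF neuron driven by Ornstein-Uhlenbeck
# (synaptically filtered) noise, a generic stand-in for Eq. (G8):
#   tau dV/dt = mu - V + z,   tau_s dz/dt = -z + noise.
rng = np.random.default_rng(2)

tau, tau_s = 20.0, 5.0      # membrane and synaptic time constants (ms)
mu, sigma = 0.8, 0.4        # mean drive and stationary noise amplitude
theta, v_r = 1.0, 0.0       # threshold and reset (dimensionless voltage)
dt, T = 0.01, 2_000.0       # time step and duration (ms)

n = int(T / dt)
xi = sigma * np.sqrt(2.0 * dt / tau_s) * rng.standard_normal(n)
v, z, spikes = v_r, 0.0, 0
for i in range(n):
    v += dt * (mu - v + z) / tau        # membrane equation
    z += -dt * z / tau_s + xi[i]        # OU noise, correlation time tau_s
    if v >= theta:                       # threshold crossing: spike and reset
        v, spikes = v_r, spikes + 1

rate = spikes / T                        # spikes per ms
```

Increasing `tau_s` at fixed `sigma` reduces the effective voltage fluctuations and suppresses firing, the effect quantified by the three approximations discussed next.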

### 1. Single neuron transfer function for different values of *τ*_{S}/*τ*

For *τ*_{S}/*τ* ≪ 1, as shown in [67], the firing rate can be computed with a perturbative expansion and is given by
with . As shown in Fig. 14, Eq. (G9) generates small corrections around the prediction obtained with instantaneous synapses, and captures well the response for values *τ*_{S}/*τ* ≲ 0.1.

For intermediate values of *τ*_{S}/*τ*, as shown in [70] using Rice's formula [80], the single neuron firing rate is well approximated by the rate of upward threshold crossings of the membrane potential dynamics without reset. Starting from Eq. (G8) and using the results of [70], we obtain

For *τ*_{S}/*τ* ≫ 1, as shown in [69], the neuron fires only when fluctuations of *z* are large enough for *V* to be above threshold; the corresponding rate is given by

As shown in Fig. 14, Eq. (G11) captures the response for values *τ*_{S}/*τ* ≳ 1 and predicts a strong suppression of response at larger *τ*_{S}/*τ*.

Higher order terms in the *τ*_{S}/*τ* expansion could be computed using the approach described in [79]. However, Fig. 14 shows that Eqs. (G9)-(G11) are sufficient to capture quantitatively the responses observed in numerical simulations in the different regimes of *τ*_{S}/*τ*. Eqs. (G9)-(G11) also show that the single neuron response is a nonlinear function of input rates; this nonlinearity prevents a scaling relation between *a* and *K* from rescuing the suppression observed in Fig. 14 and Fig. 6A.

### 2. Network response for *τ*_{S}/*τ* larger than one

In this section, we study responses in networks of neurons with large *τ*_{S}/*τ*. As in the case of instantaneous synapses, the network response can be obtained by solving the self-consistency relation given by the single neuron transfer function with input rates

In particular, solutions of the implicit equation generated by Eq. (G11) give the network response in the region of inputs for which *τ*_{S}/*τ* ≫ 1. In this region of inputs, assuming coupling to be strong, the implicit equation becomes
Eq. (G12), which is validated numerically in Fig. 15, implies that firing is preserved if is of order one, i.e. if

Combining the above equation with the definition of *µ*, we obtain Eq. (21), which captures the behavior of the network response observed in numerical simulations for *τ*_{S}/*τ* ≫ 1 (Fig. 6B and Fig. 15).

Eq. (G12) can be used to understand the effect of connection heterogeneity in networks with large *τ*_{S}/*τ*. In particular, generalizing the analysis of Appendix F, we find that rates in the network, in the limit of small *CV*_{K} and large *K*, are given by
where *ν*_{0} is the population average in the absence of heterogeneity (i.e. the solution of Eq. (G12)), and *z*_{i} is a Gaussian random variable with zero mean and unit variance. The prefactor Ω_{S} is independent of *a* and *K* and is given by

Eq. (G15) is a generalization of Eq. (22) to the case of large *τ*_{S}/*τ*. It shows that, in this limit, the state of the network is preserved under connection fluctuations up to
