Glassy phase in dynamically-balanced neuronal networks

We present a novel mean-field theory for balanced neuronal networks with arbitrary levels of symmetry in the synaptic connectivity. The theory determines the fixed point of the network dynamics and the conditions for its stability. The fixed point becomes unstable when the synaptic gain is increased beyond a critical value that depends on the level of symmetry. Beyond this critical gain, for positive levels of symmetry, we find a previously unreported phase in which the dynamical landscape is dominated by a large number of marginally-stable fixed points. As a result, the network dynamics exhibit non-exponential relaxation and ergodicity is broken. We discuss the relevance of such a glassy phase for understanding dynamical and computational aspects of cortical operation.

Balanced networks are a key theoretical construct to describe the dynamical regime in which the cortex operates and to understand its computational implications [1]. In these networks, neuronal activity dynamically adjusts so that the average input to a neuron is of the same order as its spatial and/or temporal fluctuations [2-5]. This results in a dynamical state (the balanced regime) characterized by low and heterogeneous single-neuron firing rates, with temporally-irregular spiking [6-9] and vanishingly small pairwise correlations in the activity of different neurons [5,10], consistent with experimental observations in the cortex.
The balanced regime is well understood in random networks of binary neurons [3,5,10] and of rate-based neurons [11,12], where random means that the entries of the connectivity matrix are uncorrelated. However, local connectivity (i.e., on spatial scales of the order of 100 µm) in the cortex is not random: reciprocal synaptic connections are both over-represented, as compared to random networks, and stronger than average [13-15]. The effects of such partial symmetry on the dynamics are not well understood [16-18], because the analytical description is difficult: due to the correlations in the connectivity, the fluctuations in the synaptic inputs are not Gaussian. Currently, there is no mean-field description for partially-symmetric balanced networks.
In this letter, we show that the non-Gaussianity of the inputs can be conveniently dealt with by a cavity approximation [19], which leads to an exact mean-field theory when correlations are restricted to reciprocal connections. The theory determines the distribution of neuronal activity in the fixed point and its stability. When the mean-field solution is unstable, we study numerically the network dynamics. We find that the dynamics are not ergodic, provided the correlation in the connectivity is positive.
We consider a fully connected network of N inhibitory rate-based neurons. The state of neuron i is described by its synaptic input, h_i, which evolves in time (in units of the time constant) according to

$\dot h_i = \sqrt{N}\, I - h_i - \frac{1}{\sqrt{N}} \sum_{j \neq i} w_{ij}\, \nu_j, \qquad (1)$

where the first term on the r.h.s. describes an excitatory external input, w_ij > 0 is the efficacy of the synapse connecting neuron j to neuron i, and ν_j, the rate of neuron j, is a function of its synaptic input, i.e., ν_j = φ(h_j). We choose φ(h_j) ≡ h_j Θ(h_j), where Θ(·) is the Heaviside function. The w_ij are given by

$w_{ij} = e^{\sigma_t t_{ij} - \sigma_t^2/2}, \qquad (2)$

where t_ij is normally distributed with mean zero and unit variance. Thus, the synaptic efficacies are log-normally distributed with mean w̄ = 1 and variance (synaptic gain) σ_w² = e^{σ_t²} − 1, which is tuned by adjusting σ_t². The correlation of reciprocal connections (level of symmetry), ρ_w, is tuned by adjusting the correlation ρ_t = ⟨t_ij t_ji⟩.
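To make the model concrete, the construction of the connectivity and a direct integration of the dynamics can be sketched as follows. This is a minimal sketch: the Euler scheme, the parameter defaults, and the mixing of a symmetric and an antisymmetric Gaussian matrix to set the reciprocal correlation are our illustrative choices, and the explicit form of the dynamics is our reading of the model, not code from the letter.

```python
import numpy as np

def make_connectivity(N, sigma_t, rho_t, rng):
    """Log-normal efficacies w_ij = exp(sigma_t*t_ij - sigma_t**2/2), with the
    Gaussian t_ij built so that Corr(t_ij, t_ji) = rho_t."""
    a = rng.standard_normal((N, N))
    s = (a + a.T) / np.sqrt(2.0)               # symmetric part, unit variance
    g = (a - a.T) / np.sqrt(2.0)               # antisymmetric part, unit variance
    c = np.sqrt((1.0 + rho_t) / 2.0)           # mixing: Corr(t_ij, t_ji) = 2c^2 - 1
    t = c * s + np.sqrt(1.0 - c * c) * g
    w = np.exp(sigma_t * t - 0.5 * sigma_t**2)  # mean 1, variance exp(sigma_t^2) - 1
    np.fill_diagonal(w, 0.0)                   # no self-connections
    return w

def simulate(w, I_ext, T=200.0, dt=0.05, rng=None):
    """Euler integration of dh/dt = -h + sqrt(N)*I - (1/sqrt(N)) * w @ phi(h)."""
    N = w.shape[0]
    rng = rng or np.random.default_rng()
    h = rng.standard_normal(N)                 # random initial condition
    for _ in range(int(T / dt)):
        dh = -h + np.sqrt(N) * I_ext - w @ np.maximum(h, 0.0) / np.sqrt(N)
        h = h + dt * dh
    return h, dh
```

For bivariate log-normal pairs, the standard identity gives the resulting level of symmetry as ρ_w = (e^{ρ_t σ_t²} − 1)/(e^{σ_t²} − 1), so ρ_w is tuned by ρ_t as stated in the text.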
The model includes elements of biological realism lacking in previous studies [16,17]: (i) the synaptic efficacies are sign-constrained, to comply with the so-called Dale's law [20]; (ii) the single-neuron activity is described by a non-negative variable. The network, then, operates in the balanced regime for N → ∞. The other modeling choices, such as the distribution of the w ij and the specific φ(·), have been motivated by considerations of analytical/numerical tractability of the resulting equations. They are immaterial for the theory developed below.
We are interested in a mean-field description of the fixed point(s) of our model network (see [21] for a detailed derivation). For this, we need to determine the distribution of the synaptic inputs in a fixed point. We cannot use the central limit theorem to evaluate the sum in Eq. (1), because all the ν_j for j ≠ i depend on ν_i and, hence, are not independent. Moreover, when ρ_w ≠ 0, w_ij and ν_j are also correlated, because ν_j depends on w_ji ν_i, and w_ij and w_ji are correlated. To make progress, we assume that the ν_j are correlated only through ν_i, and rewrite h_i in the fixed point as

$h_i = u_i - \frac{1}{\sqrt{N}} \sum_{j \neq i} w_{ij}\, \delta\nu_j, \qquad u_i \equiv \sqrt{N}\, I - \frac{1}{\sqrt{N}} \sum_{j \neq i} w_{ij}\, \nu_j^-, \qquad (3)$

where ν_j^- is the rate of neuron j when the outgoing connections from neuron i are removed (i.e., w_ji = 0 for all j) and, hence, δν_j is the change in the rate of neuron j due to the presence of neuron i. By construction, the ν_j^- are independent, and the distribution of the cavity input, u_i, across realizations of the w_ij becomes Gaussian in the limit N → ∞. The corresponding mean, µ, and variance, σ², can be expressed in terms of the means and variances of the w_ij and the ν_j^-, because the w_ij and the ν_j^- are also independent by construction. The second sum on the r.h.s. of Eq. (3) (the reaction term) can be computed with linear response theory, because the presence of neuron i induces only a small (i.e., O(1/√N)) perturbation in the input to neuron j. In the limit N → ∞, we find

$-\frac{1}{\sqrt{N}} \sum_{j \neq i} w_{ij}\, \delta\nu_j = \rho_w\, \sigma_w^2\, \chi\, \nu_i, \qquad (4)$

with χ ≡ N^{-1} Σ_j χ_jj, where χ_jj is the static local susceptibility of neuron j. Finally, we can determine h_i self-consistently using Eqs. (3) and (4), i.e.,

$h_i = u_i + \rho_w\, \sigma_w^2\, \chi\, \phi(h_i). \qquad (5)$

As anticipated, the distribution of the synaptic inputs in the fixed point is non-Gaussian when ρ_w ≠ 0. However, Eq. (5) solves the problem: inverting it, we can express ν_i as a function of the cavity input alone, ν_i = φ̃(u_i). Using the effective transduction function, φ̃(·), we then determine the mean and the variance of the distribution of the cavity inputs, as in a random network, by requiring self-consistency.
Additionally, however, we impose self-consistency on χ, which in turn determines the effective transduction function.
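The self-consistency for χ can be sketched numerically. For the threshold-linear φ, an active neuron has χ_jj = 1/(1 − ρ_w σ_w² χ), so that χ solves ρ_w σ_w² χ² − χ + f = 0, with f the fraction of active neurons. These closed forms are our reading of the self-consistency implied by Eq. (5), not formulas quoted from [21], and f is treated here as a given parameter rather than determined self-consistently.

```python
import math

def chi_self_consistent(f, sigma_w2, rho_w):
    """Solve chi = f / (1 - rho_w * sigma_w2 * chi), taking the root that
    reduces to chi = f at rho_w = 0."""
    a = rho_w * sigma_w2
    if a == 0.0:
        return f
    disc = 1.0 - 4.0 * a * f
    if disc < 0.0:
        return None                       # no real solution: mean-field breakdown
    return (1.0 - math.sqrt(disc)) / (2.0 * a)

def mean_field_stable(f, sigma_w2, rho_w):
    """Stability of the mean-field solution: sigma_w^2 * chi_nl < 1, with
    chi_nl = f / (1 - rho_w * sigma_w2 * chi)^2 for threshold-linear units."""
    chi = chi_self_consistent(f, sigma_w2, rho_w)
    if chi is None:
        return False
    chi_nl = f / (1.0 - rho_w * sigma_w2 * chi) ** 2
    return sigma_w2 * chi_nl < 1.0
```

At f = 1/2, this criterion changes sign precisely at σ_w² = 2/(1+ρ_w)², i.e., at the critical gain quoted in the text.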
The mean-field theory is valid only if the fixed point is asymptotically stable; otherwise, we cannot use linear response theory to compute the reaction term. The stability of the fixed point can be assessed by computing the leading-order corrections to the mean-field theory, i.e., the δν_j in Eq. (3). If these corrections are nonphysical, the mean-field solution, and hence the associated fixed point, is unstable [19]. Using φ̃(·), we can write δν_j = φ̃′(u_j) δu_j = χ_jj δu_j (to leading order), where δu_j is the change in the cavity input to neuron j due to neuron i. The key simplification is that the δu_j are Gaussian (by construction). Determining the δν_j self-consistently then reduces to a standard problem [21]. We find

$\overline{\delta\nu^2} = \chi_{nl}\, \overline{\delta u^2}, \qquad (6)$

$\overline{\delta u^2} = \frac{\overline{w^2}\, \nu_i^2 / N}{1 - \sigma_w^2\, \chi_{nl}}, \qquad (7)$

where we have defined χ_nl ≡ N^{-1} Σ_j χ_jj². Clearly, δν² must be non-negative and finite, which requires σ_w² χ_nl < 1. This latter condition is satisfied whenever

$\sigma_w^2 < \frac{2}{(1+\rho_w)^2}. \qquad (8)$

As long as the mean-field solution is stable, there is a unique, stable fixed point of Eq. (1) [21]. Next, we compare the theoretical predictions with the results of numerical simulations. We find a very good agreement, as illustrated in Fig. 1 for the mean-field theory and in Fig. 2 for the leading-order corrections to the mean-field theory, i.e., Eqs. (6)-(7). As can be seen in Fig. 1d, the fraction of active neurons (i.e., those with ν_i > 0) in the fixed point, f, decreases with increasing ρ_w. This can be understood as follows. The stability of the fixed point is controlled by the spectrum of the Jacobian,

$S_{ij} = -\delta_{ij} - \frac{1}{\sqrt{N}}\, w_{ij}\, \Theta(h_j),$

where δ_ij is the Kronecker symbol. The average spectrum (i.e., over realizations of the synaptic matrix) is easily evaluated in the limit N → ∞. There is one large, real, negative eigenvalue, while the remaining eigenvalues, λ_k, are distributed within an ellipse in the complex plane [22]. Their real part satisfies

$\mathrm{Re}(\lambda_k) \le -1 + \sigma_w \sqrt{f}\, (1+\rho_w) \qquad (9)$

for all k with probability 1 when N → ∞. Thus, decreasing f has a stabilizing effect that counteracts the destabilizing effect of increasing ρ_w. We can use Eq. (9) to upper-bound f in a stable fixed point, for given ρ_w and σ_w²: it must be f ≤ f*, where

$f^* = \frac{1}{\sigma_w^2\, (1+\rho_w)^2}. \qquad (10)$

In Fig. 1d we plot f* as a function of ρ_w (red, dashed line). It is always f < f*, and f = f* = 1/2 at the critical point.

What happens when the mean-field solution is unstable? For ρ_w = 1, the dynamics are governed by an energy function featuring exponentially many minima (see, e.g., [23]) and the network is multi-stable. By contrast, for ρ_w = 0 there are no stable fixed points and the dynamics are chaotic [11,12,24-26]. Hence, between ρ_w = 1 and ρ_w = 0 there is a transition from an exponential number of stable fixed points to no stable fixed point. We try to locate the transition by numerically integrating Eq. (1) for 2000 time constants, starting from 100 different initial conditions (ICs), chosen randomly and independently. This procedure is repeated for 5 realizations of the connectivity matrix at given ρ_w and σ_w². We start with small networks, i.e., N ≤ 800; the rationale for this choice should become clear shortly. The results are illustrated in Fig. 3. In Fig. 3a, we plot the number of fixed points found as a function of the rescaled synaptic gain, g_w = −1 + σ_w²(1+ρ_w)²/2, for three values of ρ_w > 0. As predicted by the theory, different ICs lead to the same fixed point for g_w < 0, i.e., when the mean-field solution is stable (see Eq. (8)). Instead, when g_w > 0 (i.e., the mean-field solution is unstable), the network is multi-stable for ρ_w ≥ 0.5. For ρ_w < 0.5 (and g_w > 0), the dynamics rarely converged to a fixed point within the allotted time (data not shown). Obviously, this does not locate the transition: there could be an exponential number of fixed points for N → ∞ and yet none at a given finite N, depending on how fast the logarithm of the typical number of fixed points grows with N.
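The multiple-initial-condition protocol can be sketched in a few lines. Here we use a much smaller network and fewer ICs than in the letter, a parameter set in the stable regime (g_w < 0), and our reading of Eqs. (1)-(2) for the model; all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
N, sigma_t, rho_t, I_ext, dt = 300, 0.4, 0.6, 1.0, 0.05

# Correlated log-normal connectivity (see the model definition in the text).
a = rng.standard_normal((N, N))
c = np.sqrt((1.0 + rho_t) / 2.0)
t = c * (a + a.T) / np.sqrt(2.0) + np.sqrt(1.0 - c * c) * (a - a.T) / np.sqrt(2.0)
w = np.exp(sigma_t * t - 0.5 * sigma_t**2)
np.fill_diagonal(w, 0.0)

def relax(h0, T=200.0):
    """Euler-integrate dh/dt = -h + sqrt(N)*I - w @ phi(h)/sqrt(N) for time T."""
    h = h0.copy()
    for _ in range(int(T / dt)):
        h += dt * (-h + np.sqrt(N) * I_ext - w @ np.maximum(h, 0.0) / np.sqrt(N))
    return h

# Relax from several random ICs; endpoints closer than a small per-neuron
# tolerance are identified, and the distinct fixed points are counted.
endpoints = [relax(rng.standard_normal(N)) for _ in range(5)]
distinct = []
for h in endpoints:
    if all(np.linalg.norm(h - g) / np.sqrt(N) > 1e-4 for g in distinct):
        distinct.append(h)
```

In the stable regime, all ICs converge to the same fixed point; the multi-stability reported in the text is expected only when the gain is raised so that g_w > 0 with ρ_w ≥ 0.5.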
In Fig. 3b, we plot f averaged over fixed points and connectivity matrices, f̄, as a function of g_w. Consistent with the stability bound of Eq. (10), f̄ decreases with g_w. However, it is systematically above f* for g_w ≥ 0, which is indicative of strong finite-size effects; in fact, f̄ decreases with N at fixed g_w. Thus, we performed a finite-size scaling analysis. Interestingly, f̄ extrapolated to N → ∞ is very close to f*, suggesting that the (vast) majority of the fixed points found starting from random ICs are marginally stable for N → ∞. This is confirmed by the finite-size scaling analysis of the spectral gap (the distance of the spectrum of the Jacobian from the imaginary axis) illustrated in Fig. 3c. There, we plot the spectral gap averaged over fixed points as a function of N, for ρ_w = 0.75 and two values of g_w. For g_w = −0.5, the average spectral gap rapidly converges to a positive value. By contrast, for g_w = 2 the average spectral gap keeps approaching 0. This decay is well fitted by αN^{−2/3} (α is a free parameter), as expected if the spectrum touches the imaginary axis for N → ∞ [27]. The inset illustrates the variability of the spectral gap across fixed points, confirming that, indeed, most fixed points are at the edge of stability. For illustration, we plot in Fig. 3d the spectrum of the Jacobian for the fixed point found at g_w = −0.5 (top) and for the fixed points found at g_w = 2 (bottom). Note that g_w = 2 is far from the critical line.
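The spectral diagnostics can be sketched in a small network: relax to the fixed point, build the Jacobian S_ij = −δ_ij − w_ij Θ(h_j)/√N, and measure the spectral gap. The comparison with the bound of Eq. (9) uses the measured active fraction f; network size and parameters are illustrative, and the finite-size slack added to the bound in the check below is ours.

```python
import numpy as np

rng = np.random.default_rng(2)
N, sigma_t, rho_t, I_ext = 300, 0.4, 0.6, 1.0

# Correlated log-normal connectivity, as in the model definition.
a = rng.standard_normal((N, N))
c = np.sqrt((1.0 + rho_t) / 2.0)
t = c * (a + a.T) / np.sqrt(2.0) + np.sqrt(1.0 - c * c) * (a - a.T) / np.sqrt(2.0)
w = np.exp(sigma_t * t - 0.5 * sigma_t**2)
np.fill_diagonal(w, 0.0)

h = rng.standard_normal(N)
for _ in range(4000):                      # relax to the (unique) fixed point
    h += 0.05 * (-h + np.sqrt(N) * I_ext - w @ np.maximum(h, 0.0) / np.sqrt(N))

f = float(np.mean(h > 0))                  # measured fraction of active neurons
jac = -np.eye(N) - (w * (h > 0)[None, :]) / np.sqrt(N)  # Jacobian at the fixed point
eigs = np.linalg.eigvals(jac)
gap = -eigs.real.max()                     # spectral gap
```

In the stable regime the gap stays bounded away from zero; the finite-size scaling of the gap in the unstable regime (Fig. 3c) uses the same measurement repeated over N.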
These results are explained naturally if, for ρ_w > 0, the number of stable fixed points of Eq. (1) grows exponentially with N. At a given ρ_w, the marginally-stable fixed points exponentially outnumber the asymptotically-stable ones, which explains why one always finds close-to-marginal fixed points when starting from random ICs. For the same ρ_w, however, there must also be an exponential number of asymptotically-stable fixed points; this is needed to explain why the marginality of the dynamics is structurally stable against decreasing ρ_w. The number of stable fixed points decreases with decreasing ρ_w, until only unstable fixed points are left at ρ_w = 0 (and σ_w² > 2). This leads to the chaotic phase [26]. If the above scenario is correct, we expect broken ergodicity for any ρ_w > 0, via the same mechanisms at work in mean-field models of glass-forming materials [19,28,29]. Note, however, that one can no longer define an energy function to describe the dynamics when ρ_w < 1. Thus, there is no guarantee that the fixed points and their stability alone determine the dynamics of the network. Nevertheless, it seems plausible (to us) that if other attractors existed (e.g., limit cycles), they would have vanishing basins of attraction for N → ∞, because, in the same limit, an exponential number of stable fixed points has to be accommodated in the phase space. The results we present next support this conjecture.
By definition, the network dynamics are ergodic if the averages of the h_i over a suitably long time interval are independent of the initial conditions. We define

$D_{ab}(T) = \frac{1}{N} \sum_{i=1}^{N} \left[ \bar h_i^a(T) - \bar h_i^b(T) \right]^2, \qquad (11)$

where $\bar h_i^a(T)$ is the time-average of h_i over a time interval T when the dynamics of Eq. (1) start from the IC a (specifying h_j^a(0) for all j). Thus, the dynamics are ergodic if lim_{T→∞} D_ab(T) = 0 for all pairs (a,b) of different ICs. We numerically estimated the behavior of D_ab(T); the results are shown in Fig. 4. When g_w < 0, D_ab(T) rapidly goes to 0, regardless of ρ_w. When g_w > 0, D_ab(T) goes to 0 for ρ_w ≤ 0 (Fig. 4b), and to a finite value for ρ_w = 1 (Fig. 4c). Interestingly, for 0 < ρ_w < 1, the decay of D_ab(T) is so slow that D_ab(2000) > 0 for both ρ_w = 0.26 (Fig. 4d) and ρ_w = 0.45 (Fig. 4e). We have fitted D_ab(T), averaged over (a,b), to a stretched exponential with an offset. As can be seen, the stretched exponential provides a good fit to the observed decay (black, dashed lines in Fig. 4d and 4e). In all cases, the offset was positive, suggesting that the dynamics are indeed not ergodic.
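The ergodicity measure D_ab(T) is straightforward to estimate numerically. The sketch below runs the dynamics from two random ICs in the stable regime, where D_ab(T) should decay toward zero; in the glassy regime (g_w > 0, 0 < ρ_w < 1), the same measurement is where the slow, stretched-exponential decay appears. Parameters and the Euler scheme are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
N, sigma_t, rho_t, I_ext, dt = 200, 0.4, 0.6, 1.0, 0.05

# Correlated log-normal connectivity, as in the model definition.
a = rng.standard_normal((N, N))
c = np.sqrt((1.0 + rho_t) / 2.0)
t = c * (a + a.T) / np.sqrt(2.0) + np.sqrt(1.0 - c * c) * (a - a.T) / np.sqrt(2.0)
w = np.exp(sigma_t * t - 0.5 * sigma_t**2)
np.fill_diagonal(w, 0.0)

def time_average(h0, T):
    """Time-average of h over [0, T], starting from the initial condition h0."""
    h, hbar, steps = h0.copy(), np.zeros(N), int(T / dt)
    for _ in range(steps):
        h += dt * (-h + np.sqrt(N) * I_ext - w @ np.maximum(h, 0.0) / np.sqrt(N))
        hbar += h / steps
    return hbar

h0_a, h0_b = rng.standard_normal(N), rng.standard_normal(N)
# D_ab(T): mean squared difference of the time-averaged inputs for two ICs.
D = {T: float(np.mean((time_average(h0_a, T) - time_average(h0_b, T)) ** 2))
     for T in (25.0, 400.0)}
```

Fitting the decay of D_ab(T) to a stretched exponential with an offset, as done in the text, only requires evaluating D over a grid of T and running a standard nonlinear least-squares fit.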
In summary, we have shown that balanced networks of rate-based neurons exhibit three distinct phases. For low synaptic gain, there is a unique, stable fixed point, regardless of the level of symmetry. For large synaptic gain, the dynamics are chaotic and ergodic in random or anti-symmetric networks, while they are glassy in (partially) symmetric networks. In the glassy phase, settling into a fixed point becomes a very slow process and, accordingly, single-neuron activity keeps fluctuating in time. From this point of view, the glassy phase resembles the chaotic one. The resemblance, however, is only superficial: unlike in the chaotic phase, the memory of the initial conditions never fades away in the glassy phase.
Is there any evidence that the cortex operates in a glassy phase? Neurons in the cortex feature significant autocorrelations in spike counts separated by lags of seconds (see, e.g., [30]). This is a generic property of the glassy phase, which only requires positive levels of symmetry and large synaptic gains, as observed in the cortex. Alternative accounts [11,12,31-34] require significant fine-tuning of some parameter(s) (see [35] for a detailed discussion of this point). More concretely, the glassiness of the cortical dynamics could be directly assessed by looking at multi-point correlation functions [28,29]. This appears feasible, given the availability of long, large-scale recordings of neuronal activity at single-cell resolution.
Is there any computational advantage in operating in the glassy phase? It seems to us that there is no obvious disadvantage. Whatever benefit there is in operating in the chaotic phase, it can only be amplified by operating in the glassy phase, if only because of the broad spectrum of time scales and its robustness to changes in the synaptic gain and/or the level of symmetry. More interestingly, recent studies have shown that glassy materials can reliably perform useful computations, e.g., storing and retrieving memories, when driven cyclically [36]. These results, we believe, offer a fresh perspective for looking at old data, e.g., oscillations and short-term memory, possibly leading to new theoretical and experimental endeavors.
To conclude, we have shown that glassy dynamics can robustly occur in model systems whose dynamics are not described by an energy function. The underlying mechanism appears to be the same as the one leading to glassy dynamics in Hamiltonian systems, that is, the sudden appearance of a rugged dynamical (rather than energy) landscape as some control parameter is varied [19,28,29]. We note that our results support earlier proposals that, to have structurally-robust glassy behavior, it is sufficient for the system to have an exponential number of fixed points with a broad distribution of spectral gaps [37]. This suggests that glassy behavior could, indeed, be quite common in biology. If so, the theoretical and experimental tools developed in physics to understand glasses could provide a suitable paradigm, not just a useful metaphor, to understand quantitatively complex behavior in biological systems.