A unified theory of E/I synaptic balance, quasicritical neuronal avalanches and asynchronous irregular spiking

Neuronal avalanches and asynchronous irregular (AI) firing patterns have been thought to represent distinct frameworks to understand the brain's spontaneous activity. The former is typically present in systems where there is a balance between the slow accumulation of tension and its fast dissipation, whereas the latter is accompanied by the balance between synaptic excitation and inhibition (E/I). Here, we develop a new theory of E/I balance that relies on two homeostatic adaptation mechanisms: the short-term depression of inhibition and the spike-dependent threshold increase. First, we turn off the adaptation and show that the so-called static system has a typical critical point commonly attributed to self-organized critical models. Then, we turn on the adaptation and show that the network evolves to a dynamic regime in which: (I) E/I synapses balance regardless of any parameter choice; (II) an AI firing pattern emerges; and (III) neuronal avalanches display power laws. This is the first time that these three phenomena appear simultaneously in the same network activity. Thus, we show that the two frameworks, once thought to be opposing, may be unified into a single dynamics, provided that adaptation mechanisms are in place. In our model, the AI firing pattern is a direct consequence of hovering close to the critical line, where external inputs are compensated by threshold growth, creating synaptic balance for any E/I weight ratio.

Highlights

Asynchronous irregular (AI) firing happens together with power-law neuronal avalanches under self-organized synaptic balance.

Self-organization toward the critical and balanced state (with AI and power-law avalanches) occurs via short-term inhibition depression and firing threshold adaptation.

The avalanche exponents match experimental findings.

The adaptation time scales drive the self-organized dynamics toward different firing regimes.
Author summary Two competing frameworks are employed to understand the brain's spontaneous activity, both of which are backed by computational and experimental evidence: globally asynchronous and locally irregular (AI) activity arises in excitatory/inhibitory balanced networks subjected to external stimuli, whereas avalanche activity emerges in excitable systems at the critical point between active and inactive states. Here, we develop a new theory for E/I networks and show that there is a state where synaptic balance coexists with AI firing and power-law distributed neuronal avalanches. This regime is achieved through the introduction of short-term depression of inhibitory synapses and spike-dependent threshold adaptation. Thus, the system self-organizes toward the balance point, such that its AI activity arises from quasicritical fluctuations. The need for two independent adaptive mechanisms explains why different dynamical states are observed in the brain.


Introduction
Self-organized quasicriticality (SOqC), as defined by Bonachela and Muñoz [1], appears in nonconservative models for forest fires, earthquakes and neuronal networks [2]. The self-organization consists of a local load-dissipation homeostatic mechanism incorporated in a system that has an underlying critical point. In the

The model
We study the excitatory/inhibitory network of [15], where each neuron is a stochastic leaky integrate-and-fire unit with discrete time step equal to 1 ms, connected in an all-to-all graph. A binary variable indicates whether the neuron fired (X[t] = 1) or not (X[t] = 0). The membrane potential of each cell i in either the excitatory (E) or inhibitory (I) population is given by

V_i^{E/I}[t + 1] = ( μ V_i^{E/I}[t] + I_i^{ext}[t] + (1/N) Σ_{j=1}^{N_E} J_{ij} X_j^E[t] − (1/N) Σ_{j=1}^{N_I} W_{ij} X_j^I[t] ) (1 − X_i^{E/I}[t]),   (1)

where μ is the leak time constant, and I_i^{ext}[t] is an external current. The synaptic parameters are J_{ij} (the excitatory coupling strength) and W_{ij} = gJ_{ij} (the inhibitory coupling strength); g is the inhibition-to-excitation coupling strength ratio. X is a stochastic variable that turns to 1 with a piecewise linear sigmoidal probability

Φ(V) = Γ(V − θ) Θ(V − θ) Θ(V_S − V) + Θ(V − V_S),   (2)

where θ is the firing threshold, Γ is the firing gain constant, V_S = 1/Γ + θ is the saturation potential, and Θ(x) = 1 if x > 0 (zero otherwise) is the step function. The total number of neurons in the network is N = N_E + N_I, where the fractions of excitatory and inhibitory neurons are kept fixed at p = N_E/N = 0.8 and q = N_I/N = 0.2, respectively, as reported for cortical data [17], although the values of p and q do not alter the dynamics of the network [15]. Note that the membrane potential is reset to zero at the t + 1 instant if a spike occurred at t because of the factor (1 − X_i^{E/I}[t]). In addition, regardless of being part of the excitatory or inhibitory population, the transition to X_i[t] = 1 in step t is always followed by X_i[t + 1] = 0: at t + 1, V_i[t + 1] = 0 and Φ(0) = 0. The X_i variables are independently evaluated according to Φ(V_i) for each neuron i at each time step t.
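As a concrete reading of equations (1) and (2), here is a minimal sketch of one network update in Python. The homogeneous couplings J_ij = J and W_ij = gJ, the parameter values, and the seeding of 10% initially active neurons are illustrative assumptions, not the authors' exact simulation code:

```python
import numpy as np

def phi(V, theta=1.0, gamma=0.2):
    # Piecewise linear sigmoid, equation (2): zero below theta,
    # slope gamma in between, saturating at V_S = 1/gamma + theta.
    return np.clip(gamma * (V - theta), 0.0, 1.0)

def step(V, X, n_exc, J, W, I_ext, mu, theta, gamma, rng):
    # One 1 ms update of the all-to-all network, equation (1).
    # X is the 0/1 spike vector; the first n_exc entries are excitatory.
    N = V.size
    exc = J * X[:n_exc].sum() / N      # (1/N) sum_j J_ij X_j^E, with J_ij = J
    inh = W * X[n_exc:].sum() / N      # (1/N) sum_j W_ij X_j^I, with W_ij = g*J
    V_new = (mu * V + I_ext + exc - inh) * (1 - X)   # reset to zero if spiked
    X_new = (rng.random(N) < phi(V_new, theta, gamma)).astype(int)
    return V_new, X_new

rng = np.random.default_rng(0)
N, p, g = 1000, 0.8, 1.5
n_exc = int(p * N)
V = np.zeros(N)
X = (rng.random(N) < 0.1).astype(int)  # seed some initial activity
for _ in range(100):
    V, X = step(V, X, n_exc, J=10.0, W=g * 10.0, I_ext=1.0, mu=0.0,
                theta=1.0, gamma=0.2, rng=rng)
rho = X.mean()                          # instantaneous firing fraction
```

With g at the critical ratio and h = I_ext − θ = 0, the activity decays slowly and may die out in a finite network, which is why the static model needs external sparking, as discussed in the text.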
All parameters are assumed to be self-averaging (i.e., single-peaked distributions with small coefficients of variation). Letting the excitatory synaptic current be I_i^E[t] ≡ (1/N) Σ_j J_{ij} X_j^E[t] and the inhibitory one be I_i^I[t] ≡ −(1/N) Σ_j W_{ij} X_j^I[t] (all the parameters are positively defined, hence the minus sign in front of the inhibitory inputs), we define the net synaptic current,

I_i[t] = I_i^E[t] + I_i^I[t].   (3)

Equation (1) is the same for both excitatory and inhibitory populations. This happens because the inhibitory self-coupling and the inhibitory coupling to the excitatory population are both given by W_{ij}, whereas the self-coupling of the excitatory population and the coupling of the excitatory to the inhibitory population are both given by J_{ij}. The average over neurons of equation (1) yields the mean field of this network. We call this the static version of the model, since the averages W = (1/N²) Σ_{ij} W_{ij} and θ = (1/N) Σ_i θ_i are kept constant in time. It presents a mean-field directed percolation (MF-DP) critical point [15,18] at g = g_c, such that g < g_c is the active excitation-dominated (supercritical) phase and g > g_c corresponds to the inhibition-dominated absorbing state (subcritical). The synapses at the critical point g_c are dynamically balanced: fluctuations in excitation are immediately followed by counter-fluctuations in inhibition. The avalanche distributions are the typical ones expected for a mean-field branching process [15, see also figure 1]. Moreover, the critical point can only be reached when the average external stimulus is of the order of the average threshold, I_ext ∼ θ. Thus, we define the parameter h = I_ext − (1 − μ)θ as the external suprathreshold current; it is analogous to an external field acting on standard models for directed percolation [19,20]. The critical point is then (g_c, h_c) with h_c = 0. A summary of the phase transitions undergone by the static model and its avalanches is given in figure 1.
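The location of the static critical point can be checked numerically. This sketch uses the mean-field result derived later in the text (the net coupling pJ − qW_c must equal (1 − μ)/Γ) together with the parameter values quoted in the simulations section:

```python
# Critical E/I ratio of the static model: criticality requires
# Gamma * (p*J - q*W_c) = 1 - mu (mean-field condition derived in the text).
p, q = 0.8, 0.2
J, Gamma, mu = 10.0, 0.2, 0.0
W_c = (p * J - (1 - mu) / Gamma) / q   # critical inhibitory weight
g_c = W_c / J                          # critical inhibition/excitation ratio
print(round(g_c, 6))  # -> 1.5
```

This reproduces the value g_c = 1.5 marked in figure 3 of the text.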
While some authors hypothesize that the brain must be tuned to criticality by evolution [21], others claim that it just needs to self-regulate [22]. One of the mechanisms for brain criticality relies on the need for self-organization toward a phase transition [23], or, particularly, a critical point [2-4]. The typical critical point of contact-process-like systems requires two constraints [19,20]: one on the coupling (in our work identified with the E/I synaptic coupling g) and the other on the external field (here expressed by the balance between external currents and the firing threshold). Previous works have never dealt with two adapting variables, so we want to explore the consequences of having two competing homeostatic mechanisms in the same system. We then introduce the homeostatic version of the model by letting W (the inhibitory coupling) and θ vary with time:

W_{ij}[t + 1] = W_{ij}[t] + (A − W_{ij}[t])/τ_w − u_w W_{ij}[t] X_j^I[t],   (4)

characterizing inhibitory depression plasticity, and threshold adaptation,

θ_i[t + 1] = θ_i[t] − θ_i[t]/τ_θ + u_θ X_i[t].   (5)

The parameters u_θ and u_w are the fractions of threshold increase and synaptic depression, respectively, τ_θ and τ_w should be moderately large decay time scales, and A is the maximum inhibitory coupling strength (or simply the inhibition amplitude). This type of firing threshold dynamics is known to generate spike-frequency adaptation by mimicking the slow inactivation of Na+ channels accumulated over successive spikes [24,25]. Instead of inhibition, we could also homeostatically regulate the excitatory synapses J_{ij} [15], but experimental evidence suggests that it is the inhibitory synapses that tend to adapt to the excitation level of the network, and not the contrary [10]. Notice that the W_{ij} are the weights from I → I and I → E, so depressing them decreases the inhibition over inhibitory neurons. This, in turn, decreases network activity.
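A minimal sketch of one step of equations (4) and (5), assuming W is stored as an N × N_I matrix (one column per presynaptic inhibitory neuron) and using the methods-section values as illustrative defaults:

```python
import numpy as np

def homeostasis(W, theta, X_I, X, A=73.5, tau_w=1e4, u_w=0.1, tau_t=1e4, u_t=0.1):
    # Equation (4): each inhibitory weight recovers toward the amplitude A on
    # the time scale tau_w and is depressed by a fraction u_w whenever its
    # presynaptic inhibitory neuron spikes (X_I is the 0/1 inhibitory spike vector).
    W = W + (A - W) / tau_w - u_w * W * X_I[None, :]
    # Equation (5): the threshold of every neuron decays on the time scale
    # tau_t and jumps by u_t each time that neuron spikes.
    theta = theta - theta / tau_t + u_t * X
    return W, theta

# one quiet step and one all-spiking step, from an illustrative state
W0, th0 = np.full((4, 2), 50.0), np.ones(4)
W_quiet, th_quiet = homeostasis(W0, th0, np.zeros(2), np.zeros(4))
W_spike, th_spike = homeostasis(W0, th0, np.ones(2), np.ones(4))
```

Without spikes the weights recover toward A and the thresholds decay; with spikes, depression and the threshold increase dominate, which is the negative/positive feedback pair discussed in the text.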
(Figure 1 caption.) Mean-field phase diagram of the static model, obtained from equation (22) with μ = 0 and with W and θ constant in time. The axis parameters are g = W/J and h = I_ext − (1 − μ)θ. The labels refer to the phases where the activity is: asynchronous regular (AR), asynchronous irregular (AI), and synchronous irregular (SI); ρ = 0 is a phase with zero average firing rate. (C) Avalanches at the critical point of the static model correspond to the expected MF-DP phase transition, yielding P(s) ∼ s^{−3/2} and P(T) ∼ T^{−2} (equations (6) and (7)), with s ∼ T², respecting the crackling noise scaling relation (equation (8)). The critical point (g_c, h_c) requires two fine-tunings, and it is separated from the AI state. Threshold adaptation will self-organize the system along the h axis, and synaptic depression, along the g axis. In the text, we show that the addition of these homeostatic mechanisms is able to keep PL avalanches while generating a synaptically balanced AI state.

Methods
MF-DP systems are known to generate a SOqC state when excitatory synapses are depressing through an equation similar to the one above [2,26]. Notwithstanding, here we have not only one, but two simultaneous homeostatic dynamics acting to self-organize the network. In a previous work, we concluded that this homeostatic version of the model is capable of generating a SOqC state possessing synaptic balance [15], but that conclusion relied only on numerical evidence. In the following sections, we derive an analytical mean-field approximation for both the static and homeostatic versions of the model, and we study the conditions for the emergence of the SOqC state in this network. Not only that, but we also determine that the homeostatic model is always synaptically balanced for large enough recovery time scales. Finally, we show that the SOqC dynamics generates power law (PL) avalanches when the activity is AI.
The mechanism that generates this behavior works as follows: when inhibition and threshold are constant in time (i.e., the static system), the network presents a true MF-DP critical point with scale-invariant avalanche distributions that obey the crackling noise scaling relation [15, see also figure 1]. The activity in this system needs to be sparked by external stimuli whenever the system falls into the absorbing state, as usual for any contact-process-like dynamics [19]. Now, we let W and θ respond to network activity by growing through negative and positive feedback, respectively. The resulting dynamical system fluctuates homeostatically around the critical point that was present in the static system [3, see also figure 3]. This eliminates the requirement of artificially inducing avalanches by external stimulus. However, the underlying critical point is approached only to within O(1/τ_θ), making the self-organizing system always slightly supercritical. As a consequence, the avalanches of the adapting system have a PL shape within a restricted (although large) range of sizes and durations.

Simulations
We ran simulations corresponding to equations (1)-(5), and compared them to the analytical results developed in the forthcoming sections. Unless otherwise stated, we fixed the parameters p = 0.8 (fraction of excitatory neurons), q = 0.2 (fraction of inhibitory neurons), I_ext = 1 (external current), Γ = 0.2 (neuronal firing gain), J = 10 (excitatory weight), and u_w = u_θ = u = 0.1 (the fractions of inhibition depression and threshold increase, respectively). The parameter μ is the leak time scale, and we show in the mean-field approximation section that using μ = 0 does not change the phase transition; hence this value is used in the simulations. The remaining parameters, N (total number of neurons in the network), τ_w (recovery time constant of the inhibitory synapses), τ_θ (recovery time of the threshold) and A (maximum dynamical inhibition), were systematically explored. Some simulations have τ ≡ τ_w = τ_θ. Figure 2 shows the average activity of the simulated network as a function of τ for different N, compared to the analytical description derived in the next section.
All simulations were run for a total of 10^6 time steps, and averages were calculated after discarding the first 10^4 time steps to avoid transient effects. We use the conversion factor of 1 time step = 1 millisecond, since a spike of a neuron here takes one time step.

Avalanche computation and distributions
The simulation is the numerical solution of equations (1)-(5), and we measure the instantaneous firing rate of the network of N neurons,

ρ[t] = (1/N) Σ_{i=1}^{N} X_i[t],

where X_i[t] = 1 indicates a firing of neuron i at instant t (in any of the E/I populations). The total number of firings is S[t] = Nρ[t], and we choose the threshold S_th = r(max_t S[t] − min_t S[t]), with r = 0.2 as the fraction of the activity to be discarded. The results presented here are robust for a wide range of r values, provided that r is neither too close to zero nor to one. Then, we calculate the avalanches using the new time series S'[t] = S[t] − S_th: an avalanche of size s_k and duration T_k is defined as

s_k = Σ_{t_k < t < t_{k+1}} S'[t],   T_k = t_{k+1} − t_k,

where t_k and t_{k+1} are two consecutive moments of silence, i.e., S'[t] > 0 for every t_k < t < t_{k+1}. This method generates consistent scaling between s and T in the Ornstein-Uhlenbeck process [27], and also in the MF-DP avalanches (see figure 1).
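The thresholding-and-slicing procedure above can be sketched as follows (a simplified version: an avalanche is taken as a contiguous run of the thresholded series above zero):

```python
import numpy as np

def avalanches(S, r=0.2):
    # Threshold the activity series: S_th = r * (max - min); an avalanche is a
    # contiguous run of S'[t] = S[t] - S_th above zero, its size being the sum
    # of S' over the run and its duration the length of the run.
    S = np.asarray(S, dtype=float)
    S_th = r * (S.max() - S.min())
    above = S > S_th
    sizes, durations = [], []
    t, n = 0, len(S)
    while t < n:
        if above[t]:
            start = t
            while t < n and above[t]:
                t += 1
            sizes.append(float((S[start:t] - S_th).sum()))
            durations.append(t - start)
        else:
            t += 1
    return sizes, durations

sizes, durations = avalanches([0, 0, 3, 4, 0, 5, 0])  # S_th = 0.2 * 5 = 1
```

For this toy series, the two excursions above threshold give sizes (3 − 1) + (4 − 1) = 5 and (5 − 1) = 4, with durations 2 and 1.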
We compare the distributions to the PLs

P(s) ∼ s^{−τ'},   (6)

P(T) ∼ T^{−τ'_t},   (7)

where we use a prime in the exponents to avoid confusion with the adaptation time scales τ, τ_θ and τ_w, and to keep consistent with the absorbing-state phase transition literature [28]. When the system is critical, s and T are correlated through s ∼ T^{1/(σν⊥z)}, so the exponents τ' and τ'_t must follow the crackling noise scaling relation [28,29]:

(τ'_t − 1)/(τ' − 1) = 1/(σν⊥z),   (8)

where σ, ν⊥ and z are other critical exponents. If the equality in equation (8) still holds after both sides are independently fitted, then it is compelling evidence of a critical system. The left-hand side comes from the distribution of avalanches, and the right-hand side from the s(T) plot. We used the censored maximum likelihood estimator (MLE) to obtain the exponents of the avalanches using a discrete power law distribution as a candidate function [30],

P(x) = x^{−α} / Σ_{k=k_min}^{k_max} k^{−α},   (9)

where α is the power law exponent (corresponding to τ' or τ'_t, depending on whether we are fitting the size s or duration T distribution, respectively), and k_min and k_max are the lower and upper limits of the x histogram data points to which the distribution is fitted (again, for either s or T, independently of one another). For convenience, we evaluated the log-likelihood function for this distribution as follows:

l(α) = −α Σ_{i=1}^{n} ln x_i − n ln Σ_{k=k_min}^{k_max} k^{−α},   (10)

where n is the total number of data points in the histogram. The α that maximizes l(α) is the fitted value; we determined it numerically by sweeping α between 1 and 5 in 0.01 increments [31].
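A sketch of the censored discrete MLE described above (grid sweep of α between 1 and 5 in 0.01 steps; the function and variable names are ours, not the authors'):

```python
import numpy as np

def mle_exponent(x, k_min, k_max):
    # Censored discrete power-law MLE, equation (10):
    # l(alpha) = -alpha * sum(ln x_i) - n * ln( sum_{k=k_min}^{k_max} k^-alpha ),
    # maximized over a grid of alpha in [1, 5) with 0.01 increments.
    x = np.asarray(x)
    x = x[(x >= k_min) & (x <= k_max)]
    n = x.size
    ks = np.arange(k_min, k_max + 1, dtype=float)
    log_sum = np.log(x).sum()
    alphas = np.arange(1.0, 5.0, 0.01)
    ll = np.array([-a * log_sum - n * np.log(np.sum(ks ** -a)) for a in alphas])
    return alphas[np.argmax(ll)]

# sanity check on synthetic data drawn from a discrete power law with alpha = 2
rng = np.random.default_rng(1)
ks = np.arange(1, 101)
w = ks ** -2.0
alpha_hat = mle_exponent(rng.choice(ks, size=20000, p=w / w.sum()), 1, 100)
```

On synthetic data the estimator recovers the generating exponent to within a few percent, which is the kind of consistency one should verify before trusting the fitted avalanche exponents.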

Network attractors and the asynchronous irregular state
We define the attractor of the system by the temporal average of the asymptotic network activity after discarding an initial transient t_0,

ρ = (1/(t_f − t_0)) Σ_{t=t_0+1}^{t_f} ρ[t].

The AI state can be visually identified by a spiking activity in which neither the network is synchronized (i.e., there is no apparent global frequency in the ρ[t] variable), nor do the neurons emit spikes regularly (with a constant and homogeneous number of spikes per second) [13]. However, an easier and more quantitative criterion is to assume that the activity is AI when the coefficient of variation (CV) of the interspike interval (ISI) is greater than unity [32]. By dimensional analysis, we assume that the average instantaneous ISI of the network is ISI[t] ∼ 1/(Nρ[t]).

Mean-field approximation
The mean-field approximation is exact for our complete-graph network with self-averaging parameters. Thus, we define the averaged parameters J = ⟨J_{ij}⟩, W = ⟨W_{ij}⟩, θ = ⟨θ_i⟩ and μ = ⟨μ_i⟩, where ⟨·⟩ means average over neurons. We also define the firing rates (the fraction of active sites per time step in each population),

ρ^{E/I}[t] = (1/N_{E/I}) Σ_{i=1}^{N_{E/I}} X_i^{E/I}[t].

The fractions of excitatory and inhibitory neurons are p = N_E/N and q = 1 − p = N_I/N, respectively. Under these conditions, we can average equation (1):

V^{E/I}[t + 1] = ( μ V^{E/I}[t] + I_ext + pJρ^E[t] − qWρ^I[t] ) (1 − X^{E/I}[t]).   (12)

We may omit the E/I superscripts, since this equation holds for both populations. From the definition of the firing function Φ(V) in equation (2), the model may have two types of stationary states: the active state, such that the density of active sites is ρ^E = ρ^I ≡ ρ* > 0, and the quiescent state, where ρ^E = ρ^I ≡ ρ_0 = 0. At instant t + 1, the active population is simply given by

ρ^{E/I}[t + 1] = ∫ Φ(V) P_t^{E/I}(V) dV,   (13)

where Φ(V) is the conditional probability of firing at t + 1 given V at t, and P_t^{E/I}(V) is the probability of having a neuron with potential V at time t.
This relation is sufficient for deriving the temporal evolution map for ρ because the reset of the potential causes the kth subpopulation of neurons that fired together to also evolve together until they fire again. This allows us to write P_t^{E/I}(V) as a sum over subpopulations,

P_t^{E/I}(V) = Σ_{k=0}^{∞} η_k^{E/I}[t] δ(V − U_k^{E/I}[t]),

with δ(V) the Dirac delta function, where U_k^{E/I}[t] is the membrane potential of the kth population of excitatory or inhibitory neurons that fired k time steps before t, and η_k^{E/I}[t] is the proportion of such neurons with respect to the total excitatory or inhibitory population. The potentials U_k come from equation (12),

U_{k+1}[t + 1] = μ U_k[t] + I_ext + pJρ^E[t] − qWρ^I[t],

and the k = 0 population has U_0^{E/I}[t] = 0 for any t, since the reset potential is zero. Then, equation (13) is reduced to:

ρ[t + 1] = Σ_{k=0}^{∞} η_k[t] Φ(U_k[t]).

A neuron with firing age k at time t can, at time t + 1, either fire with probability Φ(U_k[t]) or become part of the population with firing age k + 1, which then has density η_{k+1}[t + 1] = (1 − Φ(U_k[t])) η_k[t], while η_0[t + 1] = ρ[t + 1]. The stationary state is given by the fixed point of equation (17). Inserting equations (15) and (16) into equation (17) and assuming that the state of the system at t + 1 equals the state at t, we get:

ρ = Σ_{k=0}^{∞} η_k Φ(U_k),   (18)

where U_0 = 0; since ρ^E = ρ^I ≡ ρ, the drive reduces to the net coupling W̄ ≡ pJ − qW. The stationary densities and potentials obey η_{k+1} = (1 − Φ(U_k)) η_k (equation (19)) and

U_k = (W̄ρ + I_ext)(1 − μ^k)/(1 − μ),   (20)

so plugging equations (19) and (20) into (18) and shifting the sum index yields equation (21). We consider the case where the stationary potentials lie on the linear part of the firing function Φ, i.e., θ < U_1 < · · · < U_∞ < V_S, with U_∞ = (W̄ρ + I_ext)/(1 − μ) being the limiting potential when k → ∞; then, using the expression from equation (2), we may explicitly write the second Φ(V) in the previous equation. By using the relation in equation (18) and the normalization Σ_{k=0}^{∞} η_k = 1, we obtain equation (21), where we define h = I_ext − (1 − μ)θ as the suprathreshold external current, which may be regarded as an external field.
Using equation (20) to describe U_k only outside of the Φ function argument in the sum of the last term leads to:

ρ = (1 − ρ) Γ (W̄ρ + h)/(1 − μ) − Γ U_∞ Σ_{k=1}^{∞} μ^k η_k,   (21)

with W̄ ≡ pJ − qW, which holds when all stationary potentials lie in the linear region of the Φ function, i.e., when U_1 > θ and U_∞ < V_S, as we have assumed (other solutions are trivial two-cycles or a fixed point equal to zero, the absorbing state).

Critical exponents without homeostasis
The leakage of the cells should not be instantaneous, but rather happen on the time scale ∼ 1/μ. Hence, it is reasonable that 0 < μ ≪ 1 is the important range for the dynamics of the network. However, we show here that the value of μ does not change the critical exponents, and that μ = 0 is well-defined. Within this range, the sum in equation (21) converges to zero close to the critical point W̄_c, or whenever the network lies in a silent regime (zero firings on average). This happens because, when in silence, the firing rate of the network ρ = Σ_k η_k Φ(U_k) → 0 (which holds arbitrarily close to W̄_c as well), and η_k Φ(U_k) μ^k ≪ η_k Φ(U_k) for k ≫ 0. The firing rate ρ is the order parameter and, thus, it must go to zero at the critical point. In this case, equation (21) simplifies to:

ρ = (1 − ρ) Γ (W̄ρ + h)/(1 − μ),   (22)

with W̄ ≡ pJ − qW the net coupling. The critical point lies on the null external field, h = h_c = 0, yielding the steady solution of equation (22), also known as the order parameter,

ρ* = (W̄ − W̄_c)/W̄,   (23)

with β = 1 being its corresponding critical exponent, and

W̄_c = (1 − μ)/Γ.   (24)

These relations can also be written in terms of the synaptic coupling ratio, g = W/J, as:

g_c = [p − (1 − μ)/(ΓJ)]/q,   ρ* = q(g_c − g)/(p − qg).   (25)

For μ = 0, the expression for g_c reduces to the one obtained previously in reference [15]. The field exponent is obtained by isolating h in equation (22), taking W̄ = W̄_c, and expanding for small ρ:

h ≈ (1 − μ)ρ²/Γ  ⇒  ρ ∼ h^{1/δ_h},   (26)

with δ_h = 2. The susceptibility exponent, χ = ∂ρ/∂h|_{h=0}, is obtained by differentiating equation (22):

χ = 1/(W̄_c − W̄) ∼ |W̄ − W̄_c|^{−γ},   (27)

with γ = 1. This value, together with β = 1 and δ_h = 2, puts this model into the mean-field directed percolation (MF-DP) universality class [20,28]; see figure 1 for a complete phase diagram. Thus, having 0 ≤ μ < 1 does not change the nature of the phase transition undergone by the E/I network, as expected [15,18,33]. Also, we see that taking μ = 0 is well-defined, since equations (21)-(27) remain valid. For that reason, we use μ = 0 in all our simulations.

The homeostatic steady state is SOqC
So far, we have seen the static version of the model, where all the synaptic weights J and W, and the thresholds θ, remain constant over time. It has an MF-DP phase transition. Now, we will introduce homeostatic mechanisms to W and θ in the hopes of generating a SOqC state [1,3]. We will derive a mean-field map for the dynamics of the network subjected to homeostatic synaptic inhibition and threshold adaptation. Many studies have shown that a single homeostatic dynamics leads to hovering around the critical point [34-38]. Here, we investigate in detail whether two independent homeostatic dynamics are able to do the same. The discussion in this section is systematically explored in terms of the phase diagrams of the E/I network in figures 3 and 4.
Since μ does not change the critical exponents, we take μ = 0 for simplicity, such that all the subpopulations k ≥ 1 have the same membrane potential, U[t + 1] = (pJ − qW[t])ρ[t] + I_ext, from equation (16); P_t(V) then reduces to ρ[t]δ(V) + (1 − ρ[t])δ(V − U[t]). Inserting this probability in equation (13) and averaging equations (4) and (5) over sites, we obtain the following three-dimensional mean-field map:

ρ[t + 1] = Γ (1 − ρ[t]) [ (pJ − qW[t]) ρ[t] + I_ext − θ[t] ],
W[t + 1] = W[t] + (A − W[t])/τ_w − u_w W[t] ρ[t],
θ[t + 1] = θ[t] − θ[t]/τ_θ + u_θ ρ[t].   (28)

Here, we are only interested in the solutions with ρ > 0, since we used only the linear part of the Φ function to derive the ρ[t] map in equation (28). The steady state of the mean-field map is given by its fixed point and yields two different types of solutions: the stable dynamics (when θ* > 0),

θ* = u_θ τ_θ ρ*,   W* = A/(1 + u_w τ_w ρ*),   ρ* = Γ(1 − ρ*)[(pJ − qW*)ρ* + I_ext − θ*],   (29)

and the unstable hyper-excited state (θ* = 0). The stability of the asymptotic solution in equation (29) was numerically checked.

(Figure 3 caption.) The attractors are overlaid on the static phase diagram to show that the network always converges to either: the bifurcation lines (fold, transcritical and flip); or strictly to h ∼ h_c = 0 when τ_θ ≫ 1; but the synaptic depression is not able to lead the system toward g ∼ g_c without tuning A = A_c, even though PL avalanches are found in many different configurations (see figure 6) with many amplitudes A. (B) A zoomed-in version of the phase space on the left; and (C) an example of a particular parameter configuration that yields AI activity.
To compare the mean-field map with simulations, we run the system given by equations (1)-(5) for a long time (of the order of N), discard a transient of about 10^4 time steps, and average the remaining time series of the three variables over time, yielding ρ (to be compared with ρ*), W (to be compared with W*), and θ (to be compared with θ*). We call these averages the attractor of the network. The fixed point in equation (29) is compared with the averages from the simulations in figure 2, yielding excellent agreement for all-to-all networks with more than 10^4 neurons.
The critical point of the underlying system that we want to dynamically reach is given by W̄_c ≡ pJ − qW_c = 1/Γ (equation (24), with μ = 0) and h_c = I_ext − θ_c = 0, which yields:

θ_c = I_ext,   W_c = (pJ − 1/Γ)/q.   (30)

From equation (29), we see that taking fixed u_θ, u_w and τ_w with u_θτ_θ ≫ 1 is enough to achieve

θ* → I_ext,   i.e.,   h* = I_ext − θ* ∼ O(1/τ_θ),   (31)

but that would still leave us with W* = A/(1 + u_wτ_w/(u_θτ_θ)), which tends to A/2 (≠ W_c in general) when τ_θ = τ_w, or when both homeostatic dynamics have equal time constants. We can visualize whether the system has dynamically reached the critical point by plotting the attractor of the network (given by the averages of ρ, W and θ) on top of the phase diagram of the static system from figure 1. This is done in figure 3(A). Each symbol in this plot represents the asymptotic attractor for a different parameter configuration of the system. For example, take the configuration where τ_θ = τ_w = τ and A = 73.5, which is plotted as blue circles (the other parameters are set to the values given in the methods section). The size of the circle is proportional to τ, and we notice that as τ increases, the attractor gets closer to h = I_ext − θ = 0 (with h_c = 0 from the static system). In fact, as we just calculated above, the approach to h = 0 happens as fast as O(1/τ). The same reasoning applies to all other parameter configurations in figure 3(A), where the size of each symbol is proportional to the parameter indicated as increasing in the legend.
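The approach to h = 0 can be illustrated by iterating the site-averaged map directly. This is a sketch assuming the μ = 0 mean-field map in the form discussed above (linear Φ, net coupling pJ − qW, averaged equations (4) and (5)); with the methods-section parameters and A = 73.5, the iterate should settle near θ* ≈ I_ext and ρ* ≈ 1/(u_θτ_θ):

```python
def iterate_map(A, tau_w, tau_t, u=0.1, p=0.8, q=0.2, J=10.0,
                gamma=0.2, I_ext=1.0, steps=300000):
    # Site-averaged map for mu = 0: the rho update uses the net coupling
    # pJ - qW and the linear part of Phi; W and theta follow the averaged
    # versions of equations (4) and (5).
    rho, W, theta = 0.01, A, 0.0
    for _ in range(steps):
        drive = (p * J - q * W) * rho + I_ext - theta
        rho_new = max(0.0, min(1.0, gamma * (1.0 - rho) * drive))
        W = W + (A - W) / tau_w - u * W * rho
        theta = theta - theta / tau_t + u * rho
        rho = rho_new
    return rho, W, theta

rho, W, theta = iterate_map(A=73.5, tau_w=1e4, tau_t=1e4)
```

Here theta should settle close to I_ext = 1 (up to O(1/τ_θ)) and rho close to 1/(u τ_θ) = 10^-3, while W stays near A/2 rather than W_c, illustrating why the g axis needs the separate fine-tuning A = A_c.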
Even though we see in figure 3(A) that the attractors approach θ ≈ I_ext, the same does not happen along the g axis, considering g = W/J. We can only meet this last requirement for reaching the critical point, W* ∼ W_c (or, equivalently, g* ∼ g_c = W_c/J), by fine-tuning the parameter A = A_c, with

A_c = (1 + u_wτ_w/(u_θτ_θ)) W_c = (1 + u_wτ_w/(u_θτ_θ)) (pJ − 1/Γ)/q.   (32)

This particular choice of A = A_c is shown in figure 3(A) as green straight triangles, and indeed we see that it goes exactly over the top of the underlying critical point (g, h) = (g_c, h_c) = (1.5, 0) as τ is increased. In order to visualize the fine-tuning A = A_c, we trace the phase diagrams of the mean-field map for the homeostatic model, equation (28), in figure 3(B) for equal time scales (τ_θ = τ_w = τ) and in 3(C) for independent time scales. The equal time scale case is simpler, in that A_c is a constant that borders a subcritical Neimark-Sacker (N-S) bifurcation for large enough τ (determined numerically). This bifurcation separates a phase where the model presents a stable fixed point (plotted in colors, representing stochastic oscillations in the network simulation) from a regime of large-amplitude periodic oscillations. The same separation happens when the time scales τ_θ and τ_w are independent, and the curve defined by A_c still borders the N-S bifurcation. The region painted in shades of red close to the A = A_c line is where the so-called quasicritical fluctuations appear, since in these regions we get W* = W ∼ W_c in addition to equation (31).
To understand why the fine-tuning is necessary, it is useful to isolate W* from the ρ[t] map in equation (28), using ρ* = 1/(u_θτ_θ):

W* = (1/q) [ pJ − 1/(Γ(1 − ρ*)) − u_θτ_θ (θ* − I_ext) ].   (33)

Since θ* − I_ext ∼ O(1/τ_θ), the last term in equation (33) is of O(1), resulting in a displacement from the critical point along the g = W*/J axis that is independent of τ_θ (see in figure 3 how the attractors are scattered all over the phase diagram of the underlying static system).
Although the requirements upon ρ* and θ* are met through quasicritical fluctuations when τ_θ ≫ 1, the system still requires fine-tuning of the inhibitory synaptic amplitude A, as described by equation (32), to be able to dynamically hover close to the critical point (g_c, h_c) in figure 3. However, setting A = A_c is not required to obtain AI activity, and later we will show that it is not required for getting PL avalanches either. This is because W* ∼ A for τ_θ → ∞, instead of tending to W_c. This kind of fine-tuning of the amplitude parameter is known to be needed in self-organizing neuronal systems without bulk conservation of membrane potential [2]. It compensates for the dissipation of electrical activity by invariably forcing the synapses to restore themselves toward A after every spike.
However, an interesting result of having two homeostatic mechanisms is that equation (33) tells us how to overcome the fine-tuning: if θ* − I_ext ∼ O(1/τ_θ²), then all the requirements of homeostatic criticality would be met without any other parameter tuning. Conversely, equation (33) also means that the synaptic homeostasis is constantly being canceled by the threshold dynamics, preventing the complete dynamical reach of the critical point. Nevertheless, the system is capable of self-organizing toward ρ* ≈ 0 and θ* ≈ I_ext.

The homeostatic steady state is balanced
The self-organization of the system toward the critical point is weak, yielding a SOqC state due to dissipation. But what happens to the synaptic currents during this self-organization process? Here, we show that the SOqC state achieves perfectly balanced synaptic currents. This had only been demonstrated numerically in a previous work [15].
The net synaptic current is given by equation (3). In the mean-field steady state, we have the excitatory current I^E = pJρ*, and the inhibitory one, I^I = −qW*ρ*, where ρ* and W* are given by equation (29), resulting in the net contribution

ΔI^{E/I} = I^E + I^I = (pJ − qW*) ρ* → 0,   (34)

taking the limit u_θτ_θ ≫ 1 for fixed u_θ, u_w and τ_w. In fact, our model achieves synaptic balance whenever the parameters obey the conditions u_θτ_θ ≫ u_wτ_w ≫ 1 and u_θτ_θ ≫ pJ − qA. This analytical result is in very good agreement with simulation results for the synaptic currents I^E, I^I and ΔI^{E/I} (figure 5). As τ and N grow, both the excitatory and inhibitory currents and their amplitudes tend to zero.
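Plugging the approximate fixed point ρ* ≈ I_ext/(u_θτ_θ) and W* = A/(1 + u_wτ_wρ*) into the mean currents gives a quick numerical check that both currents and their net contribution shrink as the time scales grow (a sketch; parameter values from the methods section, A = 73.5 as in figure 3):

```python
# parameter values from the methods section; A = 73.5 as in figure 3
p, q, J, A = 0.8, 0.2, 10.0, 73.5
u, I_ext = 0.1, 1.0

def currents(tau_t, tau_w):
    rho = I_ext / (u * tau_t)          # steady firing fraction (approximate)
    W = A / (1.0 + u * tau_w * rho)    # depressed inhibitory weight W*
    I_E = p * J * rho                  # mean excitatory current
    I_I = -q * W * rho                 # mean inhibitory current
    return I_E, I_I, I_E + I_I         # the net current shrinks with tau

nets = [currents(tau, tau)[2] for tau in (1e3, 1e4, 1e5)]
```

For equal time scales, each tenfold increase of τ cuts the net current by a factor of ten (6.5e-3, 6.5e-4, 6.5e-5), illustrating the O(1/τ_θ) balance stated above.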
This shows that the homeostatic system is balanced up to O(1/τ_θ). Thus, although we may have different ratios of E/I currents (as expressed by A/J) and different regimes of steady activity, as evidenced by each attractor in figure 3, the system will still manage to dynamically balance its synaptic currents due to the homeostatic mechanisms.

Avalanches and active cluster scaling
We showed in the previous sections that the fundamental requirement to obtain quasicritical dynamics and synaptic balance is τ_θ ≫ 1. Here, we show how this condition changes the distribution of avalanches of the homeostatic system when we have either equal or different time scales for inhibition and threshold adaptation. In figure 6(A), we show the avalanches for equal time scales τ = τ_θ = τ_w, both with and without the fine-tuning of A = A_c given in equation (32). The configuration with A > A_c lies in the colored region of the phase diagram in figure 4. PL emerges for large τ, as expected, although a small bump at the tail of the distributions is unavoidable. This bump is expected because ρ > O(1/τ) and true criticality requires ρ ≡ 0 (which is only present at the critical point of the underlying static system, when W and θ are constant parameters).

(Figure 6 caption.) (A) Equal time scales, τ_θ = τ_w = τ, with two values of τ. Notice two things: first, for τ = 500 (red circles and pink squares), the distributions are messy and do not follow any PL, but for τ = 10^4, a PL is observed for small to intermediate avalanches, with a peak for large avalanches typical of supercritical systems. Secondly, having the fine-tuning A = A_c = 30 (equation (32)) neither improves nor worsens the PL shape. (B) Different time scales: τ_θ = 10^4 is fixed, with τ_w = 10^5 (red circles and pink squares, A_c = 165) and τ_w = 3000 (blue triangles, A_c = 19.5). Notice that the PL shape remains for any configuration of parameters here, regardless of fine-tuning A = A_c or the value of τ_w. (A) and (B) The avalanche exponents obtained by MLE, τ' = 1.3 and τ'_t = 2.33 for P(s) and P(T), respectively, are shown as dashed lines in the left and middle panels; in solid lines, we plot the exponents of the static system, τ' = 1.5 and τ'_t = 2, to show that the PLs are visually compatible with the underlying MF-DP dynamics. In the scaling law plot (right panels), the underlying MF-DP exponent is also shown as a solid line [1/(σν⊥z) = 2], and we plot a curve with slope 1 as a dashed line to give the reader a margin of error for the simulation data. The PL in (A) for τ = 10^4 and A = 73.5 corresponds to the filled circle attractor in figure 3. This particular configuration has all the features of E/I networks together: AI activity, synaptic balance and PL avalanches.
Allowing the two time scales to be different, we also obtain PL avalanches with a bump for large τ_θ ≫ 1 and large τ_w ≫ 1 (figure 6), irrespective of the relative values of these quantities. The PLs appear both with and without the fine-tuning of the inhibition amplitude A. In fact, the PLs are visually better when A > A_c for all time-scale configurations shown in the figure.
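In practice, avalanches are obtained by thresholding the population activity ρ[t]: each maximal excursion above the threshold is one avalanche, with its duration given by the excursion length and its size by the accumulated activity. A minimal sketch of this standard procedure (the threshold value and toy time series below are illustrative assumptions, not the exact pipeline used here):

```python
import numpy as np

def extract_avalanches(rho, threshold=0.0):
    """Split the activity time series rho[t] into avalanches: maximal
    contiguous runs with rho > threshold. Returns (sizes, durations)."""
    active = rho > threshold
    padded = np.concatenate(([False], active, [False]))
    starts = np.flatnonzero(~padded[:-1] & padded[1:])  # quiet -> active
    ends = np.flatnonzero(padded[:-1] & ~padded[1:])    # active -> quiet
    sizes = np.array([rho[a:b].sum() for a, b in zip(starts, ends)])
    durations = ends - starts
    return sizes, durations

# toy activity trace: quiescence interrupted by three bursts
rho = np.array([0, 0, 1, 3, 2, 0, 0, 5, 1, 0, 2, 0], dtype=float)
sizes, durations = extract_avalanches(rho)
print(sizes)      # [6. 6. 2.]
print(durations)  # [3 2 1]
```

The size and duration histograms of these excursions are the P(s) and P(T) analyzed in figure 6.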
MLE fits of the PLs yield τ = 1.3 and τ_t = 2.33 for the size and duration distributions, respectively. In the left and middle panels of figures 6(A) and (B), however, the PLs are also visually compatible with the exponents of the underlying critical point (τ = 1.5 and τ_t = 2). This is expected, since the mechanism that generates these PLs is a homeostatic adaptation hovering around an MF-DP critical point. Conversely, the average size versus duration, ⟨s⟩(T) ∼ T^(1/(σν⊥z)), does not generally obey the MF-DP scaling law with 1/(σν⊥z) = 2. Although this is expected (recall that the system is always slightly supercritical because ρ > 0), the scaling law seems to be loosely obeyed for longer avalanches.
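For a continuous PL P(s) ∼ s^−τ with s ≥ s_min, the maximum-likelihood estimator has the closed form τ̂ = 1 + n/Σ_i ln(s_i/s_min) (the Hill estimator). A minimal sketch of such a fit; the cutoff s_min and the synthetic sample are illustrative assumptions, not the paper's data:

```python
import numpy as np

def mle_powerlaw_exponent(samples, s_min):
    """Continuous power-law MLE for P(s) ~ s^-tau, s >= s_min:
    tau_hat = 1 + n / sum(ln(s_i / s_min))."""
    s = np.asarray(samples, dtype=float)
    s = s[s >= s_min]
    return 1.0 + s.size / np.log(s / s_min).sum()

# sanity check on synthetic data drawn from P(s) ~ s^-1.5, s_min = 1,
# via inverse-CDF sampling: s = (1 - u)^(-1/(tau - 1))
rng = np.random.default_rng(0)
u = rng.random(100_000)
s = (1.0 - u) ** (-1.0 / 0.5)
print(round(mle_powerlaw_exponent(s, 1.0), 2))  # ≈ 1.5
```

For discrete avalanche sizes the discrete MLE (numerical maximization over the zeta function) is the more rigorous choice, but the continuous estimator above illustrates the principle.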
The configuration with A = 73.5 and τ = 10^4 in figure 6(A) corresponds to the filled circle in figure 3(A), hence it displays AI activity (since CV[1/(Nρ)] > 1), PL avalanches with a bump, and synaptic balance. The configuration with A = A_c, however, displays PL avalanches without AI activity, since it is located in the periodic-activity region of the phase diagram of figure 4(A).
Beyond avalanche distributions, we also checked the distribution P(ρ), sampling ρ[t] over the whole simulation time. The advantage of this distribution is that we do not need to threshold the ρ[t] time series to compute avalanches. Although not standard in the brain-related literature, some percolation models can be shown to obey a PL distribution of instantaneous cluster sizes, P_cluster(ρ) ∼ ρ^−φ, where φ is known as the Fisher exponent [39,40]. This distribution is at times confused with the avalanche distribution, and has been shown experimentally to obey a PL in EEG and fMRI experiments [8,41]. The static model has φ = 1 at the critical point g = g_c with h = 0, and no PL in the super/subcritical regimes (see figure 7(A), top inset). On the other hand, the homeostatic version of the model has φ = 1/2, and presents a perfect collapse for different network sizes N. The same PL holds irrespective of A (figure 7), a vestige of the critical dynamics.
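Estimating the Fisher exponent from P(ρ) only requires histogramming the instantaneous activity; no avalanche thresholding is involved. A minimal sketch (log-binned histogram plus a log-log linear fit; the bin count and the synthetic sample are illustrative assumptions):

```python
import numpy as np

def fisher_exponent(rho, n_bins=30):
    """Estimate phi from P(rho) ~ rho^-phi via a log-log linear fit
    of the log-binned histogram of instantaneous activity."""
    rho = np.asarray(rho, dtype=float)
    rho = rho[rho > 0]                      # P(rho) is defined for rho > 0
    edges = np.logspace(np.log10(rho.min()), np.log10(rho.max()), n_bins + 1)
    counts, _ = np.histogram(rho, bins=edges, density=True)
    centers = np.sqrt(edges[:-1] * edges[1:])   # geometric bin centers
    mask = counts > 0
    slope, _ = np.polyfit(np.log(centers[mask]), np.log(counts[mask]), 1)
    return -slope

# sanity check: samples with P(rho) ~ rho^-1/2 on a bounded support,
# generated by inverse-CDF sampling (rho = u^2 for uniform u)
rng = np.random.default_rng(1)
u = 0.01 + 0.99 * rng.random(200_000)
rho = u ** 2
print(round(fisher_exponent(rho), 1))  # close to 0.5
```

Because every time step contributes a sample, P(ρ) typically has far better statistics than the avalanche distributions for the same simulation length.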

Discussion
In this paper, we introduced a system with a quasicritical state (generated by the homeostatic mechanisms of synaptic depression and threshold adaptation) that presents an AI firing pattern typical of E/I networks (defined by CV > 1 [32], see figures 3 and 6). Simultaneously, this state presents PL avalanche distributions with exponents that are compatible with some experiments [6,7,42]. Namely, the exponents are τ = 1.3 and τ_t = 2.33 for size and duration, respectively, and both PLs are visually compatible with the MF-DP avalanche exponents (τ = 1.5 and τ_t = 2). However, the full system is always slightly supercritical, and the ensuing avalanches do not present scale invariance, irrespective of parameter tuning [1,2]. Consequently, the typical supercritical bump appears at the tail of the avalanche distributions [36], and the crackling-noise scaling relation is no longer generally valid.
In previous works, the bump tended to disappear as the time scale τ of the adaptation variable was increased [2,26,34,36,38,43]. Here, this is not the case: the bump persists because, in our system, there are two competing self-organizing dynamics, causing a shift from the critical point in the inhibition weight variable W (equation (33)). Previously, a recurrent neural network with Hebbian plasticity and adaptive thresholds was shown to give rise to PL avalanches when driven by random inputs [44]. Here, although we employ a different neuron model under a different synaptic adaptation, we provide a deeper understanding of how these PLs develop. In fact, the coexistence of the two mechanisms sometimes also leads our system to hover around a first-order phase transition (notice that some attractors converge close to the fold bifurcation in figure 3). This could be related to self-organized bistability [23], a mechanism known to generate avalanches with characteristic supercritical bumps.
The only way to fix the shift in the inhibition strength W is by forcing the system to hover around the critical point with A = A_c given by equation (32) (a fine-tuning procedure). However, we showed in figure 6 many configurations for which a PL appeared in the avalanche distributions even without fixing A = A_c. Moreover, although the crackling-noise relation (equation (8)) is not valid, the homeostatic system still tends to follow the scaling relation ⟨s⟩(T) ∼ T^(1/(σν⊥z)) with 1/(σν⊥z) = 2. This exponent is only vaguely visible for long avalanches in systems with large time scales tuned inside the stable-fixed-point region of the phase diagrams in figure 4, a remnant signature of the underlying critical point.
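The trend toward this scaling relation can be checked directly from the lists of avalanche sizes and durations, by averaging the size conditioned on each duration and fitting the log-log slope. A minimal sketch with toy data (illustrative, not the simulation output):

```python
import numpy as np

def mean_size_vs_duration(sizes, durations):
    """Average avalanche size conditioned on duration, <s>(T).
    Its log-log slope estimates 1/(sigma nu_perp z), which is 2 for MF-DP."""
    T = np.unique(durations)
    mean_s = np.array([sizes[durations == t].mean() for t in T])
    return T, mean_s

sizes = np.array([0.5, 1.5, 3.0, 5.0, 9.0])
durations = np.array([1, 1, 2, 2, 3])
T, mean_s = mean_size_vs_duration(sizes, durations)
# durations 1, 2, 3 give mean sizes 1.0, 4.0, 9.0 (exact T^2 scaling)
slope, _ = np.polyfit(np.log(T), np.log(mean_s), 1)
print(round(slope, 2))  # 2.0
```

In practice, only the long-avalanche tail of ⟨s⟩(T) should be fitted, since the short avalanches are dominated by the slightly supercritical dynamics.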
In summary, the only fine-tuning our system needs is to force it to formally hover around the MF-DP phase transition. No fine-tuning is required to obtain AI activity with PL avalanches and synaptic balance, as we showed in figures 3 and 6. Synaptic balance is always guaranteed (equation (34)); the only requirement is that the adaptation time scales must be large. Although we did not perform an extensive search of the parameter space, several of the configurations we studied presented both AI activity and PL avalanches simultaneously, indicating that these features are neither rare nor exceptional.
There are two ways to try to improve the self-organization toward the critical point in the system studied here. One is a threshold dynamics that converges to the external current as fast as 1/τ_θ^2, since W is shifted from W_c due to θ* − I_ext ∼ O(1/τ_θ). A second mechanism would be metaplasticity [45], i.e. letting the intensity A of the inhibitory synapses adapt too. Other solutions could involve altering the firing function Φ(V), such that the equation for ρ depends on W and θ through different types of non-linearities.
The homeostatic adaptation of inhibitory synapses is inspired by experimental evidence. For example, inhibitory plasticity mechanisms enforce E/I balance in the auditory cortex, with significant consequences for learning and behavior; and in many sensory systems, sharp increases in excitation are followed by sharp increases in inhibition a few milliseconds later [10, and references therein]. Additionally, threshold adaptation has been observed in different types of cells, such as CA3 pyramidal cells and hilar mossy cells [46]. Here, we put these two important mechanisms together and obtain generic synaptic balance (up to O(1/τ_θ)), independently of any of the parameters of the model. This is another unifying aspect of our theory, since the balance happens concomitantly with the self-organized quasicritical fluctuations.
Looking for power laws in avalanche distributions is not the most reliable way of probing for a critical point [42]. In absorbing-state systems, such as branching processes, the theory provides many other potentially observable functions (see [20], table 4.1, and the review in [42]). In some models, the distribution of active clusters, P_cluster(ρ), is known to present a PL at the critical point [39,40]. Here, we showed that this is true both for the static and the homeostatic versions of our E/I network. Nevertheless, the cluster-size exponent decreases from 1 to 1/2 with the addition of the adaptation mechanisms. PL cluster-size distributions are also found experimentally in the brain [8,41], although they are sometimes mistakenly referred to as avalanche distributions.
The quest put forward by the brain criticality hypothesis continues. Our model displays synaptic balance (independently of parameters) with AI firing patterns, under a two-variable self-organizing dynamics that drives the system close to the critical field, h_c = 0; at the same time that this seemingly stochastic AI state takes place, the activity is complex: neuronal avalanches and activity clusters are distributed according to power laws, a remnant signature of the criticality that underlies the adaptation mechanisms. These features are typical of different fields in theoretical neuroscience, and our model is thus a step toward bringing these frameworks together.