Neural oligarchy: how synaptic plasticity breeds neurons with extreme influence

Synapses between cortical neurons are subject to constant modification through synaptic plasticity mechanisms, which are believed to underlie learning and memory formation. The strengths of excitatory and inhibitory synapses in the cortex follow a right-skewed, long-tailed distribution, as do the firing rates of excitatory and inhibitory neurons. How these distributions come about, and how they maintain their shape over time, is currently not well understood. Here we propose a spiking neural network model that explains the origin of these distributions as a consequence of the interaction of spike-timing dependent plasticity (STDP) of excitatory and inhibitory synapses with a multiplicative form of synaptic normalisation. Specifically, we show that the combination of additive STDP and multiplicative normalisation leads to lognormal-like distributions of excitatory and inhibitory synaptic efficacies, as observed experimentally. The shape of these distributions remains stable even when spontaneous fluctuations of synaptic efficacies are added. In the same network, lognormal-like distributions of the firing rates of excitatory and inhibitory neurons result from small variability in the spiking thresholds of individual neurons. Interestingly, we find that variation in firing rates is strongly coupled to variation in synaptic efficacies: neurons with the highest firing rates develop very strong connections onto other neurons. Finally, we define an impact measure for individual neurons and demonstrate the existence of a small group of neurons, arising as a result of synaptic plasticity, with an exceptionally strong impact on the network. In summary, synaptic plasticity and small variability in neuronal parameters underlie a neural oligarchy in recurrent neural networks.

Author summary

Our brain's neural networks are composed of billions of neurons that exchange signals via trillions of synapses. Are these neurons created equal, contributing in similar ways to the network dynamics, or do some neurons wield much more power than others? Recent experiments have shown that some neurons are much more active than the average neuron, and that some synaptic connections are much stronger than the average synaptic connection. However, it is still unclear how these properties come about in the brain. Here we present a neural network model that explains these findings as a result of the interaction of synaptic plasticity mechanisms that modify synapses' efficacies. The model reproduces recent findings on the statistics of neuronal firing rates and synaptic efficacies, and predicts a small class of neurons with exceptionally high impact on the network dynamics. Such neurons may play a key role in brain disorders such as epilepsy.


Introduction

Are cortical networks "democratic" structures in which the voice of every neuron has about the same weight? Or do some neurons wield much more influence than others? And, if so, what mechanisms give rise to this concentration of power? Recent experiments have suggested large inequalities between neurons and synapses. Specifically, a persistent observation is the presence of skewed, long-tailed distributions in firing rates [1-3] and synaptic efficacies [2,4-9]. In most cases, distributions of neuronal variables take on approximately lognormal [3] or power-law [10,11] shapes. Such strongly skewed distributions may have important implications for computation in neuronal networks, for instance effective signal transmission [12], population burst propagation [13] and enlarging the dynamical range of the network [14].
Since the strength of individual synapses in networks of neurons is under constant change due to activity-dependent and homeostatic plasticity [15,16], as well as spontaneous fluctuations [...]

Table legend: Network parameters.

p_e,e   connection probability from excitatory to excitatory neuron   0.02
p_e,i   connection probability from excitatory to inhibitory neuron   0.1
p_i,e   connection probability from inhibitory to excitatory neuron   0.1
p_i,i   connection probability from inhibitory to inhibitory neuron   0.5
d_e,e   axonal delay from excitatory to excitatory neuron             1.5 ms
d_e,i   axonal delay from excitatory to inhibitory neuron             0.5 ms
d_i,e   axonal delay from inhibitory to excitatory neuron             1.0 ms
d_i,i   axonal delay from inhibitory to inhibitory neuron             1.0 ms
The LIF membrane equation for a single excitatory or inhibitory point neuron in the network is:

tau_m dV/dt = (E_l - V) + g_e (E_e - V) + g_i (E_i - V) + xi_ext,

where V is the membrane potential in volts, E_l the leak reversal potential, and E_e and E_i the reversal potentials for excitatory and inhibitory synaptic inputs, respectively. The synaptic conductances are g_e for excitation and g_i for inhibition; they decay exponentially,

tau_e dg_e/dt = -g_e,   tau_i dg_i/dt = -g_i.

The values of tau_e and tau_i are chosen based on the kinetics of AMPA and GABA receptors [40]; see Table 2 for all neuronal parameters. The term xi_ext provides an external input of 1 mV to the membrane voltage of the excitatory and inhibitory neurons, at random, Poisson-distributed times. Concretely, the external input times for each neuron are sampled from an exponential distribution with tau_ext = 3 ms, at a resolution of 0.1 ms, resulting in on average 333 input events per second.

For every incoming spike, the conductance of the associated synapse is increased by the value of the weight w_e or w_i:

g_e -> g_e + w_e  at excitatory input spike times t_k,
g_i -> g_i + w_i  at inhibitory input spike times t_l,

where t_k and t_l are spike times from an excitatory or inhibitory input, respectively, and w_e is the synaptic weight of the excitatory connection and w_i that of the inhibitory connection. When the membrane potential reaches the threshold V_theta, a spike is generated and the membrane potential is reset to the resting potential V_reset.

The spiking thresholds V_theta are sampled for each neuron from a Gaussian distribution with mean mu^e_Vtheta or mu^i_Vtheta and standard deviation sigma^e_Vtheta or sigma^i_Vtheta for excitatory and inhibitory neurons, respectively. Doing so creates heterogeneous firing rates in the network. E-E synapses are subject to eSTDP (Fig. 1B) and I-E synapses to iSTDP (Fig. 1C). Both E-E and I-E synapses are subject to homeostatic normalisation of the weights, described in detail below.
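A minimal sketch of these membrane dynamics (forward Euler, a single neuron, with assumed parameter values rather than those of Table 2) may help fix ideas; the external 1 mV kicks and the conductance increments follow the description above, while the input rate of the illustrative excitatory synapse and all constants are assumptions:

```python
import numpy as np

# Sketch of the conductance-based LIF dynamics described above (forward Euler).
# All parameter values here are illustrative assumptions, not the paper's Table 2.
dt = 1e-4                                   # 0.1 ms integration step
tau_m, tau_e, tau_i = 20e-3, 5e-3, 10e-3    # membrane and synaptic time constants (assumed)
E_l, E_e, E_i = -60e-3, 0.0, -80e-3         # reversal potentials (assumed)
V_theta, V_reset = -50e-3, -60e-3           # threshold and reset (assumed)

rng = np.random.default_rng(0)
V, g_e, g_i = E_l, 0.0, 0.0
spikes = 0
for step in range(int(1.0 / dt)):           # simulate one second
    # Poisson external input: on average one 1 mV kick per tau_ext = 3 ms
    if rng.random() < dt / 3e-3:
        V += 1e-3
    # one illustrative excitatory input synapse firing at ~100 Hz (assumed)
    if rng.random() < dt * 100:
        g_e += 0.01                         # conductance incremented by weight w_e
    # conductance-based membrane equation with dimensionless conductances
    V += dt * ((E_l - V) + g_e * (E_e - V) + g_i * (E_i - V)) / tau_m
    g_e -= dt * g_e / tau_e                 # exponential synaptic decay
    g_i -= dt * g_i / tau_i
    if V >= V_theta:                        # threshold crossing: spike and reset
        spikes += 1
        V = V_reset
print(spikes)
```

In the actual model this update is handled by Brian for the full recurrent network; the sketch only illustrates the single-neuron dynamics.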

Contrary to previous works (e.g. [18,20]), there is no intrinsic homeostatic plasticity and no structural plasticity in this model. This choice was made in order to allow a broad distribution of firing rates to emerge from Gaussian variability in spiking thresholds, and to demonstrate that the STDP rules and the synaptic normalisation are sufficient to account for stable activity levels.

The eSTDP rule is as follows: for the temporal difference between a presynaptic and a postsynaptic spike, Delta_t = t_pre - t_post, the change in synaptic efficacy is given by:

Delta_w_e = A_LTP exp(Delta_t / tau_LTP)   for Delta_t <= 0,
Delta_w_e = A_LTD exp(-Delta_t / tau_LTD)  for Delta_t > 0,

with A_LTP > 0 and A_LTD < 0. In this model, eSTDP and iSTDP favour LTP over LTD [41]. The depression area within the eSTDP window is set to 80 percent of the potentiation area, creating a small advantage for LTP. In synapses from inhibitory to excitatory neurons, additive iSTDP is applied:

Delta_w_i = A^i_post exp(Delta_t / tau_post)  for Delta_t <= 0,
Delta_w_i = A^i_pre exp(-Delta_t / tau_pre)   for Delta_t > 0.

We consider by default tau_pre = tau_post = tau_LTP, so that the time constants for iSTDP match the LTP time constant in eSTDP. The values of A^i_pre and A^i_post are identical in the symmetric iSTDP learning rule (Fig. 1C). The symmetric iSTDP window, first employed in a recurrent neural network that self-organises into a balanced state [42], has recently been shown to exist in auditory cortex [26]. Besides the symmetric iSTDP (Fig. 1C), we also test a pre-LTP and a post-LTP iSTDP variant (Suppl. Fig. 4A).
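A minimal numerical sketch of such an additive STDP window follows; the amplitude and time constants are assumed values, chosen only so that, as in the text, the depression area is 80 percent of the potentiation area:

```python
import numpy as np

# Sketch of the additive eSTDP window, with Delta_t = t_pre - t_post.
# A_LTP and the time constants are illustrative assumptions; A_LTD is chosen
# so that the depression area is 80 percent of the potentiation area.
tau_LTP, tau_LTD = 20e-3, 20e-3
A_LTP = 1e-3
A_LTD = -0.8 * A_LTP * tau_LTP / tau_LTD    # |A_LTD| * tau_LTD = 0.8 * A_LTP * tau_LTP

lags = np.linspace(-0.2, 0.2, 40001)        # Delta_t values in seconds
# Delta_t <= 0 (pre before post): potentiation; Delta_t > 0: depression
dw = np.where(lags <= 0,
              A_LTP * np.exp(lags / tau_LTP),
              A_LTD * np.exp(-lags / tau_LTD))

h = lags[1] - lags[0]
area_LTP = dw[dw > 0].sum() * h             # integrated potentiation area
area_LTD = dw[dw < 0].sum() * h             # integrated depression area
print(round(-area_LTD / area_LTP, 2))       # ≈ 0.8
```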

To include a form of depression in the iSTDP, a fixed value LTD_alpha is subtracted for every presynaptic spike [42]. The LTD part of iSTDP can thus be included by setting LTD_alpha to nonzero values; we set LTD_alpha = 0 unless stated otherwise.

Synaptic normalisation (SN) rests on the assumption that only a limited amount of synaptic building blocks is available in the postsynaptic neuron at any time, which leads to a competition for these building blocks between synapses of the same type [44]. Although there exists evidence for SN in glutamatergic and GABAergic synapses [27,28,43], as well as for slow scaling [9,45] within dendritic branches [46], we assume that the SN in these synapses is fast, and that the homeostatic change depends on the weight in a multiplicative fashion [44]. The SN works as follows: all excitatory (inhibitory) weights onto a neuron are regularly updated as

w_j^{e,e} -> w_j^{e,e} * T_{e,e} / Sum_{k=1..K_e} w_k^{e,e},    (8)
w_j^{i,e} -> w_j^{i,e} * T_{i,e} / Sum_{k=1..K_i} w_k^{i,e},    (9)

where T_{e,e} is the target sum of E-E weights and T_{i,e} the target sum of I-E weights onto one excitatory neuron (see Table 3 for plasticity parameters). The values K_e and K_i are the total numbers of excitatory and inhibitory synapses onto that particular neuron in the network. Since connectivity is initialised at random between neuron pairs, K_e and K_i follow a binomial distribution over the population. The E-E and I-E weights are all initialised at 0.0015. The E-I and I-I weights, which are not subject to eSTDP or iSTDP in our model, are initialised by full rescaling so that the weights onto each neuron sum exactly to T_{e,i} and T_{i,i}, respectively (see Table 3 for values). The SN for E-E and I-E weights (equations (8) and (9)) is applied every second, to save computation time.

Table legend: Plasticity parameters. Although E-I and I-I synapses are not subject to plasticity during the simulation, T_{e,i} and T_{i,i} are used at network initialisation to set the weights for these connections.
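As a minimal sketch (with an assumed target value), the multiplicative normalisation of equations (8) and (9) amounts to rescaling each neuron's incoming weights of one type to a fixed target sum:

```python
import numpy as np

# Sketch of multiplicative synaptic normalisation (equations (8)-(9)):
# all excitatory weights onto one neuron are rescaled so that they sum to the
# target T_ee. The target value here is an illustrative assumption.
T_ee = 0.15

def normalise(w, target):
    """Multiplicatively rescale incoming weights to a fixed target sum."""
    return w * (target / w.sum())

rng = np.random.default_rng(1)
w = rng.uniform(0.001, 0.002, size=100)     # incoming E-E weights of one neuron
w = normalise(w, T_ee)
print(round(w.sum(), 6))                    # sums exactly to T_ee
```

Because the rescaling is multiplicative, the relative ordering of the weights is preserved: synapses that were strong before normalisation remain proportionally strong afterwards.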

The simulations of the network run on custom-built code in Python and the neural simulator Brian [47].

To compute the spike-time correlations between neurons, we bin the spikes in 100 ms bins. Since spike-time correlations depend on the bin size [48], we choose the bin size large enough to capture correlations, but not so large as to exceed the mean inter-spike interval for excitatory and inhibitory neurons [48]. To obtain the firing rate of each neuron, the total number of spikes of that neuron is divided by the length of the entire simulation. The histograms of firing rates that are shown on a semilog scale contain exponentially distributed bins, resulting in equally spaced bins in the figure. These histograms are normalised by dividing each bin value by the bin width, as was done in previous work highlighting lognormal distributions [5]. A lognormal curve is fitted to the distributions using the SciPy function curve_fit. The histograms on linear axes are not normalised unless stated explicitly.

We investigate whether the firing rate distribution in the population is a direct result of the interaction between the distribution of membrane potentials of the neurons and their spiking thresholds. For this we acquire the distribution of membrane potentials visited over 50 seconds by a neuron in which spiking is disabled, and average over N_e such distributions. The expected number of spikes for each neuron is then computed as the area of the mean membrane potential distribution above a given spiking threshold sampled from the Gaussian distribution. This process is repeated for 10000 putative neurons.
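The histogram-and-fit procedure can be sketched as follows; the rates here are synthetic and all numerical choices (sample size, bin count, initial guess) are assumptions, not the simulation's output:

```python
import numpy as np
from scipy.optimize import curve_fit

# Sketch of the firing-rate histogram analysis: exponentially spaced bins
# (equal width on a log axis), bin counts divided by bin width, and a
# lognormal curve fitted with SciPy's curve_fit. Rates are synthetic here.
rng = np.random.default_rng(2)
rates = rng.lognormal(mean=0.5, sigma=0.8, size=4000)   # synthetic "firing rates"

bins = np.logspace(np.log10(rates.min()), np.log10(rates.max()), 31)
counts, edges = np.histogram(rates, bins=bins)
density = counts / np.diff(edges)            # normalise by bin width
centres = np.sqrt(edges[:-1] * edges[1:])    # geometric bin centres

def lognormal(x, a, mu, sigma):
    return a / x * np.exp(-(np.log(x) - mu) ** 2 / (2 * sigma ** 2))

(a, mu, sigma), _ = curve_fit(lognormal, centres, density, p0=(1000, 0.5, 0.8))
print(round(mu, 2), round(sigma, 2))
```

With the synthetic lognormal input, the fitted (mu, sigma) come out close to the generating parameters (0.5, 0.8), which is the sanity check that the bin-width normalisation makes the fit meaningful.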

Analysis of synaptic efficacy distributions

We record the weights in the network every second, just before and just after SN is applied. We verify that the weight dynamics reach an equilibrium by checking whether the mean and standard deviation of the weight distribution reach a stable point over time. We then create histograms of the weight distribution at the final time point of the simulation. [...]

Stochastic models of weight dynamics

We hypothesise that the weight distributions found in the network simulation can be explained by simple stochastic processes. We consider three such processes: a uniform stochastic model (USM), a nonuniform stochastic model (NSM), and a Kesten model [21,49]. In these models, a synaptic weight is reduced to a stochastic variable w that evolves under random additive and multiplicative changes over a number of steps.

First, in the USM we let w change according to SN and STDP, where the STDP changes are random and any spike-time combination is assumed to be equally probable. The USM therefore disregards the likelihood of specific correlations between pre- and postsynaptic spike times at a synapse. We start with a random variable w_t that represents a synaptic weight at a time step t. We keep I-E and E-E weights in the stochastic models between 0 <= w_t <= T_{i,e} and 0 <= w_t <= T_{e,e}, respectively. The weight w_t is modified N_step = 5000 times in an additive way by randomly sampling an STDP update A^rand_t from the STDP windows shown in Fig. 1B. [...] give some weights an advantage over others when competing for limited postsynaptic resources. We therefore assume that some weights benefit from more frequent STDP updates. [...]

Third, we consider that the weights might behave according to a Kesten process [49]. A variable w that follows a Kesten process is defined by the iteration

w_{t+1} = a_t w_t + b_t,

where a_t and b_t are independent stochastic variables. Although the Kesten process can be seen as analogous to the combination of additive STDP and multiplicative SN [50], in our model STDP and SN are not strictly independent nor stochastic, since both terms depend on the collective behaviour of groups of weights and their current state. We can, however, quantify the difference of our results from a Kesten process by assuming weighted random STDP sampling as in the NSM, setting b_t = n^{LTP}_m A^rand_t, and gleaning the distribution for a from the SN factors that were obtained from the network simulation. In the network model, the USM and the NSM, the SN stabilises the weights in a non-random manner that depends on the collective weight state (equations (8) and (9)) and therefore also on the state of the weight itself. Because of the independent sampling of a in the Kesten process, the SN factor is not directly related to the exact [...]
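As a minimal sketch of such a Kesten iteration (the distributions of a and b are illustrative assumptions, not those gleaned from the network), a slightly contracting random multiplicative term combined with small additive kicks already relaxes to a right-skewed, long-tailed stationary distribution:

```python
import numpy as np

# Sketch of the Kesten iteration w_{t+1} = a_t * w_t + b_t, with independent
# random multiplicative terms a_t and additive terms b_t. Parameter values are
# illustrative assumptions.
rng = np.random.default_rng(3)
n_weights, n_steps = 5000, 2000
w = np.full(n_weights, 0.0015)                    # all weights start equal
for _ in range(n_steps):
    a = rng.uniform(0.90, 1.09, size=n_weights)   # E[log a] < 0: net contraction
    b = rng.uniform(0.0, 1e-4, size=n_weights)    # small additive kicks
    w = np.clip(a * w + b, 0.0, 0.15)             # keep weights within [0, T]
print(w.mean() > np.median(w))                    # mean exceeds median: right skew
```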

The network maintains asynchronous irregular activity and low correlations

The LIF-SORN with iSTDP displays a number of properties commonly seen in cortical networks (Fig. 2). Specifically, the spike correlations are low [51], and spike patterns are irregular for excitatory and inhibitory neurons [52,53]. A sample of population spiking is shown in Fig. 2A, with excitatory (inhibitory) spikes shown in green (purple).

Evidently, some neurons spike often and others are relatively silent. In Fig. 2B, a random excitatory neuron receiving excitatory and inhibitory inputs is shown with its membrane potential, excitatory and inhibitory conductances, and currents. [...] The spike cross-correlations between excitatory neurons, between inhibitory neurons, and between excitatory and inhibitory neurons are low, indicating a network with low synchrony (Fig. 2D). Though the network activity is not perfectly decorrelated, based on these observations we conclude that the network is close to the asynchronous state [54,55].

[...] The firing rates are heterogeneous across neurons. Importantly, the distributions of firing rates of excitatory and inhibitory neurons follow a lognormal-like shape (Fig. 3A-B). [...] between the spiking threshold sampling distribution (Fig. 3C, red curve) and the values that the membrane potential of an average neuron typically visits over time, when its spiking mechanism is disabled (Fig. 3C, blue curve). The expected number of spikes can then be computed by iteratively sampling from the spiking threshold curve and determining how many times the average neuron's membrane voltage crosses the threshold. The resulting distributions of expected numbers of spikes per neuron follow a lognormal-like shape (Fig. 3D), suggesting that lognormal-like distributed firing rates can already arise from the interaction of near-Gaussian membrane dynamics and small Gaussian variability in the spiking thresholds. [...] The distributions are not exactly lognormal (compare the lognormal fit to the data points in Fig. 5), in line with experimental findings [5].
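The threshold-sampling argument can be sketched numerically; here both the membrane potential distribution and the threshold distribution are assumed Gaussians with illustrative parameters, and the "expected spikes" are taken to be proportional to the probability mass of the membrane distribution above each sampled threshold:

```python
import numpy as np
from scipy.stats import norm

# Sketch of the expected-spikes analysis: combine a (near-)Gaussian membrane
# potential distribution with Gaussian-sampled spiking thresholds and compute,
# for each putative neuron, the probability mass above its threshold.
# All numerical values are illustrative assumptions.
rng = np.random.default_rng(6)
mu_V, sigma_V = -55e-3, 3e-3            # membrane potential distribution (assumed)
mu_th, sigma_th = -50e-3, 1e-3          # threshold sampling distribution (assumed)

thresholds = rng.normal(mu_th, sigma_th, size=10000)     # 10000 putative neurons
expected = norm.sf(thresholds, loc=mu_V, scale=sigma_V)  # tail mass above threshold

# small Gaussian threshold variability maps through the Gaussian tail into a
# strongly right-skewed spread of expected spike counts
skew = ((expected - expected.mean()) ** 3).mean() / expected.std() ** 3
print(skew > 0)
```

The right skew arises because the Gaussian tail function is convex: equal steps in threshold translate into multiplicative, not additive, changes in suprathreshold mass.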

For E-E weights, a lognormal-like distribution is found when combining eSTDP with SN (Fig. 5A-B), as was shown before in [20] and [18], in which eSTDP and SN were complemented with structural plasticity. We here use a different approach, leaving out structural plasticity but applying a small bias in the eSTDP rule in favour of potentiation. The result is a lognormal-like distribution of E-E weights (Fig. 5A-B).

The I-E weights also follow this distribution when LTD_alpha = 0 (Fig. 5C-D), meaning the iSTDP is purely potentiating. Thus, the combination of an additive potentiating STDP rule with SN results in lognormal-like distributions of E-E and I-E weights.

What happens if the STDP rule contains more LTD than LTP, or when SN is left out? When LTD_alpha is increased to 0.02, the I-E weight distribution changes and loses its lognormal-like shape (Fig. 5C, blue curve), indicating that strong LTD in the iSTDP rule can move the weight distribution away from the unimodal shape, pushing most weights to the zero boundary and a small number of weights to the maximum weight. A graphical representation of how LTP and LTD shape the distribution of E-E and I-E weights is shown in Fig. 6. The multiplicative SN pushes competing weights toward each other when LTP is dominant (Fig. 6A), which for a population of weights leads to a unimodal distribution. Conversely, SN pushes the weights away from each other when LTD is dominant (Fig. 6B), which for a population of weights leads to a bimodal distribution. Therefore, the lognormal-like shape of the I-E weight distribution observed in Figs. 4 and 5 is a specific result of the interaction of potentiating iSTDP and multiplicative SN. On the other hand, when SN is removed, the weights are allowed to move freely between the zero boundary and the maximal weight boundary through STDP. However, in this case, the distribution becomes distorted and weights heap up at the maximal weight boundary (Suppl. Fig. 2E, purple curve). Likewise, when LTD is added, the distribution shifts leftward again (shown for I-E weights, Suppl. Fig. 2E, blue curve). In other words, when SN is left out, or LTD is too strong, the weight distribution loses its lognormal-like shape and becomes bimodal. [...] deviations from a lognormal shape (see also [18,23,56]).
We conclude that I-E and E-E weights in our network may form, through the combination of potentiating STDP and SN, a right-skewed long-tailed distribution that resembles a lognormal distribution but, as in the experimental observations, is not in fact precisely and entirely lognormal.

To understand how the changes in STDP can lead to such lognormal-like distributions, we investigated how changes in synaptic efficacies depend on the current synaptic efficacy. For the E-E weights, eSTDP contributes both LTP and, to a lesser degree, LTD (Fig. 7A, top), following the eSTDP learning rule (Fig. 1B). For the I-E weights, there is only potentiation through iSTDP, as we set here LTD_alpha = 0 (Fig. 7B, top). For both E-E and I-E weights, the SN provides the downscaling that compensates for the potentiation through eSTDP and iSTDP (Fig. 7A-B, middle). Interestingly, although the magnitude of downscaling is larger for some large weights, other large weights are not scaled down as much despite their size (observe the blue points that lie close to the zero-change line in Fig. 7A-B, middle). If all weights in a population are subject to multiplicative weight-dependent depression with the same factor, the weights settle into a symmetric distribution [57,58], but a sublinear dependence of depression on the weights can lead to asymmetry in the weight population, through symmetry [...] dendritic spines compared to small spines [6,9,61,62].

Emergence of excitatory and inhibitory hub neurons

Next, we attempt to find out whether hub neurons are present in the network. We hypothesised that neurons with high firing rates may develop strong outgoing weights due to STDP. Since neurons with high firing rates give rise to more STDP events, their outgoing weights will have an advantage when competing with synapses from less active neurons onto the same target neuron. This principle works well for classical eSTDP rules, but is even more likely to apply when the STDP rule is in itself mostly potentiating (see Fig. 7 and Methods). To investigate the relation between individual neurons' firing rates and their outgoing weights in our network, we separate the neurons into two groups, the 10 percent highest-firing neurons (high-firing) and the remainder of the neurons (control), and observe the mean outgoing weight for each category. Fig. 8 [...] 10^-121, Fig. 8B). As all weights start at the same value, the split in the outgoing weight populations happens over time due to plasticity. A neuron with a high firing rate therefore has a higher probability of making its outgoing weights larger, exploiting STDP to achieve a stronger influence on the network. With the above, we replicate findings by Effenberger and colleagues ('driver neurons', [19]), and extend them to inhibitory neurons. Moreover, the impact of a neuron on the rest of the network should be a function of both its firing rate and its outgoing weights (see Methods). The distribution of impacts is more strongly skewed, close to exponential, for both excitatory and inhibitory neurons (Fig. 8C-D). [...]

Since previous studies suggested that a stochastic process can be a good model for the dynamics of synapses [21,50], we investigate whether simplified stochastic models can account for the lognormal-like weight distributions we have shown.
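Since the exact impact definition is given in the Methods, the following sketch only assumes a simple product form, impact = firing rate times summed outgoing weight, with synthetic lognormal rates and weights; all parameters are illustrative assumptions:

```python
import numpy as np

# Sketch of an impact measure combining a neuron's firing rate with its
# outgoing weights. The exact definition is in the paper's Methods; here we
# simply assume, for illustration, impact_j = rate_j * sum_k w_jk.
rng = np.random.default_rng(4)
n = 400
rates = rng.lognormal(mean=0.0, sigma=1.0, size=n)    # lognormal-like firing rates
W = rng.lognormal(mean=-7.0, sigma=1.0, size=(n, n))  # outgoing weights (row = source)
impact = rates * W.sum(axis=1)

top = np.sort(impact)[-n // 10:]           # the 10 percent highest-impact neurons
share = top.sum() / impact.sum()
print(share > 0.25)                        # a small group carries an outsized share
```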
In these models, a weight is treated as a stochastic variable w that evolves over a number of time steps. We consider stochastic processes in which the variable w is subject to random additive updates sampled from the STDP window (Fig. 1B-C) and to collective multiplicative SN (see Methods). We test three such processes, a nonuniform stochastic model (NSM), a uniform stochastic model (USM), and a Kesten model [21,49].

First we consider the NSM, in which w is updated by randomly sampling the STDP window in a nonuniform manner. Sampling probabilities in the STDP window are determined by the average correlation coefficient between pre- and postsynaptic spike times, ranging over the relevant time lags for STDP, shown in Fig. 9A and B. The distribution of the random variable w from the NSM (orange curve), which uses this cross-correlation, closely resembles the mean distribution of E-E weights (Fig. 9C, green curve) and I-E weights (Fig. 9D, purple curve) in the network. This means that the [...]

Based on the above results, we find that the NSM is a more accurate description of the dynamics of E-E and I-E weights in the network than the Kesten model.

One may argue that the lognormal-like weight distributions obtained in the NSM are simply a result of sampling different numbers of STDP updates for different neurons from another lognormal distribution, but even if the firing rates are made equal for all inhibitory and for all excitatory neurons in the network, the distributions of weights are still close to lognormal (Suppl. Fig. 3A-B, purple and green curves), as are the corresponding NSMs (Suppl. Fig. 3A-B, orange curves).

Since the NSM uses the spike cross-correlations to bias sampling from the iSTDP and eSTDP windows, we also wish to ask whether these correlations are strictly necessary to obtain the lognormal-like distribution from STDP sampling. We therefore consider uniform sampling of the STDP window in the USM. The resulting distribution is similar to the distribution from the network simulation for I-E weights, but not for E-E weights (Suppl. Fig. 3C-D). Since the main difference between eSTDP and iSTDP in our model is the presence of LTD in eSTDP, we verified the distributions resulting from the USM and the NSM with various amounts of LTD (Suppl. Fig. 3E-F). The [...] Fig. 4).
Conversely, the USM does not resemble the distribution from the network when LTD is 80 percent of LTP or larger (Suppl. Fig. 3E). The NSM, however, yields a lognormal-like distribution regardless of the amount of LTD (Suppl. Fig. 3F).

Since the peak in the cross-correlation lies in the LTP range of the eSTDP window, the spike correlations sample the LTP area of the STDP window more than the LTD area. [...] the contribution from the activity-dependent changes. Moreover, synapses continue to change in the presence of TTX, which abolishes spiking activity [6,23].

To see whether the weight distributions we found can still exist in the presence of such strong random fluctuations in the weights, we add independent changes to each weight ("weight noise") throughout the simulation. The weight noise is sampled from a Gaussian distribution for both E-E and I-E weights. When the means of the Gaussians, mu_we and mu_wi, are positive, thereby favouring random potentiation of the weights, limiting distributions with lognormal-like shapes are found for both E-E and I-E weights (Fig. 10A-B). Importantly, these distributions persist in the absence of activity and activity-dependent changes (Fig. 10A-B, light-coloured curves relating to TTX), in line with experimental findings [6,23]. However, when mu_we and mu_wi are negative, the weights move away from their lognormal-like shape (Suppl. Fig. 5). Similarly to our results in [...]

[...] are thought to be multiplicative [9,45]. We consider it a plausible assumption that fast normalisation also works multiplicatively [44]. A recent model by Tosi and Beggs [64] has also produced lognormal-like firing rates and synaptic efficacies. Their model relies on a hypothesised metaplasticity mechanism, for which experimental evidence seems to be lacking so far. In contrast, our model is capable of producing these distributions with only well-established plasticity mechanisms [15,26,28,43,45].
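The weight-noise condition described above can be sketched as follows; the target sum, noise parameters and number of weights are illustrative assumptions, and the activity-dependent STDP updates are deliberately absent, mimicking the TTX-like case:

```python
import numpy as np

# Sketch of the "weight noise" condition: weights evolve only through Gaussian
# fluctuations with a small positive mean (no activity-dependent STDP),
# followed by multiplicative normalisation to a fixed target sum.
# All parameter values are illustrative assumptions.
rng = np.random.default_rng(5)
T = 0.15                                    # target sum of incoming weights
w = np.full(100, T / 100)                   # incoming weights of one model neuron
for _ in range(20000):
    w += rng.normal(1e-5, 3e-4, size=w.size)   # Gaussian noise, positive mean
    w = np.clip(w, 0.0, None)                  # weights cannot become negative
    w *= T / w.sum()                           # multiplicative normalisation
skew = ((w - w.mean()) ** 3).mean() / w.std() ** 3
print(skew > 0)                             # a right-skewed limiting distribution
```

In this sketch the skew comes from the interaction of the zero boundary with the multiplicative renormalisation: the weight population is bounded below at zero but unbounded above, so the stationary distribution develops a long right tail.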

The key prediction from our model is the existence of a class of highly influential hub neurons that possess both high firing rates and strong efferent connections. This prediction could be tested using modern connectomics approaches in the following way. First, highly active neurons are identified through an appropriate recording technique. Second, their axons are reconstructed and the sizes of their efferent synapses are measured. We predict that in such an experiment, highly active neurons will have stronger efferent synapses on average. The presence of strongly interconnected neurons, in terms of synapse numbers, has a strong effect on network functioning, affecting neuronal synchrony [65] and criticality [66]. Subgroups of strongly interconnected hub neurons have also been implicated in pathological states, namely hyperexcitability in the epileptic hippocampus [67]. It must be noted that in our study the hub neurons do not possess on average larger numbers of synapses than the remainder of the neurons, since the network is connected randomly. However, the hub neurons do have stronger outgoing weights and their impact on the network is highly pronounced, which may parallel the functional properties of neurons with more numerous synapses. The neural oligarchy that emerges from local processes at the neuron level may therefore be a critical feature of cortical functioning in health and disease.

The current network model is strongly simplified in a number of aspects. First, the synapses onto inhibitory neurons are not modified by plasticity, while experimental results in the visual cortex suggest these synapses do exhibit activity-dependent changes [68]. It will be necessary to address fully plastic networks to study the co-evolution of all synapses in an excitatory-inhibitory recurrent network. Moreover, although normalisation is applied every second in the current model, there is evidence for homeostatic mechanisms that come into effect over much longer periods [69]. We do predict that as long as the plasticity events in between the normalisation events are mostly instances of potentiation, and of small amplitude, the lognormal-like distribution of weights is guaranteed even over long timescales of normalisation. We also do not address here the pruning and creation of synaptic connections, or multi-synapse connections per neuron pair [70-72]. How the distributions of inhibitory weights are affected by the appearance of multi-synapse connections is a subject for future study. Also, eSTDP and iSTDP are assumed to act equally fast, which is problematic if inhibition itself acts as the major homeostatic mechanism [69]. However, if one assumes that synaptic normalisation in excitatory synapses acts rapidly [43], the network need not rely on inhibitory plasticity to prevent runaway excitation. Although a comprehensive theoretical foundation for fully understanding the effects of inhibitory plasticity in recurrent neuronal networks may still be lacking [73], the various possible functions and consequences of iSTDP are beginning to be elucidated [74].

In conclusion, our model suggests that cortical networks may resemble oligarchies, where a few neurons with high firing rates and strong efferent connections exert a powerful influence over the rest of the network. On the one hand, this may have important implications for information processing in these networks [12,13]. On the other hand, this may also be important for network disorders such as epilepsy, where such highly influential neurons are believed to play a key role in the initiation and spreading of aberrant activity [67].

Figure captions (recovered fragments):

Fig. 1 (fragment): The dark green curve shows the amount of LTD for a weight at w_max,e. The synapses are also equipped with synaptic normalisation (equations (8) and (9)). C: The STDP learning rule in the I-E synapses (iSTDP).

Fig. 2 (fragment): Bottom, excitatory (green) and inhibitory (red) currents. C: Coefficient of variation of the inter-spike intervals of excitatory (left) and inhibitory (right) neurons, during a 500 second simulation. D: Distributions of spike cross-correlations between neurons (left, excitatory-excitatory; middle, inhibitory-inhibitory; right, excitatory-inhibitory) taken from a 500 second simulation. Correlations were computed using 100 ms bins for the spikes.

Fig. 5 (fragment): The orange curve is a lognormal fit to the mean distribution. B: Mean distribution of E-E weights plotted with uniform bin spacing on a log scale, in which bin values are corrected for their corresponding bin width. C: Mean distributions of I-E weights, for LTD_alpha = 0 (purple points) and LTD_alpha = 0.02 (blue points). A line is drawn between the blue points for visibility. D: As in B but for I-E weights, with LTD_alpha = 0. In all subfigures, the weights are recorded just before SN.

Fig. 7: Distribution of synaptic weight changes as a function of current weight. A: Weight changes for E-E weights. Top, the changes in E-E weight due to eSTDP. Each point represents a separate weight that changed. The dotted line indicates zero change. Changes are recorded during the last two seconds of a 10000 second simulation. Middle, changes in the same E-E weights due to SN. Bottom, combined effects of eSTDP and SN for the same weights. B: As in A but for I-E weights and iSTDP with LTD_alpha = 0.

Fig. 8: Connection between firing rates and mean outgoing nonzero weight in excitatory and inhibitory neurons. A: Mean outgoing weight for excitatory neurons, sorted into high- and low-firing categories (high and control, respectively). The error bar shows the standard deviation over all weights within the high group and the control group. Data are from a single trial. B: Same as in A but for inhibitory neurons. C: Distribution of impact values for excitatory neurons. The histogram contains neurons from 10 trials of 200 seconds each, recorded after weight stabilisation. The yellow line shows an exponential fit. D: As in C but for inhibitory neurons.

Fig. 9: Comparison of E-E and I-E weight distributions to a nonuniform stochastic model (NSM) and a Kesten model. A: Mean cross-correlations of spike times over all connected excitatory-to-excitatory neuron pairs. The mean correlation over 10 trials is shown as a solid line; the shaded area indicates the standard error of the mean over 10 trials. Cross-correlations are recorded over a 200 second interval after the weights have reached equilibrium, with a spike bin of 5 ms. B: As in A but for all connected inhibitory-to-excitatory neuron pairs. C: Weight distributions of E-E weights from the network simulation (green) and from two stochastic processes, the NSM (orange) and the Kesten model (blue). The distributions are averaged over 10 trials. D: As in C but for I-E weights. The distribution of the I-E weights is shown (purple) as well as the corresponding distributions from the NSM (orange) and the Kesten model (blue). Here, LTD_alpha = 0.