Long-range coordination patterns in cortex change with behavioral context

Cortical connectivity mostly stems from local axonal arborizations, suggesting coordination is strongest between nearby neurons in the range of a few hundred micrometers. Yet multi-electrode recordings of resting-state activity in macaque motor cortex show strong positive and negative spike-count covariances between neurons that are millimeters apart. Here we show that such covariance patterns naturally arise in balanced network models operating close to an instability where neurons interact via indirect connections, giving rise to long-range correlations despite short-range connections. A quantitative theory explains the observed shallow exponential decay of the width of the covariance distribution at long distances. Long-range cooperation via this mechanism is not imprinted in specific connectivity structures but rather results dynamically from the network state. As a consequence, neuronal coordination patterns are not static but can change in a state-dependent manner, which we demonstrate by comparing different behavioral epochs of a reach-to-grasp experiment.


Introduction
The cerebral cortex is a network of subnetworks that is organized on various spatial scales (1,2). Understanding how neurons communicate across these different scales is crucial for understanding brain dynamics and function. On the largest, macroscopic scale, cortical regions communicate via white matter connections with short path lengths between any two cortical areas (3,4,5,6). On the smallest, microscopic scale (< 100 µm), neurons communicate via direct gray matter connections (2,7). But functional areas of cortex typically extend across an intermediate scale, where the distance between neurons exceeds the range of most direct connections. Yet neighboring cortical areas need to coordinate their activity to perform their tasks. For example, grasping an object requires coordination of fingers, hand, and arm, which are controlled by widely distributed representations in motor cortex (8,9). So how does communication work across these mesoscopic scales? In particular, does such coordination rely on specific, targeted connections?
To answer these questions, we combine analysis of massively parallel spiking data from macaque motor cortex with the analytical investigation of a spatially structured neuronal network model. We here quantify coordination by using covariances, which measure how temporal departures of the neurons' activities away from their mean firing rate are correlated. Using methods from statistical physics of disordered systems, we show that, even with only short-range connections, strong covariances across distances of several millimeters emerge naturally in balanced networks if their dynamical state is close to an instability, known as a "critical regime". While mean covariances have been shown to be small in balanced networks (10,11), our theory predicts large covariances on the level of individual pairs of cells at all distances if the network is close to the critical point. These predictions are confirmed by recordings of macaque motor cortex activity. The long-range coordination found in this study is not merely determined by the anatomical connectivity but depends substantially on the network state, which is characterized by the individual neurons' mean firing rates. This allows the network to adjust the neuronal coordination pattern in a dynamic fashion, which we demonstrate through simulations and by comparing two behavioral epochs of a reach-to-grasp experiment.
Unraveling mechanisms of neuronal coordination is a core step toward answering the long-standing question of how neuronal activity represents information. Population coding is one classical paradigm (12) in which entire populations of neurons behave coherently, thus leading to positive correlations among their members. The emergence and dynamical control of such population-averaged correlations has been studied intensely (13,10,14,15). More recently, the idea that neuronal activity evolves within manifolds has suggested even more intricate forms of neuronal coordination (16): A small number of collective modes are thought to explain most of the variability of neuronal activity. Simulations of recurrent network models indeed indicate that networks trained for a task exhibit activity organized in manifolds (17). The temporal and spatial structure of pairwise correlations is tightly linked to the complexity of a task. These correlations provide an upper bound on the dimensionality of the manifolds that embed neuronal activation patterns (18). In this case, individual neurons do not necessarily follow a stereotypical activity pattern that is identical across all neurons contributing to a representation; the coordination among the members is instead determined by more complex relations. How these relations arise, how they depend on the underlying connectivity, and how they can be controlled dynamically is still largely an open question. The mechanism described in the current manuscript is a promising and robust candidate.

Macaque motor cortex shows long-range coordination patterns
We first analyze data from motor cortex of macaques during rest, recorded with 4 × 4 mm², 100-electrode Utah arrays with 400 µm inter-electrode distance (figure 1a). The resting condition of motor cortex in monkeys is ideal to assess intrinsic coordination between neurons during ongoing activity. In particular, our analyses focus on true resting state data devoid of movement-related transients in neuronal firing (see section 4.1.1). Massively parallel single-unit spiking activity of ≈ 130 neurons per recording session, sorted into putative excitatory and inhibitory cells, shows strong spike-count covariances across the whole Utah array, well beyond the typical scale of the underlying short-range connectivity profiles (figure 1b,d). Positive and negative covariances form patterns in space that are furthermore seemingly unrelated to the neuron types. All populations show a large dispersion of both positive and negative covariance values (figure 1c). Given previous investigations of pairwise covariances in balanced networks (13,19,20,11,14), this result is unexpected: A common finding is that covariances averaged across many pairs of cells are small if the network dynamics is stabilized by an excess of inhibitory feedback; dynamics known as the 'balanced state' arise (21,22,23). The explanation is simple: Negative feedback counteracts any coherent increase or decrease of the population-averaged activity, preventing the neurons from fluctuating in unison (11). Breaking this balance in different ways leads to large covariances (15,24,25). Does the finding of strongly correlated individual neurons indicate that motor cortex is operating out of balance? In the following, we show that this is not the case.

[Figure 1d caption: Pairwise spike-count covariances with respect to the neuron marked by the black triangle in one recording (session E2, see Methods). The grid indicates electrodes of a Utah array; triangles and circles correspond to putative excitatory and inhibitory neurons, respectively. Size as well as color of markers represent covariance. Neurons within the same square were recorded on the same electrode.]

[Figure 2 caption: Covariances from direct and indirect connections. (a) Positive covariance (green neuron i) follows from a direct excitatory connection (top) or shared input (middle). Negative covariance (magenta, bottom) between two excitatory neurons cannot be explained by direct connections. (b) Neuronal interactions are mediated not only via direct connections (n = 1; sign uniquely determined by the presynaptic neuron type) but also via indirect paths of different length n > 1. The latter may have any sign (positive: green; negative: purple) due to intermediate neurons of arbitrary type (triangle: excitatory, circle: inhibitory). In the dynamically balanced critical state, contributions from paths n > 1 dominate.]

Multi-synaptic connections determine covariances
Connections mediate interactions between neurons. Many studies therefore directly relate connectivity and covariances (19,20,26,27,28). From direct connectivity, one would expect positive covariances between excitatory neurons, negative covariances between inhibitory neurons, and a mix of negative and positive covariances only for excitatory-inhibitory pairs. Likewise, shared input from inside or outside the network only imposes positive covariances between any two neurons (figure 2a). The observation that excitatory neurons may have negative covariances (figure 1d), as well as the broad distribution of covariances covering both positive and negative values (figure 1c), are not compatible with this view. In fact, the sign of covariances appears to be independent of the neuron types. So how do negative covariances between excitatory neurons arise?
In (29) some of us show that broad distributions of covariances with low mean arise in the dynamically balanced critical state. This state combines the above-mentioned balance with criticality: The former leads to suppressed population-level fluctuations and small average pairwise covariances. The latter causes coordinated strong and slow fluctuations of activity within a low-dimensional manifold. Expressed mathematically, such a network has a linear approximation that is almost unstable, as explained in supplement S3. In the dynamically balanced critical state, connections are altogether strong enough that interactions are mediated not only via direct connections, but also via indirect paths through the network (figure 2b). What is the consequence of these effective multi-synaptic connections? Direct connections yield covariances of a predefined sign, leading to covariance distributions with multiple peaks, e.g. a positive peak for excitatory neurons that are connected and a peak at zero for neurons that are not connected. Multi-synaptic paths, however, involve both excitatory and inhibitory intermediate neurons, which contribute to the interaction with different signs (figure 2b). Hence, a single indirect path can contribute to the total interaction with arbitrary sign (19). If indirect paths dominate the interaction between two neurons, the sign of the resulting covariance becomes independent of their type (29). Given that the connecting paths in the network are different for any two neurons, the resulting covariances can fall in a wide range of both positive and negative values, giving rise to the broad distributions for all combinations of neuron types in figure 1c. This explains why there is no qualitative difference between the distribution of covariances for excitatory and inhibitory neurons.
In particular, their widths are similar and their means are close to zero, the latter being the hallmark of the negative feedback that characterizes the balanced state.
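The sign-mixing effect of indirect paths can be illustrated with a minimal numerical sketch. This is not the manuscript's actual model: network size, connection probability, synaptic weights, and the choice of unit noise are illustrative assumptions. The sketch uses the standard linear-response covariance C = (1 − W)⁻¹ D (1 − W)⁻ᵀ and checks that, close to instability, covariances between excitatory pairs take both signs:

```python
import numpy as np

rng = np.random.default_rng(0)
N, f_exc = 100, 0.8                      # illustrative network size, fraction excitatory
n_exc = int(N * f_exc)

# sparse E-I connectivity: excitatory columns positive, inhibitory negative
J = rng.binomial(1, 0.1, size=(N, N)).astype(float)
J[:, n_exc:] *= -5.0                     # inhibition-dominated network
np.fill_diagonal(J, 0.0)

def set_spectral_bound(J, R):
    """Scale J so the largest real part of its eigenvalues equals R."""
    r = np.max(np.linalg.eigvals(J).real)
    return J * (R / r)

def covariances(W, noise=1.0):
    """Linear-response covariance C = (1-W)^-1 D (1-W)^-T with D = noise * I."""
    B = np.linalg.inv(np.eye(len(W)) - W)
    return noise * B @ B.T

C = covariances(set_spectral_bound(J, 0.8))          # close to instability (R = 0.8)
C_EE = C[:n_exc, :n_exc][np.triu_indices(n_exc, k=1)]

# indirect paths make excitatory-excitatory covariances of either sign
print("fraction EE < 0:", (C_EE < 0).mean(), " fraction EE > 0:", (C_EE > 0).mean())
```

With purely direct interactions, all EE entries would be non-negative; the mixed signs arise solely from the multi-synaptic paths through intermediate neurons.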
Another consequence of the dominance of multi-synaptic connections in the dynamically balanced critical state is that covariances are not restricted to the spatial range of direct connectivity. Through interactions via indirect paths, the reach of a single neuron is effectively increased. But the details of the spatial profile of the covariances depend on the interplay of two antagonistic effects: On the one hand, neuronal transmission becomes weaker with distance, as the signal has to pass several synaptic connections. On the other hand, the number of contributing indirect paths between any pair of neurons grows with their distance. Can we predict the resulting effective range over which neurons may coordinate their activities by this mechanism? And on which parameters does it depend?

Networks close to instability show shallow exponential decay of covariances
To gain an analytic understanding of the spatial features of intrinsically generated covariances in balanced critical networks, we consider a model network of excitatory and inhibitory neurons on a two-dimensional sheet (figure 3a). The probability of two neurons being connected decays with distance on a characteristic length scale d. Previous studies have used linear response theory in combination with methods from statistical physics to explain both mean covariances (13,11,19,30) and the width of the distribution of covariances (29) of random networks. To quantify the relation between the spatial ranges of covariances and connections, we here extend these theories to spatially organized random networks with multiple populations. The model captures the simplest type of randomness in cortical networks, namely the sparseness of connections, and neglects other potential sources of heterogeneity such as variability in synaptic strengths (31,32), or anisotropic connectivity profiles (33). Placing neurons on a regular grid (figure 3a) and employing periodic boundary conditions for the connections imposes translation and permutation symmetries that enable the derivation of closed-form solutions for the distance-dependent mean and variance of the covariance distribution (see supplement S7). These simplifying assumptions are common practice and do not alter the results qualitatively.
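The model setup described above can be sketched numerically. The snippet below is a toy construction, not the manuscript's simulation code: the grid size, connection amplitude, and exponential profile are illustrative assumptions. It places neurons on a regular grid with periodic boundary conditions (a torus) and draws connections with a probability that decays with distance on the length scale d:

```python
import numpy as np

rng = np.random.default_rng(1)
L_side, d = 20, 1.5            # neurons on an L_side x L_side grid, decay length d (grid units)

# positions on a regular grid
xs, ys = np.meshgrid(np.arange(L_side), np.arange(L_side))
pos = np.column_stack([xs.ravel(), ys.ravel()]).astype(float)

# distance between all pairs with periodic boundary conditions (torus metric)
diff = np.abs(pos[:, None, :] - pos[None, :, :])
diff = np.minimum(diff, L_side - diff)
dist = np.hypot(diff[..., 0], diff[..., 1])

# connection probability decays exponentially with distance
p = 0.8 * np.exp(-dist / d)
np.fill_diagonal(p, 0.0)
A = rng.random(p.shape) < p    # realized sparse adjacency matrix

# empirical check: connections beyond a few d are rare
far = dist > 4 * d
near = ~far & (dist > 0)
print("P(conn | far):", A[far].mean(), " P(conn | near):", A[near].mean())
```

The translation symmetry of the torus is what makes the distance-resolved statistics of such a matrix analytically tractable.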
A distance-resolved histogram of the covariances in the spatially organized E-I network shows that the mean covariance is close to zero, whereas the width (variance) of the covariance distribution stays large, even at large distances (figure 3c). Analytically, we derive that both the mean and the variance of covariances decay exponentially in the long-distance limit (see supplement S3-S11). Our theory shows that their respective length scales depend on the reach of direct connections and on the eigenvalues of the effective connectivity matrix. In particular, for the mean covariance c̄ we find that the dominant behavior is an exponential decay c̄ ∼ exp(−x/d_pop) on a length scale d_pop that is determined by the population eigenvalue (figure 3b). The position of this eigenvalue depends on the ratio between excitation and inhibition in the network and becomes more negative in more strongly inhibition-dominated networks. We show in supplement S8.4 that this leads to a steep decay of mean covariances with distance. The variance of covariances, however, predominantly decays exponentially on a length scale d_eff that is determined by the spectral bound R, the largest real part among all eigenvalues (figure 3b,d). In inhibition-dominated networks, R is determined by the heterogeneity of connections. For R ≲ 1 we obtain the effective length scale

d_eff = d / √(1 − R²) .    (1)

This means that the length scale d_eff on which covariances decay exceeds the reach d of direct connections by a large factor (figure 3d). As the network approaches instability, which corresponds to the spectral bound R going to one, the effective decay constant diverges (figure 3d inset) and so does the range of covariances. Our population-resolved theoretical analysis furthermore shows that the larger the spectral bound, the more similar the decay constants between different populations, with only marginal differences for R ≲ 1 (figure 3e).
This holds strictly if connection weights depend only on the type of the presynaptic neuron but not on the type of the postsynaptic neuron. Moreover, we find a relation between the squared effective decay constants and the squared anatomical decay constants of the form

d²_eff,E − d²_eff,I = c · (d²_E − d²_I) ,    (2)

with a constant c = O(1). This relation is independent of the eigenvalues of the effective connectivity matrix, as the constant c depends only on the choice of the connectivity profile. For R ≲ 1, this means that even though both effective length scales on the left-hand side are large, their relative difference is small, because the difference of their squares equals the small difference on the right-hand side.

Pairwise covariances in motor cortex decay on a millimeter scale
To check if these predictions are confirmed by the data from macaque motor cortex, we first observe that, indeed, covariances in the resting state show a large dispersion over almost all distances on the Utah array (figure 4).
Performing an exponential fit to the variance of covariances reveals length constants above one millimeter. These large length constants have to be compared to the spatial reach of direct connections, which is about an order of magnitude shorter, in the range of 100-400 µm (34), i.e., below the 400 µm inter-electrode distance of the Utah array. The shallow decay of the variance of covariances is, next to the broad distribution of covariances, a second indication that the network is in the dynamically balanced critical regime, in line with the prediction of Eq. (1). In fitting the exponential functions, we employed weighting factors that account for the uncertainty at large distances, where variance estimates are based on fewer samples. The population-resolved fits to the data show a larger length constant for excitatory covariances than for inhibitory ones (figure 4a). This is qualitatively in line with the prediction of Eq. (2), given the typically longer reach of excitatory connections compared to inhibitory ones (35,9). In the dynamically balanced critical regime, however, the predicted difference in slope for all three fits is practically negligible. Therefore, we performed a second fit in which the slope of the three exponentials is constrained to be identical (figure 4b). The error of this fit is only marginally larger than that of the fits with individual slopes (figure 4c). This shows that differences in slopes are hardly detectable given the empirical evidence, thus confirming the predictions of the theory given by Eqs. (1) and (2).
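A weighted exponential fit of the kind described above can be sketched as follows. The data here are synthetic stand-ins (a true length constant of 1.2 mm and pair counts that shrink with distance are assumptions for illustration, not values from the recordings); the weighting via the `sigma` argument of `scipy.optimize.curve_fit` down-weights distances with fewer samples, as in the text:

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(2)

# synthetic distance-resolved variance of covariances (stand-in for the data)
dists = np.arange(0.4, 4.0, 0.4)               # mm, electrode-distance bins
n_pairs = (1000 / dists**2).astype(int)        # fewer neuron pairs at long distances
true_var = 2.0 * np.exp(-dists / 1.2)          # assumed true length constant: 1.2 mm
var_est = true_var * (1 + rng.standard_normal(len(dists)) / np.sqrt(n_pairs))

def exp_decay(x, a, d_eff):
    return a * np.exp(-x / d_eff)

# each point weighted by its estimation uncertainty (fewer samples -> larger sigma)
sigma = var_est / np.sqrt(n_pairs)
(a_fit, d_fit), _ = curve_fit(exp_decay, dists, var_est, p0=(1.0, 1.0), sigma=sigma)
print("fitted length constant (mm):", d_fit)
```

A fit with a slope shared across the E-E, E-I, and I-I groups, as in figure 4b, would instead concatenate the three data sets and fit one `d_eff` with three amplitudes.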

Firing rates alter connectivity-dependent covariance patterns
Since covariances measure the coordination of temporal fluctuations around the individual neurons' mean firing rates, they are determined by how strongly a neuron transmits such fluctuations from input to output (1). To leading order this is explained by linear response theory (13,30,19,11): How strongly a neuron reacts to a small change in its input depends on its dynamical state, foremost the mean and variance of its total input, called the "working point" in the following. If a neuron receives almost no input, a small perturbation in the input will not be able to make the neuron fire. If the neuron receives a large input, a small perturbation will not change the firing rate either, as the neuron is already saturated. Only in the intermediate regime is the neuron susceptible to small deviations of the input. Mathematically, this behavior is described by the gain of the neuron, which is the derivative of the input-output relation (1). Due to the non-linearity of the input-output relation, the gain vanishes for very small and very large inputs and is non-zero in the intermediate regime.
How strongly a perturbation in the input to one neuron affects a subsequent neuron therefore depends not only on the synaptic weight J but also on the gain S, and thereby on the working point. This relation is captured by the effective connectivity W = S · J. What is the consequence of the dynamical interaction among neurons depending on the working point?
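The dependence of the effective connectivity W = S · J on the working point can be made concrete with a small sketch. The tanh input-output relation, the network size, and the two working points chosen below are assumptions for illustration; the essential point is that the gain S_i (the slope of the nonlinearity at neuron i's working point) scales all of that neuron's incoming interactions:

```python
import numpy as np

def gain(u, phi=np.tanh, eps=1e-6):
    """Numerical derivative of the input-output nonlinearity at working point u."""
    return (phi(u + eps) - phi(u - eps)) / (2 * eps)

rng = np.random.default_rng(3)
N = 50
J = rng.normal(0, 1 / np.sqrt(N), size=(N, N))   # fixed synaptic weights

# two working points: intermediate mean input vs. near saturation
u_mid = np.full(N, 0.2)
u_sat = np.full(N, 3.0)

# effective connectivity W_ij = S_i * J_ij (gain of the postsynaptic neuron)
W_mid = gain(u_mid)[:, None] * J
W_sat = gain(u_sat)[:, None] * J

# saturation strongly suppresses the effective interactions
print(np.abs(W_mid).mean(), np.abs(W_sat).mean())
```

Because covariances are determined by W rather than by J alone, shifting the working points (e.g. via mean external input) reshapes the covariance structure without any change in anatomy.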
The first part of this study finds that long-range coordination can be achieved in a network with short-range random connections if effective connections are sufficiently strong. Alteration of the working point, for example by a different external input, can affect the covariance structure: The pattern of coordination between individual neurons can change, even though the anatomical connectivity remains the same (figure 5a). In this way, routing of information through the network can be adapted dynamically on a mesoscopic scale. This is a crucial difference between such state-dependent coordination and coordination imprinted by complex but static connection patterns.
We first illustrate this concept by simulations of a network of nonlinear rate model neurons. For independent and stationary external inputs, covariances between neurons are solely generated inside the network via the recurrent connectivity. External inputs only have an indirect impact on the covariance structure by setting the working point of the neurons.
We create two networks with identical structural connectivity and identical external input fluctuations (figure 5a). Small differences in mean external inputs between corresponding neurons in the two networks create slightly different gains and firing rates and thereby differences in effective connectivity and covariances. Since mean external inputs are drawn from the same distribution in both networks (figure 5b), the overall distributions of firing rates and covariances across all neurons are very similar (figure 5e1,f2). But individual neurons' firing rates do differ (figure 5e2). The resulting change of the neurons' working points substantially affects the covariance patterns (figure 5f2): Differences in firing rates and covariances between the two networks are significantly larger than the differences within the same network across two different time periods (figure 5c). The larger the spectral bound, the more sensitive are the intrinsically generated covariances to the changes in firing rates (figure 5d). Thus, a small offset of individual firing rates is an effective parameter to control network-wide coordination among neurons. As the input to the local network can be changed momentarily, we predict that in the dynamically balanced critical regime coordination patterns should be highly dynamic.
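A stripped-down version of this two-network comparison can be sketched in a few lines. This is not the manuscript's rate-network simulation: it works directly on the linear-response covariance C = (1 − W)⁻¹(1 − W)⁻ᵀ, and the network size, the 5% gain perturbations (standing in for small offsets in mean input), and the two spectral bounds are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)
N = 80
J = rng.normal(0, 1 / np.sqrt(N), size=(N, N))   # identical structural connectivity

# two slightly different sets of gains, e.g. from offset mean external inputs
S1 = 1.0 + 0.05 * rng.standard_normal(N)
S2 = 1.0 + 0.05 * rng.standard_normal(N)

def cov_from_gains(S, R):
    """Covariances C = (1-W)^-1 (1-W)^-T for W = S.J rescaled to spectral bound R."""
    W = S[:, None] * J
    W *= R / np.max(np.linalg.eigvals(W).real)
    B = np.linalg.inv(np.eye(N) - W)
    return B @ B.T

def pattern_similarity(R):
    """Correlation between the two networks' off-diagonal covariance patterns."""
    off = ~np.eye(N, dtype=bool)
    c1, c2 = cov_from_gains(S1, R), cov_from_gains(S2, R)
    return np.corrcoef(c1[off], c2[off])[0, 1]

# the closer to instability, the more the same small gain offsets reshape covariances
print(pattern_similarity(0.5), pattern_similarity(0.95))
```

The comparison mirrors figure 5d: the sensitivity of the covariance pattern to identical-magnitude rate changes grows with the spectral bound.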

Coordination patterns in motor cortex depend on behavioral context
In order to test the model prediction in experimental data, we analyze massively parallel spiking activity from macaque motor cortex, recorded during a reach-to-grasp experiment (36,8). In contrast to the resting state, where the animal was in an idling state, here the animal is involved in a complex task with periods of different cognitive and behavioral conditions (figure 6a). We compare two epochs in which the animal is requested to wait and is sitting still, but which differ in cognitive conditions. The first epoch is a starting period (S), where the monkey has self-initiated the behavioral trial and is attentive because it is expecting a cue. The second epoch is a preparatory period (P), where the animal has just received partial information about the upcoming trial and is waiting for the missing information and the GO signal to initiate the movement. In both epochs the neuronal firing rates are stationary, likely due to the absence of arm movement (see supplement S2). The overall distributions of the firing rates in the two epochs are comparable (figure 6c), but the rates are distributed differently across the individual neurons. By comparing the firing rates across neurons between disjoint sub-periods of any epoch (e.g. S1-S2 or S1-P2), we observe that the firing rate compositions across neurons change more strongly across the two epochs (Sx-Py) than within each of them (figure 6e). This holds for data from five recording sessions for monkey E and eight for monkey N, each recorded on different days. Similarly, the covariance values change, though here the changes are even more pronounced. This is in line with our prediction for a network whose effective connectivity has a large spectral bound, i.e., in the dynamically balanced critical state. In particular, the theory predicts different coordination patterns between neurons on the mesoscopic scale (the range of a Utah array), which is indeed observed in the two states S and P (figure 6b).

Discussion
In this study, we investigate coordination patterns of many neurons across mesoscopic distances in macaque motor cortex. We show that these patterns have a salt-and-pepper structure, which can be naturally explained based on the dynamically balanced critical state of networks. In this state, cross-covariances are shaped by a large number of parallel, multi-synaptic pathways, leading to interactions reaching far beyond the range of direct connections. Strikingly, this coordination on the millimeter scale is only visible if covariances are resolved on the level of individual neurons; the population mean of covariances quickly decays with distance and is overall very small. In contrast, the variance of covariances is large and predominantly decreases exponentially on length scales of up to several millimeters, even though direct connections typically only reach a few hundred micrometers.
Since the observed coordination patterns are determined by the effective connectivity of the network, they are dynamically changing with network state; for example, due to modulations of neuronal firing rates. Massively parallel recordings in macaque motor cortex during resting state and in different epochs of a reach-to-grasp task confirm this prediction. Simulations indeed exhibit a high sensitivity of coordination patterns to weak modulations of the individual neurons' firing rates, providing a plausible mechanism for these dynamic changes.
Models of balanced networks have been investigated for more than two decades (21,37,10,11) and experimental evidence for cortical networks operating in the balanced state is overwhelming (38,39,40). In contrast, the dynamically balanced critical state was only recently introduced and related to motor cortex activity (29): It is a specific type of balanced network that operates close to an instability as a result of large heterogeneity in the network connectivity. The excess of inhibition in such networks yields stable and balanced population activities as well as low average covariances, while the heterogeneity generates dispersed covariance structures between individual neurons, including the long-range salt-and-pepper structure studied here.
Spatially organized balanced network models have been investigated before in the limit of infinite network size as well as under strong and potentially correlated external drive (15,25). In this limit, intrinsically generated covariances can be neglected and population-averaged covariances fulfill a linear equation, called the "balance condition", that predicts a non-monotonic change of mean covariances with distance (41). In contrast, we here consider covariances on the level of individual cells in finite-size networks with weak inputs or in resting state conditions. In such a scenario, covariances have been shown to be predominantly generated locally rather than from external inputs (14,29). Analyses on the single-neuron level go beyond the balance condition and require the use of field-theoretical techniques to capture the heterogeneity in the network (29).
We here generalize these analyses to excitatory-inhibitory random networks on a two-dimensional sheet with spatially decaying connection probabilities. This allows us to derive expressions for the spatial decay of the variance of covariances. We primarily evaluate these expressions in the long-range limit, where they agree well with simulations for distances x > 2d ∼ O(1 mm), a condition fulfilled for most distances on the Utah array (cf. figure 3, figure S4). For these distances we find a simple relation between the decay constant of covariances, the spectral bound of the effective connectivity, and the length scale of direct connections. The length scale of covariances diverges when approaching the breakdown of linear stability. In this regime, differences in covariances induced by differences in length scales of excitatory and inhibitory connections become negligible. The predicted emergence of a single length scale of covariances is consistent with our data.
This study focuses on local and isotropic connection profiles to show that long-range coordination does not rely on specific connection patterns but can purely result from the network state. Alternative explanations for long-range coordination are based on specifically imprinted network structures. Anisotropic local connection profiles have been studied and shown to create spatio-temporal sequences (42) and long-range covariance patterns that provide the seed for tuning maps in the visual cortex (33). Likewise, embedded excitatory feed-forward motifs and cell assemblies via long-range patchy connections can create positive covariances at long distances (43,44). These structures are, however, imprinted in the connectivity: A change in gain of the neurons will either strengthen or weaken the specific activity propagation, but it will not lead to new pathways of propagation within the network and will therefore not cause the significantly different coordination patterns that we see in our data. The static impact of these connectivity structures on covariances could in principle be included in the presented theoretical formalism. But we show here that heterogeneity through sparsity is enough to generate the dynamically balanced critical state, which naturally provides a simple explanation for the broad distribution of covariances, the salt-and-pepper structure of coordination, its long spatial range, and its sensitive dependence on the network state.
What are possible functional implications of the coordination on mesoscopic scales? There are currently controversial discussions about the dimensionality of neuronal network dynamics (18,45). Dimensionality reduction techniques, such as PCA or GPFA (46), employ covariances to expose a dynamical repertoire of motor cortex that is composed of neuronal modes (47): A small number of task-independent modes are found to be dynamically combined to generate desired motor outputs for different tasks. It has been speculated that these modes are linked to connectivity. The dynamically balanced critical state described here is a likely candidate for the emergence of these modes: A subset of modes close to the point of instability dominates the variability of the network activity (29). The shallow decay of pairwise covariances with distance suggests that these modes are highly non-local and not necessarily limited to the reach of direct connections. Such a distributed representation of information may be beneficial, because the same information may be integrated into different computations that take place in parallel at various locations. Linking the statistics of modes observed in resting state activity to their targeted and selective activation within different tasks or observed behavior of the animal presents a challenging but exciting endeavor for future work.

Experimental Design and Statistical Analysis
Two adult macaque monkeys (monkey E, female, and monkey N, male) are recorded in behavioral experiments of two types: resting state and reach-to-grasp. The recordings of neuronal activity in motor and pre-motor cortex (hand/arm region) are performed with a chronically implanted 4 × 4 mm² Utah array (Blackrock Microsystems). Details on surgery, recordings, spike sorting, and classification of behavioral states can be found in (8,48,36,49).

Resting state data
During the resting state experiment, the monkey is seated in a primate chair without any task or stimulation. Registration of electrophysiological activity is synchronized with a video recording of the monkey's behavior. Based on this, periods of "true resting state" (RS), defined as no movements and eyes open, are chosen for the analysis. Eye movements and minor head movements are included. Each monkey is recorded twice, with sessions lasting approximately 15 min for monkey E (sessions E1 and E2) and 20 min for monkey N (sessions N1 and N2); the behavior is classified by visual inspection with single-second precision, resulting in 643 and 652 s of RS data for monkey E and 493 and 502 s of RS data for monkey N.

Reach-to-grasp data
In the reach-to-grasp experiment, the monkeys are trained to perform an instructed delayed reach-to-grasp task to obtain a reward. Trials are initiated by the monkey closing a switch (TS, trial start). After 400 ms a diode is illuminated (WS, warning signal), followed by a cue after another 400 ms (CUE-ON), which provides partial information about the upcoming trial. The cue lasts 300 ms and its removal (CUE-OFF) initiates a 1 s preparatory period, followed by a second cue, which also serves as the GO signal. Two epochs, divided into 200 ms sub-periods, within trials defined in this way are chosen for analysis: the first 400 ms after TS (starting period, S1 and S2), and the 400 ms directly following CUE-OFF (preparatory period, P1 and P2) (cf. figure 6a). Five selected sessions for monkey E and eight for monkey N provide a total of 510 and 1111 correct trials, respectively.

Separation of putative excitatory and inhibitory neurons
Offline spike-sorted single units (SUs) are separated into putative excitatory (broad-spiking) and putative inhibitory (narrow-spiking) units based on their spike waveform width (50,51,52,53,54). The width is defined as the time (number of data samples) between the trough and the peak of the waveform. The widths of all average waveforms from all selected sessions (both resting state and reach-to-grasp) per monkey are collected. Thresholds for "broadness" and "narrowness" are chosen based on the monkey-specific distribution of widths, such that intermediate values stay unclassified. For monkey E the thresholds are 0.33 ms and 0.34 ms, and for monkey N 0.40 ms and 0.41 ms. Next, a two-step classification is performed session by session. First, the thresholds are applied to the average SU waveforms. Second, the thresholds are applied to the single waveforms of each SU, and the percentage of single waveforms pre-classified as the same type as the average waveform is calculated. SUs for which this percentage is high enough are marked as classified; all remaining SUs are grouped as unclassified. We verify the robustness of our results with respect to changes in the spike sorting procedure in supplement S1. In addition, only SUs with a signal-to-noise ratio (55) of at least 2.5 and a minimal average firing rate of 1 Hz are considered for the analysis, to ensure sufficient clean data for valid statistics.
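The threshold step of this classification can be sketched as follows, using the monkey E thresholds quoted above (the function name and the exact boundary convention, e.g. whether a width of exactly 0.33 ms counts as narrow, are our assumptions):

```python
import numpy as np

def classify_waveform_widths(widths_ms, narrow_max, broad_min):
    """Classify trough-to-peak waveform widths (in ms) into putative
    inhibitory ('narrow'), putative excitatory ('broad'), or 'unclassified'
    for intermediate values between the two thresholds."""
    labels = []
    for w in widths_ms:
        if w <= narrow_max:
            labels.append("narrow")        # putative inhibitory
        elif w >= broad_min:
            labels.append("broad")         # putative excitatory
        else:
            labels.append("unclassified")  # falls between the thresholds
    return labels

# monkey E thresholds from the text: 0.33 ms and 0.34 ms
print(classify_waveform_widths([0.25, 0.335, 0.45], 0.33, 0.34))
# → ['narrow', 'unclassified', 'broad']
```

The same thresholds are applied first to the average waveform of each SU and then to its single waveforms.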

Statistical analysis
All RS periods per resting state recording are concatenated and binned into 1 s bins. Next, pairwise covariances of all pairs of SUs are calculated according to

COV(i, j) = (⟨b_i, b_j⟩ − l·µ_i·µ_j) / (l − 1),  (3)

with b_i, b_j the binned spike trains, µ_i, µ_j their mean values, l the number of bins, and ⟨x, y⟩ the scalar product of vectors x and y. To explore the dependence of covariance on the distance between the considered neurons, the obtained values are grouped according to the distances between the electrodes on which the neurons are recorded. For each distance, the average and variance of the resulting distribution of cross-covariances are calculated. The variance is additionally corrected for the bias due to a finite number of measurements (29). In most cases, the correction does not exceed 0.01%. In the following step, exponential functions y = a·e^(−x/d) are fitted to the obtained distance-resolved variances of cross-covariances (y corresponding to the variance and x to the distance between neurons). The least-squares method implemented in the Python scipy.optimize module (SciPy v.1.2.1) is used. First, three independent fits are performed to the data for excitatory-excitatory, excitatory-inhibitory, and inhibitory-inhibitory pairs. Second, analogous fits are performed with the constraint that the decay constant d must be the same for all three curves.
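The covariance estimator and the exponential fit can be illustrated as follows (a sketch, not the original analysis code; for simplicity the fit linearizes the exponential with `numpy.polyfit`, whereas the paper uses the nonlinear least-squares routines of `scipy.optimize`):

```python
import numpy as np

def pairwise_covariance(binned):
    """Pairwise spike-count covariances of binned spike trains b_i of
    length l (bins): c_ij = (<b_i, b_j> - l*mu_i*mu_j) / (l - 1)."""
    binned = np.asarray(binned, dtype=float)
    l = binned.shape[1]
    mu = binned.mean(axis=1)
    return (binned @ binned.T - l * np.outer(mu, mu)) / (l - 1)

def fit_exponential_decay(distances, variances):
    """Fit y = a * exp(-x / d) by linear least squares on log(y)."""
    slope, intercept = np.polyfit(distances, np.log(variances), 1)
    return np.exp(intercept), -1.0 / slope  # a, d

rng = np.random.default_rng(0)
spikes = rng.poisson(5.0, size=(4, 1000))  # 4 units, 1000 one-second bins
c = pairwise_covariance(spikes)
assert np.allclose(c, np.cov(spikes))      # matches the standard estimator

xs = np.array([0.4, 0.8, 1.2])             # electrode distances (mm, toy values)
a, d = fit_exponential_decay(xs, 2.0 * np.exp(-xs / 1.5))
print(round(a, 3), round(d, 3))            # recovers a = 2.0, d = 1.5
```

The same estimator is reused for the reach-to-grasp data below, only with 200 ms bins.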
Covariances in the reach-to-grasp data are calculated analogously but with a different time resolution. For each chosen sub-period of a trial, data are concatenated and binned into 200 ms bins, meaning that the number of spikes in a single bin corresponds to a single trial. The mean of these counts, normalized to the bin width, gives the average firing rate per SU per sub-period. The pairwise covariances are calculated according to Eq. (3). To assess the similarity of neuronal activity in different periods of a trial, Pearson product-moment correlation coefficients are calculated on vectors of SU-resolved rates and pair-resolved covariances. Correlation coefficients from all recording sessions per monkey are separated into two groups: those using sub-periods of the same epoch, and those using sub-periods of different epochs of a trial. A Student t-test comparing these groups reveals highly significant differences for both rates and covariances (p ≪ 0.001).
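The similarity measure and the test statistic can be sketched as follows (toy data; the paper applies a standard Student t-test to all session-resolved correlation coefficients, whereas here the pooled-variance t statistic is written out explicitly):

```python
import numpy as np

def similarity(v1, v2):
    """Pearson product-moment correlation between two vectors of
    SU-resolved rates (or pair-resolved covariances) from two sub-periods."""
    return np.corrcoef(v1, v2)[0, 1]

def t_statistic(group_a, group_b):
    """Student t statistic for two independent samples with pooled variance."""
    a, b = np.asarray(group_a, float), np.asarray(group_b, float)
    na, nb = len(a), len(b)
    sp2 = ((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1)) / (na + nb - 2)
    return (a.mean() - b.mean()) / np.sqrt(sp2 * (1.0 / na + 1.0 / nb))

# toy correlation coefficients: within-epoch similarities (e.g. S1 vs S2)
# tend to exceed across-epoch similarities (e.g. S1 vs P1)
rng = np.random.default_rng(1)
same_epoch = 0.8 + 0.05 * rng.standard_normal(20)
diff_epoch = 0.5 + 0.05 * rng.standard_normal(20)
print(t_statistic(same_epoch, diff_epoch) > 0)  # True: same-epoch group is higher
```

A large positive t value for the toy data mirrors the highly significant group difference reported for the experimental sessions.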

Mean and Variance of Covariances for a Two-Dimensional Network Model with Excitatory and Inhibitory Populations
The mean and variance of covariances are calculated for a two-dimensional network consisting of one excitatory and one inhibitory population of neurons. The connectivity profile p(x), describing the probability that a neuron connects to another neuron at distance x, decays with distance. We assume periodic boundary conditions, which render the network structure translation invariant (figure 3a). Our aim is to find expressions for the mean and variance of covariances as functions of the distance between two neurons. By extending the results of (29) to more complex connectivities, we obtain expressions for the mean covariance c̄ and the variance of covariances δc²,

c̄ = D (1 − M)^−1,  δc² = D² (1 − S)^−1,  (4)

with identity matrix 1, mean M and variance S of the connectivity matrix W, and input noise strength D; the spectral bound R < 1 of the connectivity guarantees that the inverses exist. Since M and S have a similar structure, the mean and variance can be derived in the same way, which is why we only consider the variances in the following.
To simplify equation (4), we need to find a basis in which S, and therefore also A = 1 − S, is diagonal. Due to the invariance under translations, the translation operators T and the matrix S have common eigenvectors, which can be derived using that the translation operators satisfy T^N = 1, where N is the number of lattice sites in the x- or y-direction (see supplement). Projecting onto a basis of these eigenvectors shows that the eigenvalues s_k of S are given by the discrete two-dimensional Fourier transform of the connectivity profile. Expressing A^−1 in this eigenvector basis yields A^−1(x) = 1 + B(x), where B(x) is the discrete inverse Fourier transform of the kernel s_k/(1 − s_k). Assuming a network that is large compared to the range of the connectivity profile allows us to take the continuum limit

B(x) = 1/(2π)² ∫ d²k e^(i k·x) s(k)/(1 − s(k)).

As we are only interested in the long-range behavior, which corresponds to |x| → ∞, or |k| → 0, respectively, we can approximate the Fourier kernel around |k| ≈ 0 by a rational function, quadratic in the denominator, using a Padé approximation. This allows us to calculate the integral, which yields

B(x) ∝ K_0(|x|/d_eff),

where K_0 denotes the modified Bessel function of the second kind and zeroth order (56), and the effective decay constant d_eff is given by equation (1). In the long-range limit the modified Bessel function behaves like (56, 10.25.3)

K_0(|x|/d_eff) ≈ √(π d_eff/(2|x|)) e^(−|x|/d_eff).
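The Fourier construction of B(x) can be checked numerically on a small periodic lattice (an illustration with arbitrary parameters, not those of the model; `s0` plays the role of the eigenvalue at k = 0, which grows as the network approaches the instability):

```python
import numpy as np

# Periodic 2D lattice with an exponentially decaying connectivity profile
N, d = 200, 4.0                                   # sites per side, profile decay
x = np.minimum(np.arange(N), N - np.arange(N))    # periodic distances along one axis
X, Y = np.meshgrid(x, x, indexing="ij")
profile = np.exp(-np.sqrt(X**2 + Y**2) / d)

d_eff = {}
for s0 in (0.5, 0.95):                            # stability requires s0 < 1
    s_k = np.fft.fft2(s0 * profile / profile.sum()).real  # eigenvalues of S
    B = np.fft.ifft2(s_k / (1 - s_k)).real        # kernel of A^-1(x) = 1 + B(x)
    # effective decay constant from the log-slope of B along one lattice axis
    r = np.arange(10, 40)
    slope = np.polyfit(r, np.log(B[r, 0]), 1)[0]
    d_eff[s0] = -1.0 / slope
    print(f"s_0 = {s0}: effective decay constant ~ {d_eff[s0]:.1f} lattice units")
```

Pushing the eigenvalue at k = 0 toward 1, i.e., moving the network closer to the instability, visibly enlarges the fitted decay constant, which is the mechanism behind long-range covariances despite short-range connections.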
Writing equation (4) in terms of A^−1(x) = 1 + B(x) shows that, for nonzero distances, the variance of covariances is proportional to B(x). Hence, the effective decay constant of the variances is given by d_eff. Further details of the above derivation can be found in the supplement, sections S3-S11.