Sparse clustered inhibition projects sequential activity onto unique neural subspaces

Neural activity in the brain traces sequential trajectories on low dimensional subspaces. For flexible behavior, these neural subspaces must be manipulated and reoriented within tens of milliseconds. Using mathematical analysis and simulation of a recurrently connected neural circuit for sequence generation, we report that incorporating a subtype of interneurons that provides sparse but clustered inhibition enables the projection of sequential activity onto task- or context-specific neural subspaces. Depending on the sparsity of inhibitory projections, neural subspaces could be arbitrarily rotated with respect to the intrinsic subspace, without altering the key aspects of sequence generation. Thus, we propose a circuit motif and mechanism by which inhibitory interneurons can enable flexible switching between neural subspaces on a fast timescale of milliseconds, controlled by top down signals.

Significance Statement

Cognitive faculties like memory, movement, and decision-making depend on the reliable sequential activation of neurons in the brain. These activity sequences reside on low dimensional neural subspaces, with different tasks or behaviors represented on different subspaces. Flexible cognition therefore requires changes in subspace orientation on timescales as fast as tens of milliseconds. Here we propose a mechanism for fast timescale control of the neural subspace in a recurrently connected neural network model for sequence generation. Subspace manipulation is implemented by inhibitory neurons that form sparse, clustered axonal projections onto the neurons responsible for sequence generation. Ensembles of these inhibitory neurons rapidly rotate the neural subspace and support storage of and dynamical switching between neural activity sequences on low dimensional manifolds.

Sequential activity of neurons has been observed throughout the central nervous system as a reliable neural correlate of behavior, for example, in spinal circuits and motor areas during movement (Lindén et al., 2022; Gallego et al., 2018), in higher cortical areas during decision making (Harvey et al., 2012), and within the hippocampus supporting navigation and memory (Skaggs et al., 1996; Pastalkova et al., 2008; Buzsáki and Tingley, 2018).
In neural networks, sequential activity has been shown to live on low dimensional neural subspaces, which arise due to the correlation structure of the participating neurons (for review, see e.g. Cunningham and Yu, 2014; Ebitz and Hayden, 2021; Chung and Abbott, 2021). Moreover, different tasks or behaviors are represented as a change in the orientation of the neural subspaces (Gallego et al., 2018; Elsayed et al., 2016; Tang et al., 2020). Computational studies have demonstrated that reorienting the neural subspace generated by a recurrently connected network requires either incremental learning over days (Feulner and Clopath, 2021) or full rewiring of the network (Wärnberg and Kumar, 2019). Thus, it remains unclear how neural subspaces can be flexibly and dynamically reoriented on fast (behaviorally relevant) timescales, in particular without disturbing the underlying sequential dynamics.
In such networks, the functional role typically ascribed to inhibitory interneurons is to balance the recurrent excitation. A number of studies have observed broad non-specific inhibition (Fino and Yuste, 2011; Packer and Yuste, 2011; Packer et al., 2013), consistent with the role of interneurons in modulating gain (Chance et al., 2002) and upholding excitation-inhibition (EI) balance (Isaacson and Scanziani, 2011; Vogels et al., 2011; Fino et al., 2013; Hu et al., 2014). However, given the diversity of interneurons in the brain (Gupta et al., 2000; Somogyi and Klausberger, 2005; Klausberger and Somogyi, 2008; Gelman and Marín, 2010; Tremblay et al., 2016; Pelkey et al., 2017; Lim et al., 2018), while certain interneuron subtypes may balance excitation, it is likely that others play a different role. Recent studies have found nonrandom inhibitory connectivity motifs (Rieubland et al., 2014; Espinoza et al., 2018; Peng et al., 2021) and shown that inhibitory synapses can be spatially clustered (Chen et al., 2012), selectively connect to specific dendritic branch types (Bloss et al., 2016), and coordinate their plasticity with excitatory synapses in close spatial proximity (Ravasenga et al., 2022). Such structured connectivity could support more sophisticated computational functions beyond maintaining a global EI balance.
Here we exploit this interneuron diversity and show that while one subtype of interneurons preserves EI balance, a second subtype can form the necessary circuit for fast, dynamic, and flexible manipulation of the neural subspace. In a network model of sequence generation based on spatially asymmetric local connectivity, we report that sparse but clustered feedforward inhibition can rapidly manipulate the neural subspace while preserving sequence generation. Thus, in this circuit, one inhibitory population is crucial for balancing excitation during sequence propagation, and a second inhibitory population, which we refer to as selective inhibition, shapes the network activity, selecting unique neural subspaces and, by this, endowing the network with the computational benefit of sequence selection. The proposed circuit motif assigns complementary roles to multiple interneuron subtypes and provides a mechanism for storing and dynamically selecting between sequences in one recurrent circuit.

Selectively inhibiting neurons can rotate the neural subspace
Neural activity of a subset of N neurons recorded for T time steps can be described as a matrix A ∈ R^(N×T), with each column of the matrix containing the activity vector, e.g. the firing rates r_t ∈ R^N at time t. Recent research has uncovered that neural dynamics, though they exist in an N dimensional space (one axis per neuron), tend to be of much lower dimensionality (see e.g. Cunningham and Yu, 2014; Gao et al., 2017; Ebitz and Hayden, 2021). For example, applying principal component analysis to the activity matrix A from either neural data or a mathematical model for sequence generation reveals a K ≪ N dimensional subspace, in experimental data often with K ∼ 10.
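This dimensionality reduction can be sketched numerically. The example below is illustrative, not data from the paper: a synthetic raster of N neurons with a Gaussian bump of activity moving around a ring for T time steps, followed by PCA on the mean-centered activity matrix. The parameters (N, T, bump width) are assumptions.

```python
import numpy as np

# Synthetic sequential activity: a Gaussian bump moving around a ring.
N, T, sigma = 100, 200, 5.0
positions = np.arange(N)[:, None]
centers = np.linspace(0, N, T)[None, :]               # bump position over time
dist = np.minimum(np.abs(positions - centers), N - np.abs(positions - centers))
A = np.exp(-dist**2 / (2 * sigma**2))                 # activity matrix A in R^(N x T)

# PCA via SVD of the mean-centered activity (mean over time, per neuron)
Ac = A - A.mean(axis=1, keepdims=True)
s = np.linalg.svd(Ac, compute_uv=False)
var_explained = s**2 / np.sum(s**2)
K = int(np.searchsorted(np.cumsum(var_explained), 0.95)) + 1
print(K)   # number of PCs capturing 95% of the variance, K << N
```

For this synthetic raster, a small number of components suffices, consistent with the low dimensional subspaces described above.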
Here we investigate how this low dimensional subspace can be dynamically manipulated on a fast timescale (tens of milliseconds). Starting with a recurrently connected network of excitatory and inhibitory neurons and a connectivity structure that enables sequence generation (Fig. 1a), we consider a second population of interneurons that provides sparse, functionally clustered inhibition. Functionally clustered here means that a co-active ensemble of interneurons clusters its synapses onto a subset of the excitatory and inhibitory neurons responsible for sequence generation, with the remaining neurons receiving negligible input. The degree of sparsity refers to the fraction of neurons that receive this inhibitory input.
To understand the effect of sparse clustered inhibition on a neural subspace, we can treat this selective inhibition as a mathematical rotation of the subspace of neural activity. To illustrate this, we start with a simple example ignoring recurrent connections and consider three excitatory neurons that form part of a sequence (Fig. 1b), one of which receives an inhibitory input. Under baseline conditions, the activity sequence of the three neurons lies on a neural subspace; in this case the activity traces an ellipse on a 2D plane (Fig. 1d, for a detailed description see Supplementary Text S1). Given feedforward inhibition, the neuron's gain is reduced (case i, Fig. 1c), which results in a rotation of the neural subspace (Fig. 1d). If the selective inhibitory input is strong enough, the excitatory neuron is silenced (case ii, Fig. 1c) and the activity is projected onto the plane spanned by the remaining active neurons (Fig. 1e), also resulting in a rotation of the neural subspace.
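A numerical sketch of the silencing case (case ii) is given below. The phase-shifted sinusoids standing in for sequential activity are illustrative assumptions, so the resulting angle is specific to this toy choice, not a value from the paper.

```python
import numpy as np

# Three neurons with phase-shifted sinusoidal activity: the baseline
# trajectory traces an ellipse on a 2D plane in neuron space.
t = np.linspace(0, 2 * np.pi, 200)
A = np.stack([np.sin(t), np.sin(t + 2 * np.pi / 3), np.sin(t + 4 * np.pi / 3)])

P = np.diag([1.0, 1.0, 0.0])          # selective inhibition silences neuron 3
A_inh = P @ A

def plane(X):
    # orthonormal basis of the trajectory's 2D plane
    # (top two left singular vectors of the mean-centered data)
    U, _, _ = np.linalg.svd(X - X.mean(axis=1, keepdims=True))
    return U[:, :2]

# principal angles from the singular values of the basis product
cosines = np.linalg.svd(plane(A).T @ plane(A_inh), compute_uv=False)
angle_deg = np.degrees(np.arccos(np.clip(cosines, -1.0, 1.0))).max()
print(round(angle_deg, 1))            # largest principal angle between planes
```

Silencing one neuron rotates the trajectory's plane by a nonzero angle, the quantity that downstream decoding could exploit.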
With this we see that, in principle, selective inhibition can project neural activity onto a different subspace. A key point is that the old and new subspaces can be differentiated by an angle (Fig. 1d,e; see Supplementary Text S1), which downstream brain regions can use to decode activity on each subspace as a different neural activity sequence. Next we extend this concept to a recurrent neural network model, with which we can test the stability of sequence generation under manipulation of the neural subspace by selective inhibition.

A network model for inhibition-driven projection onto neural subspaces
To better understand the projection of neural activity by selective inhibition onto different subspaces, we considered a rate-based recurrent network for sequence generation. We use a standard rate-based model (see Methods) with time constant τ = 1 and first order numerical integration with step size Δt = 1. The firing rates of the neurons are then given by

r(t + 1) = W r(t) + I_E(t) + I_I(t),     (1)

where r ∈ R^N is the vector of firing rates of all neurons in the recurrent network and W ∈ R^(N×N) is the connectivity matrix. Input from the second inhibitory population is given by I_I ∈ R^N, in addition to the feedforward excitation I_E ∈ R^N. The connectivity matrix W includes the excitation and inhibition responsible for sequence generation on a ring, with distance-dependent spatially asymmetric excitation and global inhibition (Fig 2a, see Methods for derivation). This type of connectivity structure gives rise to a circulant recurrent weight matrix (Fig 2b, see Supplementary Fig. S1), which results in a localized bump of activity forming in the network. For symmetric distance-dependent excitatory projections following a Gaussian kernel, the activity bump remains stationary. However, if the excitatory projections are made asymmetric, for instance by shifting the center of the Gaussian kernel in one direction, the bump moves around the ring structure, generating a sequence (Fig 2c, Supplementary Fig. S1).
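A minimal sketch of this ring network is given below. All parameters (ring size, kernel width, shift, inhibition strength) are illustrative assumptions, and the per-step rectification and normalization are simplifications for the sketch, not the paper's exact update rule.

```python
import numpy as np

# Circulant connectivity from a Gaussian kernel whose center is shifted
# by `shift` neurons (the asymmetry), minus uniform global inhibition.
N, sigma, shift, g = 200, 6.0, 3, 0.02

delta = (np.arange(N)[None, :] - np.arange(N)[:, None]) % N     # pre - post offset
d = np.minimum(np.abs(delta - shift), N - np.abs(delta - shift))
W = np.exp(-d**2 / (2 * sigma**2)) - g                          # circulant W

x = np.arange(N)
d0 = np.minimum(np.abs(x - N // 2), N - np.abs(x - N // 2))
r = np.exp(-d0**2 / (2 * sigma**2))                             # initial bump

centers = [int(np.argmax(r))]
for _ in range(30):
    r = np.maximum(W @ r, 0.0)       # rectified rates
    r /= r.max()                     # normalize amplitude for this sketch
    centers.append(int(np.argmax(r)))

# displacement of the bump peak over 10 steps (direction-agnostic)
disp10 = min((centers[0] - centers[10]) % N, (centers[10] - centers[0]) % N)
print(disp10)                        # roughly 10 * shift neurons
```

The bump travels around the ring at a speed set by the kernel shift, the asymmetric-kernel mechanism described above.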
To this sequence generation network we add a second inhibitory subpopulation, which forms sparse clustered projections onto the ring network (Fig. 2a). Neurons receiving inhibition from the second population are distributed uniformly on the ring, meaning that this connectivity is uncorrelated with the spatially local connectivity within the sequence generation network.
In order for this selective inhibitory input to generate a subspace projection, we make two assumptions. First, we assume that the inhibitory input remains constant over a time interval t ∈ (t_1, t_2), which could range from milliseconds to seconds or longer. This means that for the subset of neurons i ∈ S receiving selective inhibition we have I_I,i(t) = −J, while for the rest, i ∉ S, we have I_I,i(t) = 0. We can rewrite the above equation for the firing rate of neuron i as

r_i(t + 1) = Σ_j w_ij r_j(t) + I_E,i(t) + I_I,i(t),

where the sum is over all inputs j to neuron i.
The second assumption is that the total inhibitory input is strong enough to silence the neurons receiving selective inhibition. For J large enough, neurons in S remain inactive, that is,

r_i(t) = 0 for i ∈ S.

By defining

p_i = 0 for i ∈ S and p_i = 1 for i ∉ S,

we can rewrite the firing rate update equation as

r_i(t + 1) = p_i ( Σ_j w_ij r_j(t) + I_E,i(t) ).

This specifies a projection matrix P = diag(p_1, ..., p_N) with the p_i's along its diagonal, i.e. zeros in rows of neurons that receive inhibition and ones in rows of neurons that remain active. Thus, Equation 1 becomes

r(t + 1) = P ( W r(t) + I_E(t) ),

which describes a projection of the circuit activity at each time step onto the subspace spanned by the neurons i ∉ S.
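The projection can be written directly in code. The sketch below uses an arbitrary random weight matrix as a placeholder for W, only to verify the algebra of P = diag(p).

```python
import numpy as np

# p_i = 0 for silenced neurons (i in S), p_i = 1 otherwise; P = diag(p),
# so r(t+1) = P (W r(t) + I_E(t)). W, r, and S are placeholders here.
rng = np.random.default_rng(1)
N, p_inh = 100, 0.3
silenced = rng.random(N) < p_inh               # the set S, uniform at random
P = np.diag((~silenced).astype(float))

W = rng.standard_normal((N, N)) / np.sqrt(N)
r = rng.random(N)
I_E = np.zeros(N)                              # no external drive in this sketch
r_next = P @ (W @ r + I_E)

# silenced neurons are clamped to zero; active neurons keep their input
print(r_next[silenced].max(), np.allclose(r_next[~silenced], (W @ r)[~silenced]))
```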
Here we consider the scenario where a transient input triggers bump formation and subsequent sequence progression is intrinsic to the recurrent network. This means that after the bump is initialized, the excitatory input ceases (I_E = 0) and the recurrent network locally propagates the sequence. Thus, for Figures 2 and 3 our rate model further simplifies to

r(t + 1) = P W r(t) + I_E(t),

with r initialized as a bump of activity at t = 0 and I_E(t) = 0 thereafter (see Methods).
Taken together, sparse clustered inhibition can be described as a mathematical projection (P) acting on the sequence generation network (W r). For this type of subspace manipulation to be useful, it is crucial that the inhibitory projections from the second interneuron population do not significantly impede the network's ability to generate and propagate sequential activity, which we investigate next.

Sparse clustered inhibition preserves sequence generation
Without selective inhibition, the recurrent model described above robustly generates sequential activity (see Supplementary Fig. S1; see also Amari, 1977; Zhang, 1996; Pinto and Ermentrout, 2001; Lu et al., 2011; Spreizer et al., 2019). Sequence generation requires the stable progression between subsequent activity states. In the ring network, this is modeled as the formation of stable bumps of activity and their movement through the network. Bump formation arises due to the distance-dependent excitatory projections and global inhibition, which make the connectivity matrix circulant. The eigenvectors of a circulant matrix are Fourier modes, i.e. complex exponentials, which are spatially periodic (Davis, 1979; Gray, 2006). The activity bump arising in the network is determined by the Fourier mode corresponding to the dominant eigenvalue. If the distance-dependent excitatory projections are spatially symmetric, then the connectivity matrix W is also symmetric and the eigenvalues are real, which leads to a stationary bump. For asymmetric excitatory projections (i.e. stronger projections in one direction, a shifted Gaussian kernel), eigenvalues with nonzero imaginary part emerge and the bump begins to move through the network (see Supplementary Fig. S1). The magnitude of the imaginary part of the largest eigenvalue is proportional to the amount of asymmetry, i.e. the shift of the Gaussian kernel, as well as to the speed of bump movement (see Supplementary Fig. S1; for similar observations in 2D networks see Spreizer et al., 2019).
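This eigen-structure can be checked numerically. Since the eigenvalues of a circulant matrix are the DFT of its first row, the imaginary part of the first spatial Fourier mode can be read off directly; the kernel parameters below are illustrative assumptions.

```python
import numpy as np

# First row of the circulant W: Gaussian kernel on the ring, shifted by
# `shift` neurons to make the projections asymmetric.
def first_row(N, sigma, shift):
    delta = np.arange(N)
    d = np.minimum(np.abs(delta - shift), N - np.abs(delta - shift))
    return np.exp(-d**2 / (2 * sigma**2))

N, sigma = 200, 6.0
# |Im| of the first spatial Fourier mode for increasing kernel shifts
imag_parts = [abs(np.fft.fft(first_row(N, sigma, s))[1].imag)
              for s in (0.0, 1.0, 2.0, 4.0)]
print(imag_parts)   # zero for the symmetric kernel, growing with asymmetry
```

The symmetric kernel yields a purely real spectrum (stationary bump), and the imaginary part grows monotonically with the shift, matching the proportionality noted above.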
While the recurrent dynamics without selective inhibition enable sequence generation, we asked whether the projection matrix P preserves this property. For sequence generation to be preserved, applying the projection P to W should not substantially change its eigendecomposition; in particular, the dominant eigenvalue and eigenvector should be robust to this projection. To evaluate this, we generated projection matrices P silencing a fraction of the neurons from p_inh = 0 up to p_inh = 0.95, in each case for 10 different projection matrices P. As expected, when an increasing fraction of neurons was silenced, the eigenvalues scaled linearly as (1 − p_inh)λ, leading to a disappearance of sequence propagation. Thus, for stable sequence generation, the recurrent weights must be rescaled by 1/(1 − p_inh). After accounting for this, we found that the eigenspectrum of PW was nearly identical to that of W, even when up to 95% of neurons were silenced (Fig. 2d,e). Furthermore, the principal eigenvector of PW was also highly preserved (Fig. 2f,g). The normalized principal eigenvector of W lies on the unit circle in the complex plane, and applying P to W only slightly distorted this form (Fig. 2f). We repeated the analysis for a range of asymmetry values and found that the results held, with the largest changes in the eigendecomposition occurring for small shifts close to the symmetric case, corresponding to very slow moving or nearly stationary bumps (Supplementary Fig. S2).
Applying the projection P is equivalent to setting a subset of rows of W to zero. Setting all entries in row i to zero corresponds to removing the inputs to neuron i, which silences its activity. Setting row i of W to zero further implies that column i of W effectively becomes zero as well, since that column represents the outputs of the now silenced neuron. We constructed the reduced matrix formed by removing the rows and columns of W corresponding to the silenced neurons and found that its eigendecomposition is equivalent to that of PW (Supplementary Fig. S3). Visually inspecting these reduced matrices makes clear that, because neurons are silenced uniformly at random, the reduced matrices remain close to circulant (Supplementary Fig. S3), giving further intuition for the preserved eigendecomposition of PW.
To confirm the results comparing the eigendecompositions of PW and W, we ran simulations to assess sequence progression under subspace projections for fractions silenced from p_inh = 0 to p_inh = 0.95, in each case for 10 different projection matrices P. We initialized a bump on the ring at t = 0 (see Methods) and evolved the firing rates according to Equation 9. To evaluate sequence progression, at each time step we fit a Gaussian to the activity (see Methods). We computed the amplitude and width (standard deviation) of the fitted Gaussian, as well as its center position, which allowed us to estimate the bump's speed. In the case without selective inhibition (p_inh = 0), activity bumps have a constant width and speed (Fig. 2h). We adjusted the recurrent weights such that for p_inh = 0 the activity bump decayed in amplitude to half its original height by the end of the simulation. For each fraction silenced p_inh we computed the instantaneous speed, width, and amplitude of the bump at each time step relative to the values for p_inh = 0. We computed the mean and standard deviation across time steps to quantify how variable the bump's size and movement are along the ring. On average, the width and speed of the bump remained stable even for a large fraction of silenced neurons, with the variability increasing with the fraction silenced. The amplitude was similar across a large range of fractions silenced, decreasing faster on average and becoming more variable for higher fractions of silenced neurons (Fig. 2h). Similar results were found for different shift magnitudes (Supplementary Fig. S4), with sequence progression becoming unstable only for low asymmetry and a high fraction silenced. Thus, sequence progression remains stable even when clustered inhibitory inputs densely innervate the sequence generation network, provided the spatial asymmetry is large enough for robust bump movement.
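The per-timestep bump characterization can be sketched as follows. The paper fits a Gaussian by least squares (see Methods); this moment-based version, using the circular mean and circular standard deviation, is an illustrative stand-in, and the bump parameters are assumed.

```python
import numpy as np

# Amplitude from the peak rate, center from the circular mean, width from
# the circular standard deviation of the rate profile on the ring.
def bump_stats(r):
    N = len(r)
    theta = 2 * np.pi * np.arange(N) / N
    z = np.sum((r / r.sum()) * np.exp(1j * theta))       # resultant vector
    center = (np.angle(z) % (2 * np.pi)) * N / (2 * np.pi)
    width = np.sqrt(-2 * np.log(np.abs(z))) * N / (2 * np.pi)
    return r.max(), center, width

N, sigma, c0 = 200, 6.0, 70                               # synthetic bump
x = np.arange(N)
d = np.minimum(np.abs(x - c0), N - np.abs(x - c0))
r = 0.8 * np.exp(-d**2 / (2 * sigma**2))

amp, center, width = bump_stats(r)
print(amp, round(center, 2), round(width, 2))             # ~0.8, ~70, ~sigma
```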
Increased variability in the Gaussian fit to the activity bump along the ring is expected: although neurons are silenced according to a uniform distribution on the ring, for each particular projection matrix P there is variability in the number of silenced neurons at each spatial location. Variability in neural activity, in this case in the number of active neurons (i.e. the population firing rate), is a natural feature of activity in biological brains and is not inherently problematic in a model, provided it does not render the network unstable, with activity dying out or exploding. Here we have shown that the network dynamics remain stable, as the eigendecomposition as well as sequence generation and progression were preserved even for very large fractions of neurons silenced by selective inhibition.

Sparse clustered inhibition projects sequential activity onto unique neural subspaces
Next we tested our hypothesis that sparse clustered inhibition can reorient the neural subspace. For this, we quantified the difference between projections onto different subspaces as a function of the inhibitory sparsity, that is, the fraction of neurons silenced. For each fraction silenced, again from p_inh = 0 to p_inh = 0.95, we considered 10 inhibitory ensembles, which, as described above, equates to 10 different projection matrices. Neurons receiving inhibition from an inhibitory ensemble were again chosen uniformly at random.
As in the example shown in Fig. 1, to quantify differences in subspace orientation we first measured the angle between pairs of subspaces. As before, for each fraction of neurons silenced p_inh, we generated 10 different projection matrices P. We ran simulations for each of these cases and, for each activity raster A ∈ R^(N×T), computed the first three principal components and calculated the first principal angle between pairs of subspaces (Fig. 3c, see Methods). In agreement with our hypothesis, the angle between the subspaces increases with the fraction of neurons silenced, reaching nearly 90° for 95% silencing. Thus, depending on the level of sparsity, clustered inhibition can reorient the neural subspace from nearly aligned to completely orthogonal, exhibiting a large range of possible behavior.
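The principal-angle measure can be sketched with synthetic rasters. Below, a bump-sequence raster is compared with a projected copy of itself at p_inh = 0.5; all parameters are illustrative, and the exact angle values are specific to this toy setup.

```python
import numpy as np

def pca_basis(A, k=3):
    # orthonormal basis of the top-k PCA subspace of a raster (N x T)
    A = A - A.mean(axis=1, keepdims=True)
    U, _, _ = np.linalg.svd(A, full_matrices=False)
    return U[:, :k]

def first_principal_angle_deg(Q1, Q2):
    # first (smallest) principal angle from the largest singular value
    s = np.linalg.svd(Q1.T @ Q2, compute_uv=False)
    return np.degrees(np.arccos(np.clip(s.max(), -1.0, 1.0)))

rng = np.random.default_rng(4)
N, T, sigma = 100, 300, 5.0
pos, cen = np.arange(N)[:, None], np.linspace(0, N, T)[None, :]
dist = np.minimum(np.abs(pos - cen), N - np.abs(pos - cen))
A = np.exp(-dist**2 / (2 * sigma**2))              # baseline bump raster

P = np.diag((rng.random(N) >= 0.5).astype(float))  # one ensemble, p_inh = 0.5
angle_same = first_principal_angle_deg(pca_basis(A), pca_basis(A))
angle_inh = first_principal_angle_deg(pca_basis(A), pca_basis(P @ A))
print(angle_same, angle_inh)   # 0 for identical rasters, > 0 under inhibition
```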
As a second measure, we considered the projection magnitude of neural activity in each of the neural subspaces (Fig. 3b,c). Each neural activity raster A ∈ R^(N×T) has its own neural subspace in which its projection is maximal. When activity from another raster, which lives in a different subspace, is projected onto the subspace belonging to A, the relative projection magnitude depends on the alignment of the two subspaces. This measure provides a method for visualization (Fig. 3b) and quantifies the possibility of making an error in decoding, since large magnitude projections in other subspaces make the trajectories harder to separate. We observed that the relative projection magnitude decreased as the fraction of silenced neurons increased, ranging from 100%, when no neurons were silenced, to around 5%, when 95% of neurons were silenced. This measure shows that the sparsity of clustered inhibition can determine a full range of behavior, from highly similar trajectories to highly separable trajectories living in orthogonal subspaces. Example subspace projections for different levels of inhibitory sparsity are shown in Fig. 3b.

As a final measure, we considered the amount of overlap in the subpopulations of neurons recruited by different ensembles of sparse clustered inhibition. For each inhibitory ensemble delivering sparse clustered inhibition, and in turn projecting neural activity onto its corresponding subspace, there is a subset of neurons that are silenced and a subset that remain active. We computed the amount of overlap in recruited neurons between pairs of subspaces as a function of the sparsity of inhibitory projections, i.e. the fraction of neurons silenced. The average proportion overlap decreased linearly with the fraction of neurons silenced by selective inhibition, with ρ ≈ 1 − p_inh (black line, Fig. 3d). Thus, the denser the innervation from selective inhibitory ensembles (p_inh → 1), the less overlap in recruited subpopulations (ρ → 0), paralleling the results for projection magnitude and principal angle.
Next we considered how the overlap depends on the total number of stored subspaces, or inhibitory ensembles. Here, the expectation is that as more subspaces are stored, there should be more overlap and alignment between pairs of subspaces. To quantify this, we considered the maximum proportion overlap as a function of the number of stored subspaces, testing simultaneous storage of 10, 20, 50, 100, 200, 500, and 1000 subspaces. Again, each subspace corresponds to the sequential activity of the neurons in the ring network that remain active under uniformly random silencing of p_inh · N neurons by an ensemble of sparse clustered inhibition. As expected, we observed that the maximum overlap increases with the number of neural subspaces (colored lines, Fig. 3d). However, even for n = 1000 subspaces the proportion overlap remains low and still decreases linearly with the fraction of neurons silenced. Even with a large number of subspaces, the proportion of neurons shared by any two subspaces remains low when enough neurons are silenced, e.g. the maximum overlap is less than 50% when 60% of neurons are silenced by selective inhibition (Fig. 3d). Thus, the large range of behavior in subspace orientation, from aligned to orthogonal, induced by sparse clustered inhibitory ensembles is preserved even when many subspaces are stored simultaneously.
To explain this effect analytically, we computed the complementary cumulative distribution function (ccdf ), which describes the probability that the overlap between two subspaces is more than proportion ρ ∈ [0, 1] (see Methods).The ccdf is given by where S 1 , S 2 are the set of neurons in subspaces #1 and #2, • N is the number of active neurons in a subspace (N , number of neurons in the network), and |S 1 ∩ S 2 | is the number of neurons that are active in both subspaces (overlap).For subspaces to be distinguishable, the probability of substantially overlapping should be low, though exactly how similar different subspaces can be, would depend on brain region, task, and neural coding scheme.This means the ccdf should approach zero above some threshold ρ thr for small positive constant and overlap threshold ρ thr ∈ [0, 1].To verify this, we plotted the ccdf for different sparsity values from fraction silenced p inh = 0.1 to p inh = 0.9 (Fig. 3e).As expected, independent of sparsity, the probability of overlapping goes to zero as ρ → 1.With denser inhibitory innervation, meaning a larger fraction of silenced neurons, the ccdf goes to zero increasingly fast.For example, when 60% of neurons are silenced by selective inhibition, i.e. p inh = 0.6, then the chance of two subspaces overlapping more than 50%, i.e. ρ = 0.5, is approximately 10 −7 (note log scale of y-axis, Fig. 
3e).A nice feature of the ccdf curves is that, for a given ρ thr and , they clearly show which level of sparsity is required from the selective inhibitory ensembles.Taken together, by looking at three measures, namely the angle between pairs of subspaces, the relative projection magnitude of trajectories onto other subspaces, and the amount of overlap in recruited subpopulations by different subspaces, we clearly see that a full range of behavior in subspace projections is possible.The important point is that subspace similarity is controlled by the sparsity of projections from the selective inhibitory ensembles onto the neurons that participate in sequence generation.When storing many subspaces, provided the inhibitory inputs cluster on a large enough subset of sequence generation neurons, selective inhibitory ensembles can innervate a random subset of neurons with very low probability of the subspace aligning with another subspace.
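Under uniformly random silencing, the overlap count between two subspaces follows a hypergeometric distribution, so the ccdf can be computed exactly. The sketch below is self-contained (log-binomial coefficients via lgamma); the parameter values are illustrative, and the resulting orders of magnitude are broadly consistent with the behavior described above.

```python
from math import lgamma, exp

def log_binom(n, k):
    # log of the binomial coefficient C(n, k)
    return lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)

def overlap_ccdf(N, p_inh, rho):
    # P(overlap proportion > rho) for two independent random subsets that
    # each keep m = (1 - p_inh) * N of N neurons (hypergeometric overlap)
    m = int(round((1 - p_inh) * N))
    k_min = max(int(rho * m) + 1, 2 * m - N)   # overlap is at least 2m - N
    log_denom = log_binom(N, m)
    return sum(exp(log_binom(m, k) + log_binom(N - m, m - k) - log_denom)
               for k in range(k_min, m + 1))

N, rho = 1000, 0.5
ccdfs = [overlap_ccdf(N, p, rho) for p in (0.2, 0.6, 0.9)]
print(ccdfs)   # drops sharply as the fraction silenced grows
```

At low sparsity (p_inh = 0.2) any two subspaces overlap by more than 50% almost surely, while at p_inh = 0.6 and beyond the probability is vanishingly small.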

A neural circuit to dynamically select and maintain subspaces
So far we have shown that sparse clustered inhibition preserves sequence generation while projecting the neural activity onto unique neural subspaces. Next we turn to our final hypothesis, namely that inhibitory ensembles with sparse clustered axonal projections onto neurons supporting sequence generation provide a mechanism for dynamically selecting and switching between neural subspaces. Since projections onto different subspaces in our model are performed by the activity of the inhibitory ensembles, we anticipated that switching between subspaces should be possible on the timescale of this neural activity, namely on the order of tens of milliseconds.
To test this hypothesis, we extended the model to include inhibitory ensembles connected reciprocally with the sequence generation network, with sequence selection controlled by a top down signal, possibly from a higher brain region (Fig. 4a). In the extended model, ensembles in the selective inhibition subpopulation inhibit one another, enforcing winner-take-all dynamics, meaning that only one ensemble is active at a time. Global excitatory projections from sequence generation neurons to the selective inhibition subpopulation keep the winning ensemble active. The underlying connectivity motif is shown in Fig. 4b.
We model the firing rate r^E_i of individual neurons in the sequence generation network as a function of their synaptic currents I^E_i (see Methods for more detail). In particular, we take the following well-established formulation for the rate dynamics (Dayan and Abbott, 2005),

τ dr^E_i/dt = −r^E_i + F(I^E_i),

with total synaptic current

I^E_i = Σ_j w^EE_ij r^E_j − Σ_k w^EI_ik r^I_k,

depending on the firing rate of ring network neurons r^E_j weighted by the recurrent weights w^EE_ij and the firing rate of the selective inhibition ensembles r^I_k weighted by their inhibitory weights w^EI_ik. The synaptic time constant is now set to τ = 10 ms, extending the results above where τ was 1. The recurrent weights w^EE_ij incorporate the local asymmetric excitation and global inhibition required for sequence generation, again forming a circulant connectivity matrix W = (w^EE_ij) as before. The output firing rate r^E_i is a function of the total synaptic current I^E_i, with a piecewise linear activation function F keeping the firing rate between 0 and 1. The firing rate of each ensemble of selective inhibitory neurons is modelled similarly as

τ dr^I_i/dt = −r^I_i + F(I^I_i),     (14)

where the total synaptic current I^I_i depends on the firing rate of ring network neurons r^E_j weighted by w^IE, the firing rate of the other selective inhibition ensembles r^I_k, k ≠ i, weighted by w^II, and the top down input I_ext,i(t). The top down input models excitatory input from an ensemble of neurons as a Gaussian function in time,

I_ext,i(t) = w_ext exp( −(t − t_ext,i)² / (2 σ²_ext) ),

where the firing rate of input ensemble i has its peak at t_ext,i with standard deviation σ_ext, weighted by the excitatory weight w_ext. Within this neural circuit for sequence generation and selection, we then tested the ability to dynamically switch between subspaces. We defined two selective inhibitory ensembles (Fig. 4a) that each project to a random subset of the sequence generation neurons, in this case with p_inh = 0.8, thus leaving 20%, or 200 of the N = 1000 neurons, active per subspace. Each inhibitory ensemble receives top down input, modeled to resemble the activation of an assembly of input neurons, as described above.
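The winner-take-all selection between ensembles can be illustrated with a reduced two-ensemble sketch. All parameters (weights, drive, pulse times, threshold) are illustrative choices rather than the paper's fitted values, the top down input is simplified to a square pulse rather than a Gaussian, and the maintaining excitation from the sequence network is replaced by a constant background drive.

```python
import numpy as np

def F(x):
    return np.clip(x, 0.0, 1.0)       # piecewise linear activation in [0, 1]

dt, tau = 1.0, 10.0                   # ms; tau as in the extended model
w_II, drive = 2.0, 0.6                # mutual inhibition, background drive
r = np.array([0.0, 0.0])              # rates of the two inhibitory ensembles
winners = []
for t in range(600):
    ext = np.array([1.0 if 50 <= t < 80 else 0.0,     # pulse to ensemble 0
                    1.0 if 300 <= t < 330 else 0.0])  # later pulse to ensemble 1
    I = drive + ext - w_II * r[::-1]                  # lateral inhibition
    r = r + dt / tau * (-r + F(I))
    winners.append(int(np.argmax(r)) if r.max() > 0.1 else -1)

print(winners[200], winners[500])     # winner switches after the second pulse
```

The background drive keeps the winning ensemble active after its pulse ends, so ensemble 0 dominates until the second top down pulse hands the competition to ensemble 1 within a few time constants, i.e. tens of milliseconds.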
The sequence generation network was initialized with a bump of activity and both inhibitory ensembles were set to be quiescent. At t = 0 a top down input arrives at the first inhibitory ensemble (Fig. 4d), which becomes active and suppresses the activity of the second ensemble (Fig. 4e). Activity of the first inhibitory ensemble projects the neural activity sequence onto the corresponding subspace, which is visible in the neural activity (Fig. 4f) as well as in PC space (Fig. 4c). The projection magnitude onto the first subspace is high (blue, Fig. 4g), while the projection magnitude onto the non-selected subspace is low (red, Fig. 4g).
The network remains stable in subspace #1 (blue) and the neural activity sequence unfolds as it should, until a second top down input arrives at the other inhibitory ensemble. This input activates the second ensemble, which silences the previously active inhibitory ensemble (Fig. 4e) and projects the activity onto subspace #2 (red, Fig. 4c,f,g). This can be observed clearly in the projections of the activity onto subspaces #1 and #2, where the top down input (onset depicted as a red dot, Fig. 4c) forces the neural trajectory off its course in subspace #1, shown in blue, and onto subspace #2, shown in red (Fig. 4c). The sequence selection and dynamic switching are also clearly visible in the projection magnitudes (Fig. 4g).
These results show that this neural circuit motif is capable of dynamically selecting and maintaining sequential activity on different neural subspaces, providing a mechanism for fast timescale manipulation of the neural manifold. Notably, this connectivity motif is very stable, since the neurons that are active in a given subspace excite the selective inhibition ensemble that in turn silences out-of-subspace neurons. This means that, while the connectivity is reciprocal, it does not form a destabilizing recurrent loop; instead, neurons participating in the sequence recruit lateral inhibition of out-of-subspace neurons via the selective inhibition subpopulation.

Discussion
Given the sequential dynamics of neural activity on task-dependent low dimensional manifolds, and the diverse connectivity structures made by the different neuron subtypes within the underlying networks, two important questions arise: (1) how can neural activity manifolds be altered on fast timescales, and (2) what kinds of computational functions may arise from different types of inhibitory neurons. In our work we address both of these questions. We have shown that in a recurrently connected network for sequence generation, a second inhibitory neuron subtype providing selective inhibition can project activity onto task- or context-specific neural subspaces. The sparsity of the clustered inhibitory projections controls the angle between subspaces, with a full range of possible behavior from aligned to orthogonal. Importantly, these neural subspaces preserve sequential dynamics, meaning that in each subspace, sequence generation and progression remain intact. Since projections onto subspaces are driven by the neural activity of inhibitory cell ensembles, selection and switching can occur flexibly on behaviorally relevant fast timescales. Based on this, we have proposed a neural circuit motif that enables dynamic switching between activity sequences on unique subspaces.

Predictions for connectivity motifs and neural activity patterns
Our model makes specific, experimentally verifiable predictions about connectivity motifs and their relationship with neural activity correlations. For example, training an animal on two tasks that require rotation of the neural subspace (as in the task used by Sadtler et al. (2014)) should result in the recruitment of different subsets of interneurons in each task. One subpopulation may be active in all tasks in order to maintain EI balance, while another subpopulation/subtype should be selective. The selective inhibitory ensembles should inhibit different but partially overlapping excitatory populations. Further, our model predicts specific second/third order correlations in II, IE, and EE activity patterns (see Supplementary Fig. S5). For instance, co-active (co-tuned) inhibitory neurons (correlated in the II correlation matrix) should inhibit the same subset of excitatory neurons (correlated in the IE correlation matrix), and these excitatory neurons should be distributed throughout the sequence (uncorrelated in the EE correlation matrix).
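This correlation signature can be illustrated with a toy calculation. All quantities below (two ensembles, ensemble sizes, a 50% target density, and perfectly shared target sets within an ensemble) are hypothetical placeholders; the point is only that co-tuned inhibitory neurons sharing a target set appear as block structure in the IE correlation matrix.

```python
import numpy as np

rng = np.random.default_rng(1)
n_e, n_i = 200, 20
ensemble = np.repeat([0, 1], n_i // 2)     # which ensemble each I neuron belongs to

# Each ensemble inhibits one shared, random subset of excitatory neurons.
targets = [rng.random(n_e) < 0.5 for _ in range(2)]
C_ie = np.array([targets[e] for e in ensemble], dtype=float)  # I-to-E connectivity

# Correlate the outgoing projection patterns of all inhibitory pairs.
corr = np.corrcoef(C_ie)

within = corr[0, 1]    # two neurons of ensemble #1: identical target sets
across = corr[0, -1]   # one neuron from each ensemble: independent target sets
```

In this idealized case, `within` is exactly 1 while `across` fluctuates around 0, mirroring the predicted co-tuning of inhibitory neurons that share excitatory targets.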
The proposed circuit also predicts an overrepresentation of non-random connectivity motifs. For example, co-active inhibitory neurons should cluster their projections onto a subset of excitatory neurons, which translates to an increase in convergent motifs where multiple inhibitory neurons target the same postsynaptic neuron. There should be more lateral inhibition recruited via the selective inhibition population (i.e. E1 → I1 → E2 but not E1 ↔ I1) and more reciprocal connections between excitatory neurons and the inhibitory subpopulation responsible for "EI balance" (i.e. E1 ↔ I1).
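A back-of-the-envelope simulation, with invented population sizes and a matched overall connection density, illustrates the predicted overrepresentation of convergent motifs under clustered projections:

```python
import numpy as np
from math import comb

rng = np.random.default_rng(2)
n_i, n_e, density = 50, 200, 0.2

def convergent_motifs(C):
    """Count pairs of inhibitory neurons converging onto the same E neuron."""
    in_deg = C.sum(axis=0)                 # inhibitory in-degree of each E neuron
    return sum(comb(int(k), 2) for k in in_deg)

# Unstructured control: every I-to-E connection is made independently.
C_rand = (rng.random((n_i, n_e)) < density).astype(int)

# Clustered alternative: two ensembles of 25 I neurons, each projecting to
# its own random 20% of E neurons, matching the control's density.
C_clust = np.zeros((n_i, n_e), dtype=int)
for rows in (slice(0, 25), slice(25, 50)):
    target = rng.random(n_e) < density
    C_clust[rows, :] = target.astype(int)
```

Because clustering concentrates in-degree on the targeted excitatory neurons (and the motif count grows roughly quadratically with in-degree), the clustered matrix yields several times more convergent pairs than the density-matched random control.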
In our model, different groups of inhibitory neurons underlie sequence generation and sequence selection. Identifying these subpopulations and disrupting their activity should differentially affect the generation of sequential dynamics versus the selection of a particular sequence/subspace. As such, we predict a separation of sequence generation and sequence selection at different synapses and/or neural subpopulations.
This work is motivated by sequential activity in the motor system, hippocampus, parietal cortex, and spinal circuits, amongst others. While the model introduced here is not specific to a particular brain region, it could be extended to more closely match a region of interest. Further comparison and integration with experimental results from these regions, such as connectivity and activity statistics and interneuron subtypes, are key steps to verify our findings.

Fast timescale dynamic switching between activity sequences on unique subspaces
Many situations demand that we dynamically switch between multiple behaviors. Dynamically reorienting the neural subspace to enable performance of the appropriate behavior is therefore crucial. So far, computational models have reoriented the intrinsic manifold of a recurrent network using learning (Wärnberg and Kumar, 2019; Feulner and Clopath, 2021). In this setting, reorienting the manifold was not trivial and required either large changes to synaptic weights (Wärnberg and Kumar, 2019) or incremental learning (Feulner and Clopath, 2021). However, this type of manifold manipulation is slow, and while perhaps feasible for learning new dynamics across many trials or days, it cannot explain the fast timescale switching between manifolds needed for flexible behavior. In our model, the projection of neural activity sequences onto neural manifolds is realised by the activity of inhibitory ensembles. Switching between subspaces is therefore fast and dynamic, taking place on the timescale of this activity.

Clustered inhibition for storage of multiple sequences in one recurrent network
The network model proposed here supports the storage of, and selection amongst, many neural activity sequences. When clustered inhibitory projections are dense enough, subspace projections have large differences in orientation, giving rise to unique subspaces. Critically, we showed that sequence progression is preserved in each of these subspaces. We determined the relationship between inhibitory sparsity and subspace alignment, as well as the overlap between neuronal subpopulations, both in simulation and analytically. Thus, for experimental neural recordings of multiple neural subspaces, the model predicts the range of inhibitory sparsity from the relative orientation of the measured subspaces.
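The flavor of this sparsity-overlap relationship can be checked in a few lines. Here each subspace is identified with the random set of neurons spared by its inhibitory ensemble (silencing fraction p_inh), and overlap is measured as the shared fraction of the whole network; under independent silencing the expected overlap is (1 - p_inh)^2. Both the normalization and the independence assumption are simplifications of the paper's analysis.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 1000          # neurons on the ring
n_pairs = 100     # random subspace pairs per sparsity level

def mean_overlap(p_inh):
    """Average fraction of neurons active in both of two random subspaces."""
    overlaps = []
    for _ in range(n_pairs):
        s1 = rng.random(n) > p_inh      # neurons spared by ensemble #1
        s2 = rng.random(n) > p_inh      # neurons spared by ensemble #2
        overlaps.append(np.mean(s1 & s2))
    return np.mean(overlaps)

# Simulated overlap vs. the analytic expectation (1 - p_inh)^2.
results = {p: (mean_overlap(p), (1.0 - p) ** 2) for p in (0.2, 0.5, 0.8)}
```

For each silencing fraction the simulated mean tracks the analytic value closely, which is the inversion the text describes: a measured overlap (or subspace angle) constrains the underlying inhibitory sparsity.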
Notably, other computational models based on sequential dynamics in cortex, hippocampus, striatum, and motor circuits have also considered storing multiple sequences. Previous studies have investigated storage of multiple sequences using non-local learning rules (Sussillo and Abbott, 2009; Laje and Buonomano, 2013), learning at recurrent synapses (Liu and Buonomano, 2009), different feedforward excitatory inputs (Murray and Escola, 2017), gain modulation (Stroud et al., 2018; Tsuda et al., 2021; Lindén et al., 2022), thalamo-cortical loops (Kao et al., 2021; Logiaco et al., 2021), multi-chart continuous attractors (Azizi et al., 2013; Spalla et al., 2021), or a common sequence that drives different readout networks (Maes et al., 2020). For the models employing thalamo-cortical loops as well as gain modulation via neuromodulatory inputs, the overarching principle is to generate different activity patterns by modulating the dynamics within a recurrent network via a second, external mechanism. The sparse clustered inhibitory motif proposed here draws on a similar conceptual argument.

The order of sequential activation in different sequences
In our model, given the local asymmetric connectivity, the sequential order of neuron activation remains preserved even when a subset of neurons is silenced by selective inhibition (i.e. the bump moves around the ring). Preserved co-activation of neurons across conditions (context, decision, behavior) has been observed in a number of experimental settings. As in our model, in turtle spinal motor circuits two different motor programs give rise to two distinct sequences of neural activity with a highly preserved sequential order of participating neurons (see Fig. 5i,j in Lindén et al., 2022). The magnitude of subspace projections across behaviors for these spinal cord sequences also agrees well with our results (Fig. 5m in Lindén et al., 2022). In posterior parietal cortex of mice making decisions to turn left or right, a subset of neurons is recruited into decision-specific sequences while others are active regardless of the decision, with preserved sequential order (see Supplementary Fig. 7 in Harvey et al., 2012).
In other studies, partial or complete reshuffling of neuron positions in different sequences has also been observed. In hippocampus, place field arrangements, and hence the order of neuronal activation in hippocampal activity sequences, are partially preserved and partially reshuffled depending on contextual variables (Kinsky et al., 2018; Muller and Kubie, 1987; Skaggs and McNaughton, 1998; Leutgeb et al., 2004; Gauthier and Tank, 2018; Spiers et al., 2015; Sanders et al., 2020). One factor that would produce some reordering of neurons when different subsets are silenced is making the local asymmetric recurrent connectivity sparse instead of fully connected. Randomness induced by sparse local recurrent connections should lead to partial reordering of neural activation, while the shift of the kernel should preserve the overall bump movement around the ring. Also, while only neuron-specific inhibition was considered here, we expect that with synapse-specific dendritic inhibition from selective inhibitory ensembles, the model could be extended to shuffle sequence order while preserving sequence generation. A mixture of somatic and dendritic inhibition would then control the extent to which neuron order is preserved or shuffled across sequences.

Conclusion
We considered sequential dynamics in terms of low dimensional manifolds. By leveraging inhibitory diversity, we devised a mechanism by which an inhibitory subpopulation with functionally clustered projections dynamically orients the neural manifold, thereby selecting between activity sequences.

Figure 1 .
Figure 1. Selective inhibition as a subspace rotation. (a) Schematic of a recurrent network of excitatory (red) and inhibitory (green) neurons for sequence generation. A second population of inhibitory neurons (blue) is responsible for sequence selection. Black arrows denote excitatory input. (b) Simplified circuit diagram of three excitatory neurons, with neuron 2 receiving inhibition. For this example, recurrent interactions are neglected. (c) Schematics of the firing rate of the selective inhibitory input affecting excitatory neuron 2, top, and the responses of the three excitatory neurons, below, as shown in b. Three cases for the inhibitory input onto neuron 2 are shown: a baseline condition, strong inhibition, and weak inhibition. Strong inhibitory input silences neuron 2, and weak input lowers its firing rate. (d,e) Joint activity of the neurons resides on a two dimensional subspace. Colors and numbering correspond to the cases in c. (d) Inhibition rotates the subspace. (e) Silencing neuron 2 rotates the circuit subspace onto the plane spanned by neurons 1 and 3.

Figure 2 .
Figure 2. Sparse clustered inhibition preserves sequence generation. (a) Schematic of network connectivity. Excitatory connections (red) on the ring are local and asymmetric while inhibition (green) is global, leading to sequence propagation. Sparse clustered inhibition (blue) targets neurons randomly along the ring. (b) Connectivity matrix W, left, and an example projection matrix P, right. Only 50x50 neurons are shown for visualization. (c) Activity bump moving on the ring for no projection (W), left, and a subspace projection (PW), right. Example shows fraction silenced p_inh = 0.6. (d) Example eigenvalue spectrum for W, red, and PW, blue, with fraction silenced p_inh = 0.6. (e) Maximum eigenvalue as a function of fraction silenced for W and PW. (f) Principal eigenvector for W and PW for the same example, p_inh = 0.6. (g) Radius of the real and imaginary parts of the normalized principal eigenvector as a function of fraction silenced for PW, blue lines. The normalized principal eigenvector for W lies on the unit circle, shown for reference in red. (h) Percent change in instantaneous speed, width, and amplitude of the activity bump as a function of fraction silenced. In e,g,h, lines show the mean and shaded regions the standard deviation over 10 different projection matrices.

Figure 3 .
Figure 3. Sparse clustered inhibition projects activity onto unique neural subspaces. (a) Schematic of the ring network for sequence generation and two inhibitory ensembles for sequence selection via projections onto specific subspaces. (b) Projections onto PC space for four examples of p_inh. Dark grey line shows the in-subspace and colored lines the out-of-subspace projections. (c) Projection magnitude of out-of-subspace projections and principal angle between subspaces as a function of fraction silenced. Pairwise comparisons for 10 different inhibitory ensembles, i.e. projection matrices P. Lines show the mean and shaded regions the standard deviation over the 45 possible pairs for each fraction silenced. (d) Proportion overlap between active neurons in pairs of subspaces as a function of fraction silenced, when n = 10 to n = 1000 subspaces were stored. Black line shows the mean for n = 1000 and shaded region the standard deviation. Colored lines show the maximum overlap over all pairs of subspaces for each number of stored subspaces. (e) Probability distribution P of two subspaces overlapping by ρ for different fractions of silenced neurons.

Figure 4 .
Figure 4. A neural circuit for dynamic subspace selection with sparse clustered inhibition. (a) Schematic of the network structure for sequence selection via dynamic rotation of subspaces with sparse clustered inhibition. Colors same as before; top down signal, black arrows. Two inhibitory ensembles compete via winner-take-all dynamics to project activity onto their selected subspace. (b) Circuit motif underlying the connectivity shown in (a). (c) Projections of the activity from (f) onto the neural subspaces. The color code distinguishes periods of time when inhibitory ensemble #1 was active, blue, vs. inhibitory ensemble #2, red. (d) Top down signal to the selective inhibitory ensembles. (e) Activity of the selective inhibitory ensembles. (f) Activity of neurons in the ring network. (g) Projection magnitude in subspaces #1 and #2, in blue and red, respectively.