Abstract
Brain function is defined by the interactions between the neurons of the brain. These neurons exist in tremendous numbers, are continuously active, and are densely interconnected, thereby forming one of the most complex dynamical systems known, and approaches to characterize the functional properties of such biological neuronal networks are lacking. Here we introduce an approach to describe these functional properties in terms of the network's constituents: the weights of the synaptic connections and the current activity of its neurons. We show how a high-dimensional vector field, which describes how the activity of each individual neuron is impacted at each instant of time, naturally emerges from these constituents. We identify the factors that impact the structural richness of that vector field, including how rapid changes in neuron activity continually reshape its structure. We argue that this structural richness is the foundation of the functional diversity, and thereby the adaptability, that characterizes biological behavior.
1 Introduction
Understanding the collaboration between multiple neurons remains a challenging issue within neuroscience. This is particularly true where the neuronal networks are recurrent, which is the case for example in the neocortex [1, 2, 6, 9] as well as in the spinal cord circuitry, including its feedback loops via the body (muscle activation and ensuing sensory activation providing feedback into the spinal cord) [3]. Such networks can generally not be understood as feed-forward networks, because the function of one neuron will depend on the current activity of other neurons. Hence the specific function of any given neuron will vary, i.e. it will be context-dependent. In such a case, it may be more accurate to think of the network activity as having a state space, with each individual neuron contributing to the current location of the network within that state space. But how to analyze the structure of such a network state space is not known. Hence, there is a need for a conceptual framework to quantify neuron population level interactions, while minimizing arbitrary impacts from the particular frame of reference used to approximate the nature of those interactions.
Here we aimed for a conceptual framework for analyzing multi-neuron interactions that is as assumption-free as possible. Brain tissue consists of many individual neurons, but the only way the relationship between neuronal activations is controlled, and therefore the only way the brain can generate the types of functions we associate with it, is by means of the synaptic connections between those neurons. Synaptic connections make the activity of the postsynaptic neuron dependent on the activity of the presynaptic neuron(s). If the presynaptic neuron is excitatory, its activity level and the weight of the synapse it makes on the postsynaptic neuron constitute a ‘force’ that will push the activity of the postsynaptic neuron towards higher levels for as long as the presynaptic neuron is active. The opposite applies if the presynaptic neuron is inhibitory. The postsynaptic neuron typically receives synapses from multiple presynaptic neurons, each of which has a given momentary activity level and synaptic weight; hence, at each time instant, the postsynaptic neuron is impacted by multiple forces that attempt to increase or decrease its activity level. In a neuronal network, such combined impacts are inevitable. Further details, such as the neuronal input-output functions, may also impact the network behavior, but the core behavior is dictated by how the neurons are synaptically connected to each other. The idea that the neurons, through their synaptic connections, become part of a force field is useful here, because we can then think of the neurons as impacting each other through this force field.
We introduce a framework to quantify neuron population level interactions without resorting to dimensionality reduction, as such methods are at risk of discarding critical aspects of the functional properties of the network [5]. The framework allows for a comparable quantification of the properties of any given network, regardless of its dimensionality (i.e. regardless of the number of neurons within the network).
2 Methods
We used a previously published neuron model [7], which is an emulation of a conductance-level neuron model with a static leak component representing the leak channels of the neuron membrane (Figure 1A). The original model also has a membrane time constant (‘dynamic leak’), which is omitted here because we were only interested in modelling the impact of the synaptic activity in the static setting (Equation 1). The impact exerted by one (presynaptic) neuron on another (postsynaptic) neuron was given by the activity of the presynaptic neuron multiplied by the weight of the synapse. Synaptic weights were set to (0…1) for synapses from excitatory neurons and to (−1…0) for synapses from inhibitory neurons; autapses and parallel synapses were not allowed. The impact of one neuron on another was defined as the vector component by which the activity of the receiving neuron would change as a result of that impact.
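As a minimal sketch of this computation (assuming the static impact reduces to the weighted sum of presynaptic activities; the function name `impact` and the optional `leak` coefficient are illustrative stand-ins, since Equation 1 is not reproduced in this section):

```python
import numpy as np

def impact(a, W, leak=0.0):
    """Per-neuron impact vector at network state `a`.

    a    : (n,) array of activity levels, each in [0, 1].
    W    : (n, n) weight matrix; W[i, j] is the weight of the synapse
           from presynaptic neuron j onto postsynaptic neuron i
           (excitatory in (0, 1], inhibitory in [-1, 0), zero diagonal
           since autapses are not allowed).
    leak : hypothetical static-leak coefficient; an assumed stand-in
           for the leak component of Equation 1.
    """
    return W @ a - leak * a
```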
The activity of individual neurons is bounded between 0 (no activity) and 1 (saturation or epilepsy). We can define a (bounded) state space where each dimension represents an individual neuron's activity level. Such a state space can be constructed for a network of arbitrary dimensionality. Essentially, we obtain a hypercube with dimensionality equal to the number of neurons in the network.
We can consider a two-dimensional, planar state space, where the dimensions represent the activity of the neurons: either two individual neurons at a time, or a population of neurons divided into two groups, one per dimension. The impact that the two (groups of) neurons would have on each other at a given combination of activity levels (or state) constitutes a vector. This vector is what would impact the network activity at that state. All the vectors present in a given plane (calculated at a chosen activity resolution) give the vector field of that plane. This vector field reflects the forces, dictated purely by the synaptic connections, that act upon that planar state space.
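A sketch of how such a planar vector field could be sampled on a grid, reusing the hypothetical `impact` function above (the function name and default resolution are illustrative):

```python
def vector_field_2d(W, i, j, fixed, resolution=21, leak=0.0):
    """Sample the vector field on the plane spanned by neurons i and j.

    `fixed` is a full (n,) state vector supplying the activity levels
    of all neurons not shown on the axes. Returns the grid coordinates
    and the (i, j) components of the impact at every grid point.
    """
    grid = np.linspace(0.0, 1.0, resolution)
    X, Y = np.meshgrid(grid, grid)
    U, V = np.zeros_like(X), np.zeros_like(Y)
    for r in range(resolution):
        for c in range(resolution):
            a = fixed.copy()
            a[i], a[j] = X[r, c], Y[r, c]
            v = impact(a, W, leak)
            U[r, c], V[r, c] = v[i], v[j]
    return X, Y, U, V
```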
To analyze different properties of the vector field as we manipulated its generating components, we tracked characteristic points such as attractor points. For each vector field, we calculated a critical point, where the vector length was close to zero. This corresponds to a point toward which the network, in terms of its activity distribution across the neuron population, would have a high probability of being drawn.
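Numerically, the critical point can be approximated as the grid point with the smallest vector length; a sketch, assuming the grid sampling above:

```python
def critical_point(X, Y, U, V):
    """Grid point with the smallest vector length: a numerical proxy
    for the critical (attractor) point of the sampled field."""
    length = np.hypot(U, V)
    r, c = np.unravel_index(np.argmin(length), length.shape)
    return X[r, c], Y[r, c], length[r, c]
```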
The complexity of a vector field was derived from the angle and length properties of its vectors. We calculated the angle of each vector in the range of 0 to 2π, as well as the length of each vector. The standard deviation of the vector lengths across the vector field was used to identify the shortest vectors, defined as those between 0 and 1 standard deviation long. We then used the proportion of the angles (binned at 1 degree) covered by these shortest vectors as our complexity measure. Note that vector fields that had no short vectors would be likely to end up close to zero on this complexity measure. Hence, the complexity measure was high for vector fields with extensive areas of short vectors of different angles, which in turn is indicative of a vector field with a higher number of potential solutions (or critical points) in that plane of the network.
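A sketch of this complexity measure, following the definitions above (shortest vectors taken as those within one standard deviation of length, angles binned at 1 degree):

```python
def complexity(U, V):
    """Proportion of 1-degree angle bins covered by the shortest
    vectors, defined as those between 0 and 1 standard deviation long."""
    length = np.hypot(U, V).ravel()
    angle = np.mod(np.arctan2(V, U), 2 * np.pi).ravel()  # range [0, 2*pi)
    short = length <= length.std()
    covered = np.unique(np.floor(np.degrees(angle[short])).astype(int))
    return covered.size / 360.0
```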
3 Results
Recurrent connections between excitatory neurons are common in the neocortex and provide a simple basis on which we can introduce the concept of how the interactions within a network of neurons can be quantified as vector fields (Figure 1). Basically, if a neuron is connected to another neuron with a synapse, then the activity of the second neuron is dependent on the activity of the first neuron. Hence, if the synapse is excitatory, the first neuron will tend to push the activity of the second neuron towards a higher value. But that impact will depend on the activity level of the first neuron: if that activity is very low or zero, it will have no or very low impact on the second neuron, and vice versa. This can be seen in Figure 1C: for example, along the x axis where the y axis value is zero, the y-component of the vector gradually increases with the x axis value.
However, the cortex is rich in recurrent axon collaterals that create recurrent connectivity, and we started out with the assumption of the most complex scenario, i.e. that nearby excitatory cortical neurons are likely to be connected to each other. Figure 1B illustrates a network with only two excitatory neurons that are reciprocally connected to each other. The interaction between them creates a vector component that illustrates with what magnitude the two neurons impact each other at a given activity level. The resulting vector field across all activity levels is shown in Figure 1C. As the two neurons form a positive feedback loop, they will tend to push each other's activities up towards the upper right corner of Figure 1C. In cases where one or both neurons are instead inhibitory, the structure of the vector field reorganizes accordingly (Figure 1D). Notably, the only neutral position within these vector fields, where the vector lengths approach zero, is where both neurons have zero activity (the ‘attractor point’).
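This two-neuron case can be reproduced with the sketches from Section 2 (the weight value 0.6 is illustrative):

```python
# Two reciprocally connected excitatory neurons (weights illustrative).
W = np.array([[0.0, 0.6],
              [0.6, 0.0]])
X, Y, U, V = vector_field_2d(W, i=0, j=1, fixed=np.zeros(2))
# Every vector points toward higher activity (positive feedback); the
# only near-zero vector, the attractor point, sits at (0, 0).
print(critical_point(X, Y, U, V))  # -> (0.0, 0.0, 0.0)
```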
We can increase the dimensionality of the purely excitatory system by adding a neuron to obtain a 3-neuronal network (Figure 1E). In this case, one neuron will be impacted by the synaptic connections and activities of the other two neurons. Using the same approach as before, we can calculate the now 3-dimensional vector field as seen in Figure 1F. Analogously to the 2-dimensional network, we can select a ‘Euclidean plane’ from the full, 3-dimensional state space to compare the impact of two selected neurons (Figure 1G).
However, to select a Euclidean plane we must specify, or fix, the activity of the neuron that is not displayed on the axes, which we will denote the ‘perpendicular neuron’. Fixing the activity of the perpendicular neuron at different levels results in different vector fields, as a consequence of the interconnectivity. For each vector field we can compute the location of the attractor point, which can lie within the state space but can also fall outside the domain of activity level values.
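A sketch of this slicing procedure for a hypothetical all-excitatory 3-neuron network (all weight values illustrative):

```python
# Three excitatory neurons; neuron 2 is the perpendicular neuron whose
# activity is fixed at several levels while neurons 0 and 1 span the axes.
W3 = np.array([[0.0, 0.5, 0.4],
               [0.5, 0.0, 0.4],
               [0.3, 0.3, 0.0]])
for a_perp in (0.0, 0.25, 0.5, 0.75, 1.0):
    fixed = np.array([0.0, 0.0, a_perp])
    X, Y, U, V = vector_field_2d(W3, i=0, j=1, fixed=fixed)
    print(a_perp, critical_point(X, Y, U, V))
# With excitatory input from the perpendicular neuron, the true zero of
# the field lies at negative activities, i.e. outside the [0, 1] domain.
```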
As seen in the neuron model definition (Equation 1), the impact one neuron has on another neuron that it is synaptically connected to depends on the weight of the synapse and the activity of the presynaptic neuron. Any change in these parameters causes the attractor point to shift location. These shifts of the attractor point through activity manipulation are what we call attractor trajectories. The attractor point locations and the attractor trajectories follow patterns which can be inferred from the synaptic weight and neuron activity manipulations. Considering a 3-neuronal network consisting of 2 excitatory and 1 inhibitory neuron (Figure 2A left), we can investigate the impact neurons N1 and N2 have on each other, with respect to the synaptic weights and activity of the inhibitory (perpendicular) neuron, by tracking the attractor point location. When the outgoing synaptic connections from the inhibitory neuron to the two excitatory neurons have equal weights, the impact on the two will also be equal, which results in the attractor point being located on the diagonal for any given fixed activity value. As the inhibitory neuron activity is continuously increased, the attractor point location correspondingly moves along the diagonal towards the upper right (Figure 2A right). However, if the synaptic weights are not uniform, the attractor point location will be displaced from the diagonal because of the greater impact on one of the excitatory neurons relative to the other. The magnitude of the displacement from the diagonal depends on the relative size of the outgoing synapse weights: the larger the difference between the weights, the larger the displacement from the diagonal (Figure 2B). If we fix the difference in synaptic weights but modify the activity, the attractor point follows the same scheme as before: increased activity in the inhibitory (perpendicular) neuron results in the attractor point being located further from the origin (the (0,0) point). With this, we can see that as the weights become more skewed, more of the state space becomes reachable and manipulable via the attractor point.
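A sketch of such an attractor trajectory, reusing the earlier hypothetical helpers (the weight values and the number of activity steps are illustrative):

```python
# Two excitatory neurons (the axes) inhibited by a perpendicular neuron
# (index 2) with skewed outgoing weights.
w1, w2 = -0.4, -0.8
W = np.array([[0.0, 0.5, w1],
              [0.5, 0.0, w2],
              [0.0, 0.0, 0.0]])
trajectory = []
for a_inh in np.linspace(0.0, 1.0, 11):
    fixed = np.array([0.0, 0.0, a_inh])
    X, Y, U, V = vector_field_2d(W, i=0, j=1, fixed=fixed)
    x, y, _ = critical_point(X, Y, U, V)
    trajectory.append((a_inh, x, y))
# With w1 == w2 the tracked points lie on the diagonal and move away
# from the origin as a_inh grows; skewed weights (w1 != w2) displace
# the trajectory from the diagonal.
```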
By skewing the synaptic weights, we are able to exert more independent, or fine-tuned, impacts than if we had uniform weights. If we now consider a slightly larger network (Figure 2C left) in which we instead have 2 perpendicular neurons, both inhibitory, with their synaptic weights skewed in favor of different excitatory neurons (depicted on the axes), then by manipulating the activities of the inhibitory neurons (Figure 2C right) we can generate a complex attractor trajectory (Figure 2D). At each activity level setting we obtain a vector field that governs the dynamics of the state space (Figure 2E).
As we can manipulate the vector field through our choice of synaptic weights, configured activity levels, and the choice of neuron populations depicted on the axes, we wanted a metric to quantify this complexity. To this end, we defined the complexity of a single vector field to depend on the lengths and directions of the vectors in the vector field, as defined in Section 2. Some example vector fields and their complexity measures are shown in Figure 3. We can also quantify the complexity of a network through the complexity of the vector fields that we can draw from it. Therefore, this approach can be extended to a network of any dimensionality. Note that the number of vector fields needed to reliably reflect the complexity of the entire network increases with the number of dimensions.
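As a sketch, the network-level measure could be estimated by averaging the planar complexity over many sampled planes (the network size, excitatory fraction, and number of sampled planes below are assumptions for illustration, not values from this study):

```python
# Estimate network complexity from many randomly sampled planes.
rng = np.random.default_rng(0)
n = 10
sign = np.where(rng.random(n) < 0.8, 1.0, -1.0)    # assumed 80% excitatory
W = rng.uniform(0.0, 1.0, (n, n)) * sign[None, :]  # column j = presynaptic sign
np.fill_diagonal(W, 0.0)                           # no autapses
scores = []
for _ in range(100):
    i, j = rng.choice(n, size=2, replace=False)
    fixed = rng.uniform(0.0, 1.0, n)               # perpendicular activities
    _, _, U, V = vector_field_2d(W, i, j, fixed)
    scores.append(complexity(U, V))
print(np.mean(scores))
```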
4 Discussion
Here we introduced a conceptual framework to interpret and analyze neuron population level interactions within recurrent networks. We showed that the synaptic weights and neuron activity will create a set of force vectors that defines how the activity of any one neuron will be impacted, depending on the activity level of other neurons. This can be used as a starting point to analyze the properties of a high-dimensional network.
We showed that in networks where all the weights are uniform, the attractor point of the vector field moves between the two extremities of all neurons having zero activity and all neurons having maximal activity. The attractor point location can be shifted away from the identity diagonal by skewing the synaptic weights. By moving the attractor point away from the diagonal, the network can utilize a larger subspace of the entire state space, which translates to greater possibilities for control, resulting in a more richly variable network state.
Being able to control the state of the network (where the state corresponds to the activity distribution across the neuron population at one time instant) is essential for the brain in order to avoid saturation, or epilepsy.
The brain controls the activity of the neuron population through the landscape of the synaptic connectivity of its network. The trajectories through the synaptic connectivity-defined state space are the solutions, or behaviors, with which we can respond to sensory inputs, at the cortical level [2] as well as in the spinal cord circuitry [3, 4, 8]. Therefore, the plurality of solutions [2, 6] depends on the complexity of the network state space. A higher complexity of trajectories within the network state space translates to a greater capacity for behavioral diversity, more options for behavioral choices in each given context as defined by the sensory information, and more diverse behavioral output in terms of muscle activation patterns.
The complexity of the network state space increases with the number of critical points. Near these critical points, the vector directions (angles) reflect the possible displacement variations, or the range of different force directions that the vector field can exert on the network state. If all of the vectors pointed uniformly in the same direction, any displacement would only be in that one direction, which would result in a very simple effect. Therefore, the complexity of a vector field, and thereby of the entire network, increases with the number of critical points (attractors, sources, saddles) in the vector field.