Abstract
In neurological networks, the emergence of various causal interactions and information flows among nodes is governed by the structural connectivity in conjunction with the node dynamics. The information flow describes the direction and the magnitude of an excitatory neuron's influence on its neighbouring neurons. However, the intricate relationship between network dynamics and information flows is not well understood. Here, we address this challenge by first identifying a generic mechanism that defines the evolution of various information routing patterns in response to modifications in the underlying network dynamics. Moreover, with emerging techniques in brain stimulation, designing optimal stimulation that is directed towards a target region and of acceptable magnitude remains a significant open challenge. In this work, we also introduce techniques for computing optimal inputs that follow a desired stimulation routing path towards the target brain region. The resulting optimization problem can be solved efficiently using non-linear programming tools and permits the simultaneous assignment of multiple desired patterns at different instances. We establish the algebraic and graph-theoretic conditions required to ensure the feasibility and stability of information routing patterns (IRPs), and we illustrate the routing mechanisms and control methods for attaining desired patterns in biological oscillatory dynamics.
Author Summary A complex network is described by a collection of subsystems or nodes that exchange information among themselves via a fixed interconnection pattern or structure. This combination of nodes, interconnection structure, and information exchange enables the overall network system to function. The information exchange patterns change over time and switch whenever a node or set of nodes is subject to external perturbations or stimulations. In many cases one would like to drive the system to a desired information pattern, and hence a desired network behaviour, by appropriately designing the perturbing signals. We present a mathematical framework for designing perturbation signals that drive the system to the desired behaviour. We demonstrate the applicability of our framework in the context of brain stimulation and in modifying causal interactions in gene regulatory networks.
1 Introduction
Recent advancements in brain stimulation techniques offer new possibilities for diagnosing [1], monitoring [2], and treating neurological and psychological disorders [3]. For instance, non-invasive techniques such as transcranial magnetic stimulation (TMS) and transcranial direct current stimulation (tDCS) are utilized to treat conditions like epilepsy [4], attention deficit hyperactivity disorder (ADHD) [5], schizophrenia [6], and tinnitus [7]. In contrast, invasive methods such as deep brain stimulation (DBS) are employed to treat dystonia, essential tremor, medically resistant epilepsy, Parkinson’s disease, and medication-resistant obsessive-compulsive disorder (OCD) [8].
Despite these broad applications, key challenges remain in determining optimal stimulation parameters, identifying target areas that maximize clinical utility [9], and minimizing stimulation-induced side effects [10]. To determine the optimal parameters and target areas, methods have been developed that use information from imaging techniques (CT, fMRI, fast optical imaging) and recording modalities (PET, MEG, EEG, etc.) to monitor the effects of brain stimulation. Moreover, the concepts of average controllability and modal controllability from network control theory have been used to predict whether the effects of stimulation remain focal or spread globally [11, 12]. However, conventional brain stimulation techniques frequently cause undesirable side effects by inadvertently stimulating adjacent brain structures, which limits their effectiveness. To mitigate these side effects, one approach is to design electrodes that enable directional stimulation, thereby avoiding neighboring structures [13, 14]. An alternative approach is to design control strategies that facilitate precise directional stimulation at desired instances.
To this end, we propose an information-theoretic notion of functional connectivity, defined by the directional flow of information, or causal inference, among neurons across the brain network. Functional connectivity holds fundamental significance, as biological network systems depend on the dynamic communication and exchange of information among cells and their associated subsystems. For instance, in gene-regulatory networks, information flow quantifies a cell's capacity to regulate the protein concentrations of other cells [15], taking into account the inherent randomness of individual molecular events. In neurological networks [16], information flows across synapses through the coordinated activity of multiple neural populations. Dendrites carry information towards the cell body, while axons transmit it away from the cell body. Here, the information flow is defined as the influence of the excitation level of an excitatory neuron on its neighbouring neurons. Therefore, understanding the information flows among neurons or cells offers insights into the patterns of information routing associated with various brain activities, potentially introducing a new dimension in the treatment of neurological or psychiatric disorders. The significance of information transfer in brain stimulation lies in its ability to describe how the directional and quantitative influence of the stimulation propagates across the entire network.
Neurons in neurological networks, proteins in gene regulatory networks, and cells in various biological networks are interconnected through a network of oscillators. These nodes exhibit oscillatory and synchronous dynamics, often incorporating a stochastic component [17–21]. Therefore, understanding how information flows among the nodes in these complex networks poses a fundamental challenge because of their inherent complexity and non-linear dynamics. The results in [22, 23] demonstrate that oscillatory dynamics facilitate the transmission of information in biological and oscillatory networks. These results lead to a fundamental question: How do the patterns of information routing across the network vary based on the intrinsic dynamics of nodes and external driving signals or stimulations? The results in [24] provide insights into how information flow is influenced by changes in network topology and noise. An important result in [22, 25] provides a fundamental mechanism illustrating how fluctuations in the phase differences of multiple oscillators give rise to various IRPs. This mechanism also elucidates how these IRPs can adaptively transition between multiple stable states under the influence of external inputs. The phenomenon is particularly relevant in the context of neurological networks, where the information flows or the causal interactions among different brain areas are reconfigured to achieve various brain functions such as vision, memory, or motor preparation [26]. Importantly, these transitions between multiple stable states take place due to the combined influence of structured “brain noise” and the bias imposed by sensory and cognitive driving, even when the underlying structural (anatomic) connectivity remains constant.
The primary focus of this study is on examining how the direction and magnitude of causal interactions among nodes change under external stimulations while keeping the structural connectivity fixed. We employ network control theory and information theory to design optimal control inputs or stimulations, enabling flexible “on-demand” selection of functional patterns. In the context of brain stimulation, the control policy aims to find the optimal inputs that achieve the desired functional pattern, defined by the intended direction and magnitude of excitation levels among connected neurons across the brain network when different regions are stimulated. The study also provides valuable insights into the optimal energy required for the transition from one information routing pattern to another. Towards this goal, we provide the conditions on the network dynamics and coupling strengths among the oscillators under which one can optimally reroute information routing patterns to a desired pattern. Related literature on the control of brain networks includes using control techniques to find the optimal trajectories to steer from an initial state (baseline condition) to target states (high activity in sensorimotor systems) in finite time and with limited energy [27, 28]. These studies focus on the control of state trajectories, and there remains a limited body of research dedicated to the control of information flows or routing patterns in complex dynamical networks. One of the few works on the control of functional patterns (described by simple statistical dependencies or correlations among oscillators) is presented in [23, 29]. Other studies on the control of brain networks [30–35] focus on examining the ease of accessibility of specific states within a dynamic regime, identifying the regions that require perturbation to enable access to these states, and quantifying the energy required to attain them.
We first demonstrate how the dynamic state of oscillatory networks, and the presence of noise collectively give rise to a distinct communication pattern. This pattern is quantified using an information-theoretic measure defined by Kleeman’s Information transfer [36, 37]. Our selection of this information transfer definition is based on its rigorous derivation, as presented in [36, 37], which enables the determination of instantaneous information flows. Furthermore, the formulation has gained widespread adoption in numerous applications such as in financial markets [38, 39], in studying climate science [40], and for detecting causality in time series data [41, 42]. We show that information routing patterns or functional patterns depend on the underlying synchronized or equilibrium states and switching between multiple stable states generates multiple information routing patterns. The result enables the study of network control theory for non-linear systems around the stable equilibrium states. We develop a mathematical framework to determine the optimal energy levels needed for external driving signals to facilitate transitions among the information routing patterns. We demonstrate how interventions in the form of noise and external inputs at specific nodes within the network can alter the flow of information between other nodes. The results provide generic insights into the mechanisms underlying information rerouting in complex networked systems. We also illustrate how our framework can be expanded to redirect information routing patterns towards desired configurations, both in finite and stationary time horizons.
2 Results
2.1 Information Routings in Coupled Oscillators
The study of brain networks, gene regulatory networks, and various other biological networks can be conducted within the framework of complex networks with oscillatory dynamics. Therefore, to understand how information transfer patterns change dynamically under the influence of interactions between nodes, we consider a network of n coupled oscillators evolving according to

ẋi = fi(xi) + Σj gij(xi, xj) + Σk hi,k(xi) ζk,  i = 1, …, n,  (1)

with xi ∈ ℝn, fi smooth vector fields, gij coupling functions, and hi,k denoting the impact of the random processes ζk with zero mean. We focus on weakly coupled oscillators, for which the separation between the phase and amplitude dynamics is possible [43]. Moreover, when the couplings are weak, we can reduce the system of nonlinear equations to a set of equations on a torus using invariant manifold theory [43]. Under this assumption, the phase-amplitude interactions among the individual nodes are negligible and the phase and amplitude dynamics decouple. Thus the network oscillatory dynamics can be expressed in terms of the nodal phases only [44]. The stochastic oscillatory dynamics in equation (1) can thus be reduced to the averaged phase stochastic dynamics

θ̇i = ωi + Σj γi,j(θj − θi) + ςi,  (2)

where ωi denotes the intrinsic frequency of node i, γi,j denotes the coupling function, which depends on the phase differences only, and ςi is a Gaussian white noise process with zero mean. Supporting information B provides a comprehensive derivation of equation (2) from equation (1).
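A minimal numerical sketch of equation (2): two noisy phase oscillators with sinusoidal coupling (a hypothetical choice of γi,j, not the Goodwin or Wilson–Cowan couplings used later), simulated by Euler–Maruyama, phase-lock with a small fluctuating phase difference:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical instance of equation (2): two identical oscillators,
# gamma_{i,j}(d) = K*sin(d), driven by white noise of strength sigma.
omega, K, sigma = 1.0, 1.0, 0.5
dt, n_steps = 0.01, 20000

theta = np.array([0.0, np.pi / 2])     # initial phases
diffs = []
for step in range(n_steps):
    d = theta[::-1] - theta            # (theta2 - theta1, theta1 - theta2)
    theta = theta + (omega + K * np.sin(d)) * dt \
            + sigma * np.sqrt(dt) * rng.standard_normal(2)
    if step > n_steps // 2:            # discard the transient
        diffs.append(theta[0] - theta[1])

# Order parameter of the phase difference: close to 1 => phase locked.
r = abs(np.mean(np.exp(1j * np.array(diffs))))
print(round(r, 2))
```

The fluctuations of the phase difference around the locked state are exactly the quantity the information routing patterns below are built from.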
We decompose the phase dynamics into a deterministic reference part, θ̂i(t), and a fluctuating component, ϕi(t) = θi(t) − θ̂i(t), and focus on phase-locked configurations with constant phase offsets between the reference phases. The fundamental idea is that the information routing patterns are determined by the fluctuating state variables around the stable phase-locking states. Note that, in oscillatory networks, there may exist multiple reference deterministic solutions, giving rise to different IRPs among the nodes. The fluctuating component is estimated using a small-noise expansion giving a first-order approximation for the evolution of the fluctuating component of the phase dynamics [22] (Methods and Supporting information B). Although the approximation around the stable phase-locking states might not precisely represent the information transfers among the oscillatory nodes, it offers a valuable benefit by providing an analytical approximation for the evolution of the probability distributions ρj|i and ρj in equation (23) (Methods). The analytic model derived from the linearized dynamics provides a close approximation to the underlying functional connectivity of oscillatory networks [22]. Supporting information B discusses its relevance in the context of weakly connected oscillatory networks, such as those found in neurological systems. Further, we assume that the noise levels remain sufficiently low to prevent transitions between multiple stable states. This assumption can, however, be relaxed: we can compute and control the information transfer function for each switching instant, in which case the network becomes temporal (time-varying). Our white noise model has the advantage of being tractable, at least in providing simple differential equations for the evolution of the probability distributions. Using the formulation in equation (23) (Methods) together with the first-order approximation, the information transfer from state xj to xi at time t is derived as (Supporting information A)
Tj→i(t) = Aij σij(t)/σii(t),  (3)

where Aij denotes the (i, j) entry of the Jacobian of the phase dynamics evaluated at the reference state, and σij(t) denotes the (i, j) component of the state covariance matrix at time t, Σ(t). Using equation (3), the IRP for a network of n oscillators at time t, denoted IRPt, can be written as IRPt(i, j) = Tj→i(t). Thus, given a network of n coupled oscillators with coupling functions γij and initial covariance Σ(0), IRPt is an n × n matrix with IRPt(i, j) = Tj→i(t), i, j ∈ {1, …, n}. The IRP satisfies the relation (Theorem 1, Supporting information C)

IRPt = Dt⁻¹ (A ⊙ Σt),  Dt = diag(σ11(t), …, σnn(t)),

where ⊙ denotes the Hadamard or element-wise matrix product and Σ(t) is abbreviated as Σt. The derivations of equations (23) and (3) are detailed in Supporting information A. The importance of concentrating solely on the phase dynamics becomes apparent when considering that synchronization and phase-locking phenomena inherently foster interactions among the oscillators, acting as an innate catalyst for initiating the communication process. Similar investigations of phase dynamics are leveraged across a wide range of oscillatory networks to determine diverse functional relationships among the oscillators.
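The computation behind equation (3) can be sketched directly: given a linearized fluctuation model ϕ̇ = Aϕ + noise, propagate the covariance Σ(t) through the Lyapunov ODE and read off Tj→i = Aij σij/σii entry-wise. The matrices below are hypothetical, not taken from the paper's examples:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative linearized phase-fluctuation model: phi_dot = A phi + noise,
# with noise intensity Q = B1 B1'. Values are hypothetical.
A = np.array([[-1.0, 0.5],
              [0.5, -1.0]])
Q = np.eye(2)          # noise covariance intensity B1 B1'
Sigma0 = np.eye(2)     # initial state covariance

def cov_ode(t, s):
    """Lyapunov ODE for the state covariance: dSigma/dt = A Sigma + Sigma A' + Q."""
    S = s.reshape(2, 2)
    return (A @ S + S @ A.T + Q).ravel()

def irp(S):
    """IRP matrix: entry (i, j) is T_{j->i} = A_ij * sigma_ij / sigma_ii."""
    return np.diag(1.0 / np.diag(S)) @ (A * S)

sol = solve_ivp(cov_ode, (0.0, 10.0), Sigma0.ravel(), rtol=1e-9, atol=1e-12)
Sigma_T = sol.y[:, -1].reshape(2, 2)
print(irp(Sigma_T))    # only the off-diagonal entries are pairwise transfers
```

For this symmetric example the covariance converges to the solution of AΣ + ΣA′ + Q = 0, so both off-diagonal transfers settle at Aij σij/σii = 0.5·(1/3)/(2/3) = 0.25.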
A model of a gene regulatory network consisting of two symmetrically coupled Goodwin oscillators (Supporting information D) is shown in Figure 1. Figures 1b-1e depict the IRP in the absence of any external input or influence. The deterministic reference states are determined by the zeros, with negative slope, of the antisymmetric part of the coupling function, as shown in Figure 2. The presence of an external signal and noise affecting one of the transcribed mRNAs induces fluctuations around one of these two stable states. This, in turn, shifts the phase difference and eventually alters the effective couplings between the two oscillators. Sufficiently strong external signals can induce transitions between the two stable states, interchanging the direction of information flow between the two oscillators as shown in Figures 1f-1i, without altering the structural properties of the network. Despite the symmetry of the oscillatory network, the asymmetrical information flows depicted in Figures 1e and 1i arise from the uneven coupling strengths in the phase dynamics. That is, for the two oscillators in Figure 1a, the evolution of the fluctuating component of the phase dynamics follows from equation (2), with the two effective coupling strengths given by the slopes of the coupling function at the phase-locked state. In the reference state, oscillator 1 (green) receives inputs from oscillator 2 weighted by one coupling strength, while oscillator 2 receives inputs from oscillator 1 weighted by the other. Since the phase of oscillator 1 deviates significantly from its phase-locked value, inputs from oscillator 2 exert a strong influence on oscillator 1 to restore the phase-locking state. The information transfer is thus dominant in the direction from oscillator 2 to oscillator 1 (Tb→g, Figure 1e).
Note that the coupling strength from oscillator 2 to oscillator 1 in Figure 1d is negative. This results in a negative information transfer from oscillator 2 to oscillator 1, indicating that the influence of oscillator 2 on oscillator 1 is weakening, although it remains stronger in magnitude than the information flow from oscillator 1 to oscillator 2. With a sufficiently strong input, the stable phase-locking state is shifted as shown in Figure 1g, reversing the direction of the information flow pattern as shown in Figure 1i. In the next section, we propose frameworks for steering information routing patterns toward a desired configuration in both finite and infinite time horizon scenarios.
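The mechanism in Figure 2 can be illustrated with a minimal sketch: for two identical phase oscillators, the phase difference Δ = θ1 − θ2 evolves as Δ̇ = γa(Δ), where γa is the antisymmetric part of the coupling, and the zeros of γa with negative slope are the stable phase-locking states. The coupling function below (sin 2Δ) is a hypothetical stand-in for the Goodwin-model coupling, chosen only because it has two stable zeros:

```python
import numpy as np
from scipy.optimize import brentq

# Hypothetical antisymmetric coupling for the phase difference Delta:
# d(Delta)/dt = gamma_a(Delta). Stable locked states = zeros with negative slope.
def gamma_a(d):
    return np.sin(2.0 * d)

def locked_states(n_grid=1000):
    """Bracket zeros of gamma_a on (0, 2*pi) and classify them by slope.
    Note: sign-change bracketing misses zeros that fall exactly on a grid
    point (here the unstable zero at Delta = 0)."""
    grid = np.linspace(0.0, 2.0 * np.pi, n_grid, endpoint=False)
    stable, unstable = [], []
    for a, b in zip(grid[:-1], grid[1:]):
        if gamma_a(a) * gamma_a(b) < 0:           # sign change -> a zero in (a, b)
            z = brentq(gamma_a, a, b)
            slope = (gamma_a(z + 1e-6) - gamma_a(z - 1e-6)) / 2e-6
            (stable if slope < 0 else unstable).append(z)
    return stable, unstable

stable, unstable = locked_states()
print(stable)    # stable phase offsets (negative-slope zeros)
print(unstable)  # unstable phase offsets
```

Noise makes the phase difference fluctuate in the basin of one stable zero; a sufficiently strong input pushes it across the unstable zero into the other basin, which is exactly the state switch shown in Figures 1f-1i.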
2.2 Functional Control to achieve the desired IRP
The results in Figures 1 and 2 illustrate the significance of the coupling function’s behavior around the stable phase locking states in shaping the information routing patterns. The results in [24] elaborate on how multiplicative noise, additive noise, and local noise impact information transfer. This phenomenon wherein the functional properties of network systems are shaped by the coordinated behavior among their oscillatory components is present in diverse natural and technological network systems. For example, unique phase-locking patterns influence the coordinated movement of orbiting particle systems [45], facilitate successful mating in populations of fireflies [46], control active power flow in electrical grids [47], forecast global climate change phenomena [48], and support various cognitive functions in the brain [49, 50]. Despite its practical importance, the exploration and development of strategies to actively enforce specific information routing patterns have been relatively limited. In this section, we develop mathematical frameworks for designing optimal control inputs that reroute the IRPs to desired patterns. Figure 3 shows our framework and an example of control of information transfers in a network of 15 nodes at some finite instant t = 100.
The concept of regulating IRPs differs across various applications, depending on the system objectives or performance. For example, in the context of brain stimulation techniques, the control inputs correspond to external stimulations applied to different brain regions, inducing additional fluctuations in the phase differences between these two neurons. Utilizing these control inputs, it becomes feasible to generate personalized control profiles for individuals. These profiles have the potential to improve our understanding of psychological and clinical impairments in patients, offering a more biologically and mechanistically informed perspective [51, 52]. The temporal changes in the neuronal response or the synaptic coupling strength during certain activities or learning processes [53] can lead to specific IRPs within a specified time frame. This represents an example of finite horizon control of IRP, where the learning processes and activities serve as the control input. In gene regulatory and cell-signaling networks [15, 54, 55], a specific IRP might be favored in an infinite time horizon through the process of evolution, achieved by influencing various states. In engineered non-oscillatory networks like a wireless networked control system, IRP defines the channel capacity of the communication channel, and controlling the IRP aligns with the control of the Signal-to-Interference-plus-Noise Ratio (SINR) in finite time. This expansion of scope highlights the versatility and applicability of the IRP concept, demonstrating its relevance in diverse domains beyond oscillatory networks. Considering these diverse applications, we define our IRP control problem in both the finite and infinite horizons.
The main objective of this section is to enforce a desired IRP within both finite and infinite time frames, as depicted in Figure 3. An approach to achieve this objective is to regulate the fluctuations around the stable phase locking states using state feedback control inputs. This approach is logical as it aligns with the fundamental concept of information routing patterns: different IRPs arise due to the fluctuations in the phase differences among the oscillators [22].
With control inputs u(t), the first-order approximation (Methods and Supporting information B) of the fluctuating component in equation (2) can be rewritten as

ϕ̇(t) = Aϕ(t) + Bu(t) + B1 ξ(t),  (6a)
Σ(0) = Σ0,  (6b)

where B ∈ ℝn×p is the input matrix that specifies which of the oscillators are affected or influenced by the control inputs, B1 is the noise input matrix with ξ(t) a standard white noise process, and Σ0 is the initial state covariance matrix. The transpose of the state matrix, A′ ∈ ℝn×n, describes the weighted adjacency matrix. The corresponding directed network is denoted by 𝒢(𝒱, ℰ𝒜), with nodes 𝒱 = {1, 2, …, n} given by the n states and edge set ℰ𝒜 = {(i, j) | i, j ∈ 𝒱}. We use the notation 𝒰 to represent the collection of control functions with finite energy and define it as the set of permissible control inputs. We also use X′ to denote the transpose of a vector or matrix X.
2.2.1 Finite Horizon Minimum Energy Control of IRP
The objective is to achieve the desired IRP in a finite time horizon T. The selection of the desired pattern at T is limited by the constraint on the positive definiteness of the state covariance matrix at T. Corollary 1.1 (Supporting information C) provides the admissibility conditions for a given desired routing pattern. Given an admissible desired pattern IRPd, we can formulate the information routing control problem as

minu∈𝒰 J(u) = 𝔼 [∫0T u(t)′u(t) dt]  subject to equation (6a) and IRPT = IRPd.  (7)
We show in Theorem 2, Supporting information C, that Problem (7) can be solved through suitable feedback control inputs if and only if the network is controllable [56]. Given a controllable network [30], the control input solving Problem (7) is of the form u(t) = K(t)ϕ(t) (Proposition 1, Supporting information C). The optimal control strategy u*(t) solving Problem (7) is then of the form

u*(t) = −B′P(t)ϕ(t),  (8)

where P(t) is a differentiable matrix function taking values in the set of n × n symmetric matrices and satisfies both of the following coupled Riccati equations and boundary conditions:

Σ̇(t) = (A − BB′P(t))Σ(t) + Σ(t)(A − BB′P(t))′ + B1B1′,  (9a)
Ṗ(t) = −A′P(t) − P(t)A + P(t)BB′P(t),  (9b)
Σ(0) = Σ0,  (9c)
DT⁻¹ (A ⊙ Σ(T)) = IRPd,  DT = diag(σ11(T), …, σnn(T)).  (9d)
These results are summarized in Proposition 1, Supporting information C. Equation (9a) defines the evolution of the state covariance of the system in equation (6a) under the influence of the control input. The constraints on the evolution of the state covariance are defined by equations (9c) and (9d). It is important to highlight that equation (9b) resembles a typical Linear Quadratic Regulator (LQR) problem, except for the boundary constraint specified in (9d). The boundary value for P(t) in equation (9b) is unspecified, and finding P(0), P(T) that satisfy the boundary constraints in (9d) is non-trivial. Moreover, the indefiniteness of P(t) places our problem outside standard LQR theory. Furthermore, the Riccati equations (9a) and (9b) are coupled, not solely through the boundary constraints, but also in their dynamic behavior. Finding a closed-form solution to these coupled Riccati equations is challenging [57–62], and establishing both the existence and uniqueness of solutions for this system of equations is non-trivial. We summarize these arguments below.
If there exist solutions {P(t), Σ(t) | 0 ≤ t ≤ T} that satisfy the coupled Riccati equations (9a) and (9b) and the boundary conditions (9c) and (9d), then the optimal feedback gain in (8) solves Problem (7).
In Theorem 2, Supporting information C, we show that solutions {P(t), Σ(t) | 0 ≤ t ≤ T} satisfying the coupled Riccati equations (9a) and (9b) exist provided the pair (A, B) is controllable. Further, we show that the state covariance can be steered between any two boundary constraints, thus satisfying the coupled boundary conditions in equations (9c) and (9d). Therefore, the set of control inputs 𝒰 that steer the IRP to the desired value is non-empty. To compute the optimal control inputs, we formulate the problem as a semidefinite program (SDP), which can be solved effectively using optimization tools such as CVX [63] or YALMIP [64], as illustrated below.
Numerical computation of optimal control: Define the control input u(t) = −K(t)ϕ(t), so that

Σ̇(t) = (A − BK(t))Σ(t) + Σ(t)(A − BK(t))′ + B1B1′,
J(u) = 𝔼 [∫0T u(t)′u(t) dt] = ∫0T tr(K(t)Σ(t)K(t)′) dt.  (10)
Define U(t) = −Σ(t)K(t)′, so that the cost function in equation (10) can be written as

J(u) = ∫0T tr(U(t)′Σ(t)⁻¹U(t)) dt.  (11)
Thus, equation (11) is jointly convex in U(t) and Σ(t). Using U(t) = −Σ(t)K(t)′, we can write equation (9a) as

Σ̇(t) = AΣ(t) + Σ(t)A′ + BU(t)′ + U(t)B′ + B1B1′,  (12)

which is linear in both U(t) and Σ(t). The optimization problem in Problem (7) can thus be written as a semi-definite program:

minU(t),Σ(t) ∫0T tr(U(t)′Σ(t)⁻¹U(t)) dt  (13a)
subject to equation (12),  (13b)
Σ(0) = Σ0,  Σ(t) ≻ 0,  (13c)
DT⁻¹ (A ⊙ Σ(T)) = IRPd.  (13d)
The problem can be solved by discretizing equation (12), and the feedback gain can be recovered as K(t) = −U(t)′Σ(t)⁻¹. Note that the SDP formulation in equation (13) can be employed to attain a desired pattern for a subset of interacting nodes only. For instance, if we require a desired IRP only from node j to node i, then the constraint in equation (13d) is replaced with the single entry-wise constraint Aij σij(T) = IRPd(i, j) σii(T).
2.2.2 Minimum Energy Control to maintain stationary IRP
The objective here is to maintain a desired stationary pattern IRPd for the phase-difference dynamics in equation (6a). Given that the pair (A, B) is controllable, the feedback control law is of the form u(t) = −Kϕ(t) (by a derivation similar to Proposition 1, Supporting information C), and we are interested in the gain that minimizes the input energy rate J(u) = 𝔼 [u′u]. Similar to Problem (7), given an admissible (Corollary 1.1 in Supporting information C) desired pattern IRPd, we can formulate the minimum energy control problem as

minu∈𝒰 J(u) = 𝔼 [u(t)′u(t)]  subject to equation (6a) admitting IRPd as the invariant IRP.  (14)
The optimal control strategy u*(t) solving Problem (14) is of the form

u*(t) = −B′Sϕ(t),  (15)

where S is an n × n symmetric matrix such that (A − BB′S) is a Hurwitz matrix and satisfies both of the following constraints:

(A − BB′S)Σ + Σ(A − BB′S)′ + B1B1′ = 0,  (16a)
D⁻¹ (A ⊙ Σ) = IRPd,  D = diag(σ11, …, σnn).  (16b)
The proof is given in Theorem 3, Supporting information C. It is important to highlight that, in contrast to the finite horizon control scenario, the state covariance matrix Σ remains invariant in this context. This invariant state covariance matrix serves as a key determinant of the desired stationary IRP for the oscillatory network via equation (16b).
Numerical computation of optimal control: Define the control input u(t) = −Kϕ(t), so that

J(u) = 𝔼 [u′u] = tr(KΣK′).  (17)
Define M = −ΣK′, so that the cost function in equation (17) can be written as

J(u) = tr(M′Σ⁻¹M).  (18)
Thus, equation (18) is jointly convex in M and Σ. Using M = −ΣK′, we can write equation (16a) as

AΣ + ΣA′ + BM′ + MB′ + B1B1′ = 0,

which is linear in both M and Σ. The optimization problem in Problem (14) can thus be written as a semi-definite program:

minM,Σ tr(M′Σ⁻¹M)  (19a)
subject to AΣ + ΣA′ + BM′ + MB′ + B1B1′ = 0,  Σ ≻ 0,  (19b)
D⁻¹ (A ⊙ Σ) = IRPd.  (19c)
Similar to the finite horizon control, the problem can be solved via the SDP in equation (19), and the feedback gain can be recovered as K = −M′Σ⁻¹.
3 Applications
3.1 Gene Regulatory Networks
In this section, we consider a gene regulatory network consisting of two interconnected biochemical oscillators following Goodwin-type dynamics (Supporting information D), as illustrated in Figs. 1a and 4a.
In a single oscillator, the gene (underlined rectangle) is transcribed into mRNA (rectangle) with concentration xi within the cell. This mRNA is then translated into an enzyme (disk) with concentration yi. The enzyme aids the production of a protein (triangle) with concentration zi. The concentration of the protein, in turn, suppresses the transcription of xi. This results in a nonlinear feedback loop that produces stochastic oscillatory dynamics as shown in Fig. 1b. The oscillatory dynamics are then reduced to the averaged phase stochastic dynamics of the form in equation (2). The fluctuating interactions between the two oscillators are then derived using the first-order approximation method (refer to Methods). The process reduces the nonlinear oscillatory network model to a simplified linear network dynamics model. The couplings between the two nodes are defined by the values of the coupling function at the stable phase-locking states. The coupling function for the gene regulatory model in Figs. 1a and 4a is shown in Fig. 2. The two oscillators exhibit two stable phase-locking states, and due to the anti-symmetric nature of the coupling function, the two stable states are separated by 2π. This leads to coupling values that are equal in magnitude but opposite in sign. To compute the information transfers, we assume that the initial covariance of the fluctuations is Σ(0) = I2, where I2 is the identity matrix of order 2. The evolution of the state covariance and the information transfers from time t = 0 to t = 120 are shown in Figures 4b-4d. The desired IRPs at t = {3, 50, 100} are given as |T2→1(t)| = {0.001, 0.08, 0.0375} and |T1→2(t)| = {0.001, 0.04, 0.0375}, and the desired infinite horizon IRP is given as |T1→2(t = ∞)| = |T2→1(t = ∞)| = 0.0375, where |·| denotes the absolute value. We compute the optimal control inputs that drive the system towards the desired IRPs by solving the semidefinite programs in equations (13) and (19).
As we redirect the IRPs from a lower value to a higher value and then back to a lower value, we expect that fluctuations will progressively increase and then decrease gradually. The depicted phenomenon in Figure 4f illustrates the gradual increase in state covariance from t = 3 to t = 50, followed by a subsequent decrease until t = 100. In the infinite horizon scenario, when the IRP reaches a stable state, the state covariances also attain stability. Figures 4g and 4h show the evolution of the IRPs as they converge towards the predefined or desired IRPs during the time interval from t = 0 to t = 120.
3.2 Neurological Network
In this example, we study the information routing patterns among the excitatory populations of neurological networks. To model the dynamic interactions among the excitatory and inhibitory populations in a synaptically coupled neuronal network, we adopt the widely accepted Wilson-Cowan model of interacting oscillators (see Supporting information E). In neurological networks, a single neuron exhibits repeated firing when subjected to a constant current injection. Consequently, it is reasonable to regard a stimulated neuron as a limit-cycle oscillator, especially during short durations within a period of multiple spikes. We thus assume that each oscillator i has an asymptotically stable periodic solution characterized by a frequency ωi. The couplings among the neurons often consist of mild input currents affecting the membrane potential of the cells, suggesting the existence of weak couplings among the oscillators. We therefore use averaging theory to derive equations that depend solely on the phase differences, as in equation (2).
Figure 5 shows the phase reduction of a network of oscillatory dynamics consisting of 8 neurons. Supporting figure 1 shows the oscillatory dynamics of a pair of neurons along with the phase plane dynamics in the presence of stochastic noise. Examples of coupling functions and the anti-symmetric curves for two networks of 3 nodes are shown in Supporting figure 2. The stochastic component is approximated using linear approximations yielding a linear continuous stochastic model of the form in (28) (Methods and Supporting information B) and the corresponding reduced network model is shown in Figure 5d. We define the information transfer as the impact of one excitatory neuron on the excitation level of another neuron. The amount of information transfer depends on the degree of phase synchronization and the fluctuations around the stable phase synchronization state. In other words, information transfer is most effective when the pre-synaptic input of the sending neuron aligns with the post-synaptic neuron’s maximum excitability phase. We thus postulate that the increase in information transfer is a consequence of the increased fluctuations in the phase differences. A similar phenomenon can be noted in information transfers within gene regulatory networks, as illustrated in Figure 4f.
Figure 6 illustrates the process involving the regulation of information flows across multiple excitatory neurons within a network model consisting of 8 neurons. We assume that the control nodes are nodes 1, 4, 6, and 8, as depicted in Figure 6h. The corresponding input matrix B is taken as B = [0.01 0 0 0.01 0 0.01 0 0.01]T. Additionally, a noise input matrix, denoted as B1, is assumed to be B1 = [0.01 0.01 0.01 0.01 0.01 0.01 0.01 0.01]T, and the initial state covariance matrix is set as ∑(0) = 5I8. The heatmaps in Figures 6c and 6d represent the IRPs between all nodes in the network at t = 100, 200 respectively. The IRPs in Figures 6e and 6i show only the portions of the information transfer curves in the considered time interval. The minimum energy control inputs that direct the IRPs to the desired admissible patterns (as shown in Figure 6f) are computed by solving the SDP problem in Figure 6g. The constraint ∑(0) = ∑0 guarantees that the initial IRPs at t = 0 remain unaffected by the control actions. The IRP constraint in Figure 6g can be decomposed into individual IRPs as Tj→i(t) = Tdj→i(t), where Tdj→i(t) denotes the desired IRP from node j to node i at time t.
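The covariance-steering SDP itself is not reproduced here. As a simplified, hypothetical analogue of the minimum-energy idea, the following sketch computes the least-norm input sequence that steers a discrete-time linear network model x_{k+1} = A x_k + B u_k to a target state; all matrices and targets below are made up for illustration:

```python
import numpy as np

# Hedged analogue of minimum-energy control: least-norm input sequence that
# steers x_{k+1} = A x_k + B u_k from x0 to xf in N steps.  A, B, x0, xf are
# hypothetical; the paper's actual formulation steers covariances via an SDP.
rng = np.random.default_rng(1)
n, m, N = 4, 2, 10
A = 0.9 * np.eye(n) + 0.05 * rng.standard_normal((n, n))
B = np.array([[1., 0.], [0., 0.], [0., 1.], [0., 0.]])  # two actuated nodes
x0 = np.zeros(n)
xf = np.array([1.0, 0.5, -0.5, 0.2])

# x_N = A^N x0 + [A^{N-1}B ... B] u  =>  solve for the least-norm stacked u.
C = np.hstack([np.linalg.matrix_power(A, N - 1 - k) @ B for k in range(N)])
u = np.linalg.pinv(C) @ (xf - np.linalg.matrix_power(A, N) @ x0)

x = x0.copy()                          # roll the system forward to verify
for k in range(N):
    x = A @ x + B @ u[m * k:m * (k + 1)]
print(np.linalg.norm(x - xf))
```

The pseudoinverse returns the minimum-norm (hence minimum-energy) input among all sequences reaching the target; the SDP formulation plays the analogous role when the quantity being steered is a covariance matrix rather than a state vector.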
In the context of brain stimulation, nodes 1, 4, 6, and 8 are the stimulated nodes. The initial influences from the rest of the nodes to node 1 at t = 100 and t = 200 are shown in the first columns of the IRP matrices in Figures 6c and 6d. The desired influences from the rest of the nodes to node 1 under stimulation from nodes 1, 4, 6, and 8 at t = 100 are illustrated by the column matrix in Figure 6f. Figure 6j demonstrates the achieved IRP. In the second scenario, the goal is to gradually increase the influences from the rest of the nodes to node 1 from 0 to 0.08 under stimulation, while maintaining a negligible steady influence from node 1 to the rest of the nodes, as depicted in the second and third column matrices in Figure 6f. The resulting IRP under stimulation is shown in Figure 6k. The first column and the first row indicate that the desired stimulation pattern is achieved using optimal control inputs.
4 Discussion
The above results provide a fundamental basis for analyzing the information routing capabilities in complex dynamical networks, with special focus on neurological and gene regulatory networks. The IRPs arise as signals communicate through fluctuations around stable reference states. Our study demonstrates how these patterns evolve in response to fluctuations, leading to varying routing patterns. We have discussed the joint effect of external perturbations and noise on these IRPs.
Our results regarding these functional patterns are rooted in the fundamental concept of information transfer, which can be described as follows: information transfer from one random variable to another is quantified as the change in the entropy of the latter variable due to the influence of the former. This foundational principle is employed in deriving Kleeman’s information transfer as depicted in equation (23). Hence, this information-theoretic metric is not tied to specific algorithms or communication protocols, relying solely on the inherent dynamics of the network. For instance, consider Figure 1, where an external stimulation impacting the mRNA of one of the oscillators is represented through variations in the phase difference between the two oscillators. This is subsequently interpreted as a modification in the coupling function, ultimately leading to the emergence of two distinct routing patterns. Our theory is based on a first-order approximation method, which also enables us to highlight the significance of the stable phase locking states within the underlying IRP. In network dynamical systems where higher-order interactions hold significance, the approximation can be systematically extended to incorporate these higher-order terms using the Perron-Frobenius operator [37].
The explicit dependence of the IRP on the underlying network dynamics motivates us to strategically manipulate the collective network dynamics to attain the intended IRP. For example, achieving the desired IRP might involve optimizing the network’s topology [24] or applying control theory techniques to determine the external signals that guide patterns toward the desired outcome. This study primarily focuses on the application of control theory to address the latter challenge. To this end, we have introduced a problem formulation aimed at determining the minimum control energy required to guide information transfers to any desired value within both finite and infinite time horizons. Our analysis reveals that the problem can be addressed by identifying the control inputs needed to guide probability distributions toward predefined distributions. For example, Figure 4 shows how the distribution in Figure 4b is steered (Figure 4f) to achieve the target IRP. We have established that the optimal inputs can be determined by solving coupled Riccati equations with coupled boundary constraints. However, it is important to note that no closed-form solutions exist, and we resort to nonlinear optimization tools such as CVX to find near-optimal control inputs. We demonstrated our theory with two biological networks (gene regulatory and neurological) where desired IRPs are required at desired instances.
5 Methods
5.1 Information Transfer Patterns in Network Dynamical Systems
To understand how the information transfer patterns change dynamically under the influence of dynamic nodal interactions in the network, consider a stochastic system whose dynamics are given by ẋ(t) = f(x(t), t) + B1w(t), where x(t) ∈ ℝn are the states of the system, f : ℝn × ℝ → ℝn describes the intrinsic network dynamics, B1 ∈ ℝn×m denotes the input noise matrix, and w(t) ∈ ℝm is a white noise with mean zero and unit covariance. To quantify the instantaneous information flow from node xj to node xi, we use an information-theoretic measure termed ‘Information Transfer’ [37]. More precisely, for any two random variables xi, xj ∈ x(t), the transfer of information from xj to xi at time t, denoted Tj→i(t), is defined as the variation in the instantaneous marginal entropy of xi in the presence and the absence of the influence of xj. That is, Tj→i(t) = dHi/dt − dHi∖j/dt (22), where dHi/dt is the rate of change of the marginal entropy of xi from time t to t + Δt, and dHi∖j/dt is the rate of change of the marginal entropy of xi with contributions from all other states except from xj. The information transfer from xj to xi is then derived from equation (22) as given in equation (23) [37] (Supporting information A), where 𝔼 denotes the expectation, ρ∖j denotes the joint distribution of (x1, … xj−1, xj+1, … xn) at time t, and ρi denotes the marginal distribution of the state xi. The relationship between definition (23) and the widely used ‘Transfer Entropy’ is explained in Supporting information A. Tj→i(t) is then computed for every edge originating from node j and reaching node i. This concept is analogous to functional connectivity and information routing patterns (IRPs) in biological networks; hence, we refer to it as functional connectivity or IRPs.
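For linear systems, Liang-style information transfer admits a closed form. Assuming the standard linear-case expression Tj→i = Aij Σij/Σii with Σ the state covariance (stated here as an assumption, since equation (3) is given elsewhere in the paper), a two-node sketch shows that the measure vanishes exactly when there is no structural link:

```python
import numpy as np

# Liang-style information transfer for a linear OU system dx = A x dt + B1 dw,
# using the linear-case formula T_{j->i} = A[i,j] * S[i,j] / S[i,i] (assumed
# here to correspond to equation (3)).  x2 drives x1; no link from x1 to x2.
A = np.array([[-1.0, 0.5],
              [ 0.0, -1.0]])
Q = np.eye(2)                 # B1 @ B1.T: unit-covariance white noise

# Stationary covariance S solves the Lyapunov equation A S + S A^T + Q = 0.
n = A.shape[0]
K = np.kron(np.eye(n), A) + np.kron(A, np.eye(n))
S = np.linalg.solve(K, -Q.flatten()).reshape(n, n)

T_2to1 = A[0, 1] * S[0, 1] / S[0, 0]   # transfer along the existing edge
T_1to2 = A[1, 0] * S[1, 0] / S[1, 1]   # vanishes, since A[1,0] = 0
print(T_2to1, T_1to2)
```

The absent edge yields exactly zero transfer, while the existing edge yields a positive value, illustrating how the measure reflects causal structure rather than mere correlation (Σ12 is nonzero here, yet T1→2 = 0).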
5.2 Reduction to linear stochastic system
For a network stochastic dynamical system with n nodes, equation (2) can be written as dϕi = [ωi + Σj γij(ϕi − ϕj)] dt + Σk ∑ik dwk (24), where γij(ϕi − ϕj) are the coupling functions and the external inputs are modeled as independent Wiener processes wk. In the unperturbed system (∑ik = 0), we assume that the phase dynamics described in equation (24) exhibit a stable phase-locked state with consistent phase differences Δij = ϕi − ϕj and a collective oscillation frequency Ω. This implies that, for all i ∈ {1, …, n}, ωi + Σj γij(Δij) = Ω (25).
We decompose the phase dynamics into two components: a deterministic reference part denoted as ϕ̄i(t) and a fluctuating part represented as δϕi(t), so that ϕi(t) = ϕ̄i(t) + δϕi(t). The solution to the deterministic dynamics is ϕ̄i(t) = Ωt + ϕ̄i(0) (26), where the offsets ϕ̄i(0) satisfy ϕ̄i(0) − ϕ̄j(0) = Δij.
Introducing new coordinates θi = ϕi − ϕ̄i(t), equation (24) can be written as dθi = [ωi + Σj γij(θi − θj + Δij) − Ω] dt + Σk ∑ik dwk (27), where θi captures the fluctuations about the reference solution. We assume that the noise levels ∑ik are small and, using the small noise expansion, the first-order approximation of equation (27) is given by a multivariate Ornstein-Uhlenbeck process dθ = Aθ dt + ∑ dw (28), where Aii = Σj γ′ij(Δij) and Aij = −γ′ij(Δij) for j ≠ i. Thus, equation (28) is a linear stochastic continuous system, and the expression for Tj→i(t) is given in equation (3).
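The drift matrix of the Ornstein-Uhlenbeck approximation can be assembled directly from the coupling derivatives. A brief sketch under assumed Kuramoto coupling γ(x) = −K sin x and an in-phase lock (Δij = 0): the resulting drift matrix is Laplacian-like, with one zero eigenvalue (the free global phase) and the rest negative, confirming a stable process on the phase fluctuations:

```python
import numpy as np

# Drift matrix of the OU approximation about a phase-locked state:
# A_ii = sum_j g'_ij(D_ij),  A_ij = -g'_ij(D_ij)  for j != i.
# Assumed for illustration: all-to-all Kuramoto coupling g(x) = -K*sin(x)
# and the in-phase lock D_ij = 0, so g'_ij(0) = -K.
n, K = 4, 1.0
gprime = -K * (np.ones((n, n)) - np.eye(n))   # g'_ij at the locked differences

A = -gprime.copy()                  # off-diagonal entries: -g'_ij
np.fill_diagonal(A, 0.0)
A += np.diag(gprime.sum(axis=1))    # diagonal entries: sum_j g'_ij

eig = np.sort(np.linalg.eigvalsh(A))
print(eig)   # one zero mode (global phase shift); the rest are negative
```

The zero mode corresponds to a uniform shift of all phases, which leaves every phase difference unchanged; stability of the IRP therefore rests on the strictly negative remaining eigenvalues.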
Author Contributions
SS: Conceptualization, Methodology, Writing – original draft. RP: Conceptualization, Resources, Writing – review & editing, Supervision, Funding acquisition. UV: Conceptualization, Visualization, Investigation. SL: Investigation, Writing – review & editing, Supervision.
Supporting Information
S1_file.pdf
Acknowledgements
The work is partially supported by grants from Indo-US Science and Technology Forum (IUSSTF) IUSSTF/JC-110/2019.