bioRxiv
Cellular signaling beyond the Wiener-Kolmogorov limit

Casey Weisenberger1, David Hathcock2, Michael Hinczewski1
doi: https://doi.org/10.1101/2021.07.15.452575
1Department of Physics, Case Western Reserve University, Cleveland, Ohio
2Department of Physics, Cornell University, Ithaca, New York
For correspondence: mxh605@case.edu

ABSTRACT

Accurate propagation of signals through stochastic biochemical networks involves significant expenditure of cellular resources. The same is true for regulatory mechanisms that suppress fluctuations in biomolecular populations. Wiener-Kolmogorov (WK) optimal noise filter theory, originally developed for engineering problems, has recently emerged as a valuable tool to estimate the maximum performance achievable in such biological systems for a given metabolic cost. However, WK theory has one assumption that potentially limits its applicability: it relies on a linear, continuum description of the reaction dynamics. Despite this, up to now no explicit test of the theory in nonlinear signaling systems with discrete molecular populations has ever seen performance beyond the WK bound. Here we report the first direct evidence of the bound being broken. To accomplish this, we develop a theoretical framework for multi-level signaling cascades, including the possibility of feedback interactions between input and output. In the absence of feedback, we introduce an analytical approach that allows us to calculate exact moments of the stationary distribution for a nonlinear system. With feedback, we rely on numerical solutions of the system’s master equation. The results show WK violations in two common network motifs: a two-level signaling cascade and a negative feedback loop. However, the magnitude of the violation is biologically negligible, particularly in the parameter regime where signaling is most effective. The results demonstrate that while WK theory does not provide strict bounds, its predictions for performance limits are excellent approximations, even for nonlinear systems.

1 Introduction

Fundamental mathematical limits on the behavior of biochemical reaction networks1–6 provide fascinating insights into the design space of living systems. Though these limits remain notoriously permeable compared to their analogues in physics—subject to re-interpretation and exceptions as additional biological complexities are discovered—they still give a rough guide to what is achievable by natural selection for a given set of resources. They also raise other interesting issues7, 8: is selection actually strong enough to push a particular system toward optimality? When is performance sacrificed due to metabolic costs or the randomizing forces of genetic drift?

Information processing in cellular networks has been a particularly fertile ground for discussing optimality. Certain cellular processes like environmental sensing rely on accurate information transfer through intrinsically stochastic networks of reactions9, 10. Other processes in development and regulation depend on suppressing noise through homeostatic mechanisms like negative feedback11–14. Either scenario, whether maintaining a certain signal fidelity or suppressing fluctuations, can be quite expensive in terms of metabolic resources3, 15, and hence potentially an area where optimization is relevant.

Discussions of signaling performance limits are often framed in terms of information theory concepts like channel capacity16, 17, and complemented by direct experimental estimates18–25. In recent years, another tool has emerged for understanding constraints on biological signal propagation: optimal noise filter theory15, 26–30, drawing on the classic work of Wiener and Kolmogorov (WK) in engineered communications systems31–33. The theory maps the behavior of a biological network onto three basic components: a signal time series, noise corrupting the signal, and a filter mechanism to remove the noise. Once the identification is made, the payoff is substantial: one can use the WK solution for the optimal noise filter function to derive closed form analytical bounds on measures of signal fidelity (like mutual information) or noise suppression (like Fano factors). These bounds depend on the network’s reaction rate parameters, allowing us to determine a minimum energetic price associated with a certain level of performance15. Finally, the theory specifies the conditions under which optimality can be realized in a particular network.

To date, however, there has been one major caveat: the WK theory relies on a continuum description of the molecular populations in the network, and assumes all reaction rates are linearly dependent on the differences of these population numbers from their mean values. While this may be a good approximation in certain cases (i.e. large populations, with small fluctuations relative to the mean), it certainly raises doubts about the universal validity of the bounds derived from the theory. Biology is rife with nonlinearities, for example so-called ultrasensitive, switch-like rate functions34 in signaling cascades. Could these nonlinear effects allow a system to substantially outperform a WK bound derived using linear assumptions? Curiously, every earlier attempt to answer this question for specific systems26, 28 (summarized below) has yielded the same answer: the WK bound seemed to hold rigorously even when nonlinearities and discrete populations were taken into account.

The current work shows that this is not the full story. We have found for the first time two biological examples that can be explicitly proven to violate their WK bounds: a two-level signaling cascade and a negative feedback loop. To demonstrate this, we start by describing a general theoretical framework for signaling cascades with arbitrary numbers of intermediate species (levels), with the possibility of feedback interactions between the input and output species. We show how to calculate WK bounds based on the linearized, continuum version of this system, generalizing earlier WK results for single-level systems. In order to check the validity of the WK bound, we introduce an analytical approach for calculating exact moments of the discrete stationary probability distribution of molecular populations, starting from the underlying master equation. Our method works for arbitrarily long cascades in the absence of feedback. It allows us to find cases in a nonlinear two-level signaling cascade where the WK bound holds, as well as cases where it is violated. A similar picture emerges in a nonlinear single-level system with negative feedback, but here we use an alternative numerical approach to tackle the master equation. Remarkably, for the cases where nonlinearity helps beat the WK bound, the magnitude of the violation is tiny, typically fractions of a percent. We observe a trend that as the signaling efficiency increases, improving the biological function of the system, the size of the violation decreases or vanishes. This makes the WK value an excellent estimate for the actual performance limit in the biologically relevant parameter regime. Thus while the results show the WK theory does not rigorously bound the behavior of nonlinear signaling systems, they also put the theory on a more solid foundation for practical applications.

2 Results

2.1 Signaling network

We begin by defining a general model of an N-level cellular signaling cascade. Each specific system we consider in our analysis will be a special case of this model. As shown schematically in Fig. 1, we have an input chemical species X0 followed by N downstream species X1, …, XN. For example, if this were a model of a mitogen-activated protein kinase (MAPK) cascade35, the input X0 would be an activated kinase, which activates another kinase via phosphorylation (X1), which in turn leads to a sequence of downstream activations until we reach the final activated kinase XN. The copy number of species Xi is denoted by xi = 0, 1, 2, …. Hence the state of the system can be represented by the vector x = (x0, x1, …, xN). Stochastic transitions between states are governed by an infinite-dimensional Markovian transition rate matrix W. The element Wx′,x of this matrix represents the probability per unit time to observe state x′ at the next infinitesimal time step, given that the current state is x. The values of these elements will depend on the rates of the chemical reactions that are possible in our signaling network, as described below. The probability px(t) of being in state x at time t evolves according to the corresponding master equation36,

$$\frac{d p_{\mathbf{x}}(t)}{dt} = \sum_{\mathbf{x}' \neq \mathbf{x}} \left[ W_{\mathbf{x},\mathbf{x}'}\, p_{\mathbf{x}'}(t) - W_{\mathbf{x}',\mathbf{x}}\, p_{\mathbf{x}}(t) \right]. \qquad (1)$$

Figure 1.

Overview of the N-level signaling cascade model, showing an example with N = 2. The signal from input species X0 is propagated through to output species XN, with the possibility of feedback back to the input. In the absence of feedback, the signal fidelity is measured via the error E, defined in terms of correlations between the input fluctuations δx0(t) = x0(t) – ⟨x0⟩ and output fluctuations δxN(t) = xN(t) – ⟨xN⟩. In the linearized system the error is related to the input-output mutual information $\mathcal{I}$ through $\mathcal{I} = -\frac{1}{2}\log_2 E$. For the system with feedback, the quantity we focus on is ϵ, the ratio of the input variance $\langle \delta x_0^2 \rangle$ with feedback to the variance $\langle \delta x_0^2 \rangle|_{\phi=0}$ without. This is also equal to the Fano factor $\langle \delta x_0^2 \rangle / \langle x_0 \rangle$, which measures the effectiveness of feedback in suppressing input fluctuations.

The first term on the right represents the gain of probability in state x due to transitions out of all other states x′ into x, and the second term the loss due to transitions out of x into all other states. We will focus on systems where W is time-independent and the system reaches a unique stationary distribution $p^{\mathrm{s}}_{\mathbf{x}}$. The latter satisfies Eq. (1) with the left-hand side set to zero,

$$0 = \sum_{\mathbf{x}' \neq \mathbf{x}} \left[ W_{\mathbf{x},\mathbf{x}'}\, p^{\mathrm{s}}_{\mathbf{x}'} - W_{\mathbf{x}',\mathbf{x}}\, p^{\mathrm{s}}_{\mathbf{x}} \right]. \qquad (2)$$

All physical observables we consider can be expressed as averages over this stationary distribution. If f(x) is some function of state x, we will use $\langle f(\mathbf{x}) \rangle \equiv \sum_{\mathbf{x}} f(\mathbf{x})\, p^{\mathrm{s}}_{\mathbf{x}}$ to denote the associated stationary average.

The detailed form of Eq. (2) for our cascade requires specifying all the possible chemical reactions in our network. We start with species X0, which is produced with some rate R0(xN) ≥ 0. We treat “production” as occurring with a single effective rate, encompassing all the substeps involved in activation of X0 from some inactive form (not explicitly included in the model). The functional form of the rate R0(xN) can be decomposed into two parts,

$$R_0(x_N) = F + \phi(x_N). \qquad (3)$$

Here, F represents a constant baseline activation rate and ϕ(xN) the perturbation to that rate due to feedback from the final downstream species XN. ϕ(xN) is a potentially nonlinear function, with ϕ′(xN) < 0 corresponding to negative feedback (production of X0 inhibited by increases in xN), and ϕ′(xN) > 0 corresponding to positive feedback (production of X0 enhanced by increases in xN). In the absence of feedback, ϕ(xN) = 0. The possibility of feedback from the last species to an upstream one has analogues in biological systems like the ERK MAPK pathway37. Of course there may be feedback to multiple upstream species (as is the case for ERK), but here we only consider one feedback interaction as a starting point for modeling.

In a similar spirit, the baseline rate F is a constant for simplicity, representing the net effect of processes leading to the activation of X0 that are not explicitly part of the model. There are also deactivation processes for X0 (i.e. the action of phosphatases), which we model by an overall deactivation rate γ0x0 proportional to the current population. We denote the constant γ0 as the per-capita deactivation rate. For the case of no feedback, the marginal stationary probability of the input X0 is a Poisson distribution38,

$$\mathcal{P}_{x_0} = \frac{\bar{x}_0^{\,x_0}\, e^{-\bar{x}_0}}{x_0!}, \qquad (4)$$

where $\bar{x}_0 = F/\gamma_0$, which in this case is equal to the mean and variance: $\langle x_0 \rangle = \langle \delta x_0^2 \rangle = \bar{x}_0$, with δx0 ≡ x0 – ⟨x0⟩. Dynamically, the input signal has exponentially decaying autocorrelations, with characteristic time $\gamma_0^{-1}$. More complex types of input (for example with time-dependent F(t) or non-exponential autocorrelations) can also be considered in generalizations of the model26, 27. For our system, once feedback is turned on, the input distribution is no longer simply described by Eq. (4), and in general will not have a closed-form analytical solution.
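As a quick numerical illustration, the Poisson form of the stationary input distribution follows from detailed balance of the birth-death process, and can be checked directly. This is a minimal sketch; the values of F, γ0, and the truncation cutoff are arbitrary choices, not taken from the text:

```python
def stationary_input(F, gamma0, xmax=200):
    """Stationary distribution of the input birth-death process: production
    at constant rate F, deactivation at per-capita rate gamma0.  Detailed
    balance gives p(x+1) = p(x) * F / (gamma0 * (x + 1)), i.e. a Poisson
    distribution with mean F/gamma0."""
    w = [1.0]
    for x in range(xmax):
        w.append(w[-1] * F / (gamma0 * (x + 1)))
    Z = sum(w)
    return [wi / Z for wi in w]

F, gamma0 = 20.0, 2.0
p = stationary_input(F, gamma0)
mean = sum(x * px for x, px in enumerate(p))
var = sum((x - mean) ** 2 * px for x, px in enumerate(p))
print(mean, var)   # both equal F/gamma0 = 10 (Fano factor 1)
```

The truncation at xmax only matters if the mean is comparable to the cutoff; here the neglected tail mass is negligible.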

For i > 0, the production function for the ith species Xi is Ri(xi–1) ≥ 0, depending on the population xi–1 directly upstream. The deactivation rate at the ith level is γixi. We allow the Ri functions to be arbitrary, and hence possibly nonlinear. Putting everything together, we can now write out the explicit form of Eq. (2),

$$0 = \sum_{i=0}^{N} \left[ R_i(x_{i-1})\, p^{\mathrm{s}}_{\mathbf{x}-\mathbf{e}_i} + \gamma_i (x_i + 1)\, p^{\mathrm{s}}_{\mathbf{x}+\mathbf{e}_i} - \left( R_i(x_{i-1}) + \gamma_i x_i \right) p^{\mathrm{s}}_{\mathbf{x}} \right], \qquad (5)$$

with the convention that $p^{\mathrm{s}}_{\mathbf{x}} \equiv 0$ for any state with a negative component.

For compactness of notation, we define x−1 ≡ xN and introduce the (N + 1)-dimensional unit vectors ei, where e0 = (1, 0, … , 0), e1 = (0, 1, 0, … , 0), e2 = (0, 0, 1, 0, … , 0) and so on. Eq. (5) is generally analytically intractable, in the sense that we cannot usually directly solve it to find the stationary distribution $p^{\mathrm{s}}_{\mathbf{x}}$. Despite this limitation, we can still make progress on understanding signaling behavior in the cascade via alternative approaches. Linearization of the production functions, described in the next section, is one such approach. Crucially, this approximation facilitates deriving bounds on signaling fidelity via the WK filter formalism. Later on we will also introduce exact analytical as well as numerical methods for tackling certain cases of Eq. (5), to explore the validity of the WK bounds in the presence of nonlinearities.
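To make the reaction scheme concrete, trajectories consistent with the master equation can be sampled with the standard Gillespie stochastic simulation algorithm. The sketch below is illustrative only; the Hill-type production function and all parameter values are hypothetical, not taken from the text:

```python
import random

def gillespie_cascade(F, gamma, R, T, seed=0):
    """Sample one trajectory of the N-level cascade (no feedback).
    `gamma` is a list of per-capita deactivation rates [gamma_0, ..., gamma_N];
    `R` is a list of production functions, with level 0 produced at constant
    rate F and R[i](x_{i-1}) for i > 0 possibly nonlinear."""
    rng = random.Random(seed)
    N = len(gamma) - 1
    x = [0] * (N + 1)
    t, traj = 0.0, []
    while t < T:
        # Reaction propensities: production and deactivation at each level.
        rates = []
        for i in range(N + 1):
            prod = F if i == 0 else R[i](x[i - 1])
            rates.append((prod, i, +1))
            rates.append((gamma[i] * x[i], i, -1))
        total = sum(r for r, _, _ in rates)
        t += rng.expovariate(total)          # time to next reaction
        u = rng.uniform(0.0, total)          # pick which reaction fires
        acc = 0.0
        for r, i, d in rates:
            acc += r
            if u <= acc:
                x[i] += d
                break
        traj.append((t, tuple(x)))
    return traj

# Example: N = 2 cascade with an ultrasensitive (Hill-type) middle level.
hill = lambda x: 50.0 * x**4 / (10.0**4 + x**4)   # hypothetical parameters
traj = gillespie_cascade(F=20.0, gamma=[1.0, 1.0, 1.0],
                         R=[None, hill, lambda x: 2.0 * x], T=50.0)
```

Averaging many such trajectories (or long time averages of one) approximates the stationary moments that the exact and numerical methods below compute directly.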

2.2 WK filter formalism

In this section, we provide a brief overview of linearizing our signaling model and mapping it to a WK filter, generalizing the approach developed in Refs. 26, 28, 38. This mapping allows us to derive bounds on various measures of signaling fidelity, which we know are valid at least within the linear approximation. The aim here is to summarize the bounds that we will later try to beat by introducing nonlinearities. Additional details of the WK approach can be found in the review of Ref. 38, which presents three special cases of our model: the N = 1 and N = 2 cascades without feedback, and the N = 1 system with feedback. The WK bound for the general N-level cascade, with and without feedback, is presented here for the first time, with the complete analytical derivation shown in the Supplementary Information (SI).

2.2.1 Linearization

If we consider the limit where the mean copy numbers of all the chemical species in the cascade are large, we can approximately treat each population xi as a continuous variable. If the magnitude of fluctuations in the stationary state is small relative to the mean, we can also approximate all the production functions to linear order around their mean values,

$$R_0(x_N) \approx F - \phi_1\, \delta x_N, \qquad R_i(x_{i-1}) \approx \sigma_{i0} + \sigma_{i1}\, \delta x_{i-1} \quad (i > 0), \qquad (6)$$

with coefficients ϕ1, σi0, and σi1. Here we have absorbed the zeroth-order Taylor coefficient of ϕ(xN) around $\bar{x}_N$ into F. Note that the sign convention for the first-order coefficient ϕ1 means that ϕ1 > 0 corresponds to negative feedback. The stationary averages in the linearized case are:

$$\bar{x}_0 = \frac{F}{\gamma_0}, \qquad \bar{x}_i = \frac{\sigma_{i0}}{\gamma_i} \quad (i > 0). \qquad (7)$$

We will use bar notation like $\bar{x}_i$ to exclusively denote the linearized stationary mean values. Brackets like ⟨xi⟩ will always denote the true mean, whether the system is linear (in which case $\langle x_i \rangle = \bar{x}_i$) or not.

One advantage of linearization is the ability to express dynamics in an analytically tractable form, using the chemical Langevin approximation39. The Langevin equations corresponding to Eq. (5) are:

$$\frac{d x_i(t)}{dt} = R_i(x_{i-1}(t)) - \gamma_i x_i(t) + n_i(t), \qquad (8)$$

where the ni(t) are Gaussian noise functions with correlations

$$\langle n_i(t)\, n_j(t') \rangle = 2 \gamma_i \bar{x}_i\, \delta_{ij}\, \delta(t - t').$$

We can rewrite Eq. (8) in terms of deviations from the mean, δxi(t) ≡ xi(t) – ⟨xi⟩, plugging in Eqs. (6)-(7). The result is:

$$\frac{d\, \delta x_0}{dt} = -\gamma_0\, \delta x_0 - \phi_1\, \delta x_N + n_0(t), \qquad \frac{d\, \delta x_i}{dt} = \sigma_{i1}\, \delta x_{i-1} - \gamma_i\, \delta x_i + n_i(t) \quad (i > 0). \qquad (9)$$
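A minimal Euler-Maruyama integration of the linearized dynamics for N = 1 without feedback looks as follows. The parameters are arbitrary, and the noise amplitudes follow the chemical Langevin prescription: twice the (equal) production and deactivation fluxes at stationarity:

```python
import numpy as np

def langevin_n1(F, gamma0, sigma11, gamma1, T=2000.0, dt=0.01, seed=1):
    """Euler-Maruyama integration of the linearized Langevin equations for
    N = 1, no feedback.  Noise variances per unit time are 2F for X0 and
    2*sigma10 for X1."""
    rng = np.random.default_rng(seed)
    sigma10 = sigma11 * F / gamma0        # mean production rate of X1
    n = int(T / dt)
    dx0, dx1 = np.zeros(n), np.zeros(n)
    a0, a1 = np.sqrt(2 * F * dt), np.sqrt(2 * sigma10 * dt)
    for k in range(n - 1):
        dx0[k + 1] = dx0[k] - gamma0 * dx0[k] * dt + a0 * rng.standard_normal()
        dx1[k + 1] = dx1[k] + (sigma11 * dx0[k] - gamma1 * dx1[k]) * dt \
            + a1 * rng.standard_normal()
    return dx0, dx1

dx0, dx1 = langevin_n1(F=50.0, gamma0=1.0, sigma11=2.0, gamma1=2.0)
print(dx0.var())   # should fluctuate around the Poisson value F/gamma0 = 50
```

The sample variance of the input fluctuations converges to the Poisson value as the trajectory length grows, consistent with Eq. (4).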

2.2.2 Finding bounds on signal fidelity by mapping the system onto a noise filter

The linear chemical Langevin approach also allows us to map the system onto a classic noise filter problem from signal processing theory. We describe two versions of this mapping here, the first for the system without feedback, and the second with feedback.

1. No feedback system

Imagine we are interested in understanding correlations between two dynamical quantities in our system, as a measure of how accurately signals are transduced through the cascade. The choice of these two quantities, one of which we will label the “true signal” s(t) and the other the “estimated signal” $\tilde{s}(t)$ within the filter formalism, depends on the biological question we would like to ask. For the cascade without feedback (ϕ1 = 0), a natural question is how well the output XN reflects the input X0. The function of the cascade can be to output an amplified version of the input40, but there is inevitably corruption of the signal as it is transduced from level to level due to the stochastic nature of the biochemical reactions in the network. If we assign s(t) ≡ δx0(t) and $\tilde{s}(t) \equiv \delta x_N(t)$, it turns out that because of the linearity of the dynamical system in Eq. (9) the two are related through a convolution,

$$\tilde{s}(t) = \int_{-\infty}^{\infty} dt'\, H(t - t') \left[ s(t') + n(t') \right]. \qquad (10)$$

The details of the functions H(t) and n(t), as derived from Eq. (9), are given in the SI Sec. 1. One can interpret Eq. (10) as a linear noise filter: a signal s(t) corrupted with additive noise n(t) (a function which depends on the Langevin noise terms ni(t)) is convolved with a filter function H(t) to yield an estimate $\tilde{s}(t)$. The filter function, which encodes the effects of the entire cascade, obeys an important physical constraint: H(t) = 0 for all t < 0. This enforces causality, since it ensures that the current value of $\tilde{s}(t)$ (the output in our case) only depends on the past history of the input plus noise, s(t′) + n(t′) for t′ < t.

The traditional version of filter optimization31–33 is searching among all possible causal filter functions H(t) for the one that minimizes the relative mean squared error between the signal and estimate:

$$\epsilon = \frac{\left\langle \left( s(t) - \tilde{s}(t) \right)^2 \right\rangle}{\left\langle s^2(t) \right\rangle}. \qquad (11)$$

Since the averages are taken in a stationary state, ϵ is time-independent, and can have values in the range 0 ≤ ϵ < ∞. For the case of a biological cascade, however, where s(t) and $\tilde{s}(t)$ are the time series of input and output fluctuations δx0(t) and δxN(t) respectively, we expect that the output may be an amplified version of the input. Hence a better measure of fidelity may be a version of Eq. (11) that is independent of the scale differences between signal and estimate. To define this scale-free error, note that the optimization search over all allowable H(t) necessarily involves searching over all constant prefactors A that might multiply a filter function H(t). Using AH(t) as the filter function instead of H(t) is equivalent to switching from $\tilde{s}(t)$ to $A\tilde{s}(t)$, as can be seen from Eq. (10). If we were to look at the error $\langle (s(t) - A\tilde{s}(t))^2 \rangle / \langle s^2(t) \rangle$ for a given $\tilde{s}(t)$ and s(t), we can readily find the value of A that minimizes this error, which is given by $A = \langle s\tilde{s} \rangle / \langle \tilde{s}^2 \rangle$. Plugging this value in, we can define a scale-free error E as follows:

$$E = \min_A \frac{\left\langle \left( s(t) - A\,\tilde{s}(t) \right)^2 \right\rangle}{\left\langle s^2(t) \right\rangle} = 1 - \frac{\langle s\tilde{s} \rangle^2}{\langle s^2 \rangle \langle \tilde{s}^2 \rangle}. \qquad (12)$$

By construction, E ≤ ϵ, and in fact E has a restricted range: 0 ≤ E ≤ 1. The independence of E from the relative scale of the output versus the input makes it an attractive measure of the fidelity of information transmission through the cascade. In fact, within the linear chemical Langevin approximation, one can show that $\mathcal{I} = -\frac{1}{2}\log_2 E$, where $\mathcal{I}$ is the instantaneous mutual information in bits between s(t) and $\tilde{s}(t)$38. Thus E will be the main measure of signal fidelity we focus on when we discuss the no-feedback cascade.
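Given sampled time series for the signal and its estimate, the scale-free error and the associated mutual information are one-liners. A sketch; the example arrays are placeholders:

```python
import numpy as np

def scale_free_error(s, s_est):
    """Scale-free error E = 1 - <s*s_est>^2 / (<s^2><s_est^2>), i.e. the
    minimum over amplitudes A of <(s - A*s_est)^2>/<s^2>."""
    return 1.0 - np.mean(s * s_est) ** 2 / (np.mean(s**2) * np.mean(s_est**2))

def mutual_info_bits(E):
    """Instantaneous mutual information I = -(1/2) log2 E (linearized theory)."""
    return -0.5 * np.log2(E)

s = np.array([1.0, -2.0, 0.5, 3.0])
print(scale_free_error(s, 10.0 * s))   # E = 0 up to roundoff: pure amplification costs nothing
print(mutual_info_bits(0.25))          # E = 1/4 corresponds to 1 bit
```

Note that a perfectly amplified copy of the signal gives E = 0, while an estimate uncorrelated with the signal gives E = 1, matching the restricted range stated above.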

For a given s(t) and n(t), we denote the causal filter function H(t) that minimizes ϵ as the Wiener-Kolmogorov (WK) optimal filter HWK(t). Because this optimization includes exploring over all possible prefactors of H(t), the same WK filter function simultaneously minimizes ϵ and E, and the minima of the two error types coincide. We will denote this minimum as EWK. Hence we have ϵ ≥ E ≥ EWK in general for linear systems, and ϵ = E = EWK when H(t) = HWK(t).

The procedure for calculating HWK(t) for a specific system, and then finding the optimal error bound EWK, is based on analytical manipulation of the power spectra associated with s(t) and n(t)33, 38. We illustrate the details in SI Sec. 1, applying the method to our cascade model. This yields the following value for EWK for an N-level cascade without feedback: Embedded Image where $\Lambda_j = \sigma_{j1}^2 \bar{x}_{j-1} / (\gamma_0 \sigma_{j0})$ is a dimensionless parameter associated with the jth level, and λ = λj is the jth root with positive real part (Re(λj) > 0) of the following polynomial B(λ): Embedded Image

This is a polynomial of degree 2N in λ, and hence has 2N roots. Because the coefficients of λ in the polynomial are real, the conjugate of any complex root must also be a root. Finally, because only even powers of λ appear in B(λ), the negative of a root is also a root. Putting all these facts together ensures that there will always be N roots λj where Re(λj) > 0. Moreover, among the set of λj, any complex roots come in conjugate pairs. This guarantees that the expression for EWK in Eq. (13) is always real. Note that the choice of ordering of the roots λj, j = 1, … , N is arbitrary, since it does not affect the result. Within the linear approximation, EWK gives a lower bound on the achievable E, and hence an upper bound on the maximum mutual information between input and output, $\mathcal{I}^{\mathrm{WK}} = -\frac{1}{2}\log_2 E^{\mathrm{WK}}$.

Special cases of Eq. (13) recover earlier results. For N = 1, we find the single root $\lambda_1 = \gamma_0 \sqrt{1 + \Lambda_1}$, and we can rewrite Eq. (13) in a simple form:

$$E^{\mathrm{WK}} = \frac{2}{1 + \sqrt{1 + \Lambda_1}}, \qquad (15)$$

which is the result found in Ref. 26. Similarly, the N = 2 version, with more complicated but still analytically tractable roots λ1 and λ2, was derived in Ref. 38. For the case of general N, the roots λj can be found numerically. However, there is one scenario where we know closed form expressions for all the λj for any N. This turns out to be the case where the biological parameters of the cascade are tuned such that the filter function H(t) in Eq. (10) is proportional to HWK(t), and hence E = EWK. (We do not need strict equality of the filter functions, because the resulting value of E is independent of an overall constant in front of H(t).) That this is even possible is itself non-trivial; generally when we vary biological parameters in a system mapped onto a noise filter, we allow H(t) to explore a certain subspace of all possible filter functions. It is not guaranteed that any H(t) in that subspace will coincide with HWK(t) up to a proportionality constant. However, as shown in SI Sec. 1, for the no-feedback N-level model we can achieve H(t) ∝ HWK(t) when the following conditions are met: Embedded Image

These can be solved recursively to give nested radical forms: Embedded Image and so on. When these conditions are satisfied, the roots λj have straightforward analytical forms, namely λj = γj for all j. Hence we can substitute the values in Eq. (17) for λj in Eq. (13) to get EWK explicitly when this scenario is true. With the aid of the recursion relation in Eq. (16), we can then write EWK in this case as: Embedded Image where li ≡ γi−1Λi/γ0 are dimensionless positive constants. The simple form of the bound in Eq. (18) makes it useful for analyzing the energetic cost of increasing signal fidelity in a cascade. The biological implications of this bound are discussed later on.
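As a consistency check on the single-level case, the stationary covariances of the linearized N = 1 system follow in closed form from Eq. (9) and can be scanned over γ1. Assuming the N = 1 bound takes the form E^WK = 2/(1 + √(1 + Λ1)) with Λ1 = σ11² x̄0/(γ0 σ10) (our reading of Eq. (15)), the scan should bottom out at exactly that value:

```python
import numpy as np

def E_linear_n1(F, gamma0, sigma10, sigma11, gamma1):
    """Scale-free error of the linearized N = 1 cascade, from the stationary
    covariances implied by Eq. (9):
      c00 = F/gamma0,  c01 = sigma11*c00/(gamma0 + gamma1),
      c11 = (sigma11*c01 + sigma10)/gamma1."""
    c00 = F / gamma0
    c01 = sigma11 * c00 / (gamma0 + gamma1)
    c11 = (sigma11 * c01 + sigma10) / gamma1
    return 1.0 - c01**2 / (c00 * c11)

F, gamma0, sigma10, sigma11 = 100.0, 1.0, 200.0, 2.0
Lambda1 = sigma11**2 * (F / gamma0) / (gamma0 * sigma10)    # = 2 here
E_WK = 2.0 / (1.0 + np.sqrt(1.0 + Lambda1))
gammas = np.linspace(0.1, 20.0, 2000)
E = np.array([E_linear_n1(F, gamma0, sigma10, sigma11, g) for g in gammas])
print(E.min(), E_WK)   # the scan bottoms out at the WK value,
                       # near gamma1 = gamma0 * sqrt(1 + Lambda1)
```

The minimizing γ1 sits at γ0√(1 + Λ1), illustrating how tuning a single deactivation rate brings the linear system onto the WK optimum.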

2. System with feedback

The case with feedback uses a qualitatively different, and more abstract, mapping of the system onto a noise filter. Here the true and estimated signals are identified with the following quantities28, 38:

$$s(t) \equiv \delta x_0(t)|_{\phi=0}, \qquad \tilde{s}(t) \equiv \delta x_0(t)|_{\phi=0} - \delta x_0(t). \qquad (19)$$

The subscript in δx0(t)|ϕ=0 denotes that this δx0(t) is obtained by solving Eq. (9) with the feedback turned off, ϕ(xN) = 0 or equivalently ϕ1 = 0. The δx0(t) without the subscript represents the solution with the feedback present. With this mapping, the error ϵ from Eq. (11) can be written as:

$$\epsilon = \frac{\langle \delta x_0^2(t) \rangle}{\langle \delta x_0^2(t) \rangle|_{\phi=0}}. \qquad (20)$$

The underlying motivation is that negative feedback can serve as a homeostasis mechanism, dampening fluctuations δx0(t) in the X0 species that are the direct target of the feedback. Achieving a small ϵ, by making $\tilde{s}(t)$ as close as possible to s(t), translates to an efficient suppression of X0 fluctuations (relative to their undamped magnitude in the absence of feedback). Note that in this case ϵ, rather than the scale-free error E, is the quantity used to specify system performance. Despite this difference, the problem is still a question of accurate information propagation through the cascade, because we need δxN(t) to encode a faithful representation of the input fluctuations in order to be able to effectively suppress them via negative feedback. Since the X0 fluctuations in the no-feedback system follow the Poisson distribution of Eq. (4), the denominator in Eq. (20) is given by $\langle \delta x_0^2(t) \rangle|_{\phi=0} = \bar{x}_0$. Thus $\epsilon = \langle \delta x_0^2 \rangle / \langle x_0 \rangle$, which is also known as the Fano factor (ratio of variance to the mean), a standard measure for the size of fluctuations. Poisson distributions have Fano factor ϵ = 1, but negative feedback in optimal cases can reduce ϵ to values much smaller than 1.

The close connection between the no-feedback and feedback analyses is apparent when we consider the analogue of the convolution in Eq. (10) for the feedback case. It turns out that s(t) and n(t) have the same functional forms as in the no-feedback case, but the filter function H(t) is different (details in SI Sec. 2). Because the WK bound depends only on the power spectra of s(t) and n(t), the result for the bound EWK is exactly the same as Eq. (13), with roots λj specified by Eq. (14). The interpretation of Eq. (13) in this case is as a lower bound for the error in Eq. (20), namely ϵ ≥ EWK.

Unlike the no-feedback cascade, where we can in principle tune the biological parameters so that E = EWK, for the linearized negative feedback system we can only asymptotically approach the bound from above, ϵ → EWK. This limit is easiest to describe in the case where the production functions at each level are directly proportional to the upstream species, Ri(xi–1) ∝ xi−1 for i > 0. In terms of Eq. (6), this corresponds to setting $\sigma_{i0} = \sigma_{i1}\, \bar{x}_{i-1}$, so that Ri(xi−1) = σi1xi−1. The following two conditions are then needed to approach WK optimality: i) the levels in the cascade have fast deactivation rates relative to the inverse autocorrelation time of the input, γi ≫ γ0 for i > 0; ii) the coefficient of the negative feedback function is tuned to the value, Embedded Image where Embedded Image

In this limit ϵ approaches EWK, with Eq. (13) evaluating to the same form as Eq. (15), except with Λ1 replaced by Λeff:

$$E^{\mathrm{WK}} = \frac{2}{1 + \sqrt{1 + \Lambda_{\mathrm{eff}}}}. \qquad (23)$$

The N = 1 special case of this result, where Λeff = Λ1 = σ10/F, was the focus of Ref. 28.
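The suppression of input fluctuations by feedback can be checked within the linearized theory by solving the three stationary covariance equations of the N = 1 feedback loop directly. A sketch with arbitrary parameters; the comparison value assumes the N = 1 bound takes the form 2/(1 + √(1 + Λ1)) with Λ1 = σ11/γ0 for Ri ∝ xi−1:

```python
import numpy as np

def fano_with_feedback(F, gamma0, sigma11, gamma1, phi1):
    """Stationary Fano factor (epsilon) of the linearized N = 1 feedback loop:
        d(dx0)/dt = -gamma0*dx0 - phi1*dx1 + n0
        d(dx1)/dt = sigma11*dx0 - gamma1*dx1 + n1
    obtained by solving the three linear equations d<dx_i dx_j>/dt = 0."""
    xbar0 = F / gamma0
    sigma10 = sigma11 * xbar0
    # Unknowns: c00 = <dx0^2>, c01 = <dx0 dx1>, c11 = <dx1^2>.
    A = np.array([
        [-2 * gamma0, -2 * phi1, 0.0],
        [sigma11, -(gamma0 + gamma1), -phi1],
        [0.0, 2 * sigma11, -2 * gamma1],
    ])
    b = np.array([-2 * F, 0.0, -2 * sigma10])
    c00, c01, c11 = np.linalg.solve(A, b)
    return c00 / xbar0

eps = fano_with_feedback(F=100.0, gamma0=1.0, sigma11=2.0, gamma1=50.0, phi1=5.0)
print(eps)   # < 1: negative feedback suppresses input fluctuations
```

For any feedback strength the resulting ϵ stays below 1 (suppression) but above the WK value, consistent with ϵ ≥ EWK being approached only asymptotically.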

2.2.3 Optimal bounds and metabolic costs

The general EWK bound in Eq. (13), and its corresponding values in various special cases (Eqs. (15), (18), (23)), depend on the production rate parameters σi0, σi1 and the per-capita deactivation rates γi at each level i. These processes have associated metabolic costs. If production involves activation of a substrate via phosphorylation, the cell has to maintain a sufficient population of inactive substrate and also consumes ATP during phosphorylation. Similarly, deactivation requires maintaining a population of phosphatases. Achieving systems with better optimal performance can be expensive. To illustrate this, consider production functions of the form Ri(xi−1) = σi1xi−1, with $\sigma_{i0} = \sigma_{i1}\,\bar{x}_{i-1}$, as described in the previous section. Since ϵ from Eq. (20) is given in terms of relative variance, a 10-fold decrease in the standard deviation of fluctuations would require a 100-fold decrease in ϵ. To decrease the optimal EWK from Eq. (23) by a factor of 100, one would need roughly a 10^4-fold increase in Λeff, assuming we are in the regime where Λeff ≫ 1. This extreme cost of eliminating fluctuations via negative feedback3 has to be borne across the whole cascade: since Λeff in Eq. (21) is potentially bottlenecked by one σi0 much smaller than the others, the mean production rates for all the levels must be hiked up in order to increase Λeff.

An analogous story emerges when we analyze the same system without negative feedback. The relevant measure here is the scale-free error E between the time series of input and output populations, or equivalently the mutual information $\mathcal{I} = -\frac{1}{2}\log_2 E$. Imagine we would like to increase the mutual information upper bound $\mathcal{I}^{\mathrm{WK}} = -\frac{1}{2}\log_2 E^{\mathrm{WK}}$ by 1 bit. In the limit of li ≫ 1 in Eq. (18), this can be achieved for example by increasing every li by a factor of 16, regardless of N. Given $R_i(x_{i-1}) = \sigma_{i1} x_{i-1}$ and the stationary relations $\sigma_{i0} = \sigma_{i1}\,\bar{x}_{i-1} = \gamma_i \bar{x}_i$, we can evaluate the dimensionless constants associated with level i as $\Lambda_i = \sigma_{i1}/\gamma_0$ and li = (γi−1/γ0)Λi = (σi0/σi−1,0)(γi−1/γ0)^2. Hence increasing li requires either increasing the relative mean production between the ith level and its predecessor, or the per-capita deactivation rate of the latter (if i > 0), or some combination of both. Note the Λi parameter has a simple physical interpretation here: the average number of Xi molecules produced per molecule of Xi−1 during the characteristic time interval $\gamma_0^{-1}$ of input fluctuations. The massive cost of achieving multiple bits of mutual information between input and output in a biological signaling cascade is consistent with the narrow range of experimentally measured $\mathcal{I}$ values, spanning ~ 1 to 3 bits18–25, with most systems near the lower end of the spectrum.
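The quadratic-to-quartic cost scaling described above is easy to check numerically, assuming EWK takes the form 2/(1 + √(1 + Λeff)) (our reading of Eq. (23)):

```python
import math

def E_WK_feedback(Lam_eff):
    # Assumed form of Eq. (23): same shape as the N = 1 result, Eq. (15).
    return 2.0 / (1.0 + math.sqrt(1.0 + Lam_eff))

Lam = 1.0e4
ratio = E_WK_feedback(Lam) / E_WK_feedback(1.0e4 * Lam)
print(ratio)   # close to 100: each 100-fold cut in the error costs ~10^4 in Lambda_eff
```

In the regime Λeff ≫ 1 the bound scales as 2/√Λeff, which is the source of the 10^4-fold price tag quoted in the text.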

2.3 Nonlinearity in N = 1 signaling models: earlier attempts to go beyond the WK limit

The linearized noise filter approach described above provides a general recipe for deriving bounds on signaling: i) start with a linear chemical Langevin description of the system; ii) identify signal and estimate time series that are based on observables of interest, and are related via convolution in terms of some system-specific filter function H(t); iii) derive the optimal filter function HWK(t) and the corresponding error bound EWK; iv) explore if and under what conditions the system can reach optimality. But the procedure leaves open an important question: is the resulting bound EWK a useful approximation describing the system’s performance limits, or can biology potentially harness nonlinearity to enhance performance significantly beyond the WK bound? We know that nonlinear, Hill-like functional relationships are a regular feature of biological signaling41, manifested in some cases as an extreme switch-like input-output relation known as ultrasensitivity34. Is EWK still relevant in these scenarios? This section summarizes previous efforts to answer this question (all for the N = 1 case), setting the stage for our main calculations.

2.3.1 Nonlinearity in the N = 1 model without feedback

Ref. 26 derived an exact solution for the no-feedback N = 1 system with an arbitrary production function R1(x0) and discrete populations. The input signal remains the same as in the linear case, governed by production rate R0(xN) = F and deactivation rate γ0. The starting point is expanding R1(x0) in terms of a series of polynomials, Embedded Image

Here, Embedded Image is a polynomial of nth degree in x0, which depends on Embedded Image as a parameter. The functions Embedded Image are variants of the so-called Poisson-Charlier polynomials, whose properties are described in detail in the SI Sec. 4. Similar expansions have found utility in spectral solutions of master equations42, 43. The most important characteristic of these polynomials is that they are orthogonal with respect to averages over the Poisson distribution Embedded Image defined in Eq. (4). If we denote by Embedded Image the average of a function f(x) with respect to a Poisson distribution Embedded Image, then26, 44 Embedded Image

The first few polynomials are given by: Embedded Image
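The orthogonality property can be checked numerically. The sketch below uses the standard Charlier polynomials Cn(x; a), which satisfy ⟨CnCm⟩ = δnm n!/a^n under a Poisson weight with mean a; the paper's polynomials are a variant of these, so the recurrence and normalization here are the textbook convention, not necessarily those of Eq. (25).

```python
import math

def charlier(n, x, a):
    """Standard Charlier polynomial C_n(x; a), via the three-term recurrence
    a*C_{n+1}(x) = (a + n - x)*C_n(x) - n*C_{n-1}(x), with C_0 = 1, C_1 = 1 - x/a."""
    if n == 0:
        return 1.0
    c_prev, c = 1.0, 1.0 - x / a
    for k in range(1, n):
        c_prev, c = c, ((a + k - x) * c - k * c_prev) / a
    return c

def poisson_avg(f, a, xmax=120):
    """Average of f(x) over a Poisson distribution with mean a (truncated sum)."""
    p, total = math.exp(-a), 0.0
    for x in range(xmax):
        total += p * f(x)
        p *= a / (x + 1)   # iterative Poisson weights avoid factorial overflow
    return total

a = 10.0  # Poisson mean, playing the role of the stationary input mean
# Orthogonality under the Poisson(a) weight: <C_n C_m> = delta_{nm} * n! / a**n
for n in range(4):
    for m in range(4):
        avg = poisson_avg(lambda x: charlier(n, x, a) * charlier(m, x, a), a)
        target = math.factorial(n) / a**n if n == m else 0.0
        assert abs(avg - target) < 1e-9
```

The same recurrence generates the polynomials to any order needed for the expansions below.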

Eq. (25) allows the coefficients σ1n from Eq. (24) to be evaluated in terms of moments with respect to Embedded Image, Embedded Image

Using Eq. (26) we can write the first two coefficients as Embedded Image

They have a simple physical interpretation: σ10 is the mean production rate and σ11 is a measure of how steeply the production rate changes with x0 near Embedded Image, and they are exactly the same as the coefficients in the linear expansion of Eq. (6). If σ1n ≠ 0 for any n ≥ 2 then the production function R1(x0) is nonlinear.

The exact expression for the error E derived in Ref. 26 takes the form: Embedded Image

The nonlinear σ1n coefficients for n ≥ 2 contribute to the expression in the brackets as Embedded Image multiplying a positive factor, and hence if nonzero always act to increase the error regardless of their sign. It turns out that Eq. (29) is bounded from below by the N = 1 WK limit from Eq. (15), Embedded Image with Embedded Image. The WK limit is achieved when Embedded Image, just as predicted by Eq. (17), and when R1 (x0) has the optimal linear form, Embedded Image Embedded Image, with all σ1n = 0 for n ≥ 2.

Increasing the slope of R1(x0) at Embedded Image will increase σ11 and hence Λ1, progressively decreasing the EWK limit. This can be seen in Fig. 2, which illustrates different production functions and the corresponding error values. Eventually, the slope will become so steep that it is impossible to have a purely linear function R1(x0) with that value of σ11. This is because σ11 must always be smaller than Embedded Image for a linear production function to be everywhere non-negative, R1(x0) ≥ 0 for all x0 ≥ 0. σ11 can be arbitrarily large for very steep, sigmoidal production functions R1(x0), but in this case the error will be significantly larger than EWK due to the contributions from the nonlinear coefficients σ1n, n ≥ 2. We see this for the largest values of Λ1 in Fig. 2B, with the added error due to nonlinearity overwhelming the benefit from large Λ1. In summary, for the N = 1 no-feedback model, there is no way to beat the WK limit, regardless of the choice of R1(x0).

Figure 2.

The case of a no-feedback N = 1 signaling model with a nonlinear production function R1(x0) for parameters: F = 1 s−1, γ0 = 0.01 s−1, σ10 = 100 s−1. A) Examples of a variety of production functions R1(x0), colored from red to yellow based on their steepness at Embedded Image, and hence the size of the corresponding parameter Λ1. Superimposed in black is the marginal distribution Embedded Image of the input species X0. B) For the production functions shown in panel A, the corresponding exact error E from Eq. (29) (circles) as a function of Λ1. The WK bound EWK from Eq. (30) is shown in blue for comparison. Adapted from Ref. 26.

2.4 Nonlinearity in the N = 1 TetR negative feedback circuit

Ref. 28 studied an N = 1 negative feedback loop inspired by data from an experimental synthetic yeast gene circuit45. In this circuit, TetR messenger RNA (the X0 species) leads to the production of TetR protein (the X1 species), while the protein in turn binds to the promoter of the TetR gene, inhibiting the production of the messenger RNA. The model is similar to Eq. (8) when N = 1, Embedded Image with a linear production function R1(x0) = σ11x0, but with a sigmoidal Hill function form for the feedback R0 (x1), Embedded Image

Note that this model has an additional contribution to the deactivation of the output, a function Γ(x1) that is also sigmoidal: Embedded Image

The parameters Ai, vi, θi, i = 1, 2 are all non-negative and determine the shape of the two Hill functions, which are common phenomenological expressions for regulatory interactions in biology41.

Using numerical methods to solve the corresponding master equation (similar to those described below), one can carry out a parameter search to solve the following optimization problem: with the constraints of fixed γ0, γ1, σ11, Embedded Image (and hence also fixed Λ1 = σ11/γ0), one can vary the Hill function parameters to find the smallest possible ϵ. The circles in Fig. 3 show the optimization results for different Λ1 in the range Λ1 = 2 – 10, comparable to experimental estimates46. The EWK bound from Eq. (23) is shown for comparison as a solid curve. The dashed curve shows an exact bound for the system, derived by Lestas, Vinnicombe, and Paulsson (LVP)3 using information theory, which applies when R1(x0) is linear and where the negative feedback from X1 back to X0 can occur via any function (linear or nonlinear). This exact bound is given by Embedded Image

Figure 3.

The Fano factor E for the TetR N = 1 negative feedback loop of Ref. 28. Numerical optimization results are shown as circles, while the WK and LVP lower bounds (Eqs. (23) and (34) respectively) are shown as solid and dashed curves. Figure adapted from Ref. 28.

Note that ELVP ≤ EWK, which opens the possibility that a nonlinear system could fall somewhere between the two curves, beating EWK. However, as we see in Fig. 3, the optimal numerical results only approached the WK limit from above, and never outperformed it.

Based on the nonlinear N = 1 results with and without feedback described above, one could plausibly imagine that WK theory somehow provides universal bounds. Despite the fact that the WK limit was derived for linear systems, it surprisingly gives a rigorous bound for the nonlinear no-feedback model, and a numerical search to beat the limit proved fruitless in the feedback case. However, as we demonstrate in the next section, such a conclusion is premature.

3 Beating the WK limit

To explore the validity of the WK limit more broadly, we need to be able to obtain precise error results in a wider range of nonlinear signaling systems. In this section, we provide two lines of evidence that demonstrate for the first time error values below EWK. The first is from an N = 2 no-feedback cascade with linear R1(x0) and quadratic R2(x1) production functions. (We will prove that the related case, where R1(x0) is nonlinear but R2(x1) linear, always gives E ≥ EWK.) These results use an exact expression for E that is valid for any N > 1 no-feedback system, based on a recursion relation derived from the master equation of Eq. (5) that can be evaluated numerically to arbitrary precision. The second line of evidence is from an N = 1 negative feedback loop with linear production function R1(x0) and a feedback function ϕ(x1) that includes a quadratic contribution. Here a solvable recursion relation for E is not possible, so we use a numerical solution of Eq. (5).

3.1 Exact calculation of error in the nonlinear, discrete N > 1 model without feedback

In order to understand the behavior of more complex nonlinear signaling cascades, we need to generalize the exact N = 1 no-feedback error expression from Eq. (29) to systems with N > 1. To start, let us introduce some convenient notation to deal with multiple-level systems. Along with the (N + 1)-dimensional vector x = (x0, x1, … , xN) that describes the full state of our system, we will define the N-dimensional truncated vector Embedded Image that is missing the final component xN. In a similar way we define the truncated N-dimensional unit vectors Embedded Image, i = 1, … , N – 1, where Embedded Image, and so on until Embedded Image. Consider the following generating function derived from the stationary distribution Embedded Image, Embedded Image

The subscript Embedded Image denotes the fact that Embedded Image depends on all components x0 through xN−1, but xN has been eliminated through the sum. If one carries out the sum over xN on both sides of Eq. (5), one can rewrite the master equation entirely in terms of generating functions, Embedded Image

Here, the pth derivative of Embedded Image with respect to y is denoted as Embedded Image. If we take p derivatives with respect to y of both sides of Eq. (36), and then set y = 1, we get the following relation, Embedded Image

The above relation turns out to be the main one we need to evaluate the scale-free error E. To see this, let us rewrite the expression for E from Eq. (12) with the noise filter mapping Embedded Image, s(t) ≡ δx0 (t): Embedded Image

All the moments on the right-hand side of Eq. (38) with respect to the stationary distribution that involve xN (t) can in fact be expressed in terms of Embedded Image (1): Embedded Image

Here, Embedded Image and Embedded ImageEmbedded Image. The remaining moments in Eq. (38), those that involve only x0(t), are known from the fact that the marginal distribution of the input X0 is just the Poisson distribution Embedded Image of Eq. (4). This yields Embedded Image

Recall that the barred notation denotes the linearized stationary averages defined in Eq. (7). Thus the approach to finding E is as follows: i) use Eq. (37) to derive properties of Embedded Image (1) that allow us to evaluate the moments in Eq. (39); ii) together with Eq. (40), we can then plug the moment results into Eq. (38) to derive an expression for E. Here we will summarize the final result, with the full details of the derivation shown in the SI Sec. 3.

To facilitate the solution, we expand the production function in terms of Poisson-Charlier polynomials, just as in Eq. (24) for the N = 1 case, Embedded Image

Each expansion coefficient is given by the analogue of Eq. (27), averaging over a Poisson distribution: Embedded Image

To tackle the Embedded Image (1), we will define new functions Embedded Image through the relation: Embedded Image

Here Embedded Image is a multi-dimensional Poisson distribution, Embedded Image

Thus any Embedded Image represents the deviation of Embedded Image (1) from a simple multi-dimensional Poisson distribution. Similarly, we can define a multi-dimensional version of the Poisson-Charlier polynomials, Embedded Image where Embedded Image is an N-dimensional vector of integers ni ≥ 0. Let us expand Embedded Image in terms of these polynomials, Embedded Image defining expansion coefficients Embedded Image. It turns out the moments in Eq. (39) are all just linear combinations of the Embedded Image, which follows from the properties of the Poisson-Charlier polynomials averaged with respect to Poisson distributions: Embedded Image

Here Embedded Image is the N-dimensional zero vector. Plugging this into Eq. (38) gives Embedded Image

The final piece of the solution is converting Eq. (37) into a recursion relation for the coefficients Embedded Image: Embedded Image with Embedded Image. The coefficients Embedded Image are given by the following expansion in terms of σin and Embedded Image: Embedded Image and Embedded Image are polynomials in z given by: Embedded Image with Embedded Image

Here the sum starts at the largest of the three values 0, n – k, or m – k, and ⌊w⌋ denotes the largest integer less than or equal to w.

The general procedure for calculating E works as follows:

  1. For a given set of production functions Ri(xi−1), we calculate the expansion coefficients σin using Eq. (42). If necessary, we truncate the expansion above some order M, setting σin = 0 for n > M. In practice, because of the rapid convergence of Eq. (41) for xi−1 near Embedded Image, choosing M = 3 or 4 is sufficient. But we can increase the cutoff M to get whatever numerical precision we desire.

  2. We plug the resulting σin into Eq. (50), and this in turn defines the Embedded Image that appear in Eq. (49).

  3. We solve the recursive system of equations in Eq. (49) for Embedded Image, and Embedded Image, and use these to find E from Eq. (48).

Though complex in appearance, the procedure is easy to implement as a numerical algorithm, and with any finite cutoff M is guaranteed to yield a value for E. As we increase M we generally quickly converge to the exact E for the system.
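Step 1 of this procedure, and the rapid convergence of the truncated expansion, can be illustrated numerically. The sketch below projects an illustrative Hill-type production function (not one from the paper) onto the first few standard Charlier polynomials Cn(x; a), a textbook variant of the paper's polynomials, with ⟨Cn²⟩ = n!/a^n under a Poisson(a) weight. Because each partial sum is an orthogonal projection, the Poisson-weighted mean-squared truncation error is non-increasing in the cutoff M.

```python
import math

def charlier(n, x, a):
    # Standard Charlier polynomial C_n(x; a) via its three-term recurrence
    if n == 0:
        return 1.0
    c_prev, c = 1.0, 1.0 - x / a
    for k in range(1, n):
        c_prev, c = c, ((a + k - x) * c - k * c_prev) / a
    return c

def poisson_avg(f, a, xmax=120):
    # Poisson(a)-weighted average of f, via a truncated sum
    p, total = math.exp(-a), 0.0
    for x in range(xmax):
        total += p * f(x)
        p *= a / (x + 1)
    return total

a = 10.0                                          # mean of the upstream Poisson input
R = lambda x: 100.0 * x**2 / (10.0**2 + x**2)     # illustrative Hill-type production

# Projection coefficients c_n = <R C_n> / <C_n^2>, with <C_n^2> = n!/a**n
M_max = 4
c = [poisson_avg(lambda x: R(x) * charlier(n, x, a), a) * a**n / math.factorial(n)
     for n in range(M_max + 1)]

def err(M):
    # Poisson-weighted mean-squared error of the order-M truncation
    S = lambda x: sum(c[n] * charlier(n, x, a) for n in range(M + 1))
    return poisson_avg(lambda x: (R(x) - S(x))**2, a)

errs = [err(M) for M in range(M_max + 1)]
# Orthogonal projection: the truncation error cannot increase with M
assert all(errs[m + 1] <= errs[m] + 1e-6 for m in range(M_max))
assert errs[M_max] < errs[0]
```

In this toy setting M = 3 or 4 already captures R almost completely over the region where the Poisson weight has appreciable mass, mirroring the rapid convergence noted in step 1.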

In some cases, the entire procedure can be carried out analytically to give exact closed form expressions for E. When N = 1, we recover the result in Eq. (29), as expected. Another example is the N = 2 system where the first level production function R1 (x0) is arbitrary, but the second level function R2 (x1) is linear (and hence σ2n = 0 for n ≥ 2). Here E is given by: Embedded Image

Just as in Eq. (29), any nonlinear contributions to R1(x0) always increase E, since the coefficients σ1n for n ≥ 2 only appear in the brackets in Eq. (53) as Embedded Image multiplying positive factors. In this scenario, E ≥ EWK always, where EWK is given by Eq. (13).

The simplest case where we are able to observe a violation of the EWK limit is for N = 2 when R1(x0) is linear and R2(x1) is quadratic: σ1n = 0 for n ≥ 2 and σ2m = 0 for m ≥ 3. The resulting analytical expression for E is complicated, but we can investigate its optimal behavior numerically. In Fig. 4 we conducted a numerical minimization of E with respect to γ2 and the quadratic coefficient σ22 for various combinations of r ≡ γ1/γ0 and Λ1, keeping Λ2 fixed. If we denote this minimum value Emin, Fig. 4 shows log10|Emin/EWK − 1|, with the cool colored contours indicating Emin > EWK and warm colored contours indicating Emin < EWK. In the purely linear case described earlier, we found that E = EWK when the conditions from Eq. (17) are satisfied, which corresponds to Embedded Image, shown as a dashed white curve in the figure. With the addition of the quadratic term in R2(x1), the region near that curve now supports solutions that beat the WK limit (the warm colored band in Fig. 4). However, the improvement relative to the WK bound is exceedingly small, roughly ~ 0.001 – 0.01% better.

Figure 4.

Contour plot of log10|Emin/EWK − 1| for the N = 2 cascade with linear R1(x0) and quadratic R2(x1). The minimum value of the error Emin at a given r = γ1/γ0 and Λ1 is found by numerical minimization with respect to γ2 and σ22, with fixed Λ2 = 5. Cool colors denote regions where Emin > EWK and warm colors where Emin < EWK. The dashed white curve corresponds to Embedded Image.

To understand the small size of the improvement, let us look at a subset of the parameter space that is analytically tractable. Set the linear portions of the production functions to be directly proportional to the upstream population, R1(x0) = σ11x0 and Embedded ImageEmbedded Image, which means Embedded Image for n = 1, 2. Furthermore, imagine that the conditions of Eq. (17) are fulfilled for γ1 and γ2, which means E = EWK when the quadratic perturbation σ22 = 0. In this case Λ1 = r2 – 1, with r ≡ γ1/γ0 > 1, and Λ2 = ρr, with ρ ≡ σ20/σ10. EWK from Eq. (18) can then be written as: Embedded Image

Let us focus on the regime where signaling is at least as effective as in many experimentally measured cascades18–25, which means Imax ≳ 1 bit or equivalently EWK ≲ 1/4. This generally requires r ≫ 1 and ρ ≫ 1. In this limit, the complicated full expression for E simplifies, and we can expand the difference E – EWK to second order in the perturbation parameter σ22, Embedded Image

There is a minimum E = Emin at Embedded Image, with Embedded Image

Though we can violate the EWK bound, the size of the violation becomes small for r ≫ 1. From Eq. (54) we know that EWK ~ 2/r for ρ, r ≫ 1, and hence the relative magnitude Embedded Image. The negligible scale of the improvement over EWK is consistent with the numerical results of Fig. 4, though the latter was calculated over a broader portion of the parameter space.

3.2 Revisiting nonlinearity in the N = 1 model with feedback

The violation of the EWK bound in the no-feedback case raises the question of whether similar results are possible in the presence of feedback. We return to the N = 1 system used for the TetR model above, but with several simplifications: i) we do not include the additional nonlinear degradation term Γ(x1); ii) rather than a Hill function for R0(x1), we use Eq. (3), with a quadratic form for the feedback function ϕ(x1), Embedded Image

Depending on the values of ϕ1 and ϕ2, there could be a range of x1 where R0(x1) in Eq. (3) becomes negative, which is unphysical. In our numerical calculations, we thus always use max(R0 (x1), 0) as the feedback function.

However for the parameters we explored, the range of x1 where the sign switch in R0(x1) occurs is far outside the typical range of stationary state x1 fluctuations, so the precise details of the cutoff have a negligible influence on the results. A final important difference from the TetR model is that we will also investigate the regime of smaller Λ1 (the numerics in the earlier study were confined to Λ1 ≥ 2). Based on the intuition from the no-feedback case, we guess that any violation of the EWK bound might become very small for large Λ1, and hence difficult to detect numerically.

Though Eq. (57) has a simple form that is convenient for parameter exploration, it has one feature that makes it somewhat unrealistic from a biological perspective. For ϕ1 > 0, ϕ2 < 0 (the case that will be of interest to us below) the slope dϕ(x1)/dx1 becomes positive for Embedded Image, corresponding to positive feedback for smaller x1 populations. Since we would like to concentrate on systems with negative feedback, we also define an alternative feedback function Embedded Image that avoids this issue by being constant for Embedded Image and monotonically decreasing for Embedded Image: Embedded Image

As we will see below, it turns out that both ϕ(x1) and Embedded Image give qualitatively similar results.

The Poisson-Charlier expansion approach of the previous example can also be applied to a general N level feedback system, yielding a set of coupled linear equations for the coefficients Embedded Image analogous to Eq. (49). However, because of the feedback interaction between xN and x0, these equations are no longer particularly useful: lower order coefficients depend on higher order ones in an infinite hierarchy of equations that has no closure for any nonlinear ϕ(x1). We thus turn to an alternative approach: solving the master equation, Eq. (5), for the 2D stationary probability Embedded Image, where x = (x0, x1). Since x0 and x1 can be any non-negative integer, Eq. (5) is an infinite linear system of equations. To make it amenable to a fast numerical solution, we truncate the range of allowable (x0, x1) to be within six standard deviations of Embedded Image and Embedded Image. We estimate the standard deviations from the linear case (ϕ2 = 0), where closed form expressions are available in terms of the system parameters. The actual standard deviations in the presence of nonzero ϕ2 for the parameter range we considered were not perturbed significantly, so this estimation procedure worked well. Similarly, Embedded Image and Embedded Image were good estimates for the actual ⟨x0⟩ and ⟨x1⟩, because the mean of the distribution shifts only a small amount with ϕ2. The window established by this procedure had a typical width of around ~ 100 for x1 and ~ 700 for x0 for parameters in the range described below. In Eq. (5), all Embedded Image outside the allowable range of x were set to zero. This means that Eq. (5) becomes a finite system of linear equations that can be solved efficiently using sparse matrix methods. Once the stationary distribution is known numerically, one can then easily calculate the error ϵ from Eq. (20) by finding the marginal distribution of x0 and calculating its first and second moments.
We checked for convergence and boundary effects by redoing the solution using window widths that were different than six standard deviations, and verified that the results were unchanged up to the desired precision (< 10−4 for the calculation of ϵ). For select parameter sets, we also validated the moments of the stationary distribution against kinetic Monte Carlo simulations47, though for the latter achieving high precision is difficult because of the computational time required.
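The truncated-master-equation solve can be sketched for a generic two-species negative feedback loop. The rates below are illustrative stand-ins (a clipped linear feedback R0(x1) = max(F − kx1, 0) rather than the quadratic ϕ of Eq. (57), and a small fixed window rather than the six-standard-deviation rule), so this is a minimal sketch of the method, not the paper's exact calculation.

```python
import numpy as np
from scipy.sparse import lil_matrix, csc_matrix
from scipy.sparse.linalg import spsolve

# Illustrative rates (not the paper's parameter set)
F, k = 20.0, 0.5      # input production and linear feedback strength
s = 1.0               # per-capita output production
g0, g1 = 1.0, 1.0     # per-capita deactivation rates
N0, N1 = 61, 61       # truncation window: x0, x1 in 0..60

def idx(x0, x1):
    return x0 * N1 + x1

n = N0 * N1
A = lil_matrix((n, n))
for x0 in range(N0):
    for x1 in range(N1):
        i = idx(x0, x1)
        moves = []
        if x0 + 1 < N0:  # input birth, feedback-repressed rate max(F - k*x1, 0)
            moves.append((idx(x0 + 1, x1), max(F - k * x1, 0.0)))
        if x0 > 0:       # input death
            moves.append((idx(x0 - 1, x1), g0 * x0))
        if x1 + 1 < N1:  # output birth
            moves.append((idx(x0, x1 + 1), s * x0))
        if x1 > 0:       # output death
            moves.append((idx(x0, x1 - 1), g1 * x1))
        for j, r in moves:
            A[j, i] += r   # inflow into state j
            A[i, i] -= r   # outflow from state i

# Stationary distribution: solve A p = 0 with the normalization sum(p) = 1
A[0, :] = np.ones(n)       # replace one redundant balance equation
b = np.zeros(n); b[0] = 1.0
p = spsolve(csc_matrix(A), b)
assert abs(p.sum() - 1.0) < 1e-8

# Moments of the output marginal
x1_vals = np.tile(np.arange(N1), N0)
m1 = (p * x1_vals).sum()
fano1 = ((p * x1_vals**2).sum() - m1**2) / m1
print(f"mean x1 = {m1:.2f}, Fano factor = {fano1:.3f}")
```

The same machinery carries over to nonlinear feedback: only the birth-rate function inside the assembly loop changes.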

We used the following parameter values (all in units of s−1): γ0 = 2, γ1 = 200, σ10 = 8000, σ11 = 2. The value of F was varied to allow for a range of possible Embedded Image. The value of ϕ1 was set to the optimality condition from Eq. (21), Embedded Image where we have used the fact that Λeff = Λ1 for N = 1. This guarantees that in the linear feedback case of ϕ2 = 0, the system should be close to the WK limit (up to correction factors due to finite γ1, since technically the WK limit is only approached in the feedback case when γ1 → ∞). Fig. 5A shows numerical results for the Fano factor ϵ as a function of ϕ2 for different values of Λ1 between 0.25 and 1. In all cases for linear feedback (ϕ2 = 0), we see that ϵ > EWK, where EWK is given by Eq. (23). The fact that ϵ lies above EWK for the linear system reflects the finiteness of γ1. For the case ϕ2 > 0 (not shown in the graphs), the error increases, while for ϕ2 < 0 we see that the error decreases, until it dips below the EWK line before increasing again. The choice of feedback function, ϕ(x1) or Embedded Image, does not make a significant difference. Interestingly, the violation of the WK bound is quite small, just as in the no-feedback case, as we can see more clearly in Fig. 5B, where the ratio ϵ/EWK is plotted, in this case using the Embedded Image function. The largest dip we observed is still only about 1.5% below EWK. Moreover, in order to see any violation at all we had to look at small Λ1 ≤ 1. In this regime, EWK is quite large, just below the Poissonian Fano factor value of 1. Hence the fluctuations are only slightly reduced by the feedback. Once Λ1 becomes larger, in the more biologically relevant regime where negative feedback is effective at suppressing fluctuations, we found it impossible to observe any violations of EWK. This could possibly explain the lack of any evidence of violations in the earlier TetR study28 (see Fig. 3), where only Λ1 ≥ 2 was considered.
Though the figures show results for only one set of parameter values, other sets we tried produced qualitatively similar results: the nonlinear case beat the WK limit for small Λ1, but it was always by a small amount.

Figure 5.

A) Numerically calculated Fano factor ϵ in the N = 1 nonlinear feedback system. The plots show ϵ versus ϕ2 for the quadratic feedback function ϕ(x1) (blue) from Eq. (57) and the monotonic alternative Embedded Image (orange) from Eq. (58), using the parameters described in the text. The WK bound EWK is shown as a dashed red line. The subgraphs depict cases with four different values of Λ1 between 0.25 and 1. B) The Fano factor results from panel A, using the feedback function Embedded Image, but normalized with respect to EWK. The dashed red line is ϵ/EWK = 1.

4 Conclusions

Using a combination of analytical and numerical approaches, we have been able to show that the Wiener-Kolmogorov optimal error EWK is not a universal lower bound for biological signaling cascades, both with and without feedback. However, far from undermining the usefulness of the WK theory, our results actually strengthen its practical value as a general purpose approximation to estimate performance limits in signaling systems. In some cases, for example the N = 1 or N = 2 no-feedback systems with nonlinear production in the first level, the EWK bound continues to hold rigorously despite nonlinearity. And in all cases where the bound is broken, the extent of the violation is negligible and decreases or vanishes in the regime where the system is effective at its respective task (either propagating the upstream signal with high fidelity or suppressing fluctuations). Further study is needed to see if the performance gain beyond the EWK bound can be made substantial, for example by combining the effects of nonlinearity from multiple levels in the cascade. However, additional nonlinearity is not necessarily beneficial: in Eqs. (29) and (53), and as depicted in Fig. 2, each higher order nonlinear contribution pushes us further away from the EWK limit.

Thus for practical purposes, the WK approach remains an excellent way of deriving biological bounds that remain meaningful even when the underlying assumptions of the theory (like linearity) no longer strictly hold. Equally importantly, the theory allows one to ascertain under what conditions one can actually achieve this kind of optimality. In all the signaling systems investigated so far, EWK is either directly attainable or can be asymptotically approached by tuning parameters. This is in contrast to a rigorous bound like ELVP from Eq. (34), which holds for arbitrarily complex feedback mechanisms in a system with linear production. However, it has overestimated the optimal capabilities of all the feedback networks we have investigated: none of our systems ever gets close to ELVP. A recent example of the versatility of the WK theory is the study of kinase-phosphatase signaling networks in Ref. 15. A simple analytical WK bound, derived from a linearized N = 1 network, explains a previously unknown optimal relationship between signal fidelity, bandwidth, and minimum ATP consumption. It holds across a vast biological parameter space deduced from bioinformatic databases, and remains valid even when all the microscopic, nonlinear reaction details of the system are taken into account. The robustness of the WK bound, highlighted in the results of the current study, helps us understand the theory’s success in such contexts.

Beyond future applications of WK theory to other specific systems, and possible experimental validation, there is still work to be done in developing the analytical techniques (like the Poisson-Charlier expansion) which we used for the no-feedback cascade. Exact results in nonlinear systems are relatively rare and hence valuable in themselves, and also as benchmarks for a variety of simpler approximations like the WK theory. The expansion method we described is currently limited to cases where the recursive system of equations closes, which excludes systems with feedback (and, more generally, biochemical networks with loops). Carefully tailored moment closure approaches48 might provide a way forward, and broaden the applicability of the method to systems with different types of feedback and other more complex network motifs.

Supplementary Information

1 Deriving the WK optimal filter results for the multi-level cascade without feedback

1.1 Mapping the system onto a noise filter

The starting point for the derivation is the system of equations in main text Eq. (9), with ϕ1 = 0 in the absence of feedback: Embedded Image where the Gaussian noise functions satisfy Embedded Image). Taking the Fourier transform of Eq. (S1), we can solve the system of equations for the fluctuation functions δxi (ω) in Fourier space, Embedded Image with f(ω) denoting the Fourier transform of a function f(t). Iteratively plugging the result for δxj-1(ω) into the δxj(ω) equation, starting from j = 1, we can solve Eq. (S2) to get the following expressions for the Fourier space input and output fluctuations: Embedded Image

Let us compare the result for δxN(ω) to the Fourier transform of main text Eq. (10), the noise filter convolution integral: Embedded Image

We can make a mapping of the system to a linear noise filter with the following choice of estimate, signal, noise, and filter function: Embedded Image

1.2 Concise overview of WK optimal filter theory

To apply WK theory to our problem, let us summarize its main results (see Ref. 1 for a more detailed review). Given Fourier-transformed signal and noise functions s(ω) and n(ω), let us denote the corresponding power spectra Ps(ω) and Pn(ω). The spectra are defined through the relation ⟨f(ω)f(ω′)⟩ = 2πPf(ω)δ(ω + ω′), where f = s or n. For the signal corrupted by noise, y(ω) ≡ s(ω) + n(ω), the corresponding power spectrum is Py(ω) = Ps(ω) + Pn(ω) if the noise is uncorrelated with the signal. This is indeed the case, since the Gaussian noise functions nj(ω) in Eq. (S5) that contribute to n(ω) are uncorrelated with n0(ω), the function that enters into the signal δx0(ω) in Eq. (S3).

Once Ps(ω) and Pn(ω) are specified, one can find a corresponding optimal filter function HWK(ω). Optimality here means that the time-domain function HWK(t), plugged into the convolution integral of main text Eq. (10), minimizes the error Embedded Image between the estimate and signal defined in main text Eq. (11). In Fourier space the optimal filter takes the following form if signal and noise are uncorrelated2: Embedded Image

The + superscripts and subscripts denote two types of causal decompositions. For example, the function Embedded Image is defined via Embedded Image, where the factor Embedded Image is chosen such that it has no zeros or poles in the upper half-plane. This decomposition always exists for all the physical power spectra we encounter in signaling contexts. The other decomposition, denoted by {G(ω)}+ for a function G(ω), can be calculated from Embedded Image. Here Embedded Image indicates the Fourier transform of a function f(t), Embedded Image the inverse Fourier transform, and Θ(t) is a unit step function3. In practice, it is often convenient to calculate it through an alternative method: doing a partial fraction expansion of G(ω) and keeping only those terms with no poles in the upper half-plane.
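The alternative method mentioned at the end, keeping only the partial fraction terms with no poles in the upper half-plane, can be sketched with a toy rational function. The G below is arbitrary, chosen only to have one simple pole in each half-plane; for simple poles, each partial fraction term is just the residue divided by (ω − pole).

```python
import sympy as sp

w = sp.symbols('omega')
# Toy example (not from the paper): one pole at omega = +i (upper half-plane,
# anticausal) and one at omega = -2i (lower half-plane, causal)
G = 1 / ((w - sp.I) * (w + 2 * sp.I))

# Partial fractions via residues at the simple poles; {G}_+ keeps only
# the terms whose pole lies in the lower half-plane
poles = sp.roots(sp.denom(G), w)
causal = sum(sp.residue(G, w, p0) / (w - p0)
             for p0 in poles if sp.im(p0) < 0)

print(causal)   # the causal part {G}_+, here equal to I/(3*(omega + 2*I))
```

The inverse Fourier transform of the kept term is supported only at positive times, which is what the step function Θ(t) in the formal definition enforces.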

To find the lower bound on ϵ, we inverse Fourier transform HWK(ω) back to the time domain. The minimum error EWK can then be expressed compactly in the following form, which is convenient for calculations: Embedded Image where Embedded Image is the signal autocorrelation function, given by the inverse Fourier transform of its power spectrum.

1.3 Calculating the optimal filter function HWK

Given Eqs. (S3), (S6), and the properties of the Gaussian noise functions nj(t), which in Fourier space satisfy Embedded Image, the power spectra for the signal and noise can be written as: Embedded Image Embedded Image

Here we have used the facts that Embedded Image for i > 0, and have introduced the dimensionless constants Embedded Image. Summing Ps(ω) and Pn(ω), we can write Py(ω) in the form: Embedded Image where B(λ) is the polynomial from main text Eq. (14), Embedded Image

As discussed in the main text, this polynomial will always have N roots λj, j = 1, … , N, where Re(λj) > 0. (The other N roots of the polynomial are just −λj.) Thus we can factor B(iω) in the following way: Embedded Image

Since ω = −iλj for j = 1, … , N are all the zeros of B(iω) in the complex lower half plane, this enables us to write down the decomposition Embedded Image where Embedded Image Embedded Image and Embedded Image Continuing with the calculation of HWK(ω), we see that: Embedded Image

The quantity Embedded Image is computed by taking the causal part of the partial fraction decomposition of Eq. (S14). Because the only causal pole (pole in the lower half plane) of Eq. (S14) is −iγ0, all other terms in the decomposition are dropped, yielding: Embedded Image where Embedded Image. Finally, we can divide this result by Embedded Image, following Eq. (S6), giving us the optimal filter: Embedded Image

Plugging in the definitions of C and K, we can rewrite the prefactor to get the final form for the optimal filter function: Embedded Image
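The factorization step above hinges on the roots λj with Re(λj) > 0, which in practice can be pulled straight from the coefficients of B(λ). A toy even polynomial stands in for main text Eq. (14), whose exact form is not reproduced here.

```python
import numpy as np

# Toy stand-in for B(lambda): an even polynomial whose roots come in
# +/- pairs, B(l) = (l**2 - 1)*(l**2 - 4) = l**4 - 5*l**2 + 4
coeffs = [1, 0, -5, 0, 4]     # coefficients, highest degree first
roots = np.roots(coeffs)      # numerically: +/-1, +/-2

# Keep the N roots with positive real part; these are the lambda_j
# used to factor B(i*omega)
lam = sorted(r.real for r in roots if r.real > 0)
print(lam)   # approximately [1.0, 2.0]
```

For a general B(λ) the selected roots may be complex, in which case one keeps the full complex values rather than their real parts; the real-root version shown is just the simplest illustration.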

1.4 Calculating the optimal error EWK

To calculate EWK from Eq. (S7), we first take the inverse Fourier transform of HWK(ω) from Eq. (S17), which gives a sum of exponentials in the time domain, Embedded Image

Using the fact that Embedded Image, we can evaluate the integral in Eq. (S7) to find Embedded Image

Reversing the partial fraction decomposition, Embedded Image with y = γ0, the error reduces to the value in main text Eq. (13): Embedded Image

1.5 Conditions under which the system can achieve WK optimality

In order for the system to attain ϵ = EWK, the parameters must be tuned such that H(ω) ∝ HWK(ω), where H(ω) and HWK(ω) are given by Eqs. (S5) and (S17) respectively. Comparing the two functions, we see that they are proportional to one another when λj = γj for all j = 1, … , N. Satisfying this condition requires a certain relationship between the per-capita deactivation rates γj and the Λj parameters.

To see this, let us first denote BN(λ) as the polynomial from Eq. (S10) for a particular value of N. The explicit forms of the polynomials for the first few values of N are as follows: Embedded Image

Consider the N = 1 system. There is one root λ1 with a positive real part, and we set it to λ1 = γ1 to satisfy the condition. This requires that B1 (γ1) = 0, which occurs when Embedded Image. Interestingly, this same value of γ1 will also be a root for all higher polynomials N > 1. Because the additional terms in the higher polynomials all contain a Embedded Image factor, we see that BN (γ1) = B1 (γ1) = 0 for N > 1.

Thus B2(λ) has one root, Embedded Image, that we have already found, and a new root λ2 = γ2 whose value we need to determine. This pattern repeats iteratively at every higher value of N: the first N − 1 roots λj = γj, j = 1, … , N − 1, are the same as the roots of BN−1(λ), and there is one new root λN = γN. This follows from the structure of the BN(λ) polynomials, where Embedded Image

We can find all the higher roots by induction. Let us assume that we have already found the values of λj = γj for j = 1, … , N − 1 and are interested in finding λN = γN. The known roots allow us to completely factor BN−1(λ), and from the definition of the polynomials in Eq. (S10) that factorization must take the form: Embedded Image

Note that we know the overall prefactor in the factorization above from the prefactor of the highest power λ2(N−1) in the definition of BN−1 (λ). Turning to BN (λ), we can write this polynomial as BN−1 (λ) plus an added term, Embedded Image

Comparing Eq. (S25) to Eq. (S24), we see that Embedded Image

Setting the factor in the brackets to zero allows us to find the new root λN = γN in terms of the previous root γN−1, Embedded Image

Starting from the known value of Embedded Image, we can iteratively use Eq. (S27) to find all the higher roots. The solutions are the nested radical forms shown in main text Eq. (17), Embedded Image

When these conditions are satisfied, the expression for EWK simplifies to the form in main text Eq. (18), Embedded Image where li = γi−1/Λi/γ0.

2 Deriving the WK optimal filter results for the multi-level cascade with feedback

2.1 Mapping the system onto a noise filter, finding the WK filter function and bound

The feedback derivation starts with main text Eq. (9), but with the ϕ1 term present: Embedded Image

The noise filter mapping is qualitatively different from the no feedback case, taking the form of main text Eq. (19), Embedded Image

We know the δx0 (t)| ϕ=0 solution in Fourier space already, having calculated it in Eq. (S3), Embedded Image

We can manipulate the Fourier space counterpart of Eq. (S30) to relate Embedded Image to s(ω) through a noise filter equation, Embedded Image where Embedded Image

Comparing to Eq. (S5), we see that s(ω) and n(ω) in this mapping are exactly the same as in the no feedback case. Hence Ps(ω) and Pn(ω) are the same, which means the calculation of HWK and EWK is unchanged. The result for EWK in Eq. (S21) serves as a lower bound for the error ϵ.

2.2 Conditions under which the system can achieve WK optimality

Comparing H(ω) from Eq. (S34) and HWK(ω) from Eq. (S17), one sees that achieving H(ω) = HWK(ω), and hence ϵ = EWK, is non-trivial. However, there is one scenario where the condition can be approximately fulfilled. We will show that in a certain limit the N-level feedback system effectively behaves like an N = 1 level system with an effective Λ1 parameter. Note that the N = 1 version of Pn(ω) from Eq. (S8b) takes the form: Embedded Image

Let us now consider an N-level system where γj ≫ γ0 for j > 0. The main frequency scale in the system is set by the input signal, which has characteristic frequency γ0, so typical frequencies ω that are relevant to the system behavior all share the property that ω ≪ γj for j > 0. If we use this simplification in Eq. (S8b), the noise power spectrum can be approximated as: Embedded Image

Comparing Eq. (S35) to Eq. (S36), we note that the multi-stage noise power spectrum is approximately the same form as for an N = 1 system, except with Λ1 replaced by an effective parameter Λeff given by: Embedded Image

For the special case where the production functions are linear, Rj(xj−1) = σj1 xj−1, and hence Embedded Image for j > 0, the expression for Λeff simplifies to the result shown in main text Eq. (22): Embedded Image

The corresponding N = 1 optimal filter HWK(ω) from Eq. (S17), with Λeff instead of Λ1, can be expressed as: Embedded Image

Here we have used the fact that Embedded Image is the root for B1 (λ) from Eq. (S22), and substituted in Λeff.

Let us now write H(ω) from Eq. (S34) using the approximation ω ≪ γj for j > 0, Embedded Image

We can thus approximately have H(ω) ≈ HWK(ω) from Eq. (S39) when the feedback strength is tuned to the value from main text Eq. (21), Embedded Image which then ensures that ϵ ≈ EWK, with the latter having the N = 1 form, Embedded Image

3 Exact error calculation in the nonlinear cascade without feedback

This section fills in the details of the calculation that transforms main text Eq. (37), a relation for the generating function Embedded Image and its derivatives Embedded Image, into the recursion relation of main text Eq. (49). The ultimate goal is to use the recursion relation to find the coefficients Embedded Image in order to evaluate the exact error E given by main text Eq. (48): Embedded Image

Recall the expansions defined in the main text for all the quantities of interest: Embedded Image where Embedded Image

Here we use the multi-dimensional versions of the Poisson distributions and Poisson-Charlier polynomials, Embedded Image

More details on the Poisson-Charlier polynomials can be found in the next section of the SI, which provides a brief guide to their most useful properties.

Since we know the production functions Ri (xi−1) for our system of interest, we can easily find the coefficients σin in Eq. (S44), using main text Eq. (42). To derive the coefficients Embedded Image we start with the relation in main text Eq. (37): Embedded Image

Using Eq. (S45) and the fact that Poisson distributions satisfy Embedded Image, we can rewrite Eq. (S47) in terms of the Embedded Image functions: Embedded Image

Let us introduce one more expansion, for products of the Ri (xi−1) and Embedded Image functions, Embedded Image

Because Ri(xi−1) and Embedded Image have their own individual expansions in terms of the Poisson-Charlier polynomials, defined by Eq. (S44), the coefficients Embedded Image are entirely determined by the coefficients σin and Embedded Image of the individual expansions. This relation, a property of the Poisson-Charlier polynomials, is explained in more detail in SI Sec. 4.5. It takes the form: Embedded Image where Embedded Image are polynomials defined in Eqs. (S66)–(S67).

Let us define Embedded Image as the average of a function Embedded Image with respect to Embedded Image. Using the recursion relationships for Poisson-Charlier polynomials shown in Eq. (S64), one can prove the following useful identities: Embedded Image where Embedded Image. By multiplying Eq. (S48) by Embedded Image and summing over Embedded Image, we can use the above averages to obtain the following relation: Embedded Image

We can rearrange this to obtain the recursion relation in main text Eq. (49), Embedded Image

This relation, together with Embedded Image which we know from the normalization property Embedded Image, is sufficient for us to calculate any coefficient Embedded Image of interest.

4 Properties of the Poisson-Charlier polynomials

4.1 Definition of the polynomials

In this section, we summarize some properties of the polynomials Embedded Image used in our analytical expansion approach for calculating moments of master equations. These are variants of Poisson-Charlier (PC) polynomials4,5, Embedded Image, related by a trivial factor to the standard PC definition: Embedded Image

The nth function Embedded Image is a polynomial in x of degree n, depending on the parameter Embedded Image. It is defined as follows: Embedded Image

Here (x)k ≡ x(x − 1) ⋯ (x − k + 1) is the kth falling factorial of x, with (x)0 ≡ 1. The first few polynomials are given by: Embedded Image

These Embedded Image appear in a variety of master equation expansion approaches, for example the spectral method of Refs. 6, 7. In fact, Embedded Image, where ⟨n|x⟩ is the mixed product defined in Eq. A8 of Ref. 6 (with Embedded Image substituted for the rate parameter g).

4.2 Orthogonality with respect to the Poisson distribution

One of the convenient properties of these polynomials is that they have simple averages with respect to the Poisson distribution, Embedded Image where x is a non-negative integer, and Embedded Image is the parameter that defines the mean of the distribution, so that Embedded Image. Let us denote the average of a function f(x) with respect to the Poisson distribution Embedded Image in the following way: Embedded Image

Then the polynomials of Eq. (S55) satisfy the following orthogonality relationship8,9: Embedded Image

Since Embedded Image, a special case of Eq. (S59) when n′ = 0 gives an expression for the mean: Embedded Image
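The orthogonality relation can be verified numerically. Since the paper's exact prefactor is only stated to be a trivial factor away from the standard Charlier definition, the sketch below assumes one common falling-factorial variant, Ĉn(x, x̄) = Σk C(n,k) (−x̄)^(n−k) (x)k, for which the orthogonality normalization works out to n! x̄^n δnm; the paper's convention may differ by such a factor.

```python
import math

def falling(x, k):
    # k-th falling factorial (x)_k = x(x-1)...(x-k+1), with (x)_0 = 1
    out = 1
    for j in range(k):
        out *= x - j
    return out

def pc(n, x, xbar):
    # Assumed falling-factorial Poisson-Charlier variant (a trivial
    # prefactor away from the standard Charlier polynomials)
    return sum(math.comb(n, k) * (-xbar) ** (n - k) * falling(x, k)
               for k in range(n + 1))

def poisson_avg(f, xbar, xmax=80):
    # <f>_p = sum_x p(x) f(x), Poisson weights built iteratively
    p, total = math.exp(-xbar), 0.0
    for x in range(xmax):
        total += p * f(x)
        p *= xbar / (x + 1)
    return total

xbar = 3.0
# Gram matrix <pc_n pc_m>; diagonal should be n! * xbar**n,
# off-diagonal entries should vanish (for this normalization)
gram = [[poisson_avg(lambda x: pc(n, x, xbar) * pc(m, x, xbar), xbar)
         for m in range(4)] for n in range(4)]
```

With this variant Ĉ1(x) = x − x̄, so the n′ = 0 case reduces to ⟨x⟩ = x̄, as stated above.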

4.3 Using the polynomials as a basis for function expansions

The polynomials form a basis in which one can expand arbitrary functions of populations f(x), Embedded Image for some coefficients αn. To calculate the mth coefficient αm, we multiply both sides of Eq. (S61) by Embedded Image and take the average with respect to Embedded Image: Embedded Image where we have used the orthogonality relation Eq. (S59). Thus αm is given by: Embedded Image where we have plugged in the definition of Embedded Image from Eq. (S55). For the kinds of functions we ordinarily encounter in working with master equations, the coefficients αm decay rapidly with m, so in practice we can often form an excellent approximation by keeping just the first few (n ≤ 5) terms in the expansion of Eq. (S61)9.
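The projection formula for the αm can be illustrated with a test function whose expansion terminates exactly. The sketch below again assumes the falling-factorial variant with normalization ⟨Ĉn Ĉm⟩ = n! x̄^n δnm (a hypothetical convention, possibly a trivial factor off the paper's): for f(x) = x², only α0, α1, α2 are nonzero and the truncated series reconstructs f exactly.

```python
import math

def falling(x, k):
    out = 1
    for j in range(k):
        out *= x - j
    return out

def pc(n, x, xbar):
    # Assumed falling-factorial Poisson-Charlier variant
    return sum(math.comb(n, k) * (-xbar) ** (n - k) * falling(x, k)
               for k in range(n + 1))

def poisson_avg(f, xbar, xmax=80):
    p, total = math.exp(-xbar), 0.0
    for x in range(xmax):
        total += p * f(x)
        p *= xbar / (x + 1)
    return total

xbar = 3.0
f = lambda x: x ** 2  # degree-2 test function: expansion terminates at m = 2

# Projection: alpha_m = <f pc_m> / (m! xbar^m) under the assumed normalization
alpha = [poisson_avg(lambda x: f(x) * pc(m, x, xbar), xbar)
         / (math.factorial(m) * xbar ** m) for m in range(5)]

# Truncated series; exact here because alpha_m = 0 for m > 2
recon = lambda x: sum(a * pc(m, x, xbar) for m, a in enumerate(alpha))
```

For x̄ = 3 the nonzero coefficients are α0 = x̄² + x̄ = 12, α1 = 1 + 2x̄ = 7, α2 = 1, illustrating the rapid (here, exact) truncation described above.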

4.4 Recursion relationships

The polynomials satisfy the following recursion relationships, as can be easily verified from their definition in Eq. (S55): Embedded Image

4.5 Expanding the product of polynomials

The final property that comes in useful in calculations is that the product of two polynomials Embedded Image and Embedded Image can be itself expanded in a linear combination of polynomials in the following form: Embedded Image where the coefficients Embedded Image are polynomials in Embedded Image given by: Embedded Image

Here, the sum starts at the largest of the three values 0, n − k, and m − k, and ⌊z⌋ denotes the largest integer less than or equal to z. The quantity Embedded Image is defined as: Embedded Image

Thus for example if one had two functions f(x) and g(x) with individual expansions, Embedded Image then the product can be expanded as Embedded Image with coefficients given by Embedded Image
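The product-expansion property can be cross-checked without the closed-form coefficients of Eqs. (S66)-(S67), by projecting a product back onto the basis via orthogonality. The sketch uses the same assumed falling-factorial variant and normalization ⟨Ĉn Ĉm⟩ = n! x̄^n δnm as in the earlier snippets (hypothetical convention): for Ĉ1² = (x − x̄)², the projection yields Ĉ1² = x̄ Ĉ0 + Ĉ1 + Ĉ2.

```python
import math

def falling(x, k):
    out = 1
    for j in range(k):
        out *= x - j
    return out

def pc(n, x, xbar):
    # Assumed falling-factorial Poisson-Charlier variant
    return sum(math.comb(n, k) * (-xbar) ** (n - k) * falling(x, k)
               for k in range(n + 1))

def poisson_avg(f, xbar, xmax=80):
    p, total = math.exp(-xbar), 0.0
    for x in range(xmax):
        total += p * f(x)
        p *= xbar / (x + 1)
    return total

xbar = 3.0
prod = lambda x: pc(1, x, xbar) ** 2  # the product pc_1 * pc_1

# Project the product onto the basis: coef_k = <prod pc_k> / (k! xbar^k)
coef = [poisson_avg(lambda x: prod(x) * pc(k, x, xbar), xbar)
        / (math.factorial(k) * xbar ** k) for k in range(4)]
```

Any closed-form table of product coefficients must reproduce this projection, which makes the numerical check a useful sanity test of the expansion in Eq. (S65).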

Acknowledgements

This article was submitted as part of the Dave Thirumalai Festschrift. M.H. would like to acknowledge Dave’s absolutely formative role in his scientific career: first as a mentor and role model during his postdoctoral studies, and continuing to this day as a collaborator and friend. Discussions with Dave shaped his view of what it means to be a biophysical theorist (and so much else), and he happily sees that influence live on in his own scientific mentoring of students. The work described in the current article is a coda to a line of research that first began under Dave’s auspices in 2013, when we developed the Wiener-Kolmogorov approach for biological signaling systems. The open question raised by our first article on the topic—can the WK bound ever be beaten via nonlinearity?—is here answered in the affirmative. But instead of weakening the applicability of WK theory, the surprising insignificance of nonlinear enhancements actually strengthens the case for it.

References

  1. Hinczewski, M. & Thirumalai, D. Noise control in gene regulatory networks with negative feedback. J. Phys. Chem. B 120, 6166–6177 (2016).
  2. Bode, H. W. & Shannon, C. E. A simplified derivation of linear least square smoothing and prediction theory. Proc. Inst. Radio. Engin. 38, 417–425 (1950).
  3. Becker, N. B., Mugler, A. & ten Wolde, P. R. Optimal prediction by cellular signaling networks. Phys. Rev. Lett. 115, 258103 (2015).
  4. Özmen, N. & Erkuş-Duman, E. On the Poisson-Charlier polynomials. Serdica Math. J. 41, 457–470 (2015).
  5. Roman, S. The Umbral Calculus (Dover, 2005).
  6. Mugler, A., Walczak, A. M. & Wiggins, C. H. Spectral solutions to stochastic models of gene expression with bursts and regulation. Phys. Rev. E 80, 041921 (2009).
  7. Walczak, A. M., Mugler, A. & Wiggins, C. H. A stochastic spectral analysis of transcriptional regulatory cascades. Proc. Natl. Acad. Sci. USA 106, 6529–6534 (2009).
  8. Ogura, H. Orthogonal functionals of the Poisson process. IEEE Trans. Info. Theory 18, 473–481 (1972).
  9. Hinczewski, M. & Thirumalai, D. Cellular signaling networks function as generalized Wiener-Kolmogorov filters to suppress noise. Phys. Rev. X 4, 041017 (2014).
Posted July 16, 2021.