Likelihood-free nested sampling for biochemical reaction networks

Jan Mikelson, Mustafa Khammash

Department of Biosystems Science and Engineering, ETH Zurich, Switzerland

doi: https://doi.org/10.1101/564047

Posted February 28, 2019.
Abstract

The development of mechanistic models of biological systems is a central part of Systems Biology. One major challenge in developing these models is the accurate inference of the model parameters. In recent years, nested sampling methods have gained increasing attention in the Systems Biology community. Among the attractive features of these methods are that they are easily parallelizable and that a single run yields an estimate of the variance of the final Bayesian evidence estimate. Still, the applicability of these methods is limited, as they require the likelihood to be available and thus cannot be applied to stochastic systems with intractable likelihoods. In this paper, we present a likelihood-free nested sampling formulation that gives an unbiased estimator of the Bayesian evidence as well as samples from the posterior. Unlike most common nested sampling schemes, we propose to use the information about the samples from the final prior volume to aid in the approximation of the Bayesian evidence, and we show how this allows us to formulate a lower bound on the variance of the obtained estimator. We then use this lower bound to formulate a novel termination criterion for nested sampling approaches. We illustrate how our approach is applied to several realistically sized models with simulated data as well as to recently published biological data. The presented method provides a viable alternative to other likelihood-free inference schemes such as sequential Monte Carlo or approximate Bayesian computation methods. We also provide an intuitive and performant C++ implementation of our method.

1 Introduction

The accurate modelling and simulation of biological processes such as gene expression or signalling has gained considerable interest in recent years, resulting in a large body of literature addressing various types of models along with the means for their identification and simulation. The main purpose of these models is to qualitatively or quantitatively describe observed biological dynamics while giving insights into the underlying bio-molecular mechanisms.

One important aspect in the design of these models is the determination of the model parameters. Often there exists a mechanistic model of the cellular processes, but their parameters (e.g. reaction rates or initial molecule concentrations) are largely unknown. Since the same network topology may result in different behaviour depending on the chosen parameters [26], this presents a major challenge for modelling and underscores the need for effective parameter estimation techniques.

The models used in Systems Biology can be coarsely classified into two groups: deterministic and stochastic models. Deterministic models usually rely on ordinary differential equations which, given the parameters and initial conditions, describe the time evolution of the biological system in a deterministic manner. However, many cellular processes like gene expression are subject to random fluctuations [12, 36], which can have important biological functions [43, 49, 31] as well as contain useful information about the underlying molecular mechanisms [39]. The important role of stochastic fluctuations in biological systems has led to an increased interest in stochastic models and in methods for their parameter inference [3, 25, 32, 41, 42, 56]. Such stochastic models are usually described in the framework of stochastic chemical reaction networks, which can be simulated using Gillespie's Stochastic Simulation Algorithm (SSA) [17]. In recent years, the availability of single-cell trajectory data has drastically increased, providing detailed information about the (potentially stochastic) development of single cells through time.

Despite the increasing interest in stochastic systems, performing inference for them is still challenging and the available methods are computationally very demanding (see for instance [3, 20, 53]). Common algorithmic approaches for such cases include various kinds of sequential Monte Carlo (SMC) methods [9, 6], Markov chain Monte Carlo (MCMC) methods [19, 3, 45], approximate Bayesian computation (ABC) methods [54, 32, 28], iterative filtering [27] and nested sampling (NS) approaches [52, 29, 37, 15]. Furthermore, to reduce computational complexity, several of these inference methods rely on approximating the model dynamics (for instance using the diffusion approximation [18] or the linear noise approximation [11]). However, these approximations may not always be justifiable (in the case of low copy numbers of the reactants, for example) and might obscure crucial system behaviour. One particular problem common to most inference methods is the usually high dimensional parameter space. Most sampling-based inference techniques require the exploration of the full parameter space, which becomes harder as the dimension of the parameter space increases. In this paper, we focus on nested sampling methods and investigate their applicability to stochastic systems. Coming originally from the cosmology community, NS (introduced in [52]) has gained increasing popularity and has also found applications in Systems Biology (see for instance [1, 5, 10, 29, 46]). Several implementations of NS are available ([14, 22]), and in [29] the authors even provide an NS implementation specifically for a Systems Biology context. Even though the original purpose of NS was to efficiently compute the Bayesian evidence, it has increasingly become a viable alternative to MCMC methods for the approximation of the posterior (see for instance [16, 24]).

There are various reasons for the interest in NS, which are discussed in detail in [40, 33] and the references within. Among the rather appealing features of NS are that it performs well for multimodal distributions [16, 22], is easily parallelizable [23, 5] and provides a natural means to compute error bars on all of its results without needing multiple runs of the algorithm [51, 40]. For a comparison of MCMC and NS see for instance [46, 40]; for a discussion of other methods to compute the Bayesian evidence using MCMC see [40, 34]. Like standard MCMC methods, NS requires the availability of the likelihood l(θ), which limits its use to models that allow for the computation of the likelihood, such as deterministic models and simple stochastic models. In this paper, we consider an extension of the original NS framework that, similarly to the particle MCMC method [55] and particle SMC [2], allows approximated likelihoods to be used in place of the actual likelihood. In the following we introduce the notation and problem formulation; in section 2 we revisit the basic NS idea and outline some of its features. Section 3 is dedicated to the likelihood-free NS formulation, and in section 4 we demonstrate its performance on several chosen examples.

1.1 Chemical Reaction Networks

We consider an nx-dimensional Markov process X(t) depending on a d-dimensional parameter vector θ. We denote with Xi(t) the ith entry of the state vector at time t and with X(t) = (X1(t), …, Xnx(t)) the state vector at time t. We will write Xτ = X(tτ) when talking about the state vector at a time point tτ indexed with τ.

In the context of stochastic chemical reaction networks, this Markov process describes the abundances of nx species X1, …, Xnx reacting through nR reactions ℛ1, …, ℛnR, written as

ℛj: pj1 X1 + … + pjnx Xnx → qj1 X1 + … + qjnx Xnx,  j = 1, …, nR,

where pji is the number of molecules of species Xi involved in reaction ℛj, and qji is the number of molecules of species Xi produced by that reaction. The random variable Xi(t) corresponds to the number of molecules of species Xi at time t. Each reaction ℛj has an associated propensity. The reaction propensities at a given time t depend on the current state X(t) and on a d-dimensional parameter vector θ.
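Since this propensity formalism is exactly what Gillespie's SSA [17] simulates, a minimal sketch may help to fix ideas. The following is our own illustration (not the authors' C++ implementation) of the SSA for the birth-death model used later in section 3.1; all function and type names are ours.

```cpp
#include <cmath>
#include <random>
#include <vector>

struct Event { double t; int x; };

// One SSA trajectory of the birth-death model up to time t_end:
// production at rate k, degradation at rate gamma * x.
std::vector<Event> ssa_birth_death(double k, double gamma, int x0,
                                   double t_end, std::mt19937& rng) {
    std::uniform_real_distribution<double> U(0.0, 1.0);
    double t = 0.0;
    int x = x0;
    std::vector<Event> traj{{t, x}};
    while (true) {
        const double a_birth = k;               // propensity of production
        const double a_death = gamma * x;       // propensity of degradation
        const double a0 = a_birth + a_death;
        if (a0 <= 0.0) break;                   // no reaction can fire
        t += -std::log(U(rng)) / a0;            // exponential waiting time
        if (t >= t_end) break;
        x += (U(rng) * a0 < a_birth) ? 1 : -1;  // pick a reaction proportionally
        traj.push_back({t, x});
    }
    return traj;
}
```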

1.2 General Task

The process X(t) is usually not directly observable but can only be observed indirectly through an ny-dimensional observation vector Y(t) = (Y1(t), …, Yny(t)), which depends on the state Xτ and on the d-dimensional parameter vector θ ∈ Ω, where Ω ⊆ ℝd denotes the parameter space. We shall assume that the variable Y is not observed at all times but only at T time points t1, …, tT and only for M different trajectories. With y we denote the collection of observations at all time points. In the Bayesian approach the parameter vector θ is treated as a random variable with associated prior π(θ). The goal is not to find just one set of parameters, but rather to compute the posterior distribution 𝒫(θ|y) of θ

𝒫(θ|y) = l(y|θ)π(θ) / Z,

where l(y|θ) (we will also write l(θ) if the dependence on y is clear from the context) is the likelihood of θ for the particular observation y and Z is the Bayesian evidence

Z = ∫Ω l(y|θ)π(θ)dθ.   (1.1)

This has several advantages over a single point estimate, as it gives insight into the areas of the parameter space resulting in model behaviour similar to the observations as well as into their relevance for the simulation outcome (a wide posterior indicates non-identifiability, for example). For a detailed discussion of Bayesian approaches see for instance [34]. In this paper we follow the Bayesian approach and aim to recover the posterior 𝒫(θ|y). In the following section we briefly outline the basic nested sampling approach.

2 Nested Sampling

Nested sampling is a Bayesian inference technique that was originally introduced by John Skilling in [52] to compute the Bayesian evidence 1.1. NS can be viewed as an importance sampling technique (as for instance discussed in [47]), as it approximates the evidence by generating samples θi, weights wi and likelihoods li = l(θi) such that the weighted samples (θi, wi) can be used to obtain numerical approximations of integrals of a function f over the prior π

∫Ω f(θ)π(θ)dθ ≈ Σi=1,…,m wi f(θi).   (2.2)

To compute an approximation Ẑ of the Bayesian evidence 1.1, f is chosen to be the likelihood function l

Ẑ = Σi=1,…,m wi li.

The points θi are sampled from the prior distribution constrained to super level sets of the likelihood, corresponding to an increasing sequence of thresholds. In this sense NS can also be viewed as a sequential Monte Carlo method where the intermediate distributions are the nested super level sets of the likelihood. This way, samples from NS are concentrated around the higher regions of the likelihood. One can also use the weights li × wi instead of wi to approximate functions over the posterior 𝒫(θ|y)

∫Ω f(θ)𝒫(θ|y)dθ ≈ (1/Ẑ) Σi=1,…,m li wi f(θi).   (2.3)

2.1 NS algorithm

In the following we briefly outline the NS algorithm. First, a set ℒ0 of N “live” particles {θi}i=1,…,N is sampled from the prior π, θi ∼ π(θ), and their likelihoods li = l(θi) are computed. Then the particle θ1 with the lowest likelihood gets removed from the set of live particles and saved together with its likelihood ϵ1 = mini li in a set of “dead” particles 𝒟. A new particle θ* is then sampled from the prior under the constraint that its likelihood is higher than ϵ1

θ* ∼ π(θ | l(θ) > ϵ1).

This particle is combined with the remaining particles of ℒ0 to form a new set of live particles ℒ1 that are now distributed according to the constrained prior, which we denote as

π(θ | l(θ) > ϵ1).   (2.4)

This procedure is repeated until a predefined termination criterion is satisfied. The result is a sequence of dead points θi with corresponding likelihoods ϵi that are concentrated in the regions of high likelihood. The nested sampling procedure is shown in Algorithm 1.

Algorithm 1

Nested sampling algorithm

1: Given observations y and a prior π(θ) for θ.

2: Sample N particles θk from the prior π and save them in the set ℒ0, set 𝒟 = ∅

3: for i = 1, 2, …, m do

4: Set θi = arg min {l(θ)|θ ∈ ℒi-1} and ϵi = l(θi)

5: Add {θi, ϵi} to 𝒟

6: Set ℒi = ℒi-1\θi

7: Sample θ* ∼ π(θ | l(θ) > ϵi) and add it to ℒi

8: end for
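For concreteness, a compact sketch of Algorithm 1 follows. This is our own illustration, not the authors' implementation; the samplers `sample_prior` (drawing θ ∼ π) and `sample_constrained` (drawing θ ∼ π(θ|l(θ) > ϵ)) are assumed to be supplied by the user.

```cpp
#include <algorithm>
#include <functional>
#include <vector>

struct Particle { std::vector<double> theta; double logl; };

std::vector<Particle> nested_sampling(
    int N, int m,
    const std::function<Particle()>& sample_prior,
    const std::function<Particle(double)>& sample_constrained) {
    std::vector<Particle> live(N), dead;
    for (auto& p : live) p = sample_prior();          // line 2: initial live set
    for (int i = 0; i < m; ++i) {                     // line 3
        auto worst = std::min_element(live.begin(), live.end(),
            [](const Particle& a, const Particle& b) { return a.logl < b.logl; });
        dead.push_back(*worst);                       // lines 4-6: move to the dead set D
        *worst = sample_constrained(worst->logl);     // line 7: resample above threshold
    }
    return dead;                                      // the dead set D
}
```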

2.2 Approximating the Bayesian Evidence

Nested sampling exploits the fact that the Bayesian evidence 1.1 can also be written¹ (see [52]) as a one-dimensional integral

Z = ∫₀¹ L(x)dx

over the prior volume

x(ϵ) = ∫{θ: l(θ)>ϵ} π(θ)dθ,

where L(x) denotes the likelihood corresponding to the constrained prior with volume x, i.e. L(x(ϵ)) = ϵ. We have visualized these quantities on a simple example with a uniform prior on [0, 1] in Figure 1.

Figure 1:

Illustration of the nested sampling approximation with a uniform prior on [0, 1]. A: The integral over the parameter space ∫Ω l(θ)dθ. B: The transformed integral ∫₀¹ L(x)dx over the prior volume x.

The sampling scheme of nested sampling provides a sequence of likelihoods ϵ1 < ϵ2 < … < ϵm, but their corresponding prior volumes x(ϵi) are not known. However, since the ϵi are obtained by iteratively removing the lowest likelihood of N uniformly distributed points on the constrained prior π(θ|l(θ) > ϵi−1), the prior volume x(ϵi) can be written as

xi = t(i) xi−1,

where each t(i) is an independent sample of the random variable t, which is distributed as the largest of N uniform random variables on the interval [0, 1], and x0 = 1 (for further justification and discussion see [52, 15, 8] and the references within). The values t(i) are not known and need to be estimated. Since their distribution is known², they can be approximated by their means E[t] = N/(N + 1) (or by the means of their logs E[log t] = −1/N), and thus the ith prior volume can be approximated as

x̂i = (N/(N + 1))^i.

With these prior volumes one can compute the importance weights wi in equations 2.2 and 2.3 for each of the dead particles θi as

wi = x̂i−1 − x̂i.

These weights correct for the fact that the samples in 𝒟 are not drawn uniformly from the prior, but are concentrated in areas of high likelihood. We note that to integrate a function on the parameter space Ω over the prior π, as in equation 2.2, only these weights are needed. To approximate Z, NS uses these weights to integrate the likelihood function l(θ) over the prior

ẐD = Σi=1,…,m wi ϵi,   (2.8)

where m is the number of performed NS iterations and the subscript 𝒟 in ẐD emphasizes that for NS the evidence estimate is obtained using only the dead points in 𝒟. The justification for these weights as well as an in-depth discussion and error approximation can be found in [8, 24, 30] and the references therein. This basic idea of nested sampling has seen several modifications and improvements over the years, along with in-depth discussions of various sampling schemes for the constrained prior [14, 22], parallel formulations [22, 23, 5] and several implementations [14, 22, 29].
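The weight and evidence computations above reduce to a few lines. The sketch below is our own (with assumed names) and computes ẐD from the sequence of dead-point likelihoods ϵ1, …, ϵm using the deterministic volume approximation x̂i = (N/(N + 1))^i and the weights wi = x̂i−1 − x̂i.

```cpp
#include <vector>

// Evidence estimate Z_D from the dead-point likelihoods eps[0..m-1].
double evidence_from_dead_points(const std::vector<double>& eps, int N) {
    double Z = 0.0;
    double x_prev = 1.0;                            // x_0 = 1
    const double shrink = double(N) / (N + 1.0);    // E[t] for t ~ Beta(N, 1)
    for (double eps_i : eps) {
        const double x = x_prev * shrink;           // x_i = E[t] * x_{i-1}
        Z += (x_prev - x) * eps_i;                  // w_i * eps_i
        x_prev = x;
    }
    return Z;
}
```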

2.3 Termination of NS

Assuming that the distribution 2.4 can be efficiently sampled, each iteration of the NS scheme has the same computational complexity (the computationally most expensive step is usually to sample θ* ∼ π(θ|l(θ) > ϵi) and compute its likelihood). The NS algorithm is usually run until the remaining prior volume multiplied by the highest likelihood in this volume is smaller than a predefined fraction of the current BE estimate (see [52]); we write this quantity as

x̂m · max{l(θ) : θ ∈ ℒm}.

Some other termination criteria have been suggested (for instance in [22]), but since the prior volume decreases exponentially with the number of NS iterations and each iteration takes the same computational time, the choice of the particular termination criterion is not critical.
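Assuming the quantities above are tracked during the run, the termination test from [52] is a one-liner; the sketch below is ours, with `f` denoting the predefined fraction.

```cpp
// Stop when the largest possible remaining contribution, the remaining prior
// volume x_m times the highest live likelihood, falls below a fraction f of
// the current evidence estimate.
bool ns_should_terminate(double x_m, double l_max_live, double Z_current, double f) {
    return x_m * l_max_live < f * Z_current;
}
```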

2.4 Parallelization of NS

The parallelization of NS can be done in a very straightforward manner; still, several different parallelization schemes have been suggested in [22, 23, 5] (for a short overview see section S1). We use a parallelization scheme similar to the one presented in [23], where at each iteration not only the one particle with the lowest likelihood is resampled, but the r particles with the lowest likelihoods. The resulting parallel scheme is outlined in Algorithm 2. With r parallel particles the final approximation 2.8 changes to

ẐD = Σi=1,…,m Σj=1,…,r wi,j ϵi,j,

with wi,j = x̂i,j−1 − x̂i,j, x̂i,j = E[tj] x̂i−1,r and tj(i) being the ith sample of tj, which is the jth largest number among N uniform numbers between 0 and 1³ (with the obvious boundary conditions x̂0,r = 1 and x̂i,0 = x̂i−1,r). We note that this is slightly different from the parallelization scheme presented in [22, 23, 5]; for a brief discussion see S1.

Algorithm 2:

Parallel nested sampling algorithm. The samples drawn in line 11 are all independent and thus can be drawn in parallel

1: Given observations y and a prior π(θ) for θ.

2: Sample N particles θk from the prior π and save them in the set ℒ0, set 𝒟 = ∅

3: for i = 1, 2, …, m do

4: for j = 1, 2, …, r do

5: Set θi,j = arg min {l(θ)|θ ∈ ℒi-1} and ϵi,j = l(θi,j)

6: Add {θi,j, ϵi,j} to 𝒟

7: Remove θi,j from ℒi-1

8: end for

9: Set ℒi = ℒi-1

10: for j = 1, 2, …, r do

11: Sample θ* ∼ π(θ | l(θ) > ϵi,r) and add it to ℒi

12: end for

13: end for
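The volume bookkeeping for this parallel scheme can be sketched as follows (our own illustration; names are ours). It uses E[tj] = (N − j + 1)/(N + 1) for tj ∼ ℬ(N − j + 1, j) and returns the flattened sequence of expected volumes x̂i,j.

```cpp
#include <vector>

// Expected prior volumes x_{i,j} for m iterations with r removals each,
// using x_{i,j} = E[t_j] * x_{i-1,r} and E[t_j] = (N - j + 1) / (N + 1).
std::vector<double> parallel_prior_volumes(int m, int r, int N) {
    std::vector<double> x;
    double x_block = 1.0;                        // x_{0,r} = 1
    for (int i = 0; i < m; ++i) {
        for (int j = 1; j <= r; ++j)
            x.push_back(x_block * double(N - j + 1) / (N + 1.0));
        x_block = x.back();                      // next block shrinks from x_{i,r}
    }
    return x;
}
```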

3 Likelihood-free nested sampling (LF-NS)

In many cases (such as most of the above mentioned stochastic models) the likelihood l(θ) cannot be directly computed, making approaches like MCMC methods or nested sampling inapplicable. Fortunately, many variations of MCMC have been described that circumvent this problem, such as likelihood-free MCMC methods ([35] or [55]), as well as other likelihood-free methods such as ABC [54] or likelihood-free sequential Monte Carlo (SMC) methods [50]. These approaches usually rely on forward simulation for a given parameter vector θ to obtain a simulated data set that can then be compared to the real data or used to compute a likelihood approximation l̂(θ). In the following we briefly illustrate one way to approximate the likelihood.

3.1 Likelihood approximation using particle filters

A common way to approximate the likelihood through forward simulation is to use a particle filter (see for instance [44] or [19]), which iteratively simulates the stochastic system with H particles and then resamples these particles. In the following we illustrate such a particle filter likelihood approximation on a simple birth-death model, where one species (mRNA) is produced at rate k = 1 and degrades at rate γ = 0.1. We simulated one trajectory of this system (shown in Figure 2 A) using Gillespie's stochastic simulation algorithm (SSA [17]) and, using the finite state projection (FSP [38]), computed the likelihood l(k) for different values of k while keeping γ fixed to 0.1. The true likelihood for different k is shown as the solid red line in Figure 2 B and C. We also illustrate the likelihood approximation l̂(k) obtained with a particle filter ([19]) with H = 100 particles for three values of k. For each of these values of k we computed 1000 realizations of l̂(k) and plotted the empirical distributions in Figure 2 C. Note that l̂(k) is itself a random variable with distribution p(l̂ | k) and has a mean equal to the true likelihood, E[l̂(k)] = l(k) (see for instance [44]). We also sampled 10⁶ values of k from a log-uniform prior and approximated for each k its likelihood with the same particle filter with H = 100 particles; we write Π(k, l̂) = π(k) p(l̂ | k) for the resulting joint distribution of the parameter and its likelihood approximation, whose contour lines we plotted in Figure 2 B. In the following we discuss how to utilize such a likelihood approximation to apply the above described NS procedure to cases where the likelihood is not available. Throughout the paper we assume that the likelihood approximation l̂(θ) is obtained using a particle filter, but our results hold for any unbiased likelihood estimator.
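A bootstrap particle filter of the kind referenced here can be sketched as follows. This is our own illustration along the lines of [19, 44], not the authors' implementation; it assumes Gaussian measurement noise (σ = 2, as in Figure 2 A) and takes the forward simulator (e.g. an SSA step for the birth-death model) as an argument. The returned value is the logarithm of the (unbiased) likelihood estimate l̂.

```cpp
#include <cmath>
#include <functional>
#include <limits>
#include <numeric>
#include <random>
#include <vector>

// Log of the particle filter likelihood estimate for one observed trajectory.
double pf_log_likelihood(
    const std::vector<double>& y,    // observations at equally spaced time points
    double dt,                       // time between observations
    int H,                           // number of particle filter particles
    double sigma,                    // std. dev. of the Gaussian measurement noise
    const std::function<int(int, double, std::mt19937&)>& propagate, // SSA step
    std::mt19937& rng) {
    std::vector<int> part(H, 0);     // H particles, started at x = 0
    std::vector<double> w(H);
    const double norm = 0.3989422804014327 / sigma;  // 1 / (sqrt(2*pi) * sigma)
    double logl = 0.0;
    for (double obs : y) {
        for (int h = 0; h < H; ++h) {
            part[h] = propagate(part[h], dt, rng);   // simulate forward by dt
            const double d = obs - part[h];
            w[h] = norm * std::exp(-0.5 * d * d / (sigma * sigma));
        }
        const double wsum = std::accumulate(w.begin(), w.end(), 0.0);
        if (wsum <= 0.0) return -std::numeric_limits<double>::infinity();
        logl += std::log(wsum / H);                  // running likelihood factor
        std::discrete_distribution<int> resample(w.begin(), w.end());
        std::vector<int> next(H);
        for (int h = 0; h < H; ++h) next[h] = part[resample(rng)];
        part.swap(next);                             // multinomial resampling
    }
    return logl;
}
```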

Figure 2:

A: A simulated trajectory of the birth-death system using k = 1 and γ = 0.1 with 21 equally spaced measurements (taken to be normally distributed around the mRNA count with σ = 2). B: Top: Likelihood for different parameters k (red) and contour lines of the joint distribution Π(k, l̂(k)) of the parameter k and its likelihood approximation l̂(k), based on 10⁶ samples of the likelihood approximation obtained with a particle filter with 100 particles. Bottom: The constrained priors π(k | l(k) > ϵ) and π̂ϵ(k) for ϵ = 10⁻²⁴. C: Example distributions p(l̂ | k) (blue) for k = 1, 1.2 and 1.4 and the true likelihood l(k) (red). D: Blue: the ratio α(m) of the probability masses of l̂ above and below each likelihood threshold ϵm, restricted to those regions of k in the support of π̂ϵm (the parameter regions in panel B between k− and k+). Purple: the evidence as estimated by all particles with likelihood below ϵm.

3.2 The LF-NS scheme

From here on we assume that the true likelihood l(θ) is not available, but that a realization l̂(θ) of the approximated likelihood, having a distribution p(l̂ | θ) with mean E[l̂(θ)] = l(θ), can be computed.

For NS, the constrained prior π(θ|l(θ) > ϵi) needs to be sampled. Since in the likelihood-free case the likelihood l(θ) is not available and l̂(θ) is itself a random variable, the set {θ : l(θ) > ϵi} (which is the support of the constrained prior) is not defined. To apply the NS idea to the likelihood-free case, we propose to perform the NS procedure on the joint prior

π(θ, l̂) = π(θ) p(l̂ | θ)   (3.10)

on the set Ω × ℝ>0. This joint prior can be sampled by drawing a sample θ* from the prior π(θ) and then drawing one sample l̂* from the distribution of likelihood approximations p(l̂ | θ*). With such a sampling scheme we perform the NS steps of constructing the set of “dead” particles 𝒟 on the joint prior 3.10. As in standard NS, we sample a set of N “live” particles (θk, l̂k) from π(θ, l̂), then we iteratively remove the particle (θi, ϵi) with the lowest likelihood sample from the set of live points and add it to the dead points. The LF-NS algorithm is shown in Algorithm 3.

The parallel version of LF-NS is analogous to the parallelization of the standard NS algorithm in Algorithm 2.

Algorithm 3:

Likelihood-free nested sampling algorithm

1: Given observations y, a prior π(θ) for θ and a likelihood approximation l̂(θ) ∼ p(l̂ | θ).

2: Sample N particles (θk, l̂k) from the joint prior π(θ)p(l̂ | θ) and save them in the set ℒ0, set 𝒟 = ∅

3: for i = 1, 2, …, m do

4: Set (θi, ϵi) = arg min {l̂ | (θ, l̂) ∈ ℒi-1}

5: Add {θi, ϵi} to 𝒟

6: Set ℒi = ℒi-1\{θi, ϵi}

7: Sample (θ*, l̂*) ∼ π(θ, l̂ | l̂ > ϵi) and add it to ℒi

8: end for
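The proposal step in line 7 of Algorithm 3 can be sketched as follows (our own illustration; `sample_support` and `loglik_hat` are assumed to be provided, e.g. a prior sampler restricted as described in section 3.4 and a particle filter). Note that both a new θ* and a fresh realization l̂* are drawn, and the pair is only accepted if l̂* clears the current threshold.

```cpp
#include <functional>
#include <vector>

struct LFParticle { std::vector<double> theta; double logl_hat; };

// Rejection step: propose from (an over-approximation of) the support of the
// marginal 3.12, draw one realization of the likelihood estimator and accept
// only if it exceeds the current threshold (everything in log space here).
LFParticle lfns_propose(
    double log_eps,
    const std::function<std::vector<double>()>& sample_support,
    const std::function<double(const std::vector<double>&)>& loglik_hat) {
    for (;;) {
        std::vector<double> theta = sample_support();
        const double lhat = loglik_hat(theta);   // one draw of l_hat(theta)
        if (lhat > log_eps) return {theta, lhat};
    }
}
```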

3.3 LF-NS is unbiased

As for standard NS, the sampling procedure for LF-NS guarantees that each set of live points ℒi contains N samples distributed according to the constrained joint prior π(θ, l̂ | l̂ > ϵi), thus removing the sample with the lowest likelihood approximation l̂ results in the same shrinkage of prior volume as in the standard NS scheme. The prior volumes xi = t(i)xi−1 now correspond to the volumes of the constrained joint priors, and the resulting weights wi = x̂i−1 − x̂i can be used, similarly as in equation 2.2, to integrate functions f over the joint prior

∫Ω×ℝ>0 f(θ, l̂) π(θ) p(l̂ | θ) dl̂ dθ ≈ Σi=1,…,m wi f(θi, ϵi).

Using f(θi, ϵi) = ϵi we can use this to approximate the Bayesian evidence

ẐD = Σi=1,…,m wi ϵi ≈ ∫Ω×ℝ>0 l̂ π(θ) p(l̂ | θ) dl̂ dθ = ∫Ω l(θ)π(θ)dθ = Z,

where the last equality relies on the unbiasedness of l̂.

While the procedure for LF-NS is very similar to the standard NS algorithm, the new samples have to be drawn from the constrained joint prior π(θ, l̂ | l̂ > ϵ) instead of from the constrained prior π(θ|l(θ) > ϵ). In the following we discuss the resulting difficulties and show how to overcome them.

3.4 Sampling from the super-level sets of the likelihood

One of the main challenges [7, 40, 4] in the classical NS algorithm is the sampling from the prior constrained to higher likelihood regions, π(θ|l(θ) > ϵ). A lot of effort has been dedicated to finding ways to sample from the constrained prior efficiently; the most popular approaches include slice sampling [22] and ellipsoid-based sampling [16].

In the case of LF-NS, at the ith iteration we are sampling not just a new parameter vector θ* but also a realization l̂* of its likelihood approximation, i.e. we sample from the constrained joint prior

π(θ, l̂ | l̂ > ϵi) ∝ π(θ) p(l̂ | θ) 𝟙(l̂ > ϵi).   (3.11)

Since it is in general not possible to sample from 3.11 directly, we sample θ* from the prior π(θ), then sample l̂* from the unconstrained distribution p(l̂ | θ*) and accept the pair (θ*, l̂*) only if l̂* > ϵi. While this procedure guarantees that the resulting samples are drawn from 3.11, the acceptance rate might become very low. Each live set ℒi consists of N pairs (θk, l̂k) distributed according to 3.11, thus the parameter vectors θk in ℒi are distributed according to the marginal

π̂ϵi(θ) ∝ π(θ) ∫[ϵi,∞) p(l̂ | θ) dl̂.   (3.12)

We plotted an example of the distributions 2.4 and 3.12 for the birth-death process in Figure 2 B. The distribution 3.12 usually has infinite support, although in practice it will be close to zero on large parts of the parameter space Ω. Similarly to NS, we propose to use the set ℒi to draw from the areas where 3.12 is non-zero. Slice sampling methods ([1, 22]) are unfortunately not applicable for LF-NS, since they require a way to evaluate the target distribution at each of their samples. We can still use ellipsoid sampling schemes, but unlike in the case of NS, where the target distribution π(θ|l(θ) > ϵ) has compact support, the target distribution 3.12 for LF-NS has potentially infinite support, making ellipsoid-based sampling approaches rather unsuitable. Sampling using MCMC methods (as suggested in [52]) is expected to work even for target distributions with infinite support, but suffers from the known MCMC drawbacks, as it produces correlated samples and might get stuck in disconnected regions.

To account for the smooth shape of 3.12, we propose to employ a density estimation approach: at each iteration i, we estimate the density π̂ϵi(θ) from the live points and employ a rejection sampling approach to sample uniformly from the prior on the domain of this approximation (see the sketch below). As density estimation technique we use the Dirichlet process Gaussian mixture model (DP-GMM) [21], which approximates the distribution π̂ϵi(θ) with a mixture of Gaussians. DP-GMM uses a hierarchical prior on the mixture model and assumes that the mixture components are distributed according to a Dirichlet process. The inference of the distribution is an iterative process that uses Gibbs sampling to infer the number and shape of the Gaussians as well as the parameters and hyperparameters of the mixture model. DP-GMM estimation performs comparably well with sparse and high dimensional data and is less sensitive to outliers. Further, since we employ a parallelized LF-NS scheme, the density estimation has to be performed only after each parallel iteration finishes, making the computational effort of the density estimation negligible compared to the computational effort of the likelihood approximation. For a detailed comparison between DP-GMM and kernel density estimation and a further discussion of DP-GMM see [21]; for an illustration of the DP-GMM, KDE and ellipsoid samplers see section S2. Even though for the presented examples we employ DP-GMM, we note that in principle any sampling scheme that samples uniformly from the prior π(θ) over the support of π̂ϵi will work.
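A minimal version of this support-restricted prior sampling might look as follows (our sketch; the fitted density estimate `density_hat`, the prior sampler `sample_prior` and the cutoff defining the approximated support are all assumptions on our part).

```cpp
#include <functional>
#include <vector>

// Draw from the prior, but keep only draws that fall inside the estimated
// support of the live-point density (e.g. a fitted DP-GMM); `cutoff` defines
// what counts as "inside". All names are our assumptions.
std::vector<double> sample_prior_on_support(
    const std::function<std::vector<double>()>& sample_prior,
    const std::function<double(const std::vector<double>&)>& density_hat,
    double cutoff) {
    for (;;) {
        std::vector<double> theta = sample_prior();
        if (density_hat(theta) > cutoff) return theta;  // inside estimated support
    }
}
```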

3.5 A lower bound on the estimator variance

Unlike for NS, for LF-NS, even if at each iteration the proposal particle θ* is sampled from the support of π̂ϵi, it will only be accepted with probability P(l̂(θ*) > ϵi). This means that depending on the variance of the likelihood estimation l̂ and the current likelihood threshold ϵi, the acceptance rate for LF-NS will change, and with it the computational cost. We illustrate this on the birth-death example from above. For each of the 10⁶ samples (km, l̂m) from Π(k, l̂) we set ϵm = l̂m and considered the particles k− = min(kj : l̂j ≥ ϵm) and k+ = max(kj : l̂j ≥ ϵm) (illustrated in Figure 2 B). The particles {kj} between k− and k+ give a numerical approximation of the support of π̂ϵm. We denote with A+(m) all the pairs (kj, l̂j) with kj between k− and k+ and a likelihood sample above ϵm, and with A−(m) the pairs with a likelihood sample below ϵm, and computed the ratio of the numbers of their elements

α(m) = |A+(m)| / |A−(m)|.

The values of α(m) give us an idea what the acceptance rate for LF-NS looks like in the best case, where the new particles k* are sampled from the support of π̂ϵm. We plotted α(m) in Figure 2 D as well as the corresponding evidence estimate ẐD. We see that α(m) decreases to almost zero as ẐD approaches Z. The shape of α(m) will in general depend on the variance of the likelihood approximation l̂. For a further discussion of the acceptance rate for different particle filter settings see section S3.

Due to this possible increase in computational time, it is important to terminate the LF-NS algorithm as soon as possible. We propose to use not only the dead particles 𝒟 for the Bayesian evidence estimation, but also the current live points ℒm. This possibility has already been mentioned elsewhere (for instance in [8, 24, 30]) but is usually not applied, since the contribution of the live particles decreases exponentially with the number of iterations.⁴ Since for standard NS each iteration is expected to take the same amount of time, most approaches simply increase the number of iterations to make the contribution of the live particles negligibly small.

The Bayesian evidence can be decomposed as

Z = ∫[0,xm] L(x)dx + ∫[xm,1] L(x)dx,   (3.13)

where xm is the prior volume at iteration m. The first integral ∫[0,xm] L(x)dx can be approximated through the N live samples at any given iteration, while the second integral ∫[xm,1] L(x)dx is approximated through the dead samples. Writing

L̂m = (1/N) Σ(θ,l̂)∈ℒm l̂

for the Monte Carlo estimate of the average likelihood in the live set, we propose the following estimator for Z

Ẑm = ẐD + x̂m L̂m,

where ẐD = Σi=1,…,m (x̂i−1 − x̂i) ϵi approximates the finite sum Σi=1,…,m (xi−1 − xi) ϵi by replacing the random variables xi with their means x̂i = (N/(N + 1))^i. Since x̂m L̂m is an unbiased estimator of ∫[0,xm] L(x)dx and ẐD is an unbiased estimator of ∫[xm,1] L(x)dx, the estimator Ẑm is an unbiased estimator of the Bayesian evidence Z for any m. In particular, this implies that terminating the LF-NS algorithm at any iteration m will result in an unbiased estimate of Z. However, terminating the LF-NS algorithm early on will still result in a very high variance of the estimator. Since the error of replacing the integral Z with the finite sum is negligible compared to the error resulting from replacing xi with x̂i (see [13] or [8]), this variance is a result of the variances of the xi and the variance of the Monte Carlo estimate L̂m.⁵ In the following we formulate a lower bound σ²lb(m) on the estimator variance σ²(m) at iteration m, show that this lower bound is monotonically increasing in m and propose to terminate the LF-NS algorithm as soon as the current estimator variance differs from this lower bound by no more than a predefined threshold d.

Treating the prior volumes xi and the Monte Carlo estimate L̂m as random variables, the variance σ²(m) of the NS estimator at iteration m can be estimated at each iteration without additional computational effort (see section S4 and [30]). This variance depends on the variance of the Monte Carlo estimate L̂m and is monotonically increasing in Var(L̂m) (see section S5). We define the term σ²lb(m), which is the same variance σ²(m) but computed under the additional assumption that the Monte Carlo estimate has variance 0, i.e. Var(L̂m) = 0. Clearly we have for any m (see section S5)

σ²lb(m) ≤ σ²(m).

More importantly, as we show in section S5.2, σ²lb(m) is monotonically increasing in m

σ²lb(m) ≤ σ²lb(m + 1).

This allows us to bound the lowest achievable estimator variance from below

σ²lb(m) ≤ min{σ²(m′) : m′ ≥ m}.

The terms for σ²(m) and σ²lb(m) both contain the unknown value Lm, which can be approximated using its Monte Carlo estimate L̂m, giving us the estimates σ̂²(m) and σ̂²lb(m) of the above variances. We use these variance estimates to formulate a termination criterion: we terminate the algorithm as soon as σ̂²(m) and σ̂²lb(m) differ by less than some predefined threshold d. This termination criterion is intuitive, since it terminates the LF-NS algorithm as soon as a continuation of the algorithm is not expected to make the final estimator significantly more accurate. As a final remark we note that the final estimator Ẑm as well as the termination criterion based on σ̂²lb(m) can of course also be applied in the standard NS case.
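Assuming the run tracks the estimates σ̂²(m) and σ̂²lb(m) (computed as in sections S4 and S5, which we do not reproduce here), the proposed termination test reduces to a simple comparison; the sketch below is ours, and the normalization of the compared quantities is our reading of the text, not a formula taken from the paper.

```cpp
// Terminate once the current estimator variance estimate differs from the
// estimated lower bound on the best achievable variance by less than d
// (here measured relative to the squared evidence estimate; the precise
// normalization used by the authors is defined in their supplement).
bool lfns_should_terminate(double var_current, double var_lower_bound,
                           double Z_hat, double d) {
    return (var_current - var_lower_bound) / (Z_hat * Z_hat) < d;
}
```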

4 Examples

We test our proposed LF-NS algorithm on three examples of stochastic reaction kinetics models. The first example is the birth-death model already introduced in section 3.1, the second example is the Lac-Gfp model used for benchmarking in [32] and the third example is a transcription model from [48] with corresponding real data. In the following examples all priors are chosen to be uniform or log-uniform within the bounds indicated in the posterior plots.

4.1 The stochastic birth-death Model

We first revisit the example of section 3.1 to compare our inference results to the solution obtained by FSP. We use the same data and the same log-uniform prior as in section 3.1 and run our LF-NS algorithm as described above, using DP-GMM for the sampling. We used N = 100 LF-NS particles and H = 100 particle filter particles and sampled r = 10 particles at each iteration. We ran the LF-NS algorithm until the difference between the estimated variance σ̂²(m) and its estimated lower bound σ̂²lb(m) was smaller than d = 0.001. We show the obtained posterior in Figure 3 A. Figure 3 B shows the obtained estimates of the Bayesian evidence, where the shaded areas indicate the standard error at each iteration. The dashed red line indicates the true BE computed from 10⁶ samples from Π(k, l̂). The estimates σ̂²lb(m) and σ̂²(m) of the lower and upper bounds for the lowest achievable estimator variance are shown in Figure 3 C, and we can clearly see how they converge to the same value. For our termination criterion we show the development of the corresponding error estimates in Figure 3 D.

Figure 3:

A: Histogram of the posterior 𝒫(k) estimate obtained with LF-NS using N = 100 and H = 100. The true posterior is indicated in black. B: Development of the estimation of the Bayesian evidence using the estimate based solely on the dead points ẐD, the estimate from the live points x̂mL̂m and the estimate Ẑm based on both. The corresponding standard errors are indicated as the shaded areas. The true Bayesian evidence is indicated with the dashed red line. C: The current variance estimate σ̂²(m) and the lower bound σ̂²lb(m) for the lowest achievable variance. D: Development of the different error estimates for each iteration.

4.2 The Lac-Gfp model

We demonstrate how our algorithm deals with a realistically sized stochastic model by inferring the posterior for the parameters of the Lac-Gfp model illustrated in Figure 4 A. This model has already been used in [32] as a benchmark, although with distribution data. Here we use the model to simulate a number of trajectories and illustrate how our approach infers the posterior of the used parameters. This model is particularly challenging in two respects. First, the number of parameters is 18, making it a fairly large model to infer. Secondly, the model exhibits switch-like behaviour, which makes it very hard to approximate the likelihood of such a switching trajectory (see section S6.2 and particularly Figure S3 for further details). We used N = 500 LF-NS particles and H = 500 particle filter particles and sampled r = 50 particles at each iteration.

Figure 4:

A: Schematic of the Lac-Gfp model, where the final measurement is the mature GFP (mGFP) and the input is IPTG (assumed to be constant at 10 μM). B: Development of the estimation of the Bayesian evidence using the estimate based solely on the dead points ẐD, the estimate from the live points x̂mL̂m and the estimate Ẑm that uses both. The corresponding standard errors are indicated as the shaded areas. C: The acceptance rate of the LF-NS algorithm for each iteration (blue) and the cumulative time needed for each iteration in hours (red). The computation was performed on 48 cores in parallel on the Euler cluster of ETH Zurich. D: The current variance estimate σ̂²(m) and the lower bound σ̂²lb(m) for the lowest achievable variance. E: Marginals of the inferred posterior distributions of the parameters based on one simulated trajectory. The blue lines indicate the parameters used for the simulation of the data.

The measured species in this example is the mature fluorescent Gfp (mGFP), where it is assumed that each Gfp molecule emits fluorescence according to a normal distribution. We used one trajectory to infer the posterior, whose marginals are shown in Figure 4 E. The solid blue lines indicate the parameters used to simulate the data. Figure 4 B shows the estimated Bayesian evidence with corresponding standard errors for each iteration, and Figure 4 D shows the corresponding estimates of the bounds on the lowest achievable variance. As we see, the estimated Bayesian evidence, as well as the estimated variance bounds, make several jumps in the course of the LF-NS run. These jumps correspond to iterations in which previously unsampled areas of the parameter space got sampled with a new maximal likelihood. In Figure 4 C we plotted the acceptance rate of the LF-NS algorithm for each iteration as well as the cumulative computational time⁶. The inference for this model took well over 12 hours and, as we see, the computational time for each iteration seems to increase exponentially as the acceptance rate decreases. The low acceptance rate is expected, since the number of particle filter particles H = 500 results in a very high variance of the particle filter estimate (see Figure S3 B). Clearly, for this example, the early termination of LF-NS is essential to obtain a solution within a reasonable time.

4.3 A stochastic transcription model

As a third example we use a transcription model recently employed in [48], where an optogenetically inducible transcription system is used to obtain live readouts of nascent RNA counts. The model consists of a gene that can take two configurations, “on” and “off”. In the “on” configuration mRNA is transcribed from this gene and can be individually measured during the transcription process (see [48] for details). We modelled the transcription through n = 8 subsequent RNA species that change from one to the next at a rate λ. This is done to account for the observed time of 2 minutes that one transcription event takes; with such a parametrization the mean time from the appearance of species RNA1 to the degradation of RNAn is n/λ. An illustration of the model is shown in Figure 5 A. For the inference of the model parameters we chose five trajectories of real biological data, shown in Figure 5 C. Clearly, the system is inherently stochastic and requires corresponding inference methods. We ran the LF-NS algorithm with N = 500 and H = 500 on these five example trajectories. The resulting marginal posteriors are shown in Figure 5 B, where we also indicate the parameter ranges considered in [48]. These ranges were chosen in [48] in an ad hoc manner but, apart from the values for koff, seem to fit very well with our inferred results. In Figure 5 D and E we show the development of the evidence approximation with the corresponding standard errors, as well as the development of the upper and lower bound estimates for the lowest achievable variance. Similarly to the Lac-Gfp example, we see that the development of the BE estimate is governed by random spikes, which again are due to the sampling of particles with a new highest likelihood.

Figure 5:

A: A schematic representation of the gene expression model. The model consists of a gene that switches between an “on” and an “off” state with rates kon and koff. When “on”, the gene is transcribed at rate kr. The transcription process is modelled through n RNA species that sequentially transform from one to the next at rate λ. The observed species are all of the intermediate RNAi species. B: The marginal posterior distributions of the parameters of the system. The shaded areas indicate the parameter ranges that were considered in [48]. C: The five trajectories used for the parameter inference. D: Development of the estimation of the Bayesian evidence using the estimate based solely on the dead points ẐD, the estimate from the live points x̂mL̂m and the estimate Ẑm that uses both. The corresponding standard errors are indicated as the shaded areas. E: The current variance estimate σ̂²(m) and the lower bound σ̂²lb(m) for the lowest achievable variance.

5 Discussion

We have introduced a likelihood-free formulation of the well-known nested sampling algorithm and have shown that it is unbiased for any unbiased likelihood estimator. While the application of NS to systems without an available likelihood is straightforward, one has to take precautions to avoid infeasibly high computational times. Unlike for standard NS, it is crucial to include the estimate from the live samples in the final BE estimate and to terminate the algorithm as soon as possible. We have shown how using a Monte Carlo estimate over the live points not only results in an unbiased estimator of the Bayesian evidence Z, but also allows us to derive a lower bound on the achievable variance at each iteration. This lower bound has allowed us to formulate a novel termination criterion that stops the algorithm as soon as a continuation can at best result in an insignificant improvement in accuracy. While the formulations of the variances and of the lower bound were derived with a parallel LF-NS scheme in mind, they can equally well be used in the standard NS case and can be added effortlessly to already available toolboxes such as [14] or [22].

We emphasize that the lower variance bound approximation σ̂²lb(m) is neither a strict error term, as it only gives information about the variance of the estimator, nor a strict lower bound of the estimator variance, since it contains the unknown term Lm. Instead, it gives an estimate of the lowest achievable estimator variance that depends on the Monte Carlo estimate L̂m of the likelihood average over the live points. This can be seen in Figure 4 D and Figure 5 E, where the lower bound estimate σ̂²lb(m) not only makes jumps, but also decreases after each jump (the actual lower bound σ²lb(m) is monotonically increasing in m, as shown in section S5.2).

Our suggested LF-NS scheme has three different parameters that govern the algorithm's behaviour. The number of LF-NS particles N determines how low the minimal variance of the estimator can get, where low values of N result in a rather high variance and high values of N result in a lower variance. The number of particles for the particle filter H determines how wide or narrow the likelihood estimation is and thus determines the development of the acceptance rate of the LF-NS run, while the number of LF-NS iterations determines how close the variance of the final estimate comes to the minimal variance.

We have demonstrated the applicability of our method on several models with simulated as well as real biological data. Our LF-NS scheme can, similarly to ABC, pMCMC or SMC methods, deal with stochastic models with intractable likelihoods and retains all of the advantages of classic NS. We believe that the variance estimation that can be performed from a single LF-NS run, as well as the straightforward parallelization, will prove particularly useful.

Footnotes

  • ¹ For this to hold, some weak conditions have to be satisfied; see [8] and [15] for details.

  • ² t ∼ ℬ(N, 1), with ℬ(a, b) being the Beta distribution with parameters a and b.

  • ³ This means tj ∼ ℬ(N - j + 1, j).

  • ⁴ We point out that while in classical nested sampling the contribution of the live points can indeed be made arbitrarily small, the resulting estimator (employing only the dead points) is strictly speaking not unbiased, since it approximates the Bayesian evidence not over the full prior volume but only up to the final xm; the missing contribution is the quantity ∫[0,xm] L(x)dx in equation 3.13.

  • ⁵ As pointed out in [24], when using nested sampling approximations to approximate the integral of arbitrary functions f over the posterior, an additional error is introduced by approximating the average value of f(θ) on the contour line l(θ) = ϵi with the value f(θi).

  • ⁶ The computation was performed on 48 cores of the Euler cluster of ETH Zurich.

References

[1] Stuart Aitken and Ozgur E Akman. Nested sampling for parameter inference in systems biology: application to an exemplar circadian model. BMC Systems Biology, 7(1):72, 2013.
[2] Christophe Andrieu, Arnaud Doucet, and Roman Holenstein. Particle Markov chain Monte Carlo for efficient numerical simulation. In Monte Carlo and Quasi-Monte Carlo Methods 2008, pages 45–60. Springer, 2009.
[3] Richard J Boys, Darren J Wilkinson, and Thomas BL Kirkwood. Bayesian inference for a discretely observed stochastic kinetic model. Statistics and Computing, 18(2):125–135, 2008.
[4] Brendon J Brewer, Livia B Pártay, and Gábor Csányi. Diffusive nested sampling. Statistics and Computing, 21(4):649–656, 2011.
[5] Nikolas S Burkoff, Csilla Várnai, Stephen A Wells, and David L Wild. Exploring the energy landscapes of protein folding simulations with Bayesian computation. Biophysical Journal, 102(4):878–886, 2012.
[6] Olivier Cappé, Simon J Godsill, and Eric Moulines. An overview of existing methods and recent advances in sequential Monte Carlo. Proceedings of the IEEE, 95(5):899–924, 2007.
[7] Nicolas Chopin and Christian P Robert. Contemplating evidence: properties, extensions of, and alternatives to nested sampling. Technical report, 2007.
[8] Nicolas Chopin and Christian P Robert. Properties of nested sampling. Biometrika, 97(3):741–755, 2010.
[9] Arnaud Doucet, Nando De Freitas, and Neil Gordon. An introduction to sequential Monte Carlo methods. In Sequential Monte Carlo Methods in Practice, pages 3–14. Springer, 2001.
[10] Richard Dybowski, Trevelyan J McKinley, Pietro Mastroeni, and Olivier Restif. Nested sampling for Bayesian model comparison in the context of Salmonella disease dynamics. PLoS ONE, 8(12):e82317, 2013.
[11] Johan Elf and Måns Ehrenberg. Fast evaluation of fluctuations in biochemical networks with the linear noise approximation. Genome Research, 13(11):2475–2484, 2003.
[12] Michael B Elowitz, Arnold J Levine, Eric D Siggia, and Peter S Swain. Stochastic gene expression in a single cell. Science, 297(5584):1183–1186, 2002.
[13] M Evans. Discussion of nested sampling for Bayesian computations by John Skilling. Bayesian Statistics, 8:491–524, 2007.
[14] F Feroz, MP Hobson, and M Bridges. MultiNest: an efficient and robust Bayesian inference tool for cosmology and particle physics. Monthly Notices of the Royal Astronomical Society, 398(4):1601–1614, 2009.
[15] F Feroz, MP Hobson, E Cameron, and AN Pettitt. Importance nested sampling and the MultiNest algorithm. arXiv preprint arXiv:1306.2144, 2013.
[16] Farhan Feroz and MP Hobson. Multimodal nested sampling: an efficient and robust alternative to Markov chain Monte Carlo methods for astronomical data analyses. Monthly Notices of the Royal Astronomical Society, 384(2):449–463, 2008.
[17] Daniel T Gillespie. Exact stochastic simulation of coupled chemical reactions. The Journal of Physical Chemistry, 81(25):2340–2361, 1977.
[18] Daniel T Gillespie. The chemical Langevin equation. The Journal of Chemical Physics, 113(1):297–306, 2000.
[19] Andrew Golightly and Darren J Wilkinson. Bayesian parameter inference for stochastic biochemical network models using particle Markov chain Monte Carlo. Interface Focus, 1(6):807–820, 2011.
[20] Andrew Golightly and Darren J Wilkinson. Bayesian inference for Markov jump processes with informative observations. arXiv preprint arXiv:1409.4362, 2014.
[21] Dilan Görür and Carl Edward Rasmussen. Dirichlet process Gaussian mixture models: choice of the base distribution. Journal of Computer Science and Technology, 25(4):653–664, 2010.
[22] WJ Handley, MP Hobson, and AN Lasenby. PolyChord: next-generation nested sampling. Monthly Notices of the Royal Astronomical Society, 453(4):4384–4398, 2015.
[23] R Wesley Henderson and Paul M Goggans. Parallelized nested sampling. In AIP Conference Proceedings, volume 1636, pages 100–105. AIP, 2014.
[24] Edward Higson, Will Handley, Mike Hobson, and Anthony Lasenby. Sampling errors in nested sampling parameter estimation. Bayesian Analysis, 2018.
[25] Andreas Hilfinger and Johan Paulsson. Separating intrinsic from extrinsic fluctuations in dynamic biological systems. Proceedings of the National Academy of Sciences, 108(29):12167–12172, 2011.
[26] Piers J Ingram, Michael PH Stumpf, and Jaroslav Stark. Network motifs: structure does not determine function. BMC Genomics, 7(1):108, 2006.
[27] Edward L Ionides, C Bretó, and Aaron A King. Inference for nonlinear dynamical systems. Proceedings of the National Academy of Sciences, 103(49):18438–18443, 2006.
[28] Nick Jagiella, Dennis Rickert, Fabian J Theis, and Jan Hasenauer. Parallelization and high-performance computing enables automated statistical inference of multi-scale models. Cell Systems, 2017.
[29] Rob Johnson, Paul Kirk, and Michael PH Stumpf. SYSBIONS: nested sampling for systems biology. Bioinformatics, 31(4):604–605, 2015.
[30] Charles R Keeton. On statistical uncertainty in nested sampling. Monthly Notices of the Royal Astronomical Society, 414(2):1418–1426, 2011.
[31] Caroline H Ko, Yujiro R Yamada, David K Welsh, Ethan D Buhr, Andrew C Liu, Eric E Zhang, Martin R Ralph, Steve A Kay, Daniel B Forger, and Joseph S Takahashi. Emergence of noise-induced oscillations in the central circadian pacemaker. PLoS Biology, 8(10):e1000513, 2010.
[32] Gabriele Lillacci and Mustafa Khammash. The signal within the noise: efficient inference of stochastic gene regulation models using fluorescence histograms and stochastic simulations. Bioinformatics, 29(18):2311–2319, 2013.
[33] Thomas Liphardt. Efficient computational methods for sampling-based metabolic flux analysis. PhD thesis, ETH Zurich, 2018.
[34] David JC MacKay. Information Theory, Inference and Learning Algorithms. Cambridge University Press, 2003.
[35] Paul Marjoram, John Molitor, Vincent Plagnol, and Simon Tavaré. Markov chain Monte Carlo without likelihoods. Proceedings of the National Academy of Sciences, 100(26):15324–15328, 2003.
[36] Harley H McAdams and Adam Arkin. Stochastic mechanisms in gene expression. Proceedings of the National Academy of Sciences, 94(3):814–819, 1997.
[37] Pia Mukherjee, David Parkinson, and Andrew R Liddle. A nested sampling algorithm for cosmological model selection. The Astrophysical Journal Letters, 638(2):L51, 2006.
[38] Brian Munsky and Mustafa Khammash. The finite state projection algorithm for the solution of the chemical master equation. The Journal of Chemical Physics, 124(4):044104, 2006.
[39] Brian Munsky, Brooke Trinh, and Mustafa Khammash. Listening to the noise: random fluctuations reveal gene network parameters. Molecular Systems Biology, 5(1), 2009.
[40] Iain Murray. Advances in Markov chain Monte Carlo methods. PhD thesis, 2007.
[41] Gregor Neuert, Brian Munsky, Rui Zhen Tan, Leonid Teytelman, Mustafa Khammash, and Alexander van Oudenaarden. Systematic identification of signal-activated stochastic gene regulation. Science, 339(6119):584–587, 2013.
[42] Ertugrul M Ozbudak, Mukund Thattai, Iren Kurtser, Alan D Grossman, and Alexander van Oudenaarden. Regulation of noise in the expression of a single gene. Nature Genetics, 31(1):69–73, 2002.
[43] Johan Paulsson, Otto G Berg, and Måns Ehrenberg. Stochastic focusing: fluctuation-enhanced sensitivity of intracellular regulation. Proceedings of the National Academy of Sciences, 97(13):7148–7153, 2000.
[44] Michael Pitt, Ralph Silva, Paolo Giordani, and Robert Kohn. Auxiliary particle filtering within adaptive Metropolis-Hastings sampling. arXiv preprint arXiv:1006.1914, 2010.
[45] Michael K Pitt, Ralph dos Santos Silva, Paolo Giordani, and Robert Kohn. On some properties of Markov chain Monte Carlo simulation methods based on the particle filter. Journal of Econometrics, 171(2):134–151, 2012.
[46] Nick Pullen and Richard J Morris. Bayesian model comparison and parameter inference in systems biology using nested sampling. PLoS ONE, 9(2):e88419, 2014.
[47] Christian P Robert and Darren Wraith. Computational methods for Bayesian model choice. In AIP Conference Proceedings, volume 1193, pages 251–262. AIP, 2009.
[48] Marc Rullan, Dirk Benzinger, Gregor W Schmidt, Andreas Milias-Argeitis, and Mustafa Khammash. An optogenetic platform for real-time, single-cell interrogation of stochastic transcriptional regulation. Molecular Cell, 70(4):745–756, 2018.
[49] Michael Samoilov, Sergey Plyasunov, and Adam P Arkin. Stochastic amplification and signaling in enzymatic futile cycles through noise-induced bistability with oscillations. Proceedings of the National Academy of Sciences, 102(7):2310–2315, 2005.
[50] Scott A Sisson, Yanan Fan, and Mark M Tanaka. Sequential Monte Carlo without likelihoods. Proceedings of the National Academy of Sciences, 104(6):1760–1765, 2007.
[51] John Skilling. Nested sampling's convergence. In AIP Conference Proceedings, volume 1193, pages 277–291. AIP, 2009.
[52] John Skilling. Nested sampling for general Bayesian computation. Bayesian Analysis, 1(4):833–859, 2006.
[53] Vassilios Stathopoulos and Mark A Girolami. Markov chain Monte Carlo inference for Markov jump processes via the linear noise approximation. Philosophical Transactions of the Royal Society A, 371(1984):20110541, 2013.
[54] Tina Toni, David Welch, Natalja Strelkowa, Andreas Ipsen, and Michael PH Stumpf. Approximate Bayesian computation scheme for parameter inference and model selection in dynamical systems. Journal of the Royal Society Interface, 6(31):187–202, 2009.
[55] Darren J Wilkinson. Parameter inference for stochastic kinetic models of bacterial gene regulation: a Bayesian approach to systems biology. In Proceedings of the 9th Valencia International Meeting on Bayesian Statistics, pages 679–705, 2010.
[56] Christoph Zechner, Michael Unger, Serge Pelet, Matthias Peter, and Heinz Koeppl. Scalable inference of heterogeneous reaction kinetics from pooled single-cell recordings. Nature Methods, 11(2):197–202, 2014.