Abstract
The adoption of deep learning techniques in genomics has been hindered by the difficulty of mechanistically interpreting the models that these techniques produce. In recent years, a variety of post-hoc attribution methods have been proposed for addressing this neural network interpretability problem in the context of gene regulation. Here we describe a complementary way of approaching this problem. Our strategy is based on the observation that two large classes of biophysical models of cis-regulatory mechanisms can be expressed as deep neural networks in which nodes and weights have explicit physicochemical interpretations. We also demonstrate how such biophysical networks can be rapidly inferred, using modern deep learning frameworks, from the data produced by certain types of massively parallel reporter assays (MPRAs). These results suggest a scalable strategy for using MPRAs to systematically characterize the biophysical basis of gene regulation in a wide range of biological contexts. They also highlight gene regulation as a promising venue for the development of scientifically interpretable approaches to deep learning.
Deep learning – the use of large multi-layer neural networks in machine learning applications – is revolutionizing information technology [1]. There is currently a great deal of interest in applying deep learning techniques to problems in genomics, especially for understanding gene regulation [2–5]. These applications remain somewhat controversial, however, due to the difficulty of mechanistically interpreting neural network models trained on functional genomics data. Multiple attribution strategies, which seek to extract meaning post-hoc from neural networks that have rather generic architectures, have been proposed for addressing this interpretability problem [6–8]. However, there remains a substantial gap between the outputs of such attribution methods and fully mechanistic models of gene regulation.
Here we advocate for a complementary approach: the inference of neural network models whose architecture reflects explicit biophysical hypotheses for how cis-regulatory sequences function. This strategy is based on two key observations. First, two large classes of biophysical models can be formulated as three-layer neural networks that have a stereotyped form and in which nodes and weights have explicit physicochemical interpretations. This is true of thermodynamic models, which rely on a quasi-equilibrium assumption [9–14], as well as kinetic models, which are more complex but do not make such assumptions [15–17]. Second, existing deep learning frameworks such as TensorFlow [18] are able to rapidly infer such models from the data produced by certain classes of MPRAs.
The idea of using neural networks to model the biophysics of gene regulation goes back to at least [19]. To our knowledge, however, the fact that all thermodynamic models of gene regulation can be represented by a simple three-layer architecture has not been previously reported. We are not aware of any prior work that uses neural networks to represent kinetic models or models involving King-Altman diagrams. There is a growing literature on modeling MPRA data using neural networks [20–24]. However, such modeling has yet to be advocated for MPRAs that dissect the mechanisms of single cis-regulatory sequences, such as in [25–29], which is our focus here. Finally, some recent work has used deep learning frameworks to infer parametric models of cis-regulation [30, 31]. These studies did not, however, infer the type of explicit biophysical quantities, such as ΔG values, that our approach recovers.
All thermodynamic models of cis-regulation can be represented as three-layer neural networks as follows. First one defines a set of molecular complexes, or “states”, which we index using s. Each state has both a Gibbs free energy ΔGs and an associated activity αs. These energies determine the probability Ps of each state occurring in thermodynamic equilibrium via the Boltzmann distribution,¹

P_s = exp(−ΔG_s) / Σ_{s′} exp(−ΔG_{s′}).
The energy of each state is, in turn, computed using integral combinations of the individual interaction energies ΔGj that occur in that state. We can therefore write ΔGs = Σj ωsjΔGj, where ωsj is the number of times that interaction j occurs in state s. The resulting activity predicted by the model is given by the activities αs of the individual states averaged over this distribution, i.e., t = Σs αsPs.
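In code, these two formulas amount to a softmin over state energies followed by a weighted average of state activities. The sketch below assumes energies are already expressed in thermal units; the four-state values are hypothetical illustrations, not parameters from any fitted model:

```python
import numpy as np

def thermodynamic_activity(dG, alpha):
    """Average state activities over the Boltzmann distribution.

    dG    : array of state free energies Delta G_s, in thermal units (kBT)
    alpha : array of state activities alpha_s
    """
    # Boltzmann weights; subtracting the minimum improves numerical stability
    w = np.exp(-(dG - dG.min()))
    P = w / w.sum()                 # state probabilities P_s
    return float(alpha @ P)        # t = sum_s alpha_s P_s

# Hypothetical four-state example (energies in kBT, two active states)
dG = np.array([0.0, -1.0, -2.0, -3.5])
alpha = np.array([0.0, 0.0, 1.0, 1.0])
t = thermodynamic_activity(dG, alpha)
```

Because the most favorable (most negative ΔG) states dominate the Boltzmann weights, the predicted activity here is close to, but strictly less than, the saturated value of 1.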
Fig. 1 illustrates a thermodynamic model for transcriptional activation at the E. coli lac promoter. This model involves two proteins, CRP and RNAP, as well as three interaction energies: ΔGC, ΔGR, and ΔGI. The rate of transcription t is further assumed to be proportional to the fraction of time that RNAP is bound to DNA (Fig. 1A). This model is summarized by four different states, two of which lead to transcription and two of which do not (Fig. 1B). Fig. 1C shows the resulting formula for t in terms of model parameters. This model is readily formulated as a feed-forward neural network (Fig. 1D). Indeed, all thermodynamic models of cis-regulation can be formulated as three-layer neural networks: layer 1 represents molecular interaction energies, layer 2 (which uses a softmin activation) represents state probabilities, and layer 3 (using linear activation) represents the biological activity of interest, which in this case is transcription rate.
A thermodynamic model of transcriptional regulation. (A) Transcriptional activation at the E. coli lac promoter is regulated by two proteins, CRP and σ70 RNA polymerase (RNAP). CRP is a transcriptional activator that up-regulates transcription by stabilizing RNAP-DNA binding. ΔGC and ΔGR respectively denote the Gibbs free energies of the CRP-DNA and RNAP-DNA interactions, while ΔGI denotes the Gibbs free energy of interaction between CRP and RNAP. (B) Like all thermodynamic models of gene regulation, this model consists of a set of states, each state having an associated Gibbs free energy and activity. The probability of each state is assumed to follow the Boltzmann distribution. (C) The corresponding activity predicted by such thermodynamic models is the state-specific activity averaged together using these Boltzmann probabilities. (D) This model formulated as a three-layer neural network. First layer nodes represent interaction energies, second layer nodes represent state probabilities, and third layer nodes represent transcriptional activity. The values of weights are indicated; gray lines correspond to zero weights. The second layer has a softmin activation, while the third has a linear activation. All thermodynamic models of cis-regulation can be represented using this general three-layer form.
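As a concrete sketch of this three-layer form, the lac promoter model of Fig. 1D can be written out directly, with the interaction-count matrix ω and state activities α read off from Fig. 1B. The energy values below are illustrative placeholders, not fitted parameters:

```python
import numpy as np

# omega[s, j]: number of times interaction j occurs in state s (Fig. 1B/1D)
# interactions j = (dG_C, dG_R, dG_I); states s = (empty, CRP, RNAP, CRP+RNAP)
omega = np.array([[0, 0, 0],
                  [1, 0, 0],
                  [0, 1, 0],
                  [1, 1, 1]], dtype=float)
alpha = np.array([0, 0, 1, 1], dtype=float)  # only RNAP-bound states transcribe

def softmin(x):
    w = np.exp(-(x - x.min()))   # numerically stable softmin
    return w / w.sum()

def lac_transcription_rate(dG, t_sat=1.0):
    dG_s = omega @ dG            # layer 1 -> 2: state energies dG_s
    P = softmin(dG_s)            # layer 2: Boltzmann state probabilities
    return t_sat * (alpha @ P)   # layer 3: t = t_sat * (P_3 + P_4)

# Hypothetical energies (dG_C, dG_R, dG_I) in thermal units
t = lac_transcription_rate(np.array([-1.0, 2.0, -4.0]))
```

Note that the network weights ω are fixed integers determined by the model's state diagram; only the interaction energies (and t_sat) are free parameters.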
We can infer thermodynamic models like these for a cis-regulatory sequence of interest (the wild-type sequence) from the data produced by an MPRA performed on an appropriate sequence library [26]. Indeed, a number of MPRAs have been performed with this explicit purpose in mind [26, 27, 33–35]. Such MPRAs generally use libraries consisting of sequence variants that differ from the wild-type sequence by a small number of single nucleotide polymorphisms (SNPs). The key modeling assumption that motivates using libraries of this form is that the assayed sequence variants will form the same molecular complexes as the wild-type sequence, but with Gibbs free energies and state activities whose values vary from sequence to sequence. By contrast, variant libraries that contain insertions, deletions, or large regions of random DNA (e.g. [20, 21, 23, 24, 31]) are unlikely to satisfy this modeling assumption.
Fig. 2A summarizes the sort-seq MPRA described in [26]. Lac promoter variants were used to drive GFP expression in E. coli, cells were sorted into 10 bins using fluorescence-activated cell sorting, and the variant promoters within each bin were sequenced. This yielded data comprising about 5 × 10^4 variant lac promoter sequences, each associated with one of 10 bins. The authors then fit the biophysical model shown in Fig. 1C, but under the assumption that ΔGC = θC · x and ΔGR = θR · x, where x is a one-hot encoding of promoter DNA sequence and θC, θR are energy matrices.
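The linear dependence ΔG = θ · x can be sketched as follows. The binding-site length, example sequence, and random energy matrix θ here are arbitrary illustrations, not values from [26]:

```python
import numpy as np

BASES = "ACGT"

def one_hot(seq):
    """One-hot encode a DNA sequence into a flat vector x."""
    x = np.zeros((len(seq), 4))
    for i, b in enumerate(seq):
        x[i, BASES.index(b)] = 1.0
    return x.ravel()

# Hypothetical energy matrix theta: one entry per (position, base), in
# thermal units; a real matrix would be inferred from MPRA data.
rng = np.random.default_rng(0)
theta = rng.normal(size=(6, 4)).ravel()   # toy 6-bp binding site

dG = theta @ one_hot("ACGTAC")            # dG = theta . x
```

Under this encoding, ΔG is simply the sum of one energy-matrix entry per position, selected by the base present at that position.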
Inference of a thermodynamic model from MPRA data. (A) Schematic of the sort-seq MPRA of [26]. A 75 bp region of the E. coli lac promoter was mutagenized at 12% per nucleotide. Variant promoters were then used to drive the expression of GFP. Cells carrying these expression constructs were then sorted using FACS, and the variant sequences in each bin were sequenced. This yielded data on about 5 × 10^4 variant promoters across 10 bins. (B) The neural network from Fig. 1D, but with ΔGC and ΔGR expressed as linear functions of the one-hot encoded DNA sequence x, as well as a dense feed-forward network mapping activity t to bins via a probability distribution p(bin|t). Gray lines indicate weights fixed at 0. The weights linking nodes P3 and P4 to node t were constrained to have the same value tsat. (C) The parameter values inferred from the MPRA data of [26]. Shown are the CRP energy matrix θC, the RNAP energy matrix θR, and the CRP-RNAP interaction energy ΔGI. Since increasingly negative energy corresponds to stronger binding, the y-axis in the logo plots is inverted. Logos were generated using Logomaker [32].
Here we used TensorFlow to infer the same model formulated as a deep neural network. Specifically, we augmented the network in Fig. 1D by making ΔGC and ΔGR sequence-dependent as in [26]. To link t to the MPRA measurements, we introduced a feed-forward network with one hidden layer and a softmax output layer corresponding to the 10 bins into which cells were sorted. Model parameters were then fit to the MPRA dataset using stochastic gradient descent and early stopping. The results agreed well with those reported in [26]. In particular, the parameters in the energy matrices θC and θR for CRP and RNAP exhibited respective Pearson correlation coefficients of 0.986 and 0.994 with those reported in [26]. The protein-protein interaction energy that we found, ΔGI = −2.9 kcal/mol, was also compatible with the previously reported value ΔGI = −3.3 ± 0.4 kcal/mol.
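A minimal TensorFlow/Keras sketch of this setup might look as follows. The layer sizes, optimizer choice, and placeholder data are assumptions made for illustration; the actual architecture and training details are those described in the text and in our code repository:

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import Model, layers

class ThermodynamicLayer(layers.Layer):
    """Layers 2-3 of Fig. 1D: state energies -> softmin -> activity t."""
    def build(self, input_shape):
        self.dG_I = self.add_weight(name="dG_I", shape=(), initializer="zeros")
        self.t_sat = self.add_weight(name="t_sat", shape=(), initializer="ones")

    def call(self, inputs):
        dG_C, dG_R = inputs  # each of shape (batch, 1)
        # State energies: empty, CRP only, RNAP only, CRP + RNAP + interaction
        dG_s = tf.concat(
            [tf.zeros_like(dG_C), dG_C, dG_R, dG_C + dG_R + self.dG_I],
            axis=-1)
        P = tf.nn.softmax(-dG_s, axis=-1)             # softmin over states
        return self.t_sat * (P[:, 2:3] + P[:, 3:4])   # t = t_sat (P3 + P4)

# Hypothetical shapes: L one-hot-encoded positions, 10 sorting bins
L, n_bins = 75, 10
x_in = layers.Input(shape=(4 * L,), name="one_hot_sequence")
dG_C = layers.Dense(1, name="theta_C")(x_in)  # dG_C = theta_C . x
dG_R = layers.Dense(1, name="theta_R")(x_in)  # dG_R = theta_R . x
t = ThermodynamicLayer()([dG_C, dG_R])

# Noise model p(bin|t): dense network with one hidden layer, softmax output
h = layers.Dense(16, activation="relu")(t)
p_bin = layers.Dense(n_bins, activation="softmax")(h)
model = Model(x_in, p_bin)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# Placeholder random data standing in for the real MPRA dataset
x = np.random.rand(256, 4 * L).astype("float32")
y = np.random.randint(n_bins, size=256)
stop = tf.keras.callbacks.EarlyStopping(patience=3, restore_best_weights=True)
model.fit(x, y, validation_split=0.2, epochs=2, callbacks=[stop], verbose=0)
```

Minimizing the categorical cross-entropy here is exactly maximum likelihood inference under the model p(bin|sequence), with the biophysical parameters and the noise model fit jointly.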
A major difference between our results and those of [26] is the ease with which they were obtained. Training of the network in Fig. 2B consistently took about 15 minutes on a standard laptop computer. The model fitting procedure in [26], by contrast, relied on a custom Parallel Tempering Monte Carlo algorithm that took about a week to run on a multi-node computer cluster (personal communication), and more recent efforts to train biophysical models on MPRA data have encountered similar computational bottlenecks [34, 35].
Also of note is the fact that in [26] the authors inferred models using information maximization. Specifically, the authors fit the parameters of t by maximizing the mutual information I[t; bin] between model predictions and observed bins. One difficulty with this strategy is the need to estimate mutual information. Instead, we used maximum likelihood to infer the parameters of t as well as the experimental transfer function (i.e., noise model) p(bin|t), which was modeled by a dense feed-forward network with one hidden layer. These two inference methods, however, are essentially equivalent: in the large data regime, the parameters of t that maximize I[t; bin] are the same as the parameters one obtains when maximizing likelihood over the parameters of both t and p(bin|t); see [36–38].
A shortcoming of thermodynamic models is that they ignore non-equilibrium processes. Kinetic models address this problem by providing a fully non-equilibrium characterization of steady-state activity. Such models are specified by listing explicit state-to-state transition rates rather than Gibbs free energies. Fig. 3A shows a three-state kinetic model of transcriptional initiation consisting of unbound promoter DNA, an RNAP-DNA complex in the closed conformation, and an RNAP-DNA complex in the open conformation [39, 40]. The rate k3 for the transition from the open state back to the unbound state represents transcript initiation. The overall transcription rate in steady state is therefore k3 times the occupancy of the open complex.
A kinetic model for transcriptional initiation by E. coli RNAP. (A) In this model, promoter DNA can participate in three complexes: unbound, closed, and open [39, 40]. Transitions between these complexes are governed by four rate constants: k1, k−1, k2, and k3. (B) A formula for the steady-state rate of mRNA production can be obtained using King-Altman diagrams [41, 42]. (C) This formula can be represented using the three-layer neural network shown, where layer 1 represents log transition rates, layer 2 (softmax activation) represents normalized King-Altman diagrams, and layer 3 (linear activation) represents promoter activity. Black lines indicate weight 1; gray lines indicate weight 0. Note that the single nonzero weight connecting layer 2 to layer 3 (orange) is actually the transition rate k3 from layer 1. All kinetic models of cis-regulation share this general three-layer form.
King-Altman diagrams [41, 42], a technique from mathematical enzymology, provide a straight-forward way to compute steady-state occupancy in kinetic models. Specifically, each state’s occupancy is proportional to the sum of directed spanning trees (a.k.a. King-Altman diagrams) that flow to that state, where each spanning tree’s value is given by the product of rates comprising that tree. Fig. 3B illustrates this procedure for the kinetic model in Fig. 3A. Every such kinetic model can be represented by a three-layer neural network (e.g., Fig. 3C) in which first layer nodes represent log transition rates, second layer nodes (after a softmax activation) represent normalized King-Altman diagrams, and third layer nodes represent the activities of interest.
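For the three-state model of Fig. 3A, the King-Altman computation can be carried out by hand and checked against the master-equation steady state; the rate constants below are arbitrary illustrative values:

```python
import numpy as np

def steady_state_king_altman(k1, km1, k2, k3):
    """King-Altman occupancies for the 3-state model of Fig. 3A.

    States: U (unbound), C (closed complex), O (open complex).
    Transitions: U->C (k1), C->U (km1), C->O (k2), O->U (k3).
    Each state's occupancy is proportional to the sum over directed
    spanning trees flowing into it (products of rates along each tree).
    """
    trees = np.array([
        km1 * k3 + k2 * k3,  # into U: {C->U, O->U} and {C->O, O->U}
        k1 * k3,             # into C: {U->C, O->U}
        k1 * k2,             # into O: {U->C, C->O}
    ])
    P = trees / trees.sum()          # normalized King-Altman diagrams
    return P, k3 * P[2]              # transcription rate t = k3 * P_open

def steady_state_master_equation(k1, km1, k2, k3):
    """Cross-check: null vector of the rate matrix dP/dt = R P."""
    R = np.array([[-k1,        km1,   k3],
                  [ k1, -(km1 + k2),   0.0],
                  [ 0.0,        k2,  -k3]])
    w, v = np.linalg.eig(R)
    p = np.real(v[:, np.argmin(np.abs(w))])
    return p / p.sum()

# Hypothetical rate constants (arbitrary units)
P, t = steady_state_king_altman(2.0, 1.0, 0.5, 0.3)
```

The agreement between the two routines reflects the King-Altman theorem itself: the spanning-tree sums are, up to normalization, exactly the components of the rate matrix's steady-state null vector.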
Here we have shown how both thermodynamic and kinetic models of gene regulation can be formulated as three-layer deep neural networks in which nodes and weights have explicit biophysical meaning. This represents a new strategy for interpretable deep learning in the study of gene regulation, one complementary to existing post-hoc attribution methods. We have further demonstrated that a neural-network-based thermodynamic model can be rapidly inferred from MPRA data using TensorFlow. This was done in the context of a well-characterized bacterial promoter because previous studies of this system have established concrete results against which we could compare our inferred model. The same modeling approach, however, should be readily applicable to a wide variety of biological systems amenable to MPRAs, including transcriptional regulation and alternative mRNA splicing in higher eukaryotes.
Code Availability and Acknowledgements
The neural network model shown in Fig. 2C, as well as the scripts used to infer it from the data of [26], are available at https://github.com/jbkinney/19_mlcb. We thank Anand Murugan, Yifei Huang, Peter Koo, Alan Moses, and Mahdi Kooshkbaghi for helpful discussions, as well as three anonymous referees for providing constructive criticism. This project was supported by NIH grant 1R35GM133777 and a grant from the CSHL/Northwell Health partnership.
Footnotes
tareen@cshl.edu
Presented at the 14th conference on Machine Learning in Computational Biology (MLCB 2019), Vancouver, Canada. Revised to add a link to code and to correct a typo in the King-Altman diagrams shown in Figure 3.
1. To reduce notational burden, all ΔG values are assumed to be in thermal units. At 37°C, one thermal unit is 1 kBT = 0.62 kcal/mol, where kB is Boltzmann’s constant and T is temperature.