bioRxiv

The optimal odor-receptor interaction network is sparse in olfactory systems: Compressed sensing by nonlinear neurons with a finite dynamic range

Shanshan Qin, Qianyi Li, Chao Tang, Yuhai Tu
doi: https://doi.org/10.1101/464875
Shanshan Qin
1Center for Quantitative Biology, Peking University, Beijing, 100871, China
Qianyi Li
2Yuanpei College, Peking University, Beijing, 100871, China
Chao Tang
1Center for Quantitative Biology, Peking University, Beijing, 100871, China
3School of Physics, Peking University, Beijing 100871, China
4Peking-Tsinghua Center for Life Sciences, Peking University, Beijing 100871, China
  • For correspondence: tangc@pku.edu.cn
Yuhai Tu
5IBM T. J. Watson Research Center, Yorktown Heights, New York 10598, USA
  • For correspondence: yuhai@us.ibm.com

Abstract

There are numerous different odorant molecules in nature but only a relatively small number of olfactory receptor neurons (ORNs) in brains. This “compressed sensing” challenge is compounded by the constraint that ORNs are nonlinear sensors with a finite dynamic range. Here, we investigate possible optimal olfactory coding strategies by maximizing the mutual information between odor mixtures and ORN responses with respect to the bipartite odor-receptor interaction network (ORIN), characterized by the sensitivities between all odorant-ORN pairs. We find that the optimal ORIN is sparse – a finite fraction of the sensitivities are zero, and the nonzero sensitivities follow a broad distribution that depends on the odor statistics. We show that the optimal ORIN enhances the performance of downstream learning tasks (reconstruction and classification). For ORNs with a finite basal activity, we find that having a basal-activity-dependent fraction of inhibitory odor-receptor interactions increases the coding capacity. All our theoretical findings are consistent with existing experiments, and predictions are made to further test our theory. The optimal coding model provides a unifying framework for understanding the peripheral olfactory systems of different organisms.

I. INTRODUCTION

Animals rely on their olfactory systems to detect, discriminate, and interpret external odor stimuli to guide their behavior. Natural odors are typically mixtures of different odorant molecules whose concentrations can vary over several orders of magnitude [1–3]. Remarkably, animals can distinguish a large number of odorants and their mixtures using a relatively small number of odor receptors (ORs) [4, 5]. For example, humans have only ~ 300 ORs [6, 7], while the often-cited number of distinguishable odors is ~ 10000 [8], and the real number may be even larger [9] (see also [10] and [11]). Humans can also distinguish odor mixtures with up to 30 different compounds [12]. In comparison, the highly olfactory lifestyle and exquisite olfactory learning ability of the fly are afforded by only ~ 50 ORs [4, 13]. The olfactory system achieves such remarkable ability through a combinatorial code in which each odorant is sensed by multiple receptors and each receptor can be activated by many odorants [14–16]. In both mammals and insects, odorants bind to receptors in the cilia or dendrites of olfactory receptor neurons (ORNs), each of which expresses only one type of receptor. ORNs that express the same receptor then converge onto the same glomerulus in the olfactory bulb (mammals) or antennal lobe (insects), whose activity patterns contain the information about external odor stimuli [13, 17–19]. A key question that we want to address here is how ORNs best represent external olfactory information so that it can be interpreted by the brain to guide an animal’s behavior [4, 13, 20].

It has long been hypothesized that the input-output response functions of sensory neurons are “selected” by the statistics of the stimuli in the organism’s natural environment to transmit a maximum amount of information about that environment, generally known as the efficient coding hypothesis [21, 22] or the related InfoMax principle [23, 24]. For instance, the contrast-response function of interneurons in the fly’s compound eye can be well approximated by the cumulative probability distribution of contrast in the natural environment [22]. The receptive fields of neurons in the early visual pathway are thought to exploit the statistics of natural scenes [25–29]. A similar result has also been observed in the auditory system [30]. In all these cases, to achieve maximum information transmission, an “ideal” neuron should transform the input distribution into a uniform output distribution [22, 31] and a population of neurons should decorrelate their responses [28, 32, 33].

However, unlike light or sound, which can be characterized by a single quantity such as wavelength or frequency, there are a huge number of odorants, each with its own unique molecular structure and distinct physiochemical properties [34, 35]. The high dimensionality of the odor space thus poses a severe challenge for the olfactory system in coding olfactory signals. Fortunately, typical olfactory stimuli are sparse, with only a few types of odorant molecules in an odor mixture [1, 2, 36]. The sparsity of the odor mixture immediately reminds us of the powerful compressed sensing (CS) theory developed in the computer science and signal-processing communities [37, 38]. The CS theory shows that sparse high-dimensional signals can be encoded by a small number of sensors (measurements) through random projections, and the highly compressed signal can be reconstructed (decoded) with high fidelity by using an L1-minimization algorithm [37–39]. However, conventional CS theory assumes the sensors to have a linear response function with an essentially infinite dynamic range [40]. In contrast, the ORN response is highly nonlinear [41, 42], with a typical dynamic range of less than 2 orders of magnitude, which is far less than the typical concentration range of odorants [42].

The use of CS theory has recently been explored in olfactory systems. For example, Zhang and Sharpee proposed a fast reconstruction algorithm in a simplified setup with binary ORNs and binary odor mixtures without concentration information [43]. In another work, Krishnamurthy et al. studied how the overall “hour-glass” (compression followed by decompression) structure of the olfactory circuit can facilitate olfactory association and learning, again with the assumption that ORN responses to odor mixtures are linear [44]. Following ideas in CS theory, Singh et al. recently proposed a fast olfactory decoding algorithm that might be implemented in the downstream olfactory system [45]. However, all these studies primarily focus on the downstream decoding and learning of the compressed signals by assuming a linear neuron response function. The question of how ORNs with a nonlinear input-output response function can best compress the sparse high-dimensional odor information remains unanswered.

Another related study is the recent work by Zwicker et al. [46], in which the authors investigated the maximum entropy coding scheme for the olfactory system by using a simplified binary response function, where an odor only induces a response when its concentration is above a threshold that is inversely proportional to the receptor’s sensitivity to the odor. They found two conditions for the binary ORNs to maximize information transmission. The first condition is that each ORN on average responds to half of the odors, i.e., half of the odors have a concentration that is higher than the corresponding threshold; the other condition is that the responses of different ORNs need to be uncorrelated. These results were obtained by studying the average activities of the ORNs and their correlations. However, due to the limitation of the binary input-output response function and the specific prior for the sensitivity distribution used in [46], the optimal coding strategy for neurons with realistic physiological properties remains unclear.

To address these important open questions, here we study the optimal coding scheme by using a realistic ORN input-output response function where the ORN output depends on the odor concentration continuously in a nonlinear (sigmoidal with odor concentration on a logarithmic scale) form characterized by its sensitivity, or equivalently the inverse of the half-maximum response concentration. By optimizing the input-output mutual information in the full sensitivity matrix space without any prior and following general insights from compressed sensing for sparse odor mixtures, we systematically study the statistical properties of the optimal sensitivity matrix and their dependence on odor statistics.

We found that the optimal ORN sensitivity matrix is sparse, i.e., each receptor responds to only a fraction of the odorants in its environment and its sensitivity to the other odorants is zero. The sparsity itself is a robust (universal) feature of the optimal sensitivity matrix, and the value of the optimal sparsity depends on the statistics of the odor mixture and the number of ORNs. By using a simple mean-field theory, we show that the sensitivity matrix sparsity is caused by the competition (trade-off) between enhancing multiple-odor detection and avoiding odor-odor interference. Next, we demonstrate the advantages of the maximum entropy (information) coding scheme in two downstream “decoding” (learning) tasks: reconstruction and classification of odor stimuli. Finally, we generalize our theory to neurons with a finite basal activity, where we found that the optimal coding strategy is to allow the co-existence of odor-evoked inhibition and activation, with the fraction of inhibitory interactions depending on the basal activity. Existing data from different organisms are consistent with our theory, which provides a unified framework for understanding olfactory coding. Predictions based on our theory and future directions for incorporating more biological complexities, such as odor-odor and receptor-receptor correlations, are also discussed.

II. RESULTS

We first describe the mathematical setup of the problem before presenting the results. An odor mixture can be represented as a vector c = (c1,…,cN), where cj is the concentration of odorant (ligand) j (= 1, 2,…,N) and N is the number of all possible odorants in the environment. A typical odor mixture is sparse, with only n (≪ N) odorant molecules having non-zero concentrations. As illustrated in Fig. 1(a) (dotted box), the odor mixture signal c is sensed by M sensors. The encoding process, which maps c to the ORN response vector r = (r1, r2,…,rM), is determined by the bipartite odorant-ORN interaction network characterized by the sensitivity matrix W, whose element Wij is the sensitivity of the i-th sensor (ORN) to the j-th odorant, with j = 1, 2,…,N and i = 1, 2,…,M. For simplicity, we used a simple competitive binding model [47, 48] [and see Supplemental Material (SM)], in which the normalized response of ORN i (= 1, 2,…,M) to odor c can be described by the nonlinear function

r_i = F_i(c) + η_i,  with F_i(c) = Σ_j W_ij c_j / (1 + Σ_j W_ij c_j),   (1)

where F_i is the sigmoidal response function of the competitive binding model and η_i represents the noise term. For convenience, we assume η_i is Gaussian noise with zero mean and standard deviation σ0. Other forms of the nonlinear response function and noise can be used without affecting the general conclusions of our paper.
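To make the encoding step concrete, the competitive-binding model above can be sketched in a few lines of code. This is a minimal illustration: the log-normal sensitivities and the specific values of N and M are assumptions for the example only, not optimized parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

N, M = 100, 30                                        # odorants, ORNs (assumed)
W = rng.lognormal(mean=0.0, sigma=2.0, size=(M, N))   # assumed sensitivity matrix

def orn_response(c, W, sigma0=0.0, rng=rng):
    """Normalized ORN responses to odor vector c via competitive binding:
    r_i = (W c)_i / (1 + (W c)_i) + Gaussian noise of scale sigma0."""
    drive = W @ c                       # total weighted input per ORN
    r = drive / (1.0 + drive)           # saturating nonlinearity, r in [0, 1)
    if sigma0 > 0:
        r = r + rng.normal(0.0, sigma0, size=r.shape)
    return r

# A sparse mixture: only n = 2 odorants have non-zero concentration.
c = np.zeros(N)
c[[3, 57]] = [0.5, 2.0]
r = orn_response(c, W)
```

Note how the saturating form bounds every noiseless response to [0, 1), which is precisely the finite-dynamic-range constraint discussed in the text.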

FIG. 1.
  • Download figure
  • Open in new tab
FIG. 1.

Schematics of peripheral odor coding. (a) Illustration of the interaction between odorants and odor receptors. There are N possible odorant molecules and M ORNs. The interaction between the i-th ORN and the j-th odorant is characterized by the sensitivity Wij. The odorant-ORN interactions in the peripheral olfactory system (the dotted box) are characterized by the (M × N) sensitivity matrix W. The odor information collected by the ORNs is passed on to the downstream network for further processing. (b) Typical ORN-odorant dose-response curve according to Eq. (1). The range of linear response (on the log scale) is highlighted by the shaded area.

As illustrated in Fig. 1(b), the input-output response curve is highly nonlinear (sigmoidal), resulting in a finite response range for each sensor that is less than the concentration range of a typical odorant molecule. Therefore, to encode the full concentration range of an odorant, the odorant needs to interact with multiple sensors with different sensitivities. On the other hand, given that M < N, each sensor has to sense multiple odorant molecules. These two considerations lead to the many-to-many odor-receptor interaction network characterized by the sensitivity matrix W = {Wij | i = 1, 2,…, M; j = 1, 2,…,N}.

Eq. (1) maps the external odor stimulus c to the internal neuronal activity r = (r1, r2, ⋯, rM). The downstream olfactory circuits then use this response pattern to evaluate (decode) odor information (both their identities and concentrations) in order to guide the animal’s behaviors. The quality of encoding odor information by the periphery ORNs directly sets the upper limit of how well the brain can decode the odor information [49]. In this paper, we focus on discovering the key statistical properties of the sensitivity matrix W that allows biologically realistic ORNs to best represent the external odor information.

A given odor environment can be generally described by a probability distribution Penv(c). To convey maximum information about external odor stimuli in their response patterns to the brain, ORs/ORNs can adjust their sensitivity matrix W to “match” the odor statistics Penv(c). Without any assumption on what information the brain may need, the mutual information I(c, r) between the stimuli and the response pattern of the ORNs sets the limit on how much odor information is received by the peripheral ORNs and thus serves as a good “target function” to be maximized [23, 24, 46, 50–52]. I is defined as

I(c, r) = H(r) − H(r|c),   (2)

where H(r) and H(r|c) are the entropies of the output distribution Pr(r) and the conditional distribution P(r|c), respectively. In the limit of small noise, the second term is independent of W and negligible; hence, we will use H(r) as our target function for optimization.

H(r) depends on W and Penv(c) because Pr(r) depends on W and Penv(c):

Pr(r) = ∫ P(r|c) Penv(c) dc.   (3)

In this paper, we study the optimal coding strategy by maximizing the mutual information I, or equivalently the differential entropy H, with respect to the sensitivity matrix W for different odor mixture statistics Penv(c) and different numbers of ORNs. The mutual information as given in Eq. (2) can only be computed analytically for very simple cases. For more general cases, we used the covariance matrix adaptation evolution strategy (CMA-ES) algorithm to find the optimal sensitivity matrix [53, 54] (see SM for technical details).
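The objective being optimized can be sketched numerically. A full CMA-ES implementation is beyond a short example, so the snippet below uses a crude stand-in for the entropy estimate: it Monte Carlo samples sparse log-normal odor mixtures and scores a candidate W by a Gaussian approximation to the differential entropy of the responses, ½ ln det(2πe Σ). All distributions and parameter values here are illustrative assumptions, not the paper's actual estimator.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_responses(W, n_mix=2, sigma_c=2.0, n_samples=2000, rng=rng):
    """Monte Carlo ORN responses to sparse log-normal odor mixtures."""
    M, N = W.shape
    R = np.empty((n_samples, M))
    for t in range(n_samples):
        c = np.zeros(N)
        idx = rng.choice(N, size=n_mix, replace=False)
        c[idx] = rng.lognormal(0.0, sigma_c, size=n_mix)
        drive = W @ c
        R[t] = drive / (1.0 + drive)    # same saturating response as Eq. (1)
    return R

def gaussian_entropy(R):
    """Gaussian approximation to the differential entropy of the responses:
    H ~ 0.5 * ln det(2*pi*e*Sigma), with a small jitter for stability."""
    M = R.shape[1]
    cov = np.cov(R.T) + 1e-9 * np.eye(M)
    _, logdet = np.linalg.slogdet(2.0 * np.pi * np.e * cov)
    return 0.5 * logdet

M, N = 10, 50
W_dense = rng.lognormal(0.0, 2.0, size=(M, N))
W_sparse = np.where(rng.random((M, N)) < 0.5, W_dense, 0.0)  # zero half the entries
H_dense = gaussian_entropy(sample_responses(W_dense))
H_sparse = gaussian_entropy(sample_responses(W_sparse))
```

An optimizer such as CMA-ES would repeatedly propose W matrices and keep those with larger estimated entropy; the Gaussian approximation here only captures second-order response statistics.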

A. The optimal sensitivity matrix is sparse

Odor concentrations vary widely in natural environments [1, 55]. To capture this property, we studied the case where the odorant concentrations in an odor mixture follow a log-normal distribution with log-concentration variance σc². Other broad distributions, such as power-law distributions, were also studied without changing the general conclusions. For simplicity, we consider the case where odorants appear independently in the mixture; more realistic considerations, such as correlations among odorants, are discussed later in the Discussion section.

For given odor statistics (characterized by N, n and σc), and a given number of nonlinear sensors M, we can compute and optimize the input-output mutual information I(W|N, n, σc; M) with respect to all the M × N elements in the sensitivity matrix W. We found that the optimal sensitivity matrix W is “sparse”: only a fraction (ρw) of its elements have non-zero values [sensitive, shown as the colored elements in Fig. 2(a)], and the rest are insensitive [the black elements in Fig. 2(a)], with essentially zero values of Wij. The sparsity in the optimal W was not found in the previous theoretical study [46] mainly due to the oversimplified binary ORN response function used there. From the histogram of ln(Wij) shown in Fig. 2(b), it is clear that elements in the optimal sensitivity matrix fall into two distinctive populations: the insensitive population that has practically zero sensitivity [note the log scale used in Fig. 2(b)], and a sensitive population with a finite sensitivity. For the cases when the odor concentration follows a log-normal distribution, the distribution of the sensitive (nonzero) elements Ps(w) can be fitted well with a log-normal distribution as shown in Fig. 2(b).

FIG. 2.
  • Download figure
  • Open in new tab
FIG. 2.

Statistics of the optimal sensitivity matrix elements from theory and comparison with experiments. (a) Heatmap of a typical optimal sensitivity matrix from our model. Color indicates the value of ln(Wij), black indicates the “inactive” or negligible interactions. (b) Histogram of all the Wij values from our model. It shows a bimodal distribution: an insensitive part with near zero Wij and a sensitive part with nonzero Wij. The distribution of the sensitive elements can be fitted by a log-normal distribution. (c) Experimental data from fly larva and mouse. Left: the fraction of sensitive odorant-receptor interactions ρw estimated in experiments for fly larva [42] and mouse [56]. Right: the histogram of sensitive Wij, Ps(w), for fly larva and mouse. Model parameters are N = 100, M = 30, n = 2, σc = 2, μ = 0.

Our main finding here, i.e., sparsity in the odor-receptor sensitivity matrix, is supported by existing experimental measurements. As shown in Fig. 2(c), the sparsity ρw is estimated to be ~ 0.4 for fly larva [42] and ~ 0.1 for mouse [56]. Additionally, the broad distribution of the non-zero sensitivities in our model also agrees qualitatively with those estimated from experiments [Fig. 2(c), two right panels], which are slightly skewed log-normal distributions.

Besides the distribution of the individual sensitivity matrix elements, we also calculated the row (sensor)-wise and column (odorant)-wise rank-order correlation coefficients (Kendall’s tau, τ) and compared them with those from the same matrix but with its elements shuffled randomly. As shown in the Supplemental Material (SM), we found that both the rows and columns (Fig. S1 in SM) in the optimal matrix have a higher level of orthogonality (and thus independence) than that from random matrices. This orthogonality in the optimal W matrix leads to a higher input-output mutual information than those from the shuffled matrices [see Fig. S2(a) in SM] and a nearly uniform distribution of ORN activity for different odor mixtures [see Fig. S2(b)-(d) in SM].
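The row-wise rank-order comparison can be reproduced with a small self-contained sketch. We use Kendall's tau-a and a random log-normal matrix as a stand-in for an optimized one; all sizes are illustrative.

```python
import numpy as np
from itertools import combinations

def kendall_tau(x, y):
    """Kendall tau-a rank correlation between two equal-length sequences."""
    n = len(x)
    s = sum(np.sign(x[i] - x[j]) * np.sign(y[i] - y[j])
            for i, j in combinations(range(n), 2))
    return 2.0 * s / (n * (n - 1))

def mean_abs_row_tau(W):
    """Average |tau| over all row pairs; smaller values mean the rows are
    closer to rank-order orthogonal (more independent)."""
    taus = [abs(kendall_tau(W[i], W[j]))
            for i, j in combinations(range(W.shape[0]), 2)]
    return float(np.mean(taus))

rng = np.random.default_rng(2)
W = rng.lognormal(0.0, 2.0, size=(5, 20))              # stand-in matrix
W_shuf = rng.permuted(W.reshape(-1)).reshape(W.shape)  # element-shuffled control
tau_orig, tau_shuf = mean_abs_row_tau(W), mean_abs_row_tau(W_shuf)
```

Applying the same comparison to an optimized matrix versus its shuffled version is what produces the orthogonality result reported in Fig. S1 of the SM.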

B. The optimal sparsity depends on odor statistics and the number of sensors

The statistics of the optimal sensitivity matrix elements are characterized by the sparsity ρw defined as the fraction of non-zero elements in W, and the distribution of the sensitive (nonzero) elements, Ps(w), which is further characterized by its mean (μw) and standard deviation (σw). Note that the sparsity parameter ρw is defined in such a way that a smaller value of ρw corresponds to a sparser sensitivity matrix. We investigated systematically how ρw, μw, and σw depend on statistical properties of the odor mixture characterized by N, n, and σc, as well as M, the total number of sensors (ORNs).

We found that as the odor concentration distribution becomes broader with increasing σc, ρw increases [Fig. 3(a)]. This is expected, as more receptors with different sensitivities are required to sense a broad range of input concentrations. When we increased the odor mixture sparsity n or the total number of possible odorants N, the optimal sensitivity matrix sparsity ρw decreases [Fig. 3(b)&(c)]. In general, as the mapping from odor space to ORN space becomes more “compressed” with larger values of n and/or N, the optimal strategy is to have each receptor respond to a smaller fraction of odorants to avoid saturation. Finally, we gradually increased the number of receptors M with fixed values of N, n, and σc. We found that ρw decreases, i.e., the sensitivity matrix becomes sparser as the number of sensors M increases [Fig. 3(d)]. This somewhat counter-intuitive result can be understood as follows: since the system has more sensors to encode signals, each sensor can respond to a smaller number of odorants to avoid interference. For all the cases we studied, when the odor concentrations follow a log-normal distribution, the distribution of the non-zero sensitivities in the optimal sensitivity matrix follows roughly a log-normal distribution with its mean μw and standard deviation σw depending on the odor statistics (σc, n, N) and the number of ORNs M (see Fig. S3 in SM).

FIG. 3.
  • Download figure
  • Open in new tab
FIG. 3.

Dependence of ρw on the width of the odor log-concentration distribution σc (a), the input sparsity n (b), the number of total odorants N (c), and the number of receptors M (d). In (a-b), N = 50, M = 13; in (c), M = 13; and in (d), N = 50. Error bars are standard deviations over 40 simulations.

To verify whether sparsity is a general (robust) feature of the optimal sensitivity matrix, we studied cases in which the odor concentration follows different distributions, such as a symmetrized power-law distribution, Penv(c) ∝ exp(−β| ln c|) [see Fig. S4(a) in the SM for the comparison with the log-normal distribution], with different exponents β. For all values of β studied, there is always a finite sparsity ρw < 1 in the optimal sensitivity matrix. As shown in Fig. 4(a), ρw decreases slightly as β increases and the odor concentration distribution becomes narrower, which is consistent with the previous cases where the odor concentration distribution is log-normal [Fig. 3(a)]. However, as shown in Fig. 4(b), the distribution of the sensitive elements, Ps(w), does not follow a log-normal distribution exactly [see Fig. S4(b) in SM]. In fact, Ps(w) is asymmetric in ln(w) space with a skewness that depends on β, as shown in the inset of Fig. 4(b).

FIG. 4.
  • Download figure
  • Open in new tab
FIG. 4.

The optimal sensitivity matrix for the symmetric power-law odor concentration distribution Penv(c) ∝ exp[−β| ln c|]. (a) The sparsity ρw versus the power-law exponent β. (b) The distribution of the non-zero sensitivities Ps(w) for β = 0.3, 0.7. Inset shows the dependence of the skewness of the distribution on β. Parameters: N = 50, M = 13, n = 3, σ0 = 10⁻³.

Taken together, our results suggest that sparsity in the sensitivity matrix is a robust feature of nonlinear compressed sensing problems. This theoretical finding is supported by, and explains, existing experiments in olfactory systems [42, 56]. Our study also showed that the nonzero sensitivities follow a broad distribution whose exact shape, mean, and variance depend on the odor statistics and the total number of ORNs.

C. The origin of sparsity in the optimal sensitivity matrix

Given the constraint that the number of sensors is much smaller than the possible number of odorants, i.e., M ≪ N, each sensor needs to sense multiple types of odorant molecules so that all odorant molecules can be sensed by at least one sensor. However, in an odor mixture with a few types of odorant molecules, two or more odorants in the mixture can bind to the same sensor and interfere with each other, e.g., by saturating the nonlinear sensor. The probability of interference increases with ρw, the fraction of nonzero elements in the sensitivity matrix. This tradeoff between sensing multiple odorants and the possible interference determines the sparsity of the optimal sensitivity matrix. We demonstrate this tradeoff and its effect more rigorously by developing a mean-field theory (MFT), as described below.

We begin with the simplest case where many receptors sense only one odorant (N = 1, M ≫ 1), where obviously there is no interference. As first proposed by Laughlin [22], the optimal coding scheme is for the M receptors to distribute their sensitivities according to the input concentration distribution so that the output distribution is uniform. For the case when the distribution of the odorant concentration is log-normal with standard deviation σc, the optimal sensitivity distribution P1(w) that maximizes H(r) is also approximately log-normal:

P1(w) = [√(2π) σw w]⁻¹ exp[−(ln w − μw)²/(2σw²)],   (4)

where the mean μw = 0 and the variance σw² increases with the variance σc² of the logarithmic concentration distribution. More importantly, we show analytically that in general the coding capacity I1 increases logarithmically with the number of receptors M when M ≫ 1 (see SM for details), which is verified by simulation results as shown in Fig. 5(a):

I1(M) ≈ ln M + const.   (5)
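Laughlin's histogram-equalization idea for the single-odorant case can be illustrated numerically: place each sensor's half-maximum concentration 1/w_k at a quantile of the concentration distribution, so that the summed population response approximates the input CDF and the population output is roughly uniform. The construction below is a sketch with assumed parameter values, not the paper's actual optimization.

```python
import numpy as np
from math import erf, exp, sqrt

sigma_c = 2.0    # assumed width of the log-normal concentration distribution
M = 200          # number of sensors (illustrative)

def lognormal_ppf(q, sigma):
    """Inverse CDF of lognormal(mu=0, sigma), via bisection on the normal CDF
    (a dependency-free stand-in for a statistics library)."""
    lo, hi = -10.0, 10.0
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if 0.5 * (1.0 + erf(mid / sqrt(2.0))) < q:
            lo = mid
        else:
            hi = mid
    return exp(sigma * 0.5 * (lo + hi))

# Half-maximum concentrations 1/w_k tile the quantiles of the input distribution.
quantiles = (np.arange(M) + 0.5) / M
w = 1.0 / np.array([lognormal_ppf(q, sigma_c) for q in quantiles])

def pop_response(c):
    """Mean response of the sensor population; approximates the input CDF."""
    return float(np.mean(w * c / (1.0 + w * c)))

rng = np.random.default_rng(3)
c_samples = rng.lognormal(0.0, sigma_c, size=5000)
outputs = np.array([pop_response(c) for c in c_samples])   # roughly uniform on (0, 1)
```

Because each single-sensor sigmoid has a finite width in log concentration, the population response is a smoothed version of the input CDF rather than an exact equalizer, but the output still spans (0, 1) nearly uniformly.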

FIG. 5.
  • Download figure
  • Open in new tab
FIG. 5.

The tradeoff between increasing single odorant information and interference among multiple odorants. (a) The differential entropy with one odorant, I1, versus the number of receptors M for different width (σc) of odor log-concentration distribution. I1 increases monotonically with M but it only grows logarithmically with M for large M (dashed line). (b) Differential entropy I2 for the case with two odorants in the mixture with their concentrations following the same log-normal distribution with width σc. I2 depends non-monotonically on the fraction of sensitive receptors ρw(= m/M) with a maximum (marked by the dashed lines) at Embedded Image that depends on σc, which is shown in the inset.

This means that sparsity ρw = 1, i.e., all sensitivities should be nonzero because there is no interference when only one type of odorant molecule (n = 1) is present in the mixture. However, it is important to note that the maximum mutual information only increases weakly (logarithmically) for large M.

We next consider the case where two odorants are sensed by multiple receptors (N = 2, M ≫ 1). Let us denote the number of receptors that respond to each odorant as m (m ≤ M) and the sparsity ρw = m/M. If each odorant is sensed by a disjoint set of receptors, the total differential entropy will simply be double that for a single odorant: I2(m) = 2I1(m). However, there is a finite probability p = m/M = ρw that a given receptor in one set will also respond to the other odorant. Therefore, on average there are m × p = m²/M receptors whose output is “corrupted” due to interference between two different odorants in a given mixture. We can write down the differential entropy as

I2(m) ≈ 2I1(m) − (m²/M) ΔI,   (6)

where I1(m) is the maximum differential entropy for one odorant [Eq. (5)] and ΔI is the marginal loss of information (entropy loss), which can be approximated by ΔI ≈ α[I1(m + 1) − I1(m)] ≈ α ∂I1(m)/∂m, where α ≤ 1 is the average fraction of information loss for a “corrupted” sensor. We can then obtain the optimal value of m by maximizing I2(m) with respect to m. For m ≪ M, the interference effect is small, so I2(m) ≈ 2I1(m), which increases with m logarithmically according to Eq. (5). As m increases, the interference effect given by the second term on the RHS of Eq. (6) increases with m faster than the slow logarithmic growth of 2I1(m). This leads to a peak of I2(m) at an optimal value m = m* < M, i.e., a sparsity of the sensitivity matrix ρw = m*/M < 1 [Fig. 5(b)].
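This tradeoff can be made concrete with a small numerical example. We assume the logarithmic scaling I1(m) ≈ ln m + const from Eq. (5); the offset I0 and the interference factor alpha below are illustrative choices, not fitted values, picked so that the optimum falls in the interior.

```python
import numpy as np

M = 200         # number of receptors (illustrative)
alpha = 4.0     # assumed information-loss factor for corrupted sensors (illustrative)
I0 = 1.0        # assumed additive constant in I1(m) = ln(m) + I0

m = np.arange(1, M + 1, dtype=float)
I1 = np.log(m) + I0
dI1_dm = 1.0 / m                               # derivative of ln(m)
I2 = 2.0 * I1 - (m**2 / M) * alpha * dI1_dm    # Eq. (6): interference term grows ~ m

m_star = int(m[np.argmax(I2)])                 # interior optimum m* < M
rho_star = m_star / M                          # optimal sparsity rho_w = m*/M < 1
```

Analytically, with this scaling I2(m) = 2 ln m + 2 I0 − α m/M, so dI2/dm = 0 gives m* = 2M/α (here m* = 100, ρw = 0.5): the linear interference cost always overtakes the logarithmic gain, producing an interior peak.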

In the MFT, we can compute the olfactory coding and interference by ignoring the weak rank-order correlations in the optimal sensitivity matrix and assuming the optimal sensitivity matrix elements are i.i.d. In particular, we used the following approximation for the distribution of the sensitivity matrix elements:

P(Wij) = (1 − ρw) δ(Wij) + ρw Ps(Wij),   (7)

where ρw is the matrix sparsity and Ps(Wij) is a smooth distribution function, approximated here as the log-normal distribution with mean μw and standard deviation σw given in Eq. (4). The mean differential entropy of the ORN response pattern, ⟨H(r)⟩_W, averaged over the distribution of the sensitivity matrix W, can then be maximized with respect to the parameters ρw, μw, and σw (see SM for details). The resulting optimal parameters agree qualitatively with our direct numerical simulations, with a sparsity ρw < 1 that increases with the width σc of the input distribution (see Fig. S5 in SM).
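This mean-field ansatz amounts to a "spike-and-slab" prior on the matrix elements, which is straightforward to sample. The parameter values below are illustrative, not the optimized ones.

```python
import numpy as np

rng = np.random.default_rng(4)

def sample_W(M, N, rho_w, mu_w=0.0, sigma_w=2.0, rng=rng):
    """Draw a sensitivity matrix from the spike-and-slab mixture:
    each element is 0 with probability 1 - rho_w, otherwise drawn i.i.d.
    from a log-normal 'sensitive' distribution Ps."""
    sensitive = rng.random((M, N)) < rho_w
    values = rng.lognormal(mu_w, sigma_w, size=(M, N))
    return np.where(sensitive, values, 0.0)

W = sample_W(M=30, N=100, rho_w=0.4)
empirical_rho = np.count_nonzero(W) / W.size   # should be close to rho_w
```

In the MFT calculation, matrices drawn this way are used to average the response entropy over W, which is then maximized over (ρw, μw, σw).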

D. The optimal sparse sensitivity matrix enhances downstream decoding performance

The response patterns of ORNs form the internal representation of external odor stimuli that the higher (downstream) regions of the brain can use to infer the odor information for controlling the organism’s behavior. In the previous sections, we focused on understanding the statistical properties of the optimal sensitivity matrix W that maximizes the mutual information between odor input and ORN output. In this section, we test whether the optimal sensitivity matrix can enhance downstream decoding performance by examining two specific learning tasks: classification and reconstruction.

Task I: classification

The goal of the classification task is to infer the category of an odor mixture, such as its valence, by training with similar odor stimuli. Classification is believed to be carried out by the Drosophila olfactory circuit, which we describe briefly here. After odor signals are sensed by ~ 50 ORNs, they are relayed by the projection neurons (PNs) in the antennal lobe to a much larger number of Kenyon cells (KCs) in the mushroom body (MB), as illustrated in Fig. 6(a). Each of the ~ 2000 KCs in the MB on average receives input from ~ 7 randomly selected PNs [57]. A single GABAergic neuron (APL) on each side of the brain can be activated by the KCs and inhibits the KCs globally [58]. Such random expansion and global inhibition enable a sparse and decorrelated representation of odor information in the MB [57, 59, 60]. The large number of KCs in the MB then converge onto a few (only 34) mushroom body output neurons (MBONs) [61], which project to other brain regions and drive attractive or repulsive behavior [62]. Olfactory learning is mainly mediated by the dopaminergic neurons (DANs), which control the synaptic weights between the KCs and the MBONs [63].

FIG. 6.
  • Download figure
  • Open in new tab
FIG. 6.

Maximum entropy coding facilitates olfactory learning and classification. (a) Schematics of the neural circuitry for olfactory learning in the fly; see text for a detailed description. (b) A simplified model of the fly olfactory system shown in (a) for learning the valence of odor stimuli, where the effect of the dopaminergic neurons (DANs) is replaced by simple plastic weights from the KCs to the MBON. (c) Odors are organized as clusters of size ΔS and are randomly assigned an odor valence. The decoding network receives the response pattern of the ORNs and classifies it into the right category. 100 clusters were drawn, each containing 50 variations, resulting in 5000 odor stimuli, of which 80% were used as training data and the rest as testing data. (d) Classification performance with respect to the sparsity of the sensitivity matrix. The best performance appears at around ρw = 0.6, within the 95% maximum entropy region. Parameters: N = 100, M = 10, n = 3, σc = 2, σ0 = 0.05, ΔS = 0.1, 500 KC units, and two odor categories. Error bars are standard deviations from 40 simulations.

To mimic the properties of the MB, our model “classifier” network, as illustrated in Fig. 6(b), contains a high-dimensional mixed layer (KCs). For simplicity, we consider a single readout neuron. Each KC unit in the mixed layer pools the ORNs with a fixed, random, sparse matrix. Only the synaptic weights from the KCs to the readout neuron are plastic. To account for the variability of natural odors, we assumed that odor stimuli fall into clusters whose centers represent the corresponding typical odor stimuli. Members of a given cluster are variations of the centroid [64]. The radius of a cluster, ΔS, characterizes the variability of a specific odor mixture. Centroids were drawn from Penv(c) with randomly assigned labels (attractive or aversive). Members inside each cluster were generated by adding noise of size ΔS, which results in clouds of points in the odor space [Fig. 6(c)], with each cloud having a randomly assigned label (see SM for details).

The synaptic weights from the KCs to the readout neuron are trained by using a simple linear discriminant analysis (LDA) method, although other linear classification algorithms, such as support vector machines (SVM), would also work. After training, the performance of the “classifier” is quantified by the accuracy of classification on the testing dataset.

To test the effects of different coding schemes on classification performance, we vary the distribution of the sensitivity matrix elements by changing the sparsity ρw without changing the distribution of the non-zero sensitivity matrix elements (e.g., the log-normal distribution with fixed mean and variance). The output of the coding process r(c, W) serves as the input for the “classifier” network, and the classification error is computed for different values of ρw. As shown in Fig. 6(d), we find that the best performance is achieved near ρw = 0.6, which lies within the range of ρw with large mutual information between the odor input and the ORN/PN response [shaded region in Fig. 6(d)]. Changing parameters such as M, n, and the number of categories gives similar results (see Fig. S6 in SM). In line with recent studies showing that sparse high-dimensional representations facilitate downstream classification [64, 65], our results suggest that maximum entropy coding at the ORN/PN level may enhance classification by retaining maximum odor mixture information in a form that can be decoded by the KCs through random expansion.
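The coding-then-classification pipeline can be sketched end to end. The Hill-type ORN response, the rectified random ORN-to-KC expansion, and a least-squares linear readout (a stand-in for LDA; for two classes the two give equivalent decision directions) are assumed forms for illustration, as are the specific thresholds and cluster counts:

```python
import numpy as np

rng = np.random.default_rng(1)
N, M, K = 100, 10, 500            # odorants, ORNs, KC units (paper-like sizes)

def orn_response(c, W, n=3, sigma0=0.05):
    """Saturating Hill-type ORN response with small Gaussian noise (assumed form)."""
    a = (c @ W.T) ** n
    return a / (1.0 + a) + sigma0 * rng.standard_normal((c.shape[0], M))

def make_W(rho_w):
    """Sensitivity matrix with sparsity rho_w and log-normal nonzero elements."""
    return rng.lognormal(0, 2, size=(M, N)) * (rng.random((M, N)) < rho_w)

# clustered odor stimuli with random binary valence labels
centroids = np.zeros((40, N)); labels = rng.integers(0, 2, 40)
for k in range(40):
    centroids[k, rng.choice(N, 3, replace=False)] = rng.lognormal(0, 2, 3)
X = np.repeat(centroids, 25, axis=0) * np.exp(0.1 * rng.standard_normal((1000, N)))
y = np.repeat(labels, 25)

W = make_W(0.6)                                # near-optimal sparsity
J = (rng.random((K, M)) < 0.3) * 1.0           # fixed random sparse ORN->KC wiring
H = np.maximum(J @ orn_response(X, W).T - 1.0, 0).T   # rectified KC activity

# only the KC->readout weights are plastic; train them by least squares
tr = rng.random(1000) < 0.8
w, *_ = np.linalg.lstsq(H[tr], 2.0 * y[tr] - 1.0, rcond=None)
acc = np.mean((H[~tr] @ w > 0) == y[~tr].astype(bool))
print(f"test accuracy: {acc:.2f}")
```

Sweeping `rho_w` in `make_W` over [0, 1] and repeating would reproduce the qualitative shape of Fig. 6(d), with accuracy degrading at both very dense and very sparse W.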

Task II: reconstruction

The goal of the reconstruction task is to infer (decode) both the composition and the exact concentrations of all odorant components in an odor mixture from the sensor responses. This more stringent task is motivated directly by the original compressed sensing problem in computer science; its relevance to olfactory systems is discussed later in the Discussion section.

As illustrated in Fig. 7(a), the output of the coding process r(c, W) serves as the input for the downstream reconstruction network. Here, we used a generic feedforward artificial neural network (ANN) with a few (1-5) hidden layers and an output layer that has the same dimension N as the odor space. We trained the ANN with a training set of sparse odor mixtures drawn from the odor distribution Penv(c), and tested its performance by using new odor mixtures randomly drawn from the same odor distribution. Denote the reconstructed odor vector by ĉ and define a binary vector ξ associated with c, i.e., ξi = 1 if ci ≠ 0 and ξi = 0 otherwise. Due to the sparse nature of odor mixtures and the wide concentration range, the reconstruction error 𝓛 is defined as the sum of an “identity error” 𝓛1, which penalizes mismatches between the inferred and true compositions ξ, and an “intensity error” 𝓛2, which penalizes deviations of the inferred concentrations of the odorants that are present: 𝓛 = 𝓛1 + 𝓛2, with the estimates of ξ and c determined from training (supervised learning).
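A plausible instantiation of this two-term error can be sketched as follows. The specific forms chosen here (support-mismatch rate for the identity error, squared log-concentration error on the true support for the intensity error, and the detection threshold `eps` and weight `lam`) are illustrative assumptions, not the paper's exact definitions:

```python
import numpy as np

def reconstruction_loss(c_true, c_hat, eps=1e-6, lam=1.0):
    """Sum of an 'identity' and an 'intensity' term, as in L = L1 + L2.
    L1: fraction of odorants whose presence/absence is misjudged.
    L2: squared error of log-concentrations on the truly present odorants."""
    xi_true = (c_true != 0)        # true composition vector xi
    xi_hat = (c_hat > eps)         # inferred composition (thresholded)
    L1 = np.mean(xi_true != xi_hat)
    if xi_true.any():
        L2 = np.mean((np.log(c_hat[xi_true] + eps)
                      - np.log(c_true[xi_true])) ** 2)
    else:
        L2 = 0.0
    return L1 + lam * L2

c = np.array([0.0, 2.0, 0.0, 0.5])
print(reconstruction_loss(c, np.array([0.0, 1.9, 0.1, 0.5])))
```

The log-scale intensity term reflects the wide concentration range mentioned in the text: relative, not absolute, concentration errors are penalized.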

FIG. 7.

Maximum entropy coding facilitates signal reconstruction. (a) Schematic of the reconstruction task. The decoding feed-forward network with multiple hidden layers (blue nodes) receives the response r of the peripheral sensors (red nodes) as ‘input’ and produces the output ĉ to reconstruct (infer) the original signal c. (b) The minimum reconstruction error on the testing set is achieved around the sparsity level ρw = 0.6 of W, within the 95% maximum entropy region (shaded area). (c) Comparison of the original ci and reconstructed ĉi for three different sparsities of W: I. ρw = 0.1, II. ρw = 0.6, III. ρw = 0.95, labeled by the dashed lines in panel (b). Parameters used are: N = 100, M = 20, n = 2, σc = 2, σ0 = 0.05, two hidden layers, each with 100 units. Error bars in (b) are standard deviations from 40 simulations.

The reconstruction error depends on the coding matrix W, in particular its sparsity ρw, as shown in Fig. 7(b). Pair-wise comparisons of non-zero concentrations in the original and reconstructed odor mixtures for three different coding regimes are shown in Fig. 7(c) (see Fig. S7 in SM for a direct comparison of the whole reconstructed and original odor vectors). The best performance is achieved around ρw = 0.6, within the region where a sparse W enables nearly maximum entropy coding (shaded region); this property is insensitive to the number of hidden layers in the reconstruction network (see Fig. S8 in SM). Our results show that the optimal entropy code provides an efficient representation of the high-dimensional sparse data so that the downstream machine learning algorithms can achieve high reconstruction accuracy.

E. Optimal coding strategy for ORNs with a finite basal activity

So far, we have only considered the case where the neuron activity is zero in the absence of stimulus and odorants only activate the ORs/ORNs. It has been widely observed that some ORNs show substantial spontaneous activity, and some odorants can act as inhibitors that suppress the activities of the neurons they bind to [41, 66, 67], as shown in Fig. 8(a). The presence of an inhibitory odorant can shift a receptor’s dose-response curve to an excitatory odorant, thereby diminishing the sensitivity of the receptor to excitatory odorants [68]. It is then natural to ask what the optimal design of the sensitivity matrix is for maximizing coding capacity when odorants can be either excitatory or inhibitory. To answer this question, we used a two-state model to characterize both odor-evoked excitation and inhibition [68]. Now, the interaction between odorant j and ORN i has two possibilities – it can be either excitatory with a sensitivity w^A_ij or inhibitory with a sensitivity w^I_ij. The normalized response of the i-th ORN to an odor mixture c is r_i = [1 + γ (1 + Σ_j w^I_ij c_j)^n / (1 + Σ_j w^A_ij c_j)^n]^(−1) + η_i, where the two sums run over the n_I inhibitory and n_A excitatory odorants for the i-th receptor, respectively, γ determines the basal activity by r0 = 1/(1 + γ), and η_i is a small Gaussian white noise.
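The bidirectional operation of such a two-state receptor can be demonstrated numerically. The MWC-type functional form below is an assumed reconstruction constrained only by the stated baseline r0 = 1/(1 + γ), and the sensitivities, concentrations, and γ value are illustrative:

```python
import numpy as np

def orn_response_two_state(c, W_exc, W_inh, gamma, n=2):
    """Two-state (MWC-type) ORN response, assumed form: excitatory odorants
    push activity above the baseline r0 = 1/(1+gamma), inhibitory odorants
    push it below; with no stimulus the response is exactly r0."""
    exc = (1.0 + W_exc @ c) ** n
    inh = (1.0 + W_inh @ c) ** n
    return 1.0 / (1.0 + gamma * inh / exc)

gamma = 4.0                      # basal activity r0 = 1/(1+gamma) = 0.2
W_exc = np.array([[0.5, 0.0]])   # odorant 1 excites the receptor
W_inh = np.array([[0.0, 0.5]])   # odorant 2 inhibits it

r0 = orn_response_two_state(np.zeros(2), W_exc, W_inh, gamma)[0]
r_up = orn_response_two_state(np.array([10.0, 0.0]), W_exc, W_inh, gamma)[0]
r_dn = orn_response_two_state(np.array([0.0, 10.0]), W_exc, W_inh, gamma)[0]
print(r0, r_up, r_dn)   # baseline, above baseline, below baseline
```

This illustrates the intuition in the text: a finite baseline lets a receptor signal in both directions, effectively widening its dynamic range.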

FIG. 8.

The optimal sensitivity matrix for ORNs with a finite basal activity and comparison with experiments. (a) Schematic of ORN responses to excitatory (blue region) and inhibitory odorants (red region). Note that the neuron has a finite activity r0 in the absence of any stimulus. (b) Heatmap of a typical optimal W from our model, with the size of the elements indicating the strength of excitatory (blue) and inhibitory (red) interactions. (c) Both the excitatory and inhibitory interactions in the optimal W can be well approximated by log-normal distributions (solid lines). (d) The fraction of inhibitory interactions ρi increases nearly linearly with the basal activity r0 (upper panel). The differential entropy I also increases with r0 (lower panel). The shaded region shows the range of r0 corresponding to the fraction of inhibitory interactions estimated from experiments [41], which coincides with the range of r0 where the differential entropy increases sharply with r0. (e) The distributions of the estimated relative excitatory and inhibitory receptor-odor sensitivities from experimental data for the fly [41]. Both distributions can be well fitted by log-normal distributions. (f) The correlation between the number of odorants that inhibit an ORN and the ORN’s spontaneous activity, obtained from experimental data on fly [41] (upper panel) and mosquito [69] (lower panel). Each point corresponds to an ORN; the line is a linear fit and the shaded region is the 95% confidence interval. Model parameters used are N = 50, M = 10, n = 2, σc = 2, with 40 repeated simulations. In (d), error bars are small, comparable to the size of the symbols. In (b) and (c), r0 = 0.18.

Our simulations show that, with a finite spontaneous activity, the receptor array achieves maximum entropy coding by assigning a certain number of inhibitory interactions in the sensitivity matrix [Fig. 8(b)]. The strengths (sensitivities) of both the excitatory and inhibitory elements follow approximately log-normal distributions [Fig. 8(c)]. The fraction of inhibitory interactions (ρi) in the optimal W is roughly proportional to the spontaneous ORN activity r0, with only slight deviations as r0 → 0 and r0 → 1 [Fig. 8(d), upper panel]. Interestingly, as r0 → 0, ρi approaches a finite value that is related to the fraction of zero-sensitivity elements (1 − ρw) studied in the previous sections for ORNs without spontaneous activity. As the basal activity increases, the coding capacity increases rapidly at first and plateaus around r0 = 0.3 [Fig. 8(d), lower panel]. The increase in coding capacity can be understood intuitively by noting that the effective dynamic range of the receptors increases in the presence of inhibition. Odor-evoked inhibition enables receptors to work bi-directionally and avoid saturation when responding to many odorants simultaneously.

To verify our theoretical results, we analyzed the statistics of the sensitivities for the excitatory and inhibitory interactions obtained from the experimental data in fly by Hallem and Carlson [41] as well as in mosquito by Carey et al. [69]. As shown in Fig. 8(e) for the fly data, both the excitatory and inhibitory sensitivities follow log-normal distributions, consistent with our model results shown in Fig. 8(c). The mosquito data show very similar results (see Fig. S9 in SM). Our theory also predicts that the fraction of inhibitory interactions ρi increases with the basal activity r0, as shown in Fig. 8(d, upper panel). We tested this prediction against the experimental data. As shown in Fig. 8(f), the number of inhibitory odor-ORN interactions for an ORN shows a strong positive correlation with its basal activity for both fly and mosquito, in agreement with our theoretical prediction. Finally, we note that the relative basal activity 〈r0〉 from the experimental data [41] is smaller than 0.16 (see SM for detailed analysis), in the range where the differential entropy rises sharply with r0, as highlighted by the shaded region in Fig. 8(d, lower panel). Although an even higher spontaneous activity towards r0 = 0.5 could further increase the coding capacity, the gain is diminishing, while the metabolic cost of maintaining the spontaneous activity increases drastically [70]. Thus, an optimal basal activity would be expected in the shaded region of Fig. 8(d) due to the tradeoff between coding capacity and energy cost.

SUMMARY AND DISCUSSIONS

In this paper, we studied how a relatively small number of nonlinear sensors (ORNs) with a limited dynamic range can optimize the transmission of high-dimensional but sparse information in the environment. We found that the optimal sensitivity matrix elements follow a bi-modal distribution. For neurons without a basal activity, the sensitivity matrix is sparse – a neuron responds to only a fraction ρw (< 1) of odorants, with its sensitivities to these odorants following a broad distribution, and it is insensitive to the remaining (1 − ρw) fraction of the odorants. This sparsity in the odor-ORN sensitivity matrix is a direct consequence of the finite dynamic range of realistic nonlinear ORNs, which differ from the linear sensors in the conventional compressed sensing problem. For neurons with a finite basal activity r0, the optimal sensitivity distribution is also bi-modal, with a fraction ρi of the odor-neuron interactions inhibitory and the remaining (1 − ρi) fraction excitatory, where ρi increases with r0. Details of the odor-receptor sensitivity distribution depend on the odor mixture statistics and the sensor characteristics, but the bi-modal distribution is robust. By investigating the effects of different coding schemes on the downstream decoding/learning tasks, we showed that a maximum entropy code (representation) of the external signal enhances the performance of downstream reconstruction and classification tasks.

Connection to experiments and testable predictions

Our primary finding, the sparsity in the odor-receptor sensitivity matrix W, seems to be consistent with existing experimental measurements of receptor-odor sensitivity matrices in different organisms [Fig. 2(c)]. Although the natural odor environment varies across organisms, it is interesting that the broad distribution of the non-zero sensitivities observed in our model is consistent with the sensitivity matrices estimated from experiments in fly larva, mouse, adult fly, and mosquito [Fig. 2(c) and Fig. 8(e)&(f)]. The optimal coding strategy, if it exists, would be the result of evolution. Thus, our theory may be tested by comparing olfactory systems in different species. In particular, our theory predicts that the sparsity parameter ρw decreases with the number of ORNs M [Fig. 3(d)], which can be tested by measuring the sparsity of the odor-receptor sensitivity matrices in different organisms.

The relatively high level of spontaneous activity in ORNs has long been thought to play a role only in the formation of the topographic map during development [71]. A recent study shows that odor-evoked inhibition can code olfactory information that drives the behavior of the fly [68]. Our results provide a quantitative explanation for the advantage of having a certain level of spontaneous basal activity and odor-evoked inhibition in odor coding. For neurons with a finite basal activity, our theory predicts that the fraction of odorants that inhibit a neuron increases with the neuron’s basal activity. The data from adult fly and mosquito are consistent with this prediction [Fig. 8(f)]. However, powerful high-throughput techniques such as calcium imaging, which only indirectly measure the odor-ORN interaction, seem to be incapable of detecting inhibitory interactions [42]. Therefore, more large-scale direct measurements using electrophysiological methods, such as those done for Drosophila [41, 72] and mosquito [69], should be carried out to test our predictions in different organisms.

By considering how the coding capacity of ORNs changes with basal activity [Fig. 8(d)] and the associated extra energy cost [70], one can hypothesize the existence of an “optimal” r0. Our result suggests that as the number of sensors increases, the benefit of having basal activity diminishes; hence, the “optimal” r0 should decrease as the number of sensory neurons increases. Indeed, this is consistent with the fact that E. coli has 5 chemoreceptors [73], which work bi-directionally with a high basal activity r0 ≈ 1/3 − 1/2 [74], and that r0 in mouse is smaller than that in fly [67]. Of course, more experiments across organisms with different numbers of sensory neurons are needed to test this hypothesis.

Future Directions

In this study, we assumed that odor information is contained in the instantaneous spiking rate of ORNs, and we did not consider adaptation dynamics. Although adaptation plays an important role in all sensory systems [75], it happens on a relatively slower time scale than that required for animals to detect and respond to odor stimuli [76, 77]. In general, sensory adaptation shifts the response function of a sensory neuron according to the background stimulus concentration, leading to a larger but still finite effective dynamic range without changing the qualitative characteristics of the input-output response curve [75, 78]. Therefore, even though ORN-level adaptation can further increase coding capacity on a slightly longer time scale, as shown recently by Kadakia and Emonet [79], we do not expect it to qualitatively affect the optimal coding strategy found here. It remains an interesting question how neuronal dynamics such as adaptation can be used for coding time-dependent odor signals.

We have used reconstruction and classification as two learning tasks to demonstrate the advantage of maximum entropy coding at the ORN level. While the classification task has clear biological relevance, it is unclear to what extent animals need to infer the concentrations of individual odorants in an odor mixture. The perception of odors has been thought to be synthetic, i.e., an odorant mixture is perceived as a unitary odor [16]. Nevertheless, the performance of the reconstruction task indicates that most of the information about the odor mixture, including the identities and concentrations of individual odorants in a sparse mixture, can potentially be extracted from the activity pattern of ORNs, which is consistent with the experimental finding that mice, after training, can detect a target odorant in odor mixtures with up to 16 different odorants [80]. In this work, we focused only on the optimal coding strategy for the peripheral ORNs. In the fly olfactory system, odorants that elicit very similar ORN response patterns can be represented by very distinct patterns of KCs [41, 60]. It remains an interesting open question whether and how the architecture of the ORN/PN-to-KC network optimizes odor information transmission to enhance the precision of downstream learning and decision-making.

In conventional compressed sensing theory with linear sensors, a random measurement matrix enables accurate reconstruction of sparse high-dimensional input signals [37, 40]. By using prior information about the input, a better measurement matrix can be designed [81, 82]. In many cases, the optimal matrix maximizes the entropy of the compressed representation [83]. Unlike the linear CS problem, where the measurement matrix is known and can be used directly for reconstructing the sparse input signal by the L1-minimization algorithm, reconstruction in the nonlinear CS problem studied here has to be done by learning without prior knowledge of the sensitivity matrix. Despite this difference, our results suggest that with nonlinear sensors, the sparse optimal sensitivity matrix that maximizes information transmission enables better learning and more accurate reconstruction. This general observation and the limits of reconstruction in nonlinear CS should be examined with more rigorous analysis and larger-scale numerical simulations.
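The linear-CS baseline mentioned above can be sketched with iterative soft-thresholding (ISTA), which solves an L1-regularized (LASSO) relaxation of the L1-minimization problem; the dimensions, sparsity level, and regularization strength below are illustrative choices, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(2)
N, M, k = 100, 60, 5                       # signal dim, measurements, sparsity

A = rng.standard_normal((M, N)) / np.sqrt(M)   # random measurement matrix
x_true = np.zeros(N)
x_true[rng.choice(N, k, replace=False)] = rng.uniform(1.0, 2.0, k)
y = A @ x_true                                 # noiseless linear measurements

# ISTA: iterate gradient step + soft threshold for
#   min_x 0.5*||Ax - y||^2 + lam*||x||_1
lam = 1e-3
L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient
x = np.zeros(N)
for _ in range(5000):
    g = x - (A.T @ (A @ x - y)) / L        # gradient step on the quadratic term
    x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)   # soft threshold

err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
print(f"relative error: {err:.3f}")
```

With M = 60 measurements of an N = 100 dimensional 5-sparse signal, the sparse input is recovered accurately even though the system is underdetermined, which is the linear analogue of the decoding problem the nonlinear olfactory system faces.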

Finally, in our study, we considered the simplest case, where odorants appear independently in odor mixtures. However, even in this simplest case, we found a weak but statistically significant “orthogonal” structure in the optimal sensitivity matrix [Fig. 2(c) and Fig. S1 in SM]. In naturally occurring odor mixtures, co-occurrence of odorants from different odor sources is common. For example, odorants that are products of the same biochemical reaction pathway, e.g., fermentation, are likely to appear together [2, 84]. Although odorant-evoked ORN response patterns are not simply determined by molecular structure, some very similar odorants do trigger similar ORN response patterns [41]. In addition, ORNs and their responses to different odorants can be correlated due to structural similarities in their receptor proteins. It would be interesting to explore in a future study how such correlations among ORNs and odorant molecules, as well as co-occurrences of different odorants in odor mixtures, affect the optimal coding strategy at the olfactory periphery.

ACKNOWLEDGMENTS

We thank Xiaojing Yang, Guangwei Si, Jingxiang Shen, Louis Tao, and Roger Traub for helpful discussions and comments. The work was supported by the Chinese Ministry of Science and Technology (Grant No. 2015CB910300) and the National Natural Science Foundation of China (Grant No. 91430217). The work by YT is partially supported by an NIH grant (R01-GM081747).

References

[1] Robert A Raguso, “Wake up and smell the roses: the ecology and evolution of floral scent,” Annual Review of Ecology, Evolution, and Systematics 39, 549–569 (2008).
[2] Andreas Dunkel, Martin Steinhaus, Matthias Kotthoff, Bettina Nowak, Dietmar Krautwurst, Peter Schieberle, and Thomas Hofmann, “Nature’s chemical signatures in human olfaction: a foodborne perspective for future biotechnology,” Angewandte Chemie International Edition 53, 7124–7143 (2014).
[3] Ivo Beyaert and Monika Hilker, “Plant odour plumes as mediators of plant-insect interactions,” Biological Reviews 89, 68–81 (2014).
[4] Chih-Ying Su, Karen Menuz, and John R Carlson, “Olfactory perception: receptors, cells, and circuits,” Cell 139, 45–59 (2009).
[5] Kazushige Touhara and Leslie B Vosshall, “Sensing odorants and pheromones with chemosensory receptors,” Annual Review of Physiology 71, 307–332 (2009).
[6] Gustavo Glusman, Itai Yanai, Irit Rubin, and Doron Lancet, “The complete human olfactory subgenome,” Genome Research 11, 685–702 (2001).
[7] Christophe Verbeurgt, Françoise Wilkin, Maxime Tarabichi, Françoise Gregoire, Jacques E Dumont, and Pierre Chatelain, “Profiling of olfactory receptor gene expression in whole human olfactory mucosa,” PLoS ONE 9, e96333 (2014).
[8] Cornelia I Bargmann, “Comparative chemosensation from receptors to ecology,” Nature 444, 295 (2006).
[9] John P. McGann, “Poor human olfaction is a 19th-century myth,” Science 356, eaam7263 (2017).
[10] Caroline Bushdid, Marcelo O Magnasco, Leslie B Vosshall, and Andreas Keller, “Humans can discriminate more than 1 trillion olfactory stimuli,” Science 343, 1370–1372 (2014).
[11] Richard C Gerkin and Jason B Castro, “The number of olfactory stimuli that humans can discriminate is still unknown,” eLife 4, e08127 (2015).
[12] Tali Weiss, Kobi Snitz, Adi Yablonka, Rehan M Khan, Danyel Gafsou, Elad Schneidman, and Noam Sobel, “Perceptual convergence of multi-component mixtures in olfaction implies an olfactory white,” Proceedings of the National Academy of Sciences 109, 19959–19964 (2012).
[13] Nicolas Y Masse, Glenn C Turner, and Gregory SXE Jefferis, “Olfactory information processing in Drosophila,” Current Biology 19, R700–R713 (2009).
[14] Bettina Malnic, Junzo Hirono, Takaaki Sato, and Linda B Buck, “Combinatorial receptor codes for odors,” Cell 96, 713–723 (1999).
[15] JJ Hopfield, “Odor space and olfactory processing: collective algorithms and neural implementation,” Proceedings of the National Academy of Sciences 96, 12506–12511 (1999).
[16] Gilles Laurent, “A systems perspective on early olfactory coding,” Science 286, 723–728 (1999).
[17] Venkatesh N Murthy, “Olfactory maps in the brain,” Annual Review of Neuroscience 34, 233–258 (2011).
[18] Veit Grabe, Amelie Baschwitz, Hany KM Dweck, Sofia Lavista-Llanos, Bill S Hansson, and Silke Sachse, “Elucidating the neuronal architecture of olfactory glomeruli in the Drosophila antennal lobe,” Cell Reports 16, 3401–3413 (2016).
[19] Olivier Gschwend, Nixon M Abraham, Samuel Lagier, Frédéric Begnaud, Ivan Rodriguez, and Alan Carleton, “Neuronal pattern separation in the olfactory bulb improves odor discrimination learning,” Nature Neuroscience 18, 1474 (2015).
[20] Ronald L Davis, “Olfactory memory formation in Drosophila: from molecular to systems neuroscience,” Annual Review of Neuroscience 28, 275–302 (2005).
[21] Horace B Barlow, “Possible principles underlying the transformations of sensory messages,” (1961).
[22] Simon Laughlin, “A simple coding procedure enhances a neuron’s information capacity,” Zeitschrift für Naturforschung C 36, 910–912 (1981).
[23] Ralph Linsker, “Self-organization in a perceptual network,” Computer 21, 105–117 (1988).
[24] Joseph J Atick, “Could information theory provide an ecological theory of sensory processing?” Network: Computation in Neural Systems 3, 213–251 (1992).
[25] David J Field, “Relations between the statistics of natural images and the response properties of cortical cells,” Journal of the Optical Society of America A 4, 2379–2394 (1987).
[26] Joseph J Atick and A Norman Redlich, “Towards a theory of early visual processing,” Neural Computation 2, 308–320 (1990).
[27] Yang Dan, Joseph J Atick, and R Clay Reid, “Efficient coding of natural scenes in the lateral geniculate nucleus: experimental test of a computational theory,” Journal of Neuroscience 16, 3351–3362 (1996).
[28] Anthony J Bell and Terrence J Sejnowski, “The “independent components” of natural scenes are edge filters,” Vision Research 37, 3327–3338 (1997).
[29] Daniel L Ruderman and William Bialek, “Statistics of natural images: scaling in the woods,” in Advances in Neural Information Processing Systems (1994) pp. 551–558.
[30] Michael S Lewicki, “Efficient coding of natural sounds,” Nature Neuroscience 5, 356 (2002).
[31] Naama Brenner, William Bialek, and Rob de Ruyter van Steveninck, “Adaptive rescaling maximizes information transmission,” Neuron 26, 695–702 (2000).
[32] Odelia Schwartz and Eero P Simoncelli, “Natural signal statistics and sensory gain control,” Nature Neuroscience 4, 819 (2001).
[33] Xaq Pitkow and Markus Meister, “Decorrelation and efficient coding by retinal ganglion cells,” Nature Neuroscience 15, 628 (2012).
[34] Rafi Haddad, Rehan Khan, Yuji K Takahashi, Kensaku Mori, David Harel, and Noam Sobel, “A metric for odorant comparison,” Nature Methods 5, 425 (2008).
[35] Andreas Keller, Richard C Gerkin, Yuanfang Guan, Amit Dhurandhar, Gabor Turu, Bence Szalai, Joel D Mainland, Yusuke Ihara, Chung Wen Yu, Russ Wolfinger, et al., “Predicting human olfactory perception from chemical features of odor molecules,” Science 355, 820–826 (2017).
[36] Hany KM Dweck, Shimaa AM Ebrahim, Tom Retzke, Veit Grabe, Jerrit Weißflog, Ales Svatoš, Bill S Hansson, and Markus Knaden, “The olfactory logic behind fruit odor preferences in larval and adult Drosophila,” Cell Reports 23, 2524–2531 (2018).
[37] Emmanuel J Candes and Terence Tao, “Decoding by linear programming,” IEEE Transactions on Information Theory 51, 4203–4215 (2005).
[38] Emmanuel J Candes, Justin K Romberg, and Terence Tao, “Stable signal recovery from incomplete and inaccurate measurements,” Communications on Pure and Applied Mathematics 59, 1207–1223 (2006).
[39] Surya Ganguli and Haim Sompolinsky, “Compressed sensing, sparsity, and dimensionality in neuronal information processing and data analysis,” Annual Review of Neuroscience 35, 485–508 (2012).
[40] Emmanuel J Candès and Michael B Wakin, “An introduction to compressive sampling,” IEEE Signal Processing Magazine 25, 21–30 (2008).
[41] Elissa A. Hallem and John R. Carlson, “Coding of odors by a receptor repertoire,” Cell 125, 143–160 (2006).
[42] Guangwei Si, Jessleen K Kanwal, Yu Hu, Christopher J Tabone, Jacob Baron, Matthew Berck, Gaetan Vignoud, and Aravinthan DT Samuel, “Structured odorant response patterns across a complete olfactory receptor neuron population,” Neuron (2019).
[43] Yilun Zhang and Tatyana O Sharpee, “A robust feedforward model of the olfactory system,” PLoS Computational Biology 12, e1004850 (2016).
[44] Kamesh Krishnamurthy, Ann M Hermundstad, Thierry Mora, Aleksandra M Walczak, and Vijay Balasubramanian, “Disorder and the neural representation of complex odors: smelling in the real world,” arXiv preprint arXiv:1707.01962 (2017).
[45] Vijay Singh, Martin Tchernookov, and Vijay Balasubramanian, “What the odor is not: estimation by elimination,” Bulletin of the American Physical Society (2019).
[46] David Zwicker, Arvind Murugan, and Michael P Brenner, “Receptor arrays optimized for natural odor statistics,” Proceedings of the National Academy of Sciences 113, 5570–5575 (2016).
[47] Vijay Singh, Nicolle R Murphy, Vijay Balasubramanian, and Joel D Mainland, “A competitive binding model predicts nonlinear responses of olfactory receptors to complex mixtures,” arXiv preprint arXiv:1805.00563 (2018).
[48] Gautam Reddy, Joseph D Zak, Massimo Vergassola, and Venkatesh N Murthy, “Antagonism in olfactory receptor neurons and its implications for the perception of odor mixtures,” eLife 7, e34958 (2018).
[49] Thomas M Cover and Joy A Thomas, Elements of Information Theory (John Wiley & Sons, 2012).
[50] Xue-Xin Wei and Alan A Stocker, “Mutual information, Fisher information, and efficient coding,” Neural Computation 28, 305–326 (2016).
[51] Chris R Sims, “Efficient coding explains the universal law of generalization in human perception,” Science 360, 652–656 (2018).
[52] Gašper Tkačik and William Bialek, “Information processing in living systems,” Annual Review of Condensed Matter Physics 7, 89–117 (2016).
[53] Nikolaus Hansen and Andreas Ostermeier, “Completely derandomized self-adaptation in evolution strategies,” Evolutionary Computation 9, 159–195 (2001).
[54] Nikolaus Hansen, “The CMA evolution strategy: a comparing review,” in Towards a New Evolutionary Computation (Springer, 2006) pp. 75–102.
[55] Antonio Celani, Emmanuel Villermaux, and Massimo Vergassola, “Odor landscapes in turbulent environments,” Physical Review X 4, 041015 (2014).
[56] Harumi Saito, Qiuyi Chi, Hanyi Zhuang, Hiroaki Matsunami, and Joel D Mainland, “Odor coding by a mammalian receptor repertoire,” Science Signaling 2, ra9 (2009).
[57] Sophie JC Caron, Vanessa Ruta, LF Abbott, and Richard Axel, “Random convergence of olfactory inputs in the Drosophila mushroom body,” Nature 497, 113–117 (2013).
[58] Andrew C Lin, Alexei M Bygrave, Alix De Calignon, Tzumin Lee, and Gero Miesenböck, “Sparse, decorrelated odor coding in the mushroom body enhances learned odor discrimination,” Nature Neuroscience 17, 559–568 (2014).
[59] Glenn C Turner, Maxim Bazhenov, and Gilles Laurent, “Olfactory representations by Drosophila mushroom body neurons,” Journal of Neurophysiology 99, 734–746 (2008).
[60] Robert AA Campbell, Kyle S Honegger, Hongtao Qin, Wanhe Li, Ebru Demir, and Glenn C Turner, “Imaging a population code for odor identity in the Drosophila mushroom body,” Journal of Neuroscience 33, 10568–10581 (2013).
[61] Yoshinori Aso, Daisuke Hattori, Yang Yu, Rebecca M Johnston, Nirmala A Iyer, Teri-TB Ngo, Heather Dionne, LF Abbott, Richard Axel, Hiromu Tanimoto, et al., “The neuronal architecture of the mushroom body provides a logic for associative learning,” eLife 3, e04577 (2014).
[62] David Owald, Johannes Felsenberg, Clifford B Talbot, Gaurav Das, Emmanuel Perisse, Wolf Huetteroth, and Scott Waddell, “Activity of defined mushroom body output neurons underlies learned olfactory behavior in Drosophila,” Neuron 86, 417–427 (2015).
[63] Paola Cognigni, Johannes Felsenberg, and Scott Waddell, “Do the right thing: neural network mechanisms of memory formation, expression and update in Drosophila,” Current Opinion in Neurobiology 49, 51–58 (2018).
[64] Baktash Babadi and Haim Sompolinsky, “Sparseness and expansion in sensory representations,” Neuron 83, 1213–1226 (2014).
[65] Ashok Litwin-Kumar, Kameron Decker Harris, Richard Axel, Haim Sompolinsky, and LF Abbott, “Optimal degrees of synaptic connectivity,” Neuron 93, 1153–1164 (2017).
[66] Joby Joseph, Felice A Dunn, and Mark Stopfer, “Spontaneous olfactory receptor neuron activity determines follower cell response properties,” Journal of Neuroscience 32, 2900–2910 (2012).
[67] Timothy Connelly, Agnes Savigner, and Minghong Ma, “Spontaneous and sensory-evoked activity in mouse olfactory sensory neurons with defined odorant receptors,” Journal of Neurophysiology 110, 55–62 (2013).
[68] Li-Hui Cao, Dong Yang, Wei Wu, Xiankun Zeng, Bi-Yang Jing, Meng-Tong Li, Shanshan Qin, Chao Tang, Yuhai Tu, and Dong-Gen Luo, “Odor-evoked inhibition of olfactory sensory neurons drives olfactory perception in Drosophila,” Nature Communications 8, 1357 (2017).
[69] Allison F Carey, Guirong Wang, Chih-Ying Su, Laurence J Zwiebel, and John R Carlson, “Odorant reception in the malaria mosquito Anopheles gambiae,” Nature 464, 66 (2010).
[70] Jeremy E Niven and Simon B Laughlin, “Energy limitation as a selective pressure on the evolution of sensory systems,” Journal of Experimental Biology 211, 1792–1804 (2008).
    OpenUrlAbstract/FREE Full Text
  71. [71].↵
    Ai Nakashima, Haruki Takeuchi, Takeshi Imai, Harumi Saito, Hiroshi Kiyonari, Takaya Abe, Min Chen, Lee S Weinstein, C Ron Yu, Daniel R Storm, et al., “Agonist-independent gpcr activity regulates anterior-posterior targeting of olfactory sensory neurons,” Cell 154, 1314–1325 (2013).
    OpenUrlCrossRefPubMedWeb of Science
  72. [72].↵
    Scott A Kreher, Dennis Mathew, Junhyong Kim, and John R Carlson, “Translation of sensory input into behavioral output via an olfactory system,” Neuron 59, 110–124 (2008).
    OpenUrlCrossRefPubMedWeb of Science
  73. [73].↵
    T.W. Grebe and J. Stock, “Bacterial chemotaxis: The five sensors of a bacterium,” Current Biology 8, R154–R157 (1998).
    OpenUrlCrossRefPubMed
  74. [74].↵
    T.S. Shimizu, Y. Tu, and H.C. Berg, “A modular gradient-sensing network for chemotaxis in escherichia coli revealed by responses to time-varying stimuli,” Molecular systems biology 6 (2010).
  75. [75].↵
    Barry Wark, Brian Nils Lundstrom, and Adrienne Fairhall, “Sensory adaptation,” Current opinion in neurobiology 17, 423–429 (2007).
    OpenUrlCrossRefPubMedWeb of Science
  76. [76].↵
    Alexander Egea-Weiss, Christoph J Kleineidam, Paul Szyszka, et al., “High precision of spike timing across olfactory receptor neurons allows rapid odor coding in drosophila,” iScience 4, 76–83 (2018).
    OpenUrl
  77. [77].↵
    Quentin Gaudry, Elizabeth J Hong, Jamey Kain, Benjamin L de Bivort, and Rachel I Wilson, “Asymmetric neurotransmitter release enables rapid odour lateralization in drosophila,” Nature 493, 424–428 (2013).
    OpenUrlCrossRefPubMed
  78. [78].↵
    Carlotta Martelli, John R Carlson, and Thierry Emonet, “Intensity invariant dynamics and odor-specific latencies in olfactory receptor neuron response,” Journal of Neuroscience 33, 6285–6297 (2013).
    OpenUrlAbstract/FREE Full Text
  79. [79].↵
    Nirag Kadakia and Thierry Emonet, “Front-end weber-fechner gain control enhances the fidelity of combinatorial odor coding,” bioRxiv, 475103 (2018).
  80. [80].↵
    Dan Rokni, Vivian Hemmelder, Vikrant Kapoor, and Venkatesh N Murthy, “An olfactory cocktail party: figure-ground segregation of odorants in rodents,” Nature neuroscience 17, 1225 (2014).
    OpenUrlCrossRefPubMed
  81. [81].↵
    Yair Weiss, Hyun Sung Chang, and William T Freeman, “Learning compressed sensing,” in Snowbird Learning Workshop, Allerton, CA (Citeseer, 2007).
  82. [82].↵
    David L Donoho, Adel Javanmard, and Andrea Montanari, “Information-theoretically optimal compressed sensing via spatial coupling and approximate message passing,” IEEE transactions on information theory 59, 7434–7464 (2013).
    OpenUrl
  83. [83].↵
    Amit Ashok, Liang-Chih Huang, and Mark A Neifeld, “Information optimal compressive sensing: static measurement design,” JOSA A 30, 831–853 (2013).
    OpenUrl
  84. [84].↵
    Suzan Mansourian and Marcus C Stensmyr, “The chemical ecology of the fly,” Current opinion in neurobiology 34, 95–102 (2015).
    OpenUrlCrossRefPubMed

References

  1. V. Singh, N. R. Murphy, V. Balasubramanian, and J. D. Mainland, arXiv preprint arXiv:1805.00563 (2018).
  2. G. Reddy, J. D. Zak, M. Vergassola, and V. N. Murthy, eLife 7, e34958 (2018).
  3. L.-H. Cao, D. Yang, W. Wu, X. Zeng, B.-Y. Jing, M.-T. Li, S. Qin, C. Tang, Y. Tu, and D.-G. Luo, Nature Communications 8, 1357 (2017).
  4. G. Si, J. K. Kanwal, Y. Hu, C. J. Tabone, J. Baron, M. Berck, G. Vignoud, and A. D. Samuel, Neuron (2019).
  5. E. A. Hallem and J. R. Carlson, Cell 125, 143 (2006).
  6. H. Saito, Q. Chi, H. Zhuang, H. Matsunami, and J. D. Mainland, Sci. Signal. 2, ra9 (2009).
  7. A. F. Carey, G. Wang, C.-Y. Su, L. J. Zwiebel, and J. R. Carlson, Nature 464, 66 (2010).
  8. R. A. Ince, B. L. Giordano, C. Kayser, G. A. Rousselet, J. Gross, and P. G. Schyns, Human Brain Mapping 38, 1541 (2017).
  9. S. Singh and B. Póczos, arXiv preprint arXiv:1702.07803 (2017).
  10. Z. Szabó, The Journal of Machine Learning Research 15, 283 (2014).
  11. N. Hansen and A. Ostermeier, Evolutionary Computation 9, 159 (2001).
  12. N. Hansen, in Towards a New Evolutionary Computation (Springer, 2006) pp. 75–102.
  13. G. C. Turner, M. Bazhenov, and G. Laurent, Journal of Neurophysiology 99, 734 (2008).
  14. S. J. Caron, V. Ruta, L. Abbott, and R. Axel, Nature 497, 113 (2013).
  15. A. C. Lin, A. M. Bygrave, A. De Calignon, T. Lee, and G. Miesenböck, Nature Neuroscience 17, 559 (2014).
  16. S. X. Luo, R. Axel, and L. Abbott, Proceedings of the National Academy of Sciences 107, 10713 (2010).
  17. K. Krishnamurthy, A. M. Hermundstad, T. Mora, A. M. Walczak, and V. Balasubramanian, arXiv preprint arXiv:1707.01962 (2017).
  18. R. A. Campbell, K. S. Honegger, H. Qin, W. Li, E. Demir, and G. C. Turner, Journal of Neuroscience 33, 10568 (2013).
Posted March 10, 2019.
Subject Area: Neuroscience