## Abstract

This paper introduces a comprehensive mechanistic model of a neuron with plasticity that explains the creation of engrams, the biophysical correlates of memory. In the context of a single neuron, this means clarifying how information input as time-varying signals is processed, stored, and subsequently recalled. Moreover, the model addresses two additional, long-standing, specific biological problems: the integration of Hebbian and homeostatic plasticity, and the identification of a concise learning rule for synapses.

In this study, a biologically accurate Hodgkin-Huxley-style electric-circuit equivalent is derived through a one-to-one mapping from the known properties of ion channels. The dynamics of the synaptic cleft, often overlooked, prove essential in this process. Analysis of the model reveals a simple and succinct learning rule, indicating that the neuron functions as an internal-feedback adaptive filter, a device commonly used in signal processing. Simulation results confirm the circuit’s functionality, stability, and convergence, demonstrating that even a single neuron without external feedback can function as a potent signal processor.

The article is interdisciplinary and spans a broad range of subjects within the realm of biophysics, including neurobiology, electronics, and signal processing.

**Significance statement** Mechanistic neuron models with plasticity are crucial for understanding the complexities of the brain and the processes behind learning and memory. These models provide a way to study how individual neurons and synapses in the brain change over time in response to stimuli, allowing for a more nuanced understanding of neuronal circuits and assemblies. Plasticity is a central aspect of these models, as it represents the ability of the brain to modify its connections and functions in response to experiences. By incorporating plasticity into these models, researchers can explore how changes at the synaptic level contribute to higher-level changes in behavior and cognition. Thus, these models are essential for advancing our understanding of the brain and its functions.

**PhySH 2023 terms** Neuroplasticity, Learning, Memory, Synapses.

**MeSH 2023 terms** Neuronal Plasticity [G11.561.638], Association learning [F02.463.425.069.296], Memory [F02.463.425.540], Synaptic transmission [G02.111.820.850]

## Introduction

How does the brain remember? This classical question has recently received considerable attention focusing on the central nervous system’s handling of engrams [40, 33, 21, 13, 23]. These involve coordinated synaptic changes, mandating a cell-wide coherent explanation of multisynaptic plasticity. However, such explanations have encountered difficulties despite massive research efforts by experimental and theoretical methods. This paper suggests that these problems can be resolved by mapping the neuron to an equivalent circuit and then showing that this circuit implements an adaptive filter (fig. 1).

Experimental methods typically investigate neurons’ responses to stimuli and various biological manipulations such as ion channel blocking and genetic modifications. Since the first discovery of synaptic plasticity [4], experiments have revealed a diversity of overlapping and interacting plasticity mechanisms [26, 45]. A limitation of experiments *in vitro* is that crucial parameters such as temperature, membrane potential, and calcium concentration often transcend their physiological ranges. Experiments *in vivo*, on the other hand, are degraded by external disturbances, such as irrelevant signals from connected neurons.

The theoretical approach can be said to study how plasticity *should* work. In this case, the principal obstacle is to find biologically plausible mechanisms that match the theory. A significant theoretical contribution showed that classical Hebbian plasticity alone leads to the saturation of synaptic weights and the ensuing loss of information [39, 3]. Subsequent experiments demonstrated the existence of additional, homeostatic mechanisms that prevent distortion and stabilize synaptic plasticity [49, 48].

Because of the challenges posed by the diversity of plasticity mechanisms and the scarcity of biologically plausible models, the timing and integration of homeostatic and Hebbian plasticity is an open issue [24]. Therefore, a novel approach is chosen here, modeling a neuron as an electric-circuit equivalent in the spirit of Hodgkin and Huxley’s seminal axon model [19] while strictly adhering to known properties of neuronal ion channels to ensure biological veracity. The resulting circuit can be interpreted mechanistically as a variant of a *Least Mean Square* (LMS) *adaptive filter*, a versatile device well-known in the field of signal processing [18, 17]. This interpretation takes advantage of the rich theory developed for adaptive filters. It explains precisely and quantitatively how the neuron modifies its synapses in concert to store time-varying functions or *signals* as required by procedural memory engrams.

That the proposed model is mechanistic is essential here, because mechanistic models are superior to empirical or phenomenological models in providing a more comprehensive and detailed understanding of the underlying processes that give rise to the observed phenomena [6].

### Organization of this paper

The paper’s main topic is deriving the equivalent circuit and adaptive filter model from established knowledge about neuronal ion channels. For a mechanistic model, it is imperative to select an appropriate level of description that is adequately detailed yet not overly complex to provide a functional explanation and address the three specific problems under consideration. To achieve this, the paper first reviews the established function of inhibitory and excitatory synaptic ion channels to a level allowing for a direct translation into an electric network. By this conversion, insights from a century of experience with electronic circuits can be leveraged, along with the ability to identify circuit patterns or “motifs.” The approach is conservative in that it does not assume the existence of as-yet-undiscovered biological mechanisms.

The paper’s main conclusion is that a single neuron can be abstractly characterized as an adaptive filter, a powerful and fundamental component in signal processing. The basic principles of adaptive filters are, therefore, briefly reviewed. An adaptive filter’s function in its fullest generality is to determine how a reference input can be expressed in terms of a given set of input components.

Two experiments are performed to further support the claim that the neuron operates as an adaptive filter. The first experiment demonstrates the circuit’s ability to approximate an inhibitory signal *y*(*t*) by appropriately weighting excitatory inputs *x*_{k}(*t*). The second experiment confirms that action potentials function as clock pulses (“strobes”), triggering synaptic weight changes.

The results section presents the convergence and stability outcomes of the experiments diagrammatically, followed by an explanation of how the model in its adaptive filter capacity addresses the three specific issues concerning engrams, Hebbian-homeostatic plasticity, and the synaptic learning rule.

Subsequently, the discussion section introduces related work and explores some implications of viewing the neuron as an adaptive filter.

The investigation spans a time frame from milliseconds to minutes, encompassing short-term plasticity (STP) and early long-term plasticity (LTP) while excluding late LTP due to its reliance on nuclear processes and its consolidating function.

In summary, this article models a neuron’s primary biochemical information processing pathways as equivalent electric circuits, reviews the adaptive filter concept, and employs it to describe the neuron’s overall function. The model’s efficacy is demonstrated through two simulation experiments, substantiating the neuron’s capacity to operate as an adaptive filter. These results support the proposed model’s validity and potential for advancing research in this field.

## Modeling the neuron

### Overall structure of a neuron

This subsection provides a detailed description of the structure and function of a generic, glutamatergic neuron, highlighting its key components, synaptic types, and their roles in signal transmission and plasticity.

The target neuron is a generic, glutamatergic neuron equipped with AMPA (*α*-amino-3-hydroxy-5-methyl-4-isoxazolepropionic acid) receptors (AMPARs) and NMDA (N-methyl-D-aspartate) receptors (NMDARs). This kind of neuron has been extensively studied and is representative of a substantial fraction of neurons in the central nervous system (CNS) [47], typical examples of which are the hippocampal neurons where plasticity was first demonstrated [4].

The primary components of a neuron include the dendrites, which receive inputs from presynaptic neurons; the soma, which aggregates the contributions from dendrites; and the axon, which transmits the result to other neurons (fig. 2). Axons can branch into axon collaterals carrying identical signals. Synapses, the contact points between axons and dendrites, are of two types: inhibitory and excitatory. They convert incoming stochastically rate-coded sequences of action potentials (APs), or more tersely, PFM (pulse-frequency modulated) spiketrains [36], into postsynaptic currents that alter the membrane potential, the voltage difference between the neuron’s interior and exterior. At the axon initial segment (AIS), this potential is converted back into a spiketrain for output via the axon.

Ultrastructural studies reveal that inhibitory synapses are typically situated proximally to the soma, either directly on a dendrite or the soma itself. In contrast, excitatory synapses tend to be positioned more distally, connecting with dendrites via spines—small protrusions on the dendrites. These spines are associated with plasticity, indicating that excitatory synapses are generally plastic, whereas inhibitory synapses are non-plastic. These principles are occasionally referred to as Gray’s rules [14, 38, 47, 16].

### Inhibitory synapse mapping

The focus of this subsection is the functioning of an inhibitory synapse in response to an action potential, including the involvement of neurotransmitters and receptors. The subsection presents the corresponding equivalent electric-circuit model that reflects this process, inspired by Hodgkin and Huxley’s axon model [19].

When action potentials reach the axon terminal (fig. 3), the membrane potential depolarizes (increases), causing voltage-gated calcium channels (Ca_{V}) to open (1). The calcium ion influx triggers the release of the neurotransmitter *γ*-aminobutyric acid (GABA) from nearby vesicles (2) into the synaptic cleft. GABA binds to GABA type A receptors (GABAAR) on the postsynaptic neuron, opening the receptor channel to chloride ions (3) [43]. These ions are negatively charged and hyperpolarize (reduce) the membrane potential. A direct translation of these biological processes into a circuit equivalent for the inhibitory synapse is shown in fig. 4.

The model adopts Hodgkin and Huxley’s view of gated ion channels as voltage-controlled conductances. Because they function as ideal field-effect transistors (FETs) in practice, the schematic uses modern transistor symbols. The GABAAR is defined by the equation *I*_{DS} = *γ V*_{DS}*V*_{G}, where the constant *γ* = *γ*_{GABAAR} is the transistor’s gain (fig. 5A), and *I*_{DS} and *V*_{DS} are the channel current and voltage, respectively. *V*_{G} is the gate voltage representing the GABA concentration.

A single transistor is chosen to represent the entire population of GABAARs at one synapse. Overall, the circuit inverts an incoming train of positive voltage pulses to negative current pulses and filters them through a lowpass filter before integrating them into the membrane potential. More specifically, an action potential arriving at the axon terminal travels the GABA pathway and gates a chloride-ion current at the GABAAR.

The resistor *R*_{z} represents the transport processes that circulate the chloride back out of the cell. The signal is filtered on its way to the soma by a lowpass filter *R*_{i}*C*_{i} composed of the spino-dendritic axial resistance *R*_{i} and capacitance *C*_{i}. The series capacitance *C*_{h} compactly represents the homeostatic machinery that maintains the neuron’s mean internal potential at a biologically comfortable level for the neuron. The inhibitory synapse’s output into the dendrite or soma is a negative current pulse, the inhibitory postsynaptic current (IPSC).

### Excitatory synapse mapping

This subsection examines the functioning of an excitatory synapse, including the roles of calcium ions, glutamate, AMPARs, NMDARs, synaptic plasticity, and the translation of these biological processes into an equivalent electric-circuit model.

The functioning of an excitatory synapse (fig. 6) is similar to that of an inhibitory synapse, but the plasticity associated with spines adds complexity to the model. After lowpass filtering, excitatory input pulses increase the postsynaptic membrane potential, and the synapse’s efficacy (gain) is modified depending on the input’s magnitude and the current membrane potential.

In more detail, the arriving action potential enables calcium ions to enter the presynaptic terminal (1) and trigger the release of the neurotransmitter glutamate (2). Glutamate binds to AMPARs on the postsynaptic neuron, opening the channels to positively charged sodium ions (3), which depolarize the membrane potential.

Additionally, glutamate activates NMDARs responsible for the neuron’s plasticity [20, 15, 47]. The increased membrane potential repels an external magnesium ion blocking the NMDAR (4) [37, 30], allowing a sample of the external calcium [Ca^{2+}]_{e} in the synaptic cleft to enter the postsynaptic cell (5). This calcium influx regulates *the number* of AMPARs, constituting the synaptic weight (efficacy) through a cascade of downstream reactions [38, 20].

Fig. 7 shows an electric-circuit equivalent for the excitatory synapse. Again, this is a direct translation of the biochemical processes of the excitatory synapse in fig. 6. A voltage pulse (1), representing the presynaptic action potential and corresponding glutamate release (2), gates the injection of a positive current through the AMPAR. This current (3) undergoes lowpass filtering en route to the soma with a cutoff frequency of *f*_{c} = 1*/*(2*πR*_{e}*C*_{e}), which may vary substantially across different synapses within the same neuron. The culmination of this process is an excitatory postsynaptic current (EPSC) that is integrated with other EPSCs and inhibitory postsynaptic currents (IPSCs) by the membrane capacitance *C*_{m} of the soma and proximal dendrites. The ensuing (AC, alternating current) membrane potential *v*_{m} is electrotonically propagated throughout the cell (4).
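As a numerical illustration of the cutoff formula *f*_{c} = 1/(2*πR*_{e}*C*_{e}), the following sketch uses assumed component values, not measurements from the paper:

```python
import math

# Cutoff frequency of the spino-dendritic lowpass filter R_e*C_e.
# The component values below are illustrative assumptions; the paper
# notes the cutoff may vary substantially across synapses.
R_e = 100e6    # 100 MΩ axial resistance (assumed)
C_e = 20e-12   # 20 pF capacitance (assumed)
f_c = 1.0 / (2 * math.pi * R_e * C_e)   # ≈ 80 Hz with these values
```

With these values the filter passes the 1-2 Hz modulations used in the experiments essentially unattenuated.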

The NMDAR, modeled here as a dual-gate transistor, senses the membrane potential with one gate, whereas the other (“strobe”) recognizes glutamate activation, enabling synaptic weight modification. The NMDAR transistor samples the calcium concentration in the synaptic cleft, which—importantly—is depleted when a presynaptic action pulse arrives, because the pulse causes presynaptic calcium channels to open, consuming some of the synaptic cleft’s calcium content [5]. The cleft acts as a calcium buffer and lowpass filters the calcium-encoded signal with the time constant *R*_{s}*C*_{s}.

The capacitor *C*_{w} represents the intracellular pathway from calcium entry to AMPAR regulation (5) [47]. The voltage spanning this capacitor corresponds to the synaptic weight, which is defined by the number of AMPARs and hence, is invariably non-negative.

The NMDAR’s critical function is to act as a *multiplier* of the input and the feedback, represented as the external calcium concentration [Ca^{2+}]_{e} and the membrane potential *v*_{m}, respectively. *C*_{h2} and *R*_{h} represent homeostasis of the calcium pathway. The diode in parallel with *C*_{w} ensures that the synaptic weight is non-negative.

The complete equivalent circuit for the neuron can be formed by combining the circuits illustrated in fig. 4 and fig. 7. However, before proceeding to show that the complete circuit implements an adaptive filter, the next subsection offers a brief review of such filters.

### Internal structure and operation of an adaptive filter

This subsection concisely reviews the fundamental adaptive filter (fig. 8), which can be thought of as a procedure or algorithm. Its principal function is to find weights *w*_{1}, *w*_{2}, …, *w*_{n} such that the weighted sum Σ*w*_{k}*x*_{k} of candidate or component signals *x*_{1}, *x*_{2}, …, *x*_{n} approximates a reference signal *y*. The component signals may originate from different sources or be derived from a single input *x* using a delay line or a filter bank, as shown in the figure.

Depending on how the filter is connected, it can perform a variety of essential signal processing tasks such as model creation, inverse model creation, prediction, and interference cancellation [18]. Particularly relevant for biological systems is a configuration suggested to address the sensorimotor association problem, or the process by which the brain learns which neuron is connected to which muscle [34].

The Least Mean Squares (LMS) algorithm (algorithm 1) [17], also known as the Widrow-Hoff LMS rule, is a method for updating the weights of an adaptive filter. It operates in discrete time steps *t* = *t*_{1}, *t*_{2}, …, where at each step it calculates the error feedback *z*, which is the difference between the weighted sum of the input signals *x*_{k} and the reference signal *y*. Then, it updates all the weights *w*_{k} by subtracting the associated feedback corrections, which are calculated as ∆*w*_{k} = *ε zx*_{k}, where *ε* is a learning rate. This learning rate is a positive constant, and its selection involves a balance between the convergence speed and stability against noise.
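A minimal sketch of the Widrow-Hoff update may make the procedure concrete; the learning rate and signal choices below are illustrative (they loosely echo experiment 1) rather than the paper's circuit parameters:

```python
import numpy as np

# Minimal Widrow-Hoff LMS sketch: two sinusoidal component signals and
# a reference that coincides with the first.
t = np.arange(0.0, 30.0, 1e-3)                     # 30 s at 1 ms steps
x = np.vstack([np.sin(2*np.pi*1*t), np.sin(2*np.pi*2*t)])
y = np.sin(2*np.pi*1*t)                            # reference = x_1

eps = 1e-3                                          # learning rate (assumed)
w = np.zeros(2)
for i in range(t.size):
    z = w @ x[:, i] - y[i]                          # error feedback
    w -= eps * z * x[:, i]                          # subtract correction
# w converges toward [1, 0]: the component matching y gains weight
```

Because the two sinusoids are orthogonal over the run, the weight of the matching component approaches one while the other decays toward zero.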

The convergence of the adaptive filter can be understood intuitively as follows: Suppose that some weight *w*_{j} is slightly too large and that the corresponding input *x*_{j} is positive. Then the error *z* will also tend to be positive and will be fed back to cause a reduction of the weight *w*_{j} by *εzx*_{j}. A similar argument can be used when instead *w*_{j} is too small or *x*_{j} is negative. Proving the convergence of the weights formally can be difficult in a general case, but the LMS rule has proven to be robust in practical applications [18].

The adaptive filter can be interpreted differently, depending on one’s perspective. A biologist might see it as a system that maintains a balance between excitatory and inhibitory inputs [41, 9, 46]. On the other hand, a physicist might view the filter as performing a wavelet transform of the signal *y* using wavelets *x*_{k} [29], with the weights serving as transform coefficients.

### Understanding the neuron as an adaptive filter

Here it is established that the neuron’s equivalent circuit operates as an adaptive filter, suggesting that the neuron itself embodies this functionality.

### The neuron’s equivalent circuit as an adaptive filter

Interpreting the neuron as an adaptive filter is greatly simplified by modeling the neuron as an equivalent electric circuit. The combination of the synapse circuits in fig. 4 and fig. 7 into a circuit equivalent for the neuron is shown in fig. 9. This circuit converts the spiketrain input to membrane potential. The subsequent output conversion of the membrane potential to an output spiketrain and the application of an activation function *ϕ*(*z*) are omitted here because a mechanistic model for them has been presented elsewhere [36] and does not directly influence the input conversion.

The side-by-side comparison of the adaptive filter, as shown in fig. 8 and the neuron model presented in fig. 9 offers detailed agreement, indicating that both the circuit and, by extension, the neuron implement a variant of the LMS algorithm (algorithm 2). The match between the circuit and the adaptive filter is corroborated below by illustrating how the circuit realizes the summation operations, error feedback, and weight updates. Furthermore, an explanation is provided for the scenario where component inputs are redundant or linearly dependent, a common condition for biological neurons.

**Algorithm 2** The LMS algorithm, full neuron version with activation function included. The inputs are assumed to already be lowpass filtered.

### Summation operations

When comparing the functional blocks in fig. 8 with those in fig. 9, it is evident that the summation operations in the adaptive filter align with the addition of currents in the neuron’s equivalent circuit, as Kirchhoff’s law dictates. This law states that the sum of currents entering a junction must equal the sum of currents leaving it, mirroring the summation process in the adaptive filter.

### Error feedback

A rapid error feedback signal, labeled by *z* in fig. 8 and fig. 7, is essential for the functioning of the adaptive filter, as is visible in algorithms 1 and 2. This feedback is provided by the membrane potential *v*_{m} created by the total of the IPSC and EPSC currents passing through the impedance consisting of the membrane resistance *R*_{m} in parallel with the membrane capacitance *C*_{m}. The feedback signal accesses all synapses within the neuron via their connections to the soma. The lowpass filtering by *R*_{m}*C*_{m} introduces a decay or “forget” factor *λ*, 0 ≤ *λ* < 1, on line 2 of algorithm 2, slightly generalizing algorithm 1, which would have *λ* = 0. In the biological neuron, the propagation of the *z* signal is nearly instantaneous due to its electrotonic conduction through the cytosol.
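In discrete time, the *R*_{m}*C*_{m} impedance behaves as a leaky integrator whose per-step decay supplies the forget factor *λ*. A minimal sketch, with assumed component values rather than measurements:

```python
import math

# The membrane impedance R_m || C_m acts as a leaky integrator of the
# synaptic currents; discretized, its decay per step is the "forget"
# factor lambda of algorithm 2.  Values below are illustrative.
R_m, C_m, dt = 100e6, 100e-12, 1e-4          # 100 MΩ, 100 pF, 0.1 ms step
lam = math.exp(-dt / (R_m * C_m))            # 0 <= lam < 1

def membrane_step(v_m, i_total):
    """One discrete step: v_m decays by lam and integrates the net
    synaptic current (sum of EPSCs and IPSCs)."""
    return lam * v_m + R_m * (1.0 - lam) * i_total
```

With *λ* close to one the potential retains a short memory of recent currents; *λ* = 0 recovers the memoryless feedback of algorithm 1.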

### Weight updates

The adaptive filter updates its weights *w*_{k} by the product of the inputs *x*_{k} and the error feedback *z*. The update uses a clever trick which stands out when viewing the involved circuitry, *i*.*e*., the plasticity circuitry of the excitatory synapse in fig. 7. The weight *w* is represented as a charge held by the capacitor *C*_{w}. The product of the input *x* and error *z* should update this weight. However, whereas the error is readily available in the circuit as the membrane potential *v*_{m}, the signal *x* on the glutamate pathway is PFM encoded and is unusable for the update in this form. Although it is lowpass filtered in the dendrite and soma, it is directly summed into the membrane potential and is unavailable separately. *Fortunately, a lowpass-filtered version of x is available as the calcium concentration* [Ca^{2+}]_{e} *in the synaptic cleft*. Thanks to this additional copy of *x*, the NMDAR transistor in the circuit and the ion channel in the neuron can crucially “compute”—pass a charge proportional to—the weight update by multiplying the calcium concentration representing *x* with the membrane potential *v*_{m} representing *z*. Experiment 1, described below, validates the above process.

### Redundant and linearly dependent candidate inputs

In engineering contexts, the decomposition of a signal *x* into components frequently relies on techniques such as a bandpass-filter bank or a Fast Fourier Transform. These methods ensure orthogonality, or at least linear independence, of the components *x*_{k}. This independence is a critical requirement to guarantee the uniqueness of the weights. However, such a systematic decomposition may not be feasible from a biological perspective, resulting in identical reference inputs possibly giving rise to different sets of synaptic weights. In the case of redundant component inputs, weights will converge (settle) towards a linear subspace rather than a specific point. Correlated component inputs can potentially slow the convergence of the original LMS algorithm. This is because weights are updated simultaneously, which may lead to overshooting and oscillations. Here, evolution has provided an elegant solution for neurons because each synapse is updated individually and asynchronously by its own glutamate strobe signal (fig. 7), demonstrated in experiment 2 (cf. the “for” statement in algorithm 1 with the “when” statement in algorithm 2).
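The asynchronous, per-synapse update can be sketched as follows; all quantities here are hypothetical placeholders chosen only to show the mechanism, not the paper's simulated circuit:

```python
import numpy as np
rng = np.random.default_rng(1)

# Sketch of asynchronous, per-synapse updates: each weight changes only
# when its own presynaptic "glutamate strobe" fires (cf. the "when"
# statement of algorithm 2 versus the "for" statement of algorithm 1).
n, steps, p_strobe, eps = 3, 5000, 0.05, 0.01
w = np.ones(n)            # redundant synapses start with equal weights
x = np.ones(n)            # constant lowpass-filtered inputs (sketch)
y = 2.0                   # reference (inhibitory) level
for _ in range(steps):
    z = w @ x - y                       # shared error feedback
    strobes = rng.random(n) < p_strobe  # independent strobe events
    w[strobes] -= eps * z * x[strobes]  # only strobed synapses update
    w = np.maximum(w, 0.0)              # weights stay non-negative
# the balance w @ x ≈ y is reached without any global clock
```

Because the redundant inputs are identical, the individual weights settle anywhere on the subspace satisfying the balance condition, consistent with the convergence-to-a-subspace behavior described above.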

### Implications of the neuron operating as an adaptive filter

The neuron behaving as an adaptive filter allows us to address the three key concerns brought up in the introduction: the engram process of memory storage and retrieval, the unification of Hebbian and homeostatic plasticity, and establishing a universal rule for synaptic plasticity. The proposed solutions to these problems are presented in the results section below. More generally, the adaptive filter provides a valuable conceptual model for understanding multi-neuron assemblies as it remains unaffected by the specifics of signal encoding, facilitating a succinct mathematical representation of the neurons [35].

The following subsection conducts a series of experiments that confirm the functioning of the circuit as an adaptive filter.

### Experiment design

Two sets of experiments were carried out to explore and validate model properties.

In the first experiment, the stability and convergence of the model were examined. The neuron model was composed of one inhibitory synapse and two excitatory synapses (*n* = 2 in fig. 9). The task of the circuit was to determine the weights *w*_{1} and *w*_{2} so that the weighted sum of spiketrains 2 and 3 corresponded to spiketrain 1. The inputs were Pulse Frequency Modulated (PFM) spiketrains, effectively inhomogeneous Poisson processes, modulated by sine waves with a modulation depth of 67% (fig. 10). The modulations for the first experiment were 1 Hz and 2 Hz for spiketrains 2 and 3, respectively. The reference input, spiketrain 1, began with a modulation of 1 Hz but switched to 2 Hz after 150 s.

The second experiment aimed to study the model’s behavior in the presence of redundant input. In this experiment, a sine wave of 1 Hz modulated the inhibitory input, and a wave of 2 Hz modulated the first excitatory input (*x*_{1}). The remaining five excitatory synapses *x*_{2}, … *x*_{6} (*n* = 6 in fig. 9) received redundant input. During the first run, these inputs were synchronized, receiving the same spiketrain modulated at 1 Hz. In the second run, spiketrains 3-7 were all modulated by 1 Hz but generated independently, mimicking the behavior of biological neurons, making them asynchronous.
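A sine-modulated PFM spiketrain of the kind used as input can be generated by Bernoulli thinning of an inhomogeneous Poisson process; the carrier rate below is an assumed value, as the paper does not state one here:

```python
import numpy as np
rng = np.random.default_rng(42)

def pfm_spiketrain(f_mod, depth=0.67, base_rate=50.0, T=10.0, dt=1e-4):
    """Inhomogeneous Poisson (PFM) spiketrain with a sine-modulated
    rate: rate(t) = base_rate * (1 + depth*sin(2*pi*f_mod*t)).
    base_rate is an assumed carrier rate in Hz."""
    t = np.arange(0.0, T, dt)
    rate = base_rate * (1.0 + depth * np.sin(2*np.pi*f_mod*t))
    spikes = rng.random(t.size) < rate * dt   # Bernoulli thinning per bin
    return t[spikes]                          # spike times in seconds

s1 = pfm_spiketrain(1.0)   # cf. a spiketrain modulated at 1 Hz
s2 = pfm_spiketrain(2.0)   # cf. a spiketrain modulated at 2 Hz
```

Independent calls with the same modulation frequency produce the asynchronous redundant inputs of the second run, while reusing one spiketrain for several synapses reproduces the synchronous case.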

The component values used in these experiments are provided in table 1, and they roughly align with physiological values. Regardless, the circuit is robust and not particularly sensitive to parameter variations. The experiments were conducted using an electronic-circuit simulator [27].

## Results

### Experiment results

The first experiment demonstrates the convergence of weights *w*_{1} and *w*_{2}. Initially, with the inhibitory input signal *y* modulated by a sine wave of 1 Hz, the ratio *w*_{2}*/w*_{1} approaches zero as it should. This is because the input signal *x*_{1} is also modulated by a sine wave of 1 Hz, coinciding with the reference input, while the input signal *x*_{2} is modulated by a sine wave of 2 Hz, which is orthogonal to *y*. However, after 150 s, the modulation of *y* changes to 2 Hz, which instead coincides with the input signal *x*_{2}. This time, the inverse ratio *w*_{1}*/w*_{2} approaches zero. Fig. 11A depicts this convergence for two different values of NMDAR gain *γ*_{NMDAR}. Here, low and high gain correspond to 3·10^{−5} A/V^{2} and 10^{−4} A/V^{2}, respectively. The diagram shows that the circuit strives to enhance the weight of the excitatory input that aligns in frequency with the inhibitory input, while concurrently decreasing the weight of the other excitatory input that does not match in frequency.

The second experiment shows what happens for multiple redundant excitatory inputs. In the first case, all the redundant excitatory inputs receive the identical spiketrain, so all strobe pulses are synchronous (dashed traces). In the second case, the same sine wave modulates the redundant excitatory inputs, but the spiketrains are otherwise generated independently, so the strobe pulses are asynchronous (solid traces). The experiment shows faster convergence for asynchronous strobes.

### Solutions to the three specific problems considered

This paper has suggested that a neuron functions and can be conceptualized as an adaptive filter with internal feedback. Such a neuron model enables straightforward solutions, presented below, to the three problems posed in the introduction.

#### 1. How does the neuron manage engrams?

According to the adaptive-filter model, memories are stored as synaptic weights. More precisely, if the information is input in the form of an inhibitory signal *y*(*t*), the engram is formed by the neuron adapting its weights *w*_{k} so that *y* is balanced by the weighted sum Σ_{k}*w*_{k}*x*_{k} of the excitatory signals *x*_{k}(*t*).

In principle, memory is subsequently retrieved whenever signals *y* and *x* are received by the neuron and it computes and outputs the prediction error *z*, which relies on the weights. Alternatively, memory can be recalled by temporarily holding *y* at zero, whereby the neuron will output the approximation Σ_{k}*w*_{k}*x*_{k} ≈ *y* assembled by the same linear combination of excitatory signals.

#### 2. Is there a universal synaptic learning rule?

The synaptic learning rule can be expressed as a variation of the Least Mean Squares (LMS) learning rule, with the constraint that weights cannot be negative (cf. line 3 of algorithm 2):

*w* ← max(0, *w* − *εzx*)  (1)

In this equation, *w* denotes the number of AMPARs (synaptic weight), *z* represents the membrane potential *v*_{m} (error feedback), and *x* signifies the local synaptic-cleft calcium concentration [Ca^{2+}]_{e} (excitatory input). The learning rate *ε* depends on several biological parameters but is perhaps most directly controlled by the gain *γ* of the NMDAR.
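A one-line sketch of the rectified update, with the learning rate as an assumed default:

```python
def update_weight(w, z, x, eps=1e-3):
    """One rectified LMS step, a sketch of learning rule (1): the weight
    counts AMPARs and therefore can never go below zero."""
    return max(0.0, w - eps * z * x)
```

The rectification corresponds to the diode in parallel with *C*_{w} in fig. 7.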

#### 3. How do homeostatic and Hebbian plasticity balance?

The Hebbian-homeostatic balance emerges from the synaptic learning rule (1), inherently providing stability and subsuming both Hebbian and homeostatic plasticity (fig. 12). The NMDAR directly implements the multiplication *zx*, and because the parameters *x* and *z* describe the signed deviations from the steady-state averages (homeostatic equilibria), the LMS rule offers automatic stabilization. In the case of spike-timing-dependent plasticity (STDP), the model indicates that “backwash” membrane potential fluctuations caused by the output spike will bias weight changes in favor of input spikes preceding output spikes.

## Discussion

The neuron uses membrane potential feedback during adaptation to adjust the excitatory synapse weights. This adjustment strives to balance inhibitory and excitatory input. Alternatively, this process can be described as the neuron’s attempt to predict the inhibitory input by excitatory input—the membrane potential encodes the *prediction error* [44]. Signal processing and control theory often refer to prediction error as the fundamental concept *innovation* [22]. It has frequently been discussed in neuroscience under different names, including *novelty* [25], *unexpectedness* [2], *decorrelation* [8], *surprise* [10], and *saliency* [50].

The critical operation for the plasticity of the neuron is the multiplication of the prediction error feedback *z*, represented by the membrane potential *v*_{m}, with the excitatory input *x* available from the synaptic cleft external calcium concentration [Ca^{2+}]_{e}. Given the existence of this non-linear multiply mechanism, linear mechanisms can adjust a suitable homeostatic equilibrium or zero offset (*x*_{0}, *z*_{0}) by processes involving voltage-gated calcium channels (*zx*_{0}) and metabotropic glutamate receptors (*z*_{0}*x*).

A difference between a neuron and a classical adaptive filter is that the neuron’s weights cannot be negative. This is not a limitation because feeding a candidate signal −*x* together with its negation *x* achieves the same effect as a signed weight [7]. Incorporating such negations could be a function of the numerous local inhibitory neurons in the central nervous system. Somewhat unexpectedly, this restriction to non-negative weights proves to be an advantage, as it enhances the expressive capabilities of neural *populations* [35].
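The pair construction can be sketched in a few lines (names are illustrative). Two non-negative weights acting on *x* and its negation reproduce any signed weight *w* = *w*_{pos} − *w*_{neg}:

```python
def signed_output(x, w_pos, w_neg):
    # Emulate a signed weight with two non-negative weights acting on
    # the signal x and its negation -x, the latter as could be supplied
    # by a local inhibitory interneuron.
    assert w_pos >= 0 and w_neg >= 0
    return w_pos * x + w_neg * (-x)

# Equivalent to a single signed weight of -0.5:
assert signed_output(2.0, 0.25, 0.75) == -0.5 * 2.0
```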

Two salient features distinguishing the proposed model are the explicit dynamics of the synaptic cleft and the dual-purpose utilization of glutamate for both direct information transfer and as a strobe signal that facilitates weight adjustment. The necessity for a strobe input arises from the fact that if NMDARs were continuously active, weights would be diluted towards zero, thereby resulting in information loss. It is crucial for plasticity that weights change only when there is meaningful input—that is, when activated by glutamate [20].
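The strobe’s role can be sketched as a gated update — an illustrative sketch, not the paper’s circuit: between glutamate pulses the stored weight is left untouched, so the engram is preserved.

```python
def strobe_update(w, x, z, glutamate_bound, eps=0.01):
    # Weight changes are gated by the glutamate strobe: the NMDAR
    # multiplies z and x only while glutamate is bound. Without this
    # gating, continuous updates would dilute the weight toward zero.
    if not glutamate_bound:
        return w
    return w + eps * z * x
```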

The circuit equivalent assumes NMDARs operate at the same speed as AMPARs. In reality, NMDARs are slower and produce a burst of openings when triggered by glutamate, effectively performing a kind of low-pass filtering. The model does not explicitly incorporate this property because the low-pass-filtered calcium input already accounts for the slowdown.
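The implied smoothing is just a first-order low-pass. A minimal sketch, assuming a simple exponential moving average as a stand-in for the NMDAR burst kinetics:

```python
def lowpass(signal, alpha=0.2):
    # First-order low-pass (exponential moving average): each output
    # sample moves a fraction alpha toward the current input, smoothing
    # fast transients the way a burst of channel openings would.
    out, y = [], 0.0
    for s in signal:
        y += alpha * (s - y)
        out.append(y)
    return out
```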

Several researchers have put forth adaptive filters as models for neuronal circuits in the cerebellum, utilizing *external* feedback [11, 52, 42]. Nevertheless, low-latency feedback is pivotal for the performance of an adaptive filter because it sets the maximum usable signal frequency. External feedback is slower than internal feedback by several orders of magnitude (for pyramidal neurons, see, for example, [32, 1]).

The idea of a neuron functioning as a self-contained adaptive filter has been hypothesized [34, 28]. However, to our knowledge, the model presented here is the first wholly mechanistic model based exclusively on the known properties of ion channels.

When interpreting the circuits in fig. 4 and fig. 7 from an electrical engineering perspective, it appears that evolution has crafted an elegant, robust, and minimalist solution. From a pure signal processing standpoint, the stability of neuronal functions strongly suggests the existence of feedback. The loop delay in this feedback must be short, pointing towards electrotonic propagation. Given that the output spikes are too infrequent to provide swift feedback, the membrane potential is the sole feasible choice.

The neuron appears to employ the biochemical equivalent of alternating current (AC) signals for signaling, while the direct current (DC) level is maintained to ensure an appropriate metabolic level. It is challenging to conceive of a more compact arrangement of components capable of achieving such a sophisticated signal-processing function. Evolution has yielded a truly elegant solution, employing current summation for feed-forward and voltage for feedback. The dual purpose of the glutamate pulse, serving as both PFM-encoded input and strobe, is ingenious.

Widrow and Hoff [51] initially introduced the abstract, high-level neuron model ADALINE (short for ADAptive LInear NEuron), which drew inspiration from the McCulloch-Pitts neuron model [31]. This work predates the experimental discovery of ion channels by several years. Regrettably, Widrow and Hoff eventually abandoned ADALINE as a neuron model. Nevertheless, it became the foundation of the adaptive filter, which experienced dramatic advancements within the domain of signal processing.

The LMS learning rule is known under a variety of names in different contexts. In the field of artificial neural networks it is often referred to as the “delta rule,” and in statistical learning theory as the “covariance rule.” These names all refer to the same concept: an iterative method for adjusting the weights of a learning model to minimize the mean square error ∥*z*∥ between the model’s prediction, which is the weighted sum of the *x*_{k}, and the actual data *y*.
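All of these names describe one step of gradient descent on the instantaneous squared error. Assuming the notation above, with the prediction given by the weighted sum of the inputs, the derivation is a one-liner:

$$
z = y - \sum_k w_k x_k,
\qquad
\frac{\partial}{\partial w_k}\,\tfrac{1}{2}z^2 = -\,z\,x_k,
\qquad
\Delta w_k = \varepsilon\, z\, x_k .
$$

Stepping each weight against its error gradient thus yields exactly the multiplicative update of rule (1).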

This model does not include the process by which the neuron converts the membrane potential into the output spiketrain, including the activation function, because a recent publication has already addressed this output process mechanistically and in depth [36]. The current paper completes the picture of the neuron by providing a mechanistic explanation of the input process: the conversion from spiketrains back to internal potential, including the plasticity.

## Conclusions

Neuroscience research in many fields depends on detailed mechanistic knowledge of how neurons decode, process, store, and encode information. Examples of such fields are neural implants, interoception, and artificial intelligence, where progress has been hampered by empirical and oversimplified neuron models.

This paper provides a complete state-of-the-art mechanistic model of a neuron’s signal processing, including the plasticity, in the milliseconds-to-minutes range. The model explains at the ion channel level how neurons convert input spiketrains to internal potential, including the adjustments of their synaptic efficacy. Crucial components of the model are the inclusion of synaptic cleft dynamics, the arrangement of internal feedback, and the multiple functions of the glutamate neurotransmitter. It is shown that the recording of an engram can be identified with the weight adjustments of an adaptive filter. The neuron strives to balance the inhibitory and excitatory inputs. After adaptation, it can be regarded as an inhibitory input predictor, delivering the prediction error as output.

The mechanistic abstraction of the neuron as an adaptive filter constitutes an essential link to the realm of conceptual spaces [12] interposed between the cognitive and biological levels. It reduces the need for spiking-level simulations and simplifies the understanding of large assemblies and networks of neurons, elaborated in-depth in [35].

## Data availability statement

The datasets generated and analyzed during the current study are available from the author on reasonable request.

## Author contributions

The single author is responsible for all aspects of this research.

## Declaration of Interests

The author declares no competing interests.

## Acknowledgments

This research was funded in part by the European Commission FP7 project THE (“The Hand Embodied”) under grant agreement 248587.

## Footnotes

\* The author declares no competing financial interests.

neuronplasticity{at}drnil.com

This version of the manuscript has been updated with numerous minor clarifications and corrections.

## References

- [1].
- [2].
- [3].
- [4].
- [5].
- [6].
- [7].
- [8].
- [9].
- [10].
- [11].
- [12].
- [13].
- [14].
- [15].
- [16].
- [17].
- [18].
- [19].
- [20].
- [21].
- [22].
- [23].
- [24].
- [25].
- [26].
- [27].
- [28].
- [29].
- [30].
- [31].
- [32].
- [33].
- [34].
- [35].
- [36].
- [37].
- [38].
- [39].
- [40].
- [41].
- [42].
- [43].
- [44].
- [45].
- [46].
- [47].
- [48].
- [49].
- [50].
- [51].
- [52].