Associative memory networks for graph-based abstraction

Our cognition relies on the brain's ability to segment hierarchically structured events on multiple scales. Recent evidence suggests that the brain performs this event segmentation based on the structure of the state-transition graphs behind sequential experiences. However, the underlying circuit mechanisms are poorly understood. In this paper, we propose an extended attractor network model for graph-based hierarchical computation, which we call the Laplacian associative memory. This model generates multiscale representations for communities (clusters) of associative links between memory items, and the scale is regulated by the heterogeneous modulation of inhibitory circuits. We show analytically and numerically that these representations correspond to graph Laplacian eigenvectors, a popular tool for graph segmentation and dimensionality reduction. Finally, we demonstrate that asymmetric versions of our model exhibit chunking resembling hippocampal theta sequences. Our model connects graph theory and attractor dynamics to provide a biologically plausible mechanism for abstraction in the brain.

Furthermore, LAM with asymmetric connectivity generates chunked sequential activities resembling theta sequences in the hippocampus 32,33 . To our knowledge, our model gives the first theoretical results that connect associative memory networks with graph theory, providing a biologically plausible dynamical mechanism for hierarchical abstraction in the brain.

Laplacian associative memory model
Laplacian associative memory (LAM) is a novel class of Hopfield-type recurrent network models [11][12][13]20,23 . Let us define a network of binary units $x_i(t)$ ($i = 1, \cdots, N$; $0 \le x_i(t) \le 1$) with the dynamics

$$\tau \frac{dx_i(t)}{dt} = -x_i(t) + \Theta\Big( \sum_{j=1}^{N} M_{ij}\, x_j(t) \Big), \tag{1}$$

where $M_{ij}$ are synaptic weights and $\Theta(u)$ is a step function ($\Theta(u) = 1$ if $u > 0$, otherwise $\Theta(u) = 0$). We assume that each memory item (e.g., sensory stimuli, places, or events) is represented by a 0-1 binary random memory pattern $\xi_i^\mu$ ($\mu = 1, \cdots, P$; $i = 1, \cdots, N$) with sparsity $a$ ($\mathrm{P}(\xi_i^\mu = 1) = a$). We set the synaptic weights from these memory patterns as

$$M_{ij} = \frac{1}{\tilde{a} N} \sum_{\mu,\nu=1}^{P} \tilde{\xi}_i^\mu \left( \gamma\, \delta_{\mu\nu} + W_{\mu\nu} \right) \tilde{\xi}_j^\nu - \frac{z}{N}, \tag{2}$$

where $\tilde{\xi}_i^\mu = \xi_i^\mu - N^{-1} \sum_k \xi_k^\mu$ and $\tilde{a} = a(1 - a)$. The term $\gamma\, \delta_{\mu\nu}$ represents auto-association within each item, where $\delta_{\mu\nu}$ is the Kronecker delta and $\gamma$ is a modifiable parameter that determines the strength of auto-association. On the other hand, $W_{\mu\nu}$ gives the hetero-associative weight between memory items $\mu$ and $\nu$ ($W_{\mu\mu} = 0$). The parameter $z \ge 0$ gives an additional global inhibitory effect 13 . In short, this network stores multiple cell assemblies ($P$ memory patterns) through auto-associative Hebbian learning and further links them through hetero-associative Hebbian learning (Figure 1, left). Assuming Hebbian learning between successively activated cell assemblies 22,25 , we construct the hetero-associative weights from a normalized adjacency matrix of a state-transition graph or, more generally, of other graphs such as semantic relationships (see Methods for the details). Thus, the structure of the hetero-associative links is a graph reflecting the statistical structure behind experiences, in which we may find communities.
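As a concrete illustration, the following Python sketch builds the weight matrix of Eq. (2) for a small two-community graph; the graph, network sizes, and parameter values are illustrative assumptions rather than the settings used in our simulations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes and parameters (assumptions, not the paper's settings).
N, P, a = 2000, 12, 0.1        # units, memory patterns, sparsity
gamma, z = -0.7, 10.0          # auto-association strength, global inhibition

# Example graph: two communities of six nodes bridged by a single edge.
A = np.zeros((P, P))
for block in (range(0, 6), range(6, 12)):
    for i in block:
        for j in block:
            if i != j:
                A[i, j] = 1.0
A[5, 6] = A[6, 5] = 1.0

D_inv = np.diag(1.0 / A.sum(axis=1))
W = D_inv @ A                  # asymmetric normalization D^{-1} A (W_mumu = 0)

xi = (rng.random((P, N)) < a).astype(float)   # 0-1 random memory patterns
xi_t = xi - xi.mean(axis=1, keepdims=True)    # xi_tilde in Eq. (2)
a_t = a * (1.0 - a)                           # a_tilde = a(1 - a)

# Synaptic weights, Eq. (2).
M = xi_t.T @ (gamma * np.eye(P) + W) @ xi_t / (a_t * N) - z / N
```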
LAM can be regarded as a generalization of previous associative memory models. When $\gamma > 0$ and all $W_{\mu\nu}$ are zero, LAM is analogous to a conventional Hopfield-type model storing biased memory patterns [11][12][13] . If only adjacent items are associated ($W_{\mu,\mu+1} = W_{\mu+1,\mu} > 0$ and all other $W_{\mu\nu}$ are zero) so that the associative links form a one-dimensional chain, the model coincides with an associative memory model for a temporal sequence 20,23 . However, unlike those previous models, LAM can also take arbitrary hetero-associative link structures.
Furthermore, we do not restrict the parameter $\gamma$ to be positive, allowing inhibitory auto-association. We found the unique behaviors of LAM mostly in the regime of negative auto-association, which has not been extensively investigated before.
We clarify the biological interpretation of the model by decomposing the weights into excitatory and inhibitory components. As in a previous work 23 , we can decompose the weights as

$$M_{ij} = M^{\mathrm{E}}_{ij} - M^{\mathrm{L}}_{ij} - M^{\mathrm{G}}_{ij}.$$

In the range $-1 < \gamma < \gamma_{\max}$, the decomposed weights $M^{\mathrm{E}}$, $M^{\mathrm{L}}$ and $M^{\mathrm{G}}$ (always non-negative) represent excitatory connections, pattern-specific local inhibition, and non-selective global inhibition, respectively (see Methods for the detailed description). Therefore, LAM can be regarded as a circuit with local and global inhibition, in which the parameter $\gamma$ determines the ratio between the strengths of the two types of inhibitory circuits (Figure 1, right).
Biologically, differences in $\gamma$ may correspond to anatomical inhomogeneity of interneurons. Alternatively, the balance of inhibition may be changed through the inhomogeneous modulation of interneurons by acetylcholine 30,31 .

Multiscale representation of community structures in LAM
To demonstrate representations in LAM, we tested three representative graph structures. The first is a graph previously used to study how humans segment temporal sequences obeying probabilistic state-transition rules 6,7 (Figure 2a). The second is the karate club network 34 , a popular dataset for testing community detection methods in graph theory (Figure 2f). The third is a graph representing the structure of compartmentalized rooms (Figure 2k), which is often used as a state-transition graph in reinforcement learning 5,[8][9][10] . For each graph, we assigned a random binary pattern to each node and constructed a LAM network by setting the hetero-associative weights from the adjacency matrix of the graph. We initialized the activity pattern of LAM with one of the memory patterns (a trigger stimulus corresponding to each node) and simulated the dynamics of the network long enough to converge to an attractor state. We regard the attractor pattern as the neural representation of the node.
For each attractor state, we calculated an index called the pattern overlap to evaluate the degree of retrieval of each memory pattern:

$$m_\mu = \frac{1}{\tilde{a} N} \sum_{i=1}^{N} \tilde{\xi}_i^\mu\, x_i .$$

A large positive value means significant activation of memory pattern $\mu$. Furthermore, we calculated pattern correlations between attractor patterns obtained from different trigger nodes.
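A minimal sketch of this computation (approximating $\tilde{\xi}_i^\mu$ by $\xi_i^\mu - a$ for random patterns; sizes are again arbitrary):

```python
import numpy as np

def pattern_overlaps(x, xi, a):
    """m_mu = (1 / (a~ N)) * sum_i (xi_i^mu - a) * x_i for every pattern mu."""
    N = x.shape[0]
    return (xi - a) @ x / (a * (1.0 - a) * N)

# Sanity check: a stored pattern overlaps ~1 with itself, ~0 with the others.
rng = np.random.default_rng(1)
N, P, a = 2000, 5, 0.1
xi = (rng.random((P, N)) < a).astype(float)
print(np.round(pattern_overlaps(xi[0], xi, a), 2))   # ~[1. 0. 0. 0. 0.]
```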
LAM converged to various attractor patterns depending on the trigger node and on the value of the auto-associative weight $\gamma$. Generally, memory recall (a large positive maximum pattern overlap) was observed in the parameter region $\gamma > -1$ (Figure 2d). In the attractor states, multiple memory patterns were partially recalled simultaneously, and the set of co-activated patterns depended on $\gamma$ (Figure 2). This result demonstrates that LAM generates mixed representations for communities in the hetero-associative links by partially recalling multiple memory patterns simultaneously in attractor states. Accordingly, representations for nodes within a community are highly correlated, which agrees with experiments 6,7 .

Theoretical relationship between LAM and graph Laplacian
Next, we analyzed the mathematical mechanism behind the representations of LAM. We found that the representations are related to the graph Laplacian (GL). GL is a matrix defined for a graph structure, and its eigenvectors are used in various applications. One popular application is graph segmentation (also called community detection), because it has been shown that the signs of the elements of GL eigenvectors indicate an optimal two-fold segmentation of a graph 27,28 (examples are shown in Figure 3a-c). GL eigenvectors give segmentations at various levels depending on their eigenvalues (a small eigenvalue corresponds to a coarse resolution with large communities); thus, combining multiple eigenvectors enables multi-level segmentation. This property is utilized, for example, in image segmentation 27,28 . In another aspect, GL eigenvectors are also used for nonlinear dimensionality reduction 29 , which gives low-dimensional representations of nodes (data points) in which the structure is expressed through similarity. As for the connection to neural representations, GL eigenvectors become grid-like codes in a homogeneous space, their distortion under spatial inhomogeneity fits experimental observations of grid cells, and predictive spatial representations in the hippocampus can be eigendecomposed into GL eigenvectors 10 . See Methods for the definition of GL and a brief review of its mathematical properties.
We performed a formal theoretical analysis to show the relationship between LAM and GL.
With the symmetric normalization of hetero-associative weights (see Methods for the details), which gives quantitatively the same results as those shown above (Supplementary Figure 1), we can define an energy function of the model as

$$E(\mathbf{x}) = -\frac{1}{2} \sum_{i,j} M_{ij}\, x_i x_j .$$

As in the conventional Hopfield model 11 , the dynamics of LAM monotonically decreases this energy (Supplementary Figure 2). We consider a vector of pattern overlaps $\mathbf{m} = (m_1, \ldots, m_P)^{\mathrm{T}}$ and the vector rescaled by the degrees of the graph, $\tilde{\mathbf{m}} = D^{-1/2} \mathbf{m}$. Then we can rewrite the energy function as

$$E = \frac{\tilde{a} N}{2} \left[ \tilde{\mathbf{m}}^{\mathrm{T}} L\, \tilde{\mathbf{m}} - (\gamma + 1)\, \tilde{\mathbf{m}}^{\mathrm{T}} D\, \tilde{\mathbf{m}} \right] + \frac{zN}{2} x_0^2 ,$$

where $L$ is the GL of the hetero-associative link structure (the state-transition graph), $D$ is its degree matrix, and $x_0$ is the mean activity level in the network. Here we find the minimization of $\tilde{\mathbf{m}}^{\mathrm{T}} L \tilde{\mathbf{m}}$ under a constraint on $\tilde{\mathbf{m}}^{\mathrm{T}} D \tilde{\mathbf{m}}$, which is the same objective as in graph segmentation 27 and graph-based dimensionality reduction 29 , for which GL eigenvectors give optimal solutions. Therefore, we can expect GL eigenvectors to appear in the rescaled pattern overlap vector $\tilde{\mathbf{m}}$ through the energy minimization of LAM. Furthermore, we derived that a GL eigenvector with eigenvalue $\lambda$ is activated in the pattern overlap vector under the condition $\lambda < \gamma + 1$ (see Methods). Noting that the minimum eigenvalue of GL is always zero and that smaller eigenvalues correspond to coarser graph segmentations, this result indicates that the representation of the largest community (the eigenvector with the second smallest eigenvalue, called the Fiedler vector) appears in LAM when $\gamma$ is slightly higher than $-1$. As $\gamma$ increases, eigenvectors with higher eigenvalues are also activated; thus, the represented communities are expected to become smaller. This analysis fits the results shown in the previous section, especially the similarity between the pattern overlaps at $\gamma \approx -1$ and the Fiedler vector.

Alternatively, we can interpret the above energy minimization as a combination of two conflicting optimizations. First, the minimization of $\tilde{\mathbf{m}}^{\mathrm{T}} L \tilde{\mathbf{m}}$ is equivalent to the minimization of the differences between the pattern overlaps of strongly connected cell assemblies 29 . This results in smoothing (or diffusion) on the graph, which leads to non-sparse solutions, observed as mixed representations of multiple cell assemblies. Second, the term $-(\gamma + 1)\, \tilde{\mathbf{m}}^{\mathrm{T}} D \tilde{\mathbf{m}}$ acts like the conventional Hopfield model, which basically leads to the activation of a single memory pattern. Minimization of the mean activity $x_0$ also helps to create sparse activity patterns. Therefore, the latter part of the energy function acts for sparsification, that is, the reduction of the number of activated memory patterns. In sum, the energy function is composed of two components promoting smoothness and sparsity, respectively, and the value $\gamma + 1$ determines the trade-off. If $\gamma < -1$, the effect of sparsification vanishes; thus, no pattern is preferentially activated. The number of active patterns is maximized when $\gamma$ is slightly higher than $-1$ because of the strong smoothing effect, and it decreases as $\gamma$ increases in the region $\gamma > -1$.

Based on the relationship with GL, we tested graph-based image segmentation by LAM, which is one of the well-established applications of GL 27 . We assigned a random binary pattern to each pixel and defined hetero-associative links between pixels based on spatial proximity and the similarity of RGB values, as in the previous study 27 . LAM successfully extracted large segments corresponding to a GL eigenvector when the auto-associative weight $\gamma$ was close to $-1$, and relatively small segments when $\gamma$ was increased (Supplementary Figure 3). These results show that LAM is also applicable to non-ideal graphs constructed from real-world data.
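The activation condition $\lambda < \gamma + 1$ can be checked directly on a small graph. The sketch below uses an assumed two-community graph and compares the eigenvalues of the symmetric normalized GL with the threshold:

```python
import numpy as np

# Two communities of six nodes bridged by one edge (illustrative).
P = 12
A = np.zeros((P, P))
for block in (range(0, 6), range(6, 12)):
    for i in block:
        for j in block:
            if i != j:
                A[i, j] = 1.0
A[5, 6] = A[6, 5] = 1.0

d = A.sum(axis=1)
L_sym = np.eye(P) - A / np.sqrt(np.outer(d, d))   # I - D^{-1/2} A D^{-1/2}
lam, U = np.linalg.eigh(L_sym)

gamma = -0.7
print(np.round(lam, 3))
print("modes predicted active:", np.where(lam < gamma + 1)[0])

# The Fiedler mode (second-smallest eigenvalue) separates the two communities.
fiedler = U[:, 1] / np.sqrt(d)    # generalized eigenvector v = D^{-1/2} u
print(np.sign(fiedler))
```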

Finding subgoals by graph-based representations and novelty detection
One of the important applications of GL eigenvectors is finding appropriate subgoals for hierarchical reinforcement learning 8,9 . In this framework, sets of actions (called options) are optimized through learning to reach subgoals. Desirable subgoals are "bottlenecks" shared by many trajectories on the state-transition graph, and GL eigenvectors have been used to find such bottlenecks through graph segmentation. We tested whether the representations in LAM can also be used for subgoal finding by comparing the results of LAM and GL.
To identify bottlenecks, we calculated a "novelty index" for each node, which measures the expected change of representations caused by movements from the node to surrounding nodes (see Methods for the mathematical definition). In hierarchical reinforcement learning, subgoals are treated as pseudo-rewards for agents. It is biologically natural to treat novelty as a pseudo-reward because dopamine cells are activated not only by reward but also by novelty 36 .
With GL, we constructed low-dimensional representations of nodes from GL eigenvectors with low eigenvalues (Laplacian eigenmap) 29 . On the other hand, with LAM, activity patterns in attractor states are directly used as representations.
With both GL and LAM, the novelty index successfully detected nodes located at the bottlenecks. Thus, LAM achieves subgoal finding comparable to GL while providing a more biologically plausible mechanism based on a neural network.

Chunked sequential activities in asymmetric LAM
So far, we have analyzed attractor patterns in LAM with symmetric links. Next, we show the dynamic properties of asymmetric LAM. We constructed an asymmetric LAM with a ring-shaped graph in which link weights were slightly stronger in one direction than in the opposite direction (Figure 5a). We simulated the neural activity while continuously changing the value of the auto-associative weight $\gamma$ (Figure 5b, top). The network showed sequential activity in which the embedded memory patterns were consecutively retrieved at a variable speed (Figure 5b, bottom). Rapid state transitions selectively occurred when the value of $\gamma$ became negative and close to $-1$ (Figure 5d), at which the distribution of pattern overlaps was maximally expanded (Figure 5c). These results indicate that negative auto-associative weights in asymmetric LAM not only generate macroscopic representations for large communities but also increase the sensitivity to the asymmetry of synaptic weights and facilitate sequential transitions across memories.
Motivated by this dynamic property and the relationship between LAM and GL, we examined whether the sequential activities in asymmetric LAM are chunked according to the communities in the hetero-associative links. For the simulations, we specifically focused on hippocampal theta sequences, in which chunking has been experimentally observed 32 . We assumed a virtual animal running on a ring-shaped track and modeled the hippocampus of the animal by an asymmetric LAM with a ring-shaped hetero-associative link structure (Figure 6a). We simulated the neural activities of LAM with the fixed parameter $\gamma = -0.9$ while regularly stimulating the cell assembly encoding the current location of the animal. In the simulation, we obtained rhythmic sequential activities along the ring (Figure 6b), which serve as a simplified model of theta sequences.
In this model, we tested three hetero-associative link structures (see Supplementary Figure 4 for the details of the structures). In the uniform ring structure without chunks (Figure 6c), the sequential activity propagated at a nearly constant speed without segmentation. In contrast, the sequences were chunked when the ring contained local bottlenecks or over-represented locations, the latter mimicking salient landmarks and rewards, which are over-represented by hippocampal place cells 38,39 . These results demonstrate that LAM provides a unified mechanism for graph-based representations 6,7 and the chunking of sequential activities 32 .

Discussion
In this paper, we proposed the Laplacian associative memory (LAM), an extension of Hopfield-type network models that computes community structures in hetero-associative links. While structural segmentation has been performed by hierarchical networks with different time constants 40 , our model provides a novel framework for multiscale information processing in a single network and accounts for experimentally observed graph-based representations 6,7 .
Furthermore, we showed that LAM with asymmetric connectivity can generate chunked sequential activities that reproduce the experimentally observed chunking of theta sequences 32 .

We used the graph-based representations in LAM for subgoal finding in hierarchical reinforcement learning 8,9 . Another way to perform reinforcement learning with graph-based representations is to use the successor representation 46 . The successor representation predicts near-future state occupancy from the current state, which is useful for value estimation, and it has been shown to be consistent with many experimental findings on hippocampal information representations 10 .

In asymmetric LAM, we found that negative auto-associative weights facilitate sequential transitions across memory patterns. Previously, we found that negative auto-association significantly increases the sensitivity of correlated attractors to external perturbations 23 . We speculate that the changes in propagation speed presented here depend on a similar mechanism. If the auto-association is strongly positive, the attractors are stable and invulnerable to directional biases in link weights. However, as $\gamma$ approaches $-1$, the attractors are gradually destabilized and become sensitive to weight biases and external perturbations. This property suggests that macroscopic representations are dynamic in the brain and are unlikely to serve robust working memory, unlike conventional attractor networks 20,24,25 .
We found that both local bottlenecks and over-representations induce the chunking of sequential activities in asymmetric LAM. The over-representation model is particularly interesting because it accounts for the role of salient landmarks and rewards, which are over-represented by place cells 38,39 . We may be able to apply a ring-shaped structure with two over-representations to model typical experiments in which animals run back and forth on a 1-D track to get rewards at both ends, considering that many place cells are direction-selective in such experimental settings 47 . In contrast, to our knowledge, how bottlenecks affect hippocampal sequential activities has not been tested experimentally. An adequate design of bottlenecks does not seem trivial in spatial navigation tasks because animals may recognize spatial bottlenecks as salient landmarks, which would then be over-represented in the brain. A proper design of the task structure requires careful control of the saliency of each state.
The simple model with asymmetric LAM produced sequential activities similar to the chunked hippocampal theta sequences 32 (Figure 6). However, hippocampal circuits generate more complex oscillatory dynamics, which are also likely to contribute to segmentation. For instance, in hippocampal replays of spatial trajectories, a boundary between chunks (a bifurcating point) in the spatial structure is locked to troughs of LFP power in concatenated sharp-wave ripples 33 . Furthermore, hippocampal circuits repeat convergence to and divergence from discrete attractors every gamma cycle during sharp-wave ripples 16 .
Our simplified model cannot address the relationship between such complex oscillatory dynamics and segmentation. A detailed network model with realistic spiking neurons and inhibitory circuits is necessary for studying this relationship.
Previously, the processing of hierarchical knowledge in associative memory models was implemented by embedding artificially correlated memory patterns 48 . Such models successfully reproduced the dynamics of hierarchical information processing in the temporal visual cortex 49,50 . The relationship between our model with hetero-associative links and the previous models with correlated memory patterns is currently unclear and worth exploring. If similar graphical computation is possible with correlated memory patterns, the brain may perform graphical computation based not only on temporal associations (hetero-associative links in our model) but also on semantic similarity between items (correlations between memory patterns).
However, we emphasize that our finding that associative memory networks can autonomously compute mathematically well-defined communities in complex graphs was previously unknown, because earlier models tested only simple structures and did not consider negative auto-association.
The mechanism proposed in this paper suggests a novel method for solving arbitrary eigenvalue problems with associative memory models. In the present model, we constructed the hetero-associative weights from normalized adjacency matrices of graphs. However, the proposed dynamical mechanism for solving eigenvalue problems is generic and does not depend on this specific choice. For example, if we employ a covariance matrix between encoded variables as the hetero-associative weight matrix, the network is theoretically expected to perform principal component analysis. Because eigenvalue problems appear ubiquitously in applied mathematics and machine learning, other computational methods may also be mapped to brain functions through similar mechanisms. Our model suggests that associative memory models have far greater computational power than previously thought and may provide a bridge between artificial intelligence and brain science.
Acknowledgements
We are grateful to H. Shiwaku for helpful discussions. We also thank the OIST Scientific Computation and Data Analysis section for technical support with scientific computing.
This work was partially supported by KAKENHI grants nos. 18H05213 and 19H04994 from MEXT (Ministry of Education, Culture, Sports, Science and Technology, Japan).

Author contributions
T. H. and T. F. conceived the project and wrote the manuscript. T. H. mathematically designed the model, and performed simulations and analyses.

Competing financial interests
The authors declare no competing interests.

Methods

Definition and mathematical properties of graph Laplacian
Let us assume a symmetric graph with an adjacency matrix $A$, whose element $A_{\mu\nu}$ denotes the existence of an edge, with 0 or 1 (unweighted graphs), or the weight of the edge (weighted graphs) between node $\mu$ and node $\nu$. We also define a degree matrix $D$, a diagonal matrix whose entries are the node degrees $d_\mu = \sum_\nu A_{\mu\nu}$. The (unnormalized) graph Laplacian is defined as $L = D - A$, and its normalized versions are $L^{\mathrm{sym}} = D^{-1/2} L D^{-1/2} = I - D^{-1/2} A D^{-1/2}$ and $L^{\mathrm{asym}} = D^{-1} L = I - D^{-1} A$.

An important characteristic of the graph Laplacian is that its eigenvectors give optimal graph segmentations. Here, optimality is defined by the min-cut criterion, which prefers a two-fold division of the graph obtained by cutting the minimum number of edges. It has been proven that min-cut graph segmentation can be performed by solving the generalized eigenvalue problem $L \mathbf{v} = \lambda D \mathbf{v}$ or, equivalently, by computing the eigenvectors of the normalized graph Laplacians $L^{\mathrm{sym}}$ and $L^{\mathrm{asym}}$ 27,28 . The sign of each element of an eigenvector indicates the segment to which each node should be assigned, and multiple eigenvectors correspond to two-fold segmentations at various levels, depending on their eigenvalues. The eigenvector with the second smallest eigenvalue (the Fiedler vector) is regarded as the best non-trivial solution, corresponding to the largest community structure (which achieves the minimum cut) in the graph. Eigenvectors with larger eigenvalues are suboptimal solutions perpendicular to the other eigenvectors and tend to subdivide large communities into subclusters.
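As an illustration, a minimal sketch of min-cut segmentation by the Fiedler vector; loading the karate club network 34 through the networkx library is an implementation convenience, not part of the model:

```python
import numpy as np
import networkx as nx

G = nx.karate_club_graph()          # standard community-detection benchmark
A = nx.to_numpy_array(G)
d = A.sum(axis=1)

# Generalized problem L v = lambda D v  <=>  ordinary eigenproblem of L_sym.
L_sym = (np.diag(d) - A) / np.sqrt(np.outer(d, d))
lam, U = np.linalg.eigh(L_sym)

# Signs of the Fiedler vector give the two-fold min-cut segmentation.
fiedler = U[:, 1] / np.sqrt(d)      # v = D^{-1/2} u
labels = (fiedler > 0).astype(int)
print(labels)
```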

Another useful interpretation of graph Laplacian eigenvectors is as low-dimensional representations of the nodes in the graph, which is called the Laplacian eigenmap 29 . The generalized eigenvalue problem $L \mathbf{v} = \lambda D \mathbf{v}$ gives mutually perpendicular solutions of $\min \mathbf{v}^{\mathrm{T}} L \mathbf{v}$ subject to $\mathbf{v}^{\mathrm{T}} D \mathbf{v} = 1$, and the eigenvalue indicates the minimized value. Because $\mathbf{v}^{\mathrm{T}} L \mathbf{v} = \frac{1}{2} \sum_{\mu,\nu} A_{\mu\nu} (v_\mu - v_\nu)^2$, the minimization of $\mathbf{v}^{\mathrm{T}} L \mathbf{v}$ can be regarded as assigning values to the nodes such that strongly connected nodes are represented by close values. In this sense, low-dimensional representations constructed from graph Laplacian eigenvectors with low eigenvalues capture the graph structure through similarity, which is the appropriate property for nonlinear dimensionality reduction.
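A short sketch of the Laplacian eigenmap under these definitions (the ring graph is an arbitrary example):

```python
import numpy as np

def laplacian_eigenmap(A, dim=2):
    """Embed nodes with generalized eigenvectors of L v = lambda D v
    having the smallest non-zero eigenvalues."""
    d = A.sum(axis=1)
    L_sym = np.eye(len(A)) - A / np.sqrt(np.outer(d, d))
    lam, U = np.linalg.eigh(L_sym)
    V = U / np.sqrt(d)[:, None]     # v = D^{-1/2} u
    return V[:, 1:dim + 1]          # drop the trivial lambda = 0 mode

# A ring graph embeds as a circle: neighbors get nearby coordinates.
n = 20
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0
print(np.round(laplacian_eigenmap(A), 2))
```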

Construction of hetero-associative weights
In this study, we hypothesize that the hetero-associative weight matrix $W = (W_{\mu\nu})_{1 \le \mu,\nu \le P}$ is constructed as $W = D^{-1/2} A D^{-1/2}$ (symmetric normalization) or $W = D^{-1} A$ (asymmetric normalization), where $A$ and $D$ are the adjacency matrix and the degree matrix of the graph, respectively.
As with the graph Laplacian, the two normalizations yield qualitatively the same results. However, the symmetric normalization model enables formal theoretical analyses, whereas the asymmetric normalization model gives a biologically plausible interpretation of the model. Asymmetrically normalized weights directly correspond to the transition probability matrix of a random walk on the graph 28 ; thus, they can be naturally learned from sequential experiences through Hebbian learning at state transitions 22 . In a random walk on a state-transition graph, $W_{\mu\nu} = \mathrm{P}(s[t-1] = \nu \mid s[t] = \mu)$ corresponds to an element of the normalized adjacency matrix $D^{-1} A$.
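Both normalizations in a few lines (the three-node graph is an arbitrary example):

```python
import numpy as np

def normalize_adjacency(A, mode="asym"):
    """Hetero-associative weights from an adjacency matrix A."""
    d = A.sum(axis=1)
    if mode == "sym":
        return A / np.sqrt(np.outer(d, d))   # D^{-1/2} A D^{-1/2}
    return A / d[:, None]                    # D^{-1} A; rows sum to 1

# Rows of the asymmetric normalization are random-walk transition probabilities.
A = np.array([[0., 1., 1.],
              [1., 0., 0.],
              [1., 0., 0.]])
print(normalize_adjacency(A).sum(axis=1))    # -> [1. 1. 1.]
```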

Decomposition of excitatory and inhibitory synaptic weights
With the asymmetric normalization model, the synaptic weights can be decomposed as

$$M_{ij} = M^{\mathrm{E}}_{ij} - M^{\mathrm{L}}_{ij} - M^{\mathrm{G}}_{ij},$$
$$M^{\mathrm{E}}_{ij} = \frac{1}{\tilde{a} N} \Big[ (1 + \gamma) \sum_{\mu} \xi_i^\mu \xi_j^\mu + \sum_{\mu \neq \nu} W_{\mu\nu}\, \xi_i^\mu \xi_j^\nu \Big],$$
$$M^{\mathrm{L}}_{ij} = \frac{1}{\tilde{a} N} \Big[ \sum_{\mu} \xi_i^\mu \xi_j^\mu + a (1 + \gamma) \sum_{\mu} \big( \xi_i^\mu + \xi_j^\mu \big) \Big],$$
$$M^{\mathrm{G}}_{ij} = \frac{z}{N} - \frac{a^2 P (1 + \gamma)}{\tilde{a} N}.$$

Here, we used the constraint $\sum_{\nu=1}^{P} W_{\mu\nu} = 1$ and approximated $\sum_{\mu=1}^{P} W_{\mu\nu} \approx 1$. All decomposed weights are non-negative in the range $-1 < \gamma < \gamma_{\max}$. The components $M^{\mathrm{E}}$, $M^{\mathrm{L}}$, and $M^{\mathrm{G}}$ are excitatory connections (which reflect the structure of cell assemblies), assembly-specific local inhibition, and non-selective global inhibition, respectively.

Analysis of the energy function of LAM
Here we consider the symmetric normalization model, in which the hetero-associative links are constructed as $W = D^{-1/2} A D^{-1/2}$. The energy function of the network is

$$E(\mathbf{x}) = -\frac{1}{2} \sum_{i,j} M_{ij}\, x_i x_j .$$

We define a pattern overlap $m_\mu = \frac{1}{\tilde{a} N} \sum_{i=1}^{N} \tilde{\xi}_i^\mu x_i$, a pattern overlap vector $\mathbf{m} = (m_1, \ldots, m_P)^{\mathrm{T}}$, and the mean activity $x_0 = N^{-1} \sum_{i=1}^{N} x_i$. Then, we can rewrite the energy function as

$$E = \frac{\tilde{a} N}{2} \left[ \mathbf{m}^{\mathrm{T}} L^{\mathrm{sym}} \mathbf{m} - (\gamma + 1)\, \mathbf{m}^{\mathrm{T}} \mathbf{m} \right] + \frac{zN}{2} x_0^2 .$$

The matrix $L^{\mathrm{sym}} = I - W$ is the symmetric normalized graph Laplacian 27 if we regard the hetero-associative weight matrix as the normalized adjacency matrix. By rescaling $\mathbf{m}$ by the degree matrix of the hetero-associative links as $\tilde{\mathbf{m}} = D^{-1/2} \mathbf{m}$, we further obtain

$$E = \frac{\tilde{a} N}{2} \left[ \tilde{\mathbf{m}}^{\mathrm{T}} L\, \tilde{\mathbf{m}} - (\gamma + 1)\, \tilde{\mathbf{m}}^{\mathrm{T}} D\, \tilde{\mathbf{m}} \right] + \frac{zN}{2} x_0^2 ,$$

where $L = D - A$ is the unnormalized graph Laplacian.

To see the relationship with graph Laplacian eigenvectors more quantitatively, we expand the overlap vector as a linear combination of the eigenvectors $\mathbf{u}_k$ of the symmetric normalized graph Laplacian (with corresponding eigenvalues $\lambda_k$), $\mathbf{m} = \sum_k c_k \mathbf{u}_k$. Then, the energy function can be written as

$$E = \frac{\tilde{a} N}{2} \sum_k \left( \lambda_k - (\gamma + 1) \right) c_k^2 + \frac{zN}{2} x_0^2 .$$

If $z = 0$, the minimization of this energy requires $c_k \neq 0$ if $\lambda_k < \gamma + 1$, which gives the approximate threshold for the activation of an eigenvector in the representation (note that the actual threshold can be shifted when $z > 0$).

Turing instability analysis of LAM
For the analysis, we first replace the step function $\Theta(u)$ in Eq. (1) with a differentiable, monotonically increasing function $f(u)$ that converges to $\Theta(u)$ in the limit $\beta \to \infty$ (e.g., a logistic function $f(u) = (1 + \exp(-\beta u))^{-1}$). As in the main text, we define pattern overlaps $m_\mu = \frac{1}{\tilde{a} N} \sum_i \tilde{\xi}_i^\mu x_i$. Next, with the vectors $\mathbf{m} = (m_1, \ldots, m_P)^{\mathrm{T}}$ and $\tilde{\boldsymbol{\xi}}_i = (\tilde{\xi}_i^1, \ldots, \tilde{\xi}_i^P)^{\mathrm{T}}$ and the hetero-associative weight matrix $W = (W_{\mu\nu})_{1 \le \mu,\nu \le P}$, we obtain from Eqs. (1) and (2) the vector representation of the overlap dynamics:

$$\tau \frac{d\mathbf{m}}{dt} = -\mathbf{m} + \frac{1}{\tilde{a} N} \sum_{i=1}^{N} \tilde{\boldsymbol{\xi}}_i\, f\!\left( \tilde{\boldsymbol{\xi}}_i^{\mathrm{T}} (\gamma I + W)\, \mathbf{m} - z x_0 \right),$$

where $I$ is the identity matrix. When $N$ and $P$ are sufficiently large and the memory patterns are random, $\mathbf{m} = \mathbf{0}$ is an equilibrium point of this dynamical equation. Furthermore, in that condition, the matrix $\frac{1}{\tilde{a} N} \sum_{i=1}^{N} \tilde{\boldsymbol{\xi}}_i \tilde{\boldsymbol{\xi}}_i^{\mathrm{T}}$ becomes the correlation matrix of the random memory patterns, which can be approximated by the identity matrix. Thus, we obtain the following equation by linearizing $f(u)$ around $\mathbf{m} = \mathbf{0}$ (neglecting the global inhibition term, which only shifts the operating point):

$$\tau \frac{d\mathbf{m}}{dt} = \left[ f'(0) \left( (\gamma + 1) I - \mathcal{L} \right) - I \right] \mathbf{m} .$$

Here, we defined $\mathcal{L} = I - W$ (this is either the symmetric or the asymmetric normalized graph Laplacian). Finally, we expand $\mathbf{m}$ with the eigenvectors $\mathbf{u}_k$ ($k = 1, \cdots, P$) of the matrix $\mathcal{L}$ as $\mathbf{m} = \sum_k c_k \mathbf{u}_k$. Substituting this into the linearized equation yields

$$\tau \frac{dc_k}{dt} = \rho_k c_k, \qquad \rho_k = (\gamma + 1 - \lambda_k)\, f'(0) - 1,$$

where $\lambda_k$ is the eigenvalue for $\mathbf{u}_k$. The rates $\rho_k$ give the exponential growth rates along each eigenvector around $\mathbf{m} = \mathbf{0}$. If there exists a positive growth rate, the network becomes unstable along the corresponding eigenvectors; otherwise, the network is stabilized at $\mathbf{m} = \mathbf{0}$.

In the limit $\beta \to \infty$ ($f(u) \to \Theta(u)$), the sign of $\rho_k$ is solely determined by the sign of $\gamma + 1 - \lambda_k$. This result suggests that the overlap vector is activated (destabilized) along the $k$-th eigenvector of the graph Laplacian matrix if $\gamma > \lambda_k - 1$ ($\lambda_k$ is the eigenvalue of the $k$-th eigenvector).
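The growth rates $\rho_k$ can be evaluated numerically; the logistic gain $\beta$ and the small complete graph below are illustrative assumptions:

```python
import numpy as np

def growth_rates(A, gamma, beta=50.0):
    """rho_k = (gamma + 1 - lambda_k) * f'(0) - 1 for each Laplacian mode,
    with a logistic f of gain beta, so f'(0) = beta / 4."""
    d = A.sum(axis=1)
    L_sym = np.eye(len(A)) - A / np.sqrt(np.outer(d, d))
    lam = np.linalg.eigvalsh(L_sym)
    return (gamma + 1.0 - lam) * (beta / 4.0) - 1.0

A = np.ones((4, 4)) - np.eye(4)      # small complete graph, for illustration
print(growth_rates(A, gamma=-0.5))   # positive entries: destabilized modes
```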

Simulations of the network model
In numerical simulations, we used the decomposed asymmetric normalization model unless specified otherwise. We first initialized the activities with one of the memory patterns ($x_i[0] = \xi_i^\mu$) and updated the activities by a discretized version of Eq. (1):

$$x_i[t+1] = x_i[t] + \alpha \left( -x_i[t] + \Theta\!\Big( \sum_{j=1}^{N} M_{ij}\, x_j[t] + I_i^{\mathrm{ext}}[t] \Big) \right),$$

where $I_i^{\mathrm{ext}}[t]$ is an external input applied in the simulations in Figure 6. We used $\alpha = 0.01$ for the simulations with symmetric graphs (Figure 2). Attractor patterns of the network model with symmetric graphs were obtained by simulations of 3,000 time steps. The simulations of sequential activities in Figure 5 were performed for 10,000 time steps. The simulations of sequential activities in Figure 6 were performed for 30,000 time steps and repeated three times using different random seeds for each setting.
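A minimal sketch of this update rule (the function name and arguments are ours; `M` and `xi` refer to the earlier weight-construction sketch):

```python
import numpy as np

def simulate_lam(M, x_init, steps=3000, alpha=0.01, ext=None):
    """Discretized LAM dynamics: x <- x + alpha * (-x + step(M x + I_ext))."""
    x = x_init.astype(float).copy()
    for t in range(steps):
        u = M @ x + (ext[t] if ext is not None else 0.0)
        x += alpha * (-x + (u > 0.0).astype(float))
    return x

# Usage with M and xi from the earlier weight-construction sketch:
#   x_attractor = simulate_lam(M, xi[0])
```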
We averaged the mean pattern overlaps at each location and the correlations between mean pattern overlaps over those three trials. We truncated negative mean pattern overlaps to zero in this calculation.
We counted the number of active patterns as the number of patterns $\mu$ satisfying $m_\mu > 0.05$ and $m_\mu > \frac{1}{2} \max_\nu m_\nu$.
We note that the images shown in the figures are the ones before down-sampling. We constructed link weights in the same way as Shi & Malik (2000) 27 :

$$A_{\mu\nu} = \exp\!\left( -\frac{\lVert \mathbf{F}_\mu - \mathbf{F}_\nu \rVert^2}{\sigma_I^2} \right) \exp\!\left( -\frac{\lVert \mathbf{X}_\mu - \mathbf{X}_\nu \rVert^2}{\sigma_X^2} \right) \quad \text{if } \lVert \mathbf{X}_\mu - \mathbf{X}_\nu \rVert < r, \text{ otherwise } 0,$$

where the vectors $\mathbf{F}_\mu$ and $\mathbf{X}_\mu$ denote the RGB value (normalized between 0 and 1) and the spatial location of pixel $\mu$, respectively. The parameters were $\sigma_I = 0.1$, $\sigma_X = 4$, and $r = 5$. After setting the values, we applied the asymmetric normalization to obtain the hetero-associative weights.
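A sketch of this affinity construction under the stated parameters; the tiny random image stands in for a real down-sampled image:

```python
import numpy as np

def pixel_affinities(img, sigma_I=0.1, sigma_X=4.0, r=5.0):
    """Shi & Malik (2000) link weights: feature similarity times spatial
    proximity, zero beyond radius r and on the diagonal (no self-links)."""
    h, w, _ = img.shape
    F = img.reshape(-1, 3)                              # RGB in [0, 1]
    X = np.array([(i, j) for i in range(h) for j in range(w)], dtype=float)
    dF = ((F[:, None] - F[None]) ** 2).sum(-1)
    dX = ((X[:, None] - X[None]) ** 2).sum(-1)
    A = np.exp(-dF / sigma_I ** 2) * np.exp(-dX / sigma_X ** 2)
    A[dX > r ** 2] = 0.0
    np.fill_diagonal(A, 0.0)
    return A

# Tiny random stand-in for a down-sampled image.
A = pixel_affinities(np.random.default_rng(2).random((8, 8, 3)))
```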

Definition of the novelty index for subgoal finding
First, we calculate a representation of each node, either by constructing low-dimensional vectors from graph Laplacian eigenvectors (Laplacian eigenmap) 29 or by simulating the attractor pattern of LAM triggered by the memory pattern of each node. We define the similarity $S(\mu, \nu)$ between the representations of two nodes as either the cosine similarity between the two Laplacian-eigenmap representations or the correlation between the two attractor patterns of LAM. The novelty index of a node is defined as

$$n(\mu) = \sum_{\nu} P_{\mu \to \nu} \left( 1 - S(\mu, \nu) \right),$$

where $P_{\mu \to \nu}$ denotes the transition probability from $\mu$ to $\nu$ in a random walk on the graph (which is equivalent to the elements of $D^{-1} A$). The novelty index $n(\mu)$ spans from 0 to 1 and indicates the expected change of information representations that an agent experiences in a transition from node $\mu$.
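A sketch of the novelty index with eigenmap-based similarity; the two-rooms-plus-corridor graph and the helper names are illustrative assumptions:

```python
import numpy as np

def novelty_index(A, S):
    """n(mu) = sum_nu P(mu -> nu) * (1 - S(mu, nu)), with P = D^{-1} A."""
    P = A / A.sum(axis=1, keepdims=True)
    return (P * (1.0 - S)).sum(axis=1)

def eigenmap_similarity(A, dim=2):
    """Cosine similarity between Laplacian-eigenmap node representations."""
    d = A.sum(axis=1)
    lam, U = np.linalg.eigh(np.eye(len(A)) - A / np.sqrt(np.outer(d, d)))
    Y = (U / np.sqrt(d)[:, None])[:, 1:dim + 1]
    Yn = Y / (np.linalg.norm(Y, axis=1, keepdims=True) + 1e-12)
    return Yn @ Yn.T

# Two rooms joined by a corridor node (3): novelty is expected to peak there.
A = np.zeros((7, 7))
for room in ((0, 1, 2), (4, 5, 6)):
    for i in room:
        for j in room:
            if i != j:
                A[i, j] = 1.0
A[2, 3] = A[3, 2] = A[3, 4] = A[4, 3] = 1.0
print(np.round(novelty_index(A, eigenmap_similarity(A)), 2))
```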

Asymmetric Laplacian associative memory and the model of a virtual animal
To construct asymmetric hetero-associative weights, we converted symmetric graphs into mutually connected, asymmetrically weighted graphs. We set the weights of links in the biased direction (including diagonal connections) to 110 and the weights in the opposite direction to 90. All links orthogonal to the biased direction (radial connections) were set to 100. After constructing the adjacency matrices, we applied the asymmetric normalization as for the symmetric graphs.
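A sketch of the biased ring construction for a single ring of nodes; the radial and diagonal links of the full structure (Supplementary Figure 4) are omitted here:

```python
import numpy as np

def biased_ring(n, forward=110.0, backward=90.0):
    """Ring adjacency with stronger weights in one direction of travel,
    followed by asymmetric normalization (rows sum to 1)."""
    A = np.zeros((n, n))
    for i in range(n):
        A[i, (i + 1) % n] = forward     # biased direction
        A[(i + 1) % n, i] = backward    # opposite direction
    return A / A.sum(axis=1, keepdims=True)

W = biased_ring(30)
```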
In the simulations in Figure 6, for the bottleneck model and the over-representation model, we connected additional nodes to the side of the uniform ring-shaped graph (as shown in Supplementary Figure 4). We did not stimulate the patterns corresponding to the additional nodes. We calculated the pattern overlap at each location by averaging over the nodes in the central ring and the additional nodes at the same location.

Eigenvalues of successor representation
The successor representation is defined for a pair of states $s$ and $s'$ as

$$S(s, s') = \sum_{t=0}^{\infty} \gamma_d^{\,t}\, \mathrm{P}(s[t] = s' \mid s[0] = s),$$

where $\gamma_d$ indicates a discount factor. We consider the matrix of the successor representation $S = \sum_{t=0}^{\infty} (\gamma_d T)^t = (I - \gamma_d T)^{-1}$, the transition probability matrix $T$, and the asymmetric normalized graph Laplacian $L^{\mathrm{asym}} = I - T$. Let the eigenvectors and eigenvalues of $L^{\mathrm{asym}}$ be $\mathbf{u}_k$ and $\lambda_k^L$, respectively. Then, they satisfy

$$S \mathbf{u}_k = \frac{1}{1 - \gamma_d \left( 1 - \lambda_k^L \right)}\, \mathbf{u}_k ,$$

that is, the successor representation shares its eigenvectors with the graph Laplacian.
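This relation can be verified numerically; the random graph and discount factor below are arbitrary:

```python
import numpy as np

# Check: S = (I - g T)^{-1} shares eigenvectors with L_asym = I - T,
# with eigenvalues 1 / (1 - g * (1 - lambda_L)).
rng = np.random.default_rng(3)
n, g = 6, 0.9
A = rng.random((n, n))
A = A + A.T
np.fill_diagonal(A, 0.0)
T = A / A.sum(axis=1, keepdims=True)       # transition probability matrix
S = np.linalg.inv(np.eye(n) - g * T)       # successor representation matrix
lam_L, V = np.linalg.eig(np.eye(n) - T)    # eigenpairs of L_asym
print(np.allclose(S @ V, V / (1.0 - g * (1.0 - lam_L))))   # -> True
```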