Thoughtful faces: inferring internal states across species using facial features

Animal behaviour is shaped to a large degree by internal cognitive states, but it is unknown whether these states are similar across species. To address this question, we developed a virtual reality setup in which mice and macaques engage in the same naturalistic visual foraging task. We exploited the richness of a wide range of facial features extracted from video recordings during the task to train a Markov-Switching Linear Regression (MSLR) model. By doing so, we identified, on a single-trial basis, a set of internal states that reliably predicted when the animals were going to react to the presented stimuli. Even though the model was trained purely on reaction times, it could also predict task outcome, supporting the behavioural relevance of the inferred states. The identified states were comparable between mice and monkeys. Furthermore, each state corresponded to a characteristic pattern of facial features, highlighting the importance of facial expressions as manifestations of internal cognitive states across species.


Introduction
In the wild, all mammals show similar behaviour: they all hunt or forage for food, sleep, mate, avoid predators, and explore their environment, to name just a few. None of these behaviours can be simply explained as a passive reaction to environmental input; rather, they are crucially shaped by dynamic fluctuations in internal states such as satiety, alertness, curiosity or attention (1, 2). So, if fundamental behaviours are comparable across species, how similar are the internal states that drive them? Is 'attention' in a monkey the same as 'attention' in a mouse?
The common approach to investigating internal states has been a reductionist one: highly restrictive tasks featuring simplified stimuli and requiring narrow behavioural repertoires (e.g. button presses), with little room for fluctuations over time (3-5). What is more, experimental paradigms diverge widely depending on the species under study. For example, attention studies in primates typically require the subject to fixate on a central fixation point while paying attention to a peripheral stimulus that might briefly or subtly change its appearance (6, 7). Attention studies in rodents, on the other hand, typically use the 5-choice serial reaction time task (5CSRTT), in which the subject is required to scan a row of five apertures for the presentation of a brief light stimulus, and then navigate towards the light source (8, 9). Even though the behaviour associated with high attention, i.e. short reaction times and accurate responses, is the same in both cases, the paradigms themselves are hardly comparable. Recent technological advances now make it possible to identify the spontaneous emergence of internal cognitive and emotional states (22, 23).
In this study, we leverage these technological breakthroughs to infer and directly compare the internal states of two species commonly studied in neuroscience - macaques and mice. Specifically, we combine a highly immersive and naturalistic VR foraging task with a state-of-the-art deep learning tool that allows for precise, automated tracking of behavioural features. The features extracted in this way then serve as inputs to a Markov-Switching Linear Regression (MSLR) model (24), which finally captures time-varying internal states across trials.
Importantly, such single-trial inference of internal states is only meaningful if the behavioural markers it relies on are not indirectly tracking the concrete motor outputs required for task performance. If the behavioural markers directly reflected task-related motor output (e.g. preparatory paw movements), then internal states inferred from this behaviour might be expected to trivially predict task performance. For instance, lack of preparatory paw movements might trivially predict a miss trial. To ensure that the behavioural parameters we chose would truly reflect internal processing, we focused on the animals' facial expressions.
While facial expressions have long been thought to only play a role in highly visual and social species like monkeys and humans (25-28), recent work has highlighted that less social, less visual species like mice also exhibit meaningful facial expressions (22, 29). As such, behaviourally relevant facial expressions seem to be much more evolutionarily preserved than previously expected (22, 29, 30). More specifically, they seem to reflect fundamental emotions like pleasure, pain, disgust and fear in a way that is not only consistent within one species, but also readily translatable across species (31, 32). This argues for an evolutionarily convergent role of facial expressions in reflecting (and potentially communicating) emotions.
Unlike these previous studies on the relation between facial expressions and emotions, here we analyse, for the first time, facial expressions in mice and monkeys that occur spontaneously, in the absence of a pre-defined emotional context. Such spontaneously occurring behavioural states have so far mainly been tracked using single facial features to identify isolated cognitive states, for instance by quantifying attention via pupil size, both in rodents (33, 34) and primates (35-38). Similarly, eye movements in monkeys and humans (39-41) and whisker movements in mice (42) have been used to track attention and decision-making. By focusing on entire facial expressions beyond individual (often species-specific) features, we aim to map out, for the first time, the spectrum of spontaneously occurring internal states in a way that is 1) agnostic, i.e. not focused on a specific cognitive process or facial feature, and 2) directly comparable across species.
Our approach of using facial expressions to infer internal states from natural behaviour constitutes a drastic move away from the classical approach of imposing internal states through restrictive behavioural paradigms (e.g. cued attentional shifts). By tying the results of this approach back to known relationships between internal states and overt behaviour, such as shorter reaction times during focused attention, these data-driven, agnostically inferred internal states can be tentatively related to known cognitive processes such as attention and motivation. Importantly, this puts us in the unique position to directly compare inferred internal states across two species.

The coexistence of several hidden states opens up the question of whether task performance is dominated by a single state at any given moment, or whether several states co-exist continuously. After fitting the model parameters, we used the model to identify the animal's internal state on a trial-by-trial basis.
Note that the model does not allow for the animal to be in multiple states at the same time; rather, it gives us probabilities telling how confident we can be about the state the animal is in on each trial. Specifically, we computed the posterior probability over states on each trial given all past and future observations. The probabilities of each state over time suggest that the model is highly confident about what state the animal is in on each trial (Fig. 2B, bottom row). These observations were confirmed by the highly bimodal distribution of these probabilities for both species (Fig. 2C). Crucially, in monkeys, this separation between high-certainty (p_s ≈ 1) and low-certainty (p_s ≈ 1/n_s) trials was particularly pronounced, while in mice, state probabilities were somewhat more mixed. Quantifying the single-trial certainty as the difference between the posterior and the uniform distribution, through the Kullback-Leibler divergence (KL), corroborated these findings (Fig. 2D; Mann-Whitney U-test: p = 1.11 · 10^-274).
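The smoothing step can be illustrated with a self-contained numpy sketch of forward-backward inference in an MSLR with known parameters (this is not the Dynamax implementation used here, and all parameter values and function names are illustrative):

```python
import numpy as np

def mslr_posteriors(X, y, weights, sigmas, T, pi0):
    """Posterior state probabilities for an MSLR via forward-backward.

    Each hidden state k predicts y_t = X_t @ weights[k] + noise(sigmas[k]);
    T is the state-transition matrix and pi0 the initial distribution.
    A minimal sketch of the smoothing step, conditioning each trial on
    all past and future observations.
    """
    n, K = len(y), len(weights)
    # per-state Gaussian likelihood of each observation
    lik = np.empty((n, K))
    for k in range(K):
        mu = X @ weights[k]
        lik[:, k] = np.exp(-0.5 * ((y - mu) / sigmas[k]) ** 2) / (
            np.sqrt(2 * np.pi) * sigmas[k])
    alpha = np.empty((n, K))   # normalized forward messages (filtering)
    beta = np.ones((n, K))     # normalized backward messages
    alpha[0] = pi0 * lik[0]
    alpha[0] /= alpha[0].sum()
    for t in range(1, n):
        alpha[t] = (alpha[t - 1] @ T) * lik[t]
        alpha[t] /= alpha[t].sum()
    for t in range(n - 2, -1, -1):
        beta[t] = T @ (beta[t + 1] * lik[t + 1])
        beta[t] /= beta[t].sum()
    post = alpha * beta
    return post / post.sum(axis=1, keepdims=True)

# Two regimes with opposite regression weights, switching mid-sequence.
rng = np.random.default_rng(1)
X = np.column_stack([np.ones(100), rng.normal(size=100)])
w = np.array([[0.0, 2.0], [0.0, -2.0]])
true_z = np.array([0] * 50 + [1] * 50)
y = np.einsum('ij,ij->i', X, w[true_z]) + 0.1 * rng.normal(size=100)
T = np.array([[0.95, 0.05], [0.05, 0.95]])
post = mslr_posteriors(X, y, w, np.array([0.1, 0.1]), T, np.array([0.5, 0.5]))
```

The sticky transition matrix lets the smoother resolve trials whose inputs are uninformative (here, inputs near zero) from their temporal context.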
As such, the hidden states identified by our model seem to reflect largely mutually exclusive behavioural modes that animals switch into and out of.Given how consistently trials were dominated by one state, we chose to binarize hidden state outcomes by assigning each trial to its most probable hidden state.
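Both the certainty measure and the binarization are straightforward to compute from the posterior matrix; a small numpy sketch (function names ours):

```python
import numpy as np

def state_certainty(posteriors):
    """Normalized KL divergence between per-trial posterior state
    probabilities and the uniform distribution.

    posteriors : (n_trials, n_states) array whose rows sum to 1.
    Returns values in [0, 1]: 0 = uniform posterior (maximal
    uncertainty), 1 = one-hot posterior (maximal certainty).
    """
    p = np.clip(posteriors, 1e-12, 1.0)
    n_states = p.shape[1]
    # KL(p || uniform) = sum p log p + log n, normalized by log n
    kl = np.sum(p * np.log(p), axis=1) + np.log(n_states)
    return kl / np.log(n_states)

def binarize_states(posteriors):
    """Assign each trial to its most probable hidden state."""
    return np.argmax(posteriors, axis=1)

post = np.array([[0.98, 0.01, 0.01],    # near one-hot: high certainty
                 [1/3, 1/3, 1/3]])      # uniform: zero certainty
cert = state_certainty(post)
states = binarize_states(post)
```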

C. State dynamics.
To explore whether the hidden states showed attributes that could be reflective of internal cognitive states, we first characterized their temporal dynamics. To this end, we examined the frequency of state transitions in both species. The state transition matrices, which show how likely it is that a trial of a given hidden state is followed by a trial of any (other or same) state (Methods - Markov-Switching Linear Regression), revealed high values along the diagonal for macaques, indicating stable states that switched rather rarely.
In mice, the diagonal of the transition matrix was slightly less pronounced, suggesting that hidden states in mice were less stable and more prone to transition than in macaques (Fig. 3A).
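The empirical transition matrix can be estimated directly from the per-trial most-likely state sequence; a minimal numpy sketch (function name ours):

```python
import numpy as np

def transition_matrix(states, n_states):
    """Empirical probability that a trial in state i is followed by
    a trial in state j (rows sum to 1)."""
    counts = np.zeros((n_states, n_states))
    for i, j in zip(states[:-1], states[1:]):
        counts[i, j] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    # guard against states that never occur before the last trial
    return np.divide(counts, row_sums, out=np.zeros_like(counts),
                     where=row_sums > 0)

T = transition_matrix([0, 0, 1, 1, 1, 0], n_states=2)
```

High diagonal values of `T` correspond to the stable, rarely switching states observed in macaques.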
As a complementary analysis, we computed the dwell time for each state. This quantity is defined as the number of consecutive trials that a given state is occupied for, before transitioning to a different state. Supporting the previous observations, hidden states generally lasted longer in macaques than in mice (Mann-Whitney U-test; n_mac = 4092, n_mice = 2543 trials, p = 0.0014), suggesting that internal processing may be more steady in macaques (Fig. 3B). This is consistent with previous findings that behavioural dynamics may fluctuate faster in mice (34, 51) than in monkeys (52). Apart from a genuinely species-driven difference, this observation may also reflect the fact that the monkeys were trained more extensively and may therefore have developed more stereotyped behavioural strategies than the mice, which were trained more briefly.
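Dwell times follow from the same state sequence as run lengths of consecutive identical states; a small numpy sketch (function name ours):

```python
import numpy as np

def dwell_times(states):
    """Lengths (in trials) of consecutive runs of the same state.

    states : 1-D array of per-trial most-likely states.
    Returns (state_ids, run_lengths), one entry per run.
    """
    states = np.asarray(states)
    # indices where the state changes from one trial to the next
    change = np.flatnonzero(states[1:] != states[:-1]) + 1
    bounds = np.concatenate(([0], change, [len(states)]))
    lengths = np.diff(bounds)
    ids = states[bounds[:-1]]
    return ids, lengths

ids, lengths = dwell_times([0, 0, 0, 1, 1, 0, 2, 2, 2, 2])
# runs: state 0 for 3 trials, state 1 for 2, state 0 for 1, state 2 for 4
```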

Fig. 2. Model performance and state probabilities. A) Cross-validation performance for various numbers of states, for macaques (left) and mice (right). Circles indicate the maximum CV R² and the shaded region extends to the 5th percentile. For both species, increasing the number of states improves model performance up to a plateau at R² ≈ 0.8. Lasso is a regularized linear regression (i.e., an MSLR with 1 internal state). The arrows indicate the number of states we selected, based on the maximum difference of the CV performance curve (see Fig. S10). Insets show model performance for held-out data at the selected number of states; dashed horizontal lines indicate the 99th percentile of the surrogate performances (see Methods). Note that the shuffled R² is negative: only uncorrelated predictors are expected to be centered at 0, and due to finite sampling effects there is always a non-zero correlation between the shuffled data and the ground truth. Furthermore, as we are dealing with skewed distributions (see Fig. S1), the null tendency is not captured by the mean, as assumed by the default R². B) Predicted RTs (top) and state probabilities (bottom) for an example stretch of data (left, macaques; right, mice). C) Probabilities of all states over all trials, regardless of state identity (blue, macaques; orange, mice). The bimodal distribution suggests that states are either absent or dominant on any given trial. D) Kullback-Leibler divergence (KL) for monkey (blue) and mouse (orange) internal states. KL quantifies the difference between the posterior state probability under the model and the uniform distribution, normalized by the number of states. A KL value close to 1 indicates maximally dissimilar distributions (i.e., only one present state at a time), while a value close to 0 indicates indistinguishable distributions (i.e., equally likely states).

Fig. 3. State dynamics. A) State transition matrices for macaques (left) and mice (right), showing the probability, at any one trial, of transitioning from a certain state (rows) to any other state (columns). Transitions between different states (off-diagonal terms) are more frequent for mice than for macaques. B) Macaques (left) spend more time than mice (right) in the same state, as measured by the dwell time (number of consecutive trials of each state being the most likely one). Individual dots reflect sequences of consecutive trials of a particular state.

more precise mouth movement than in the original model. All models were further trained and refined to achieve a detection error of less than 2 pixels per tracked key point in all conditions. The macaque raw pupil size recorded by the eye-tracker was Z-scored over time within the training data set.

D. Hidden states as performance states. To link the
To synchronise the video timing with events in the virtual reality environment, we used 32 ms long infrared flashes emitted from an LED mounted near the camera lens. These flashes were then extracted from the face videos to be used as timestamps for synchronisation with DomeVR. Five consecutive flashes indicated the start of a behavioural session; a single flash indicated the start of a trial.
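A sketch of how such flash events might be decoded from the video brightness trace (a simplified illustration: the threshold, the grouping gap and the synthetic flash layout are invented for the example, not taken from the recordings):

```python
import numpy as np

def flash_onsets(brightness, thresh):
    """Frame indices where the flash brightness first crosses threshold."""
    above = brightness > thresh
    return np.flatnonzero(above[1:] & ~above[:-1]) + 1

def classify_flashes(onsets, max_gap):
    """Group onsets closer than max_gap frames; a group of five marks a
    session start, a single flash marks a trial start (illustrative
    grouping rule; the real gap depends on the flash protocol)."""
    groups, current = [], [onsets[0]]
    for o in onsets[1:]:
        if o - current[-1] <= max_gap:
            current.append(o)
        else:
            groups.append(current)
            current = [o]
    groups.append(current)
    return [('session' if len(g) == 5 else 'trial', g[0]) for g in groups]

# Synthetic brightness trace: five closely spaced flashes, then two singles.
b = np.zeros(100)
for start in (10, 14, 18, 22, 26, 60, 80):
    b[start:start + 2] = 1.0
events = classify_flashes(flash_onsets(b, 0.5), max_gap=6)
```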
Reaction Time. In our VR setting, where animals move towards one of two stimuli rather than pressing a button or lever, or making an eye movement, we define the reaction time (RT) as the time point of the initial substantial movement directed towards either stimulus. While determining this time point, it is crucial to distinguish between stimulus-related movements and minor positional adjustments. We specifically focus on the first deviation in lateral movement, while excluding forward movement due to its susceptibility to random movements and its task irrelevance.
To calculate the RT, we use a sliding-window linear regression approach, incorporating a time-decay mechanism. This approach enables us to detect non-linearity by examining the coefficient of determination (R²) for each window. A low R² value indicates that the data deviate from linearity, and such a deviation can be interpreted as a deviation in lateral movement.
First, we compute a linear regression on the time series of lateral VR movement for adjacent sliding windows i and j of a given size (n_w). Then, R²_i (i.e., R² for window i) is calculated as:

R²_i = 1 − Σ_j (l_j − l̂^i_j)² / Σ_j (l_j − l̄)²

where l_j is the j-th element of the lateral movement observed in the second window, l̂^i_j is the corresponding predicted lateral movement value (based on window i), and l̄ is the mean lateral movement within the second window. As a result, we get an array of R² values over time. Subsequently, we reverse the sign of the R² array and detect the local maxima of −R². For this, we resort to the definition of extreme points (we have a univariate function in this case):

L = { w : r(w − 1) < r(w) and r(w) > r(w + 1) }

where we have simplified the notation, using r(w) ≡ −R²(w). Once we have found the local maxima (L), we further require that they have a minimum prominence (λ). Prominence is a measure of the significance of a peak, obtained by comparing the peak to its surroundings:

λ(L₀) = r(w₀) − max( min r(b_l), min r(b_r) )

where r(w₀) is −R² at L₀ and b_l and b_r are the arrays of left and right bases of the peak.
For each peak in r(w), we calculate the prominence and discard the ones that are below a given threshold (λ₀). The particular value of this threshold was not critical for the overall performance of the algorithm. For the sake of stability, we use multiple window sizes (100, 150, 200 and 250 ms) and combine the results in the following way. For each window size k, we have an array of candidate points (x^k_cand). Then, we create a vector of weights (w_k ∈ R^n) whose values follow a Gaussian centered around each candidate point of each window. Mathematically:

w_k[n] = Σ_{c ∈ x^k_cand} exp( −(n − c)² / (2σ²) ),  for n ∈ B^k_cand

where B^k_cand denotes the vicinity of each point in x^k_cand for window k. Finally, the RT is given by:

RT = argmax_n Σ_k w_k[n]

We have marked the key points that we used as the raw data for our pipeline. For this animal, we track a total of 73 key points. Some of them will be aggregated into centroids of interest, to minimize the influence of noise. B) Same as A), but for the mouse. In this case, we also have a separate model for tracking pupil changes. C) Two example traces of a common feature for both species, over time. D) As described in the Methods, we use trial summaries for each of the face features of interest. Here, we show all of them, after preprocessing, for an arbitrary selection of 300 trials.
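The procedure above can be sketched as follows. This is a simplified illustration, not the exact implementation: windows are given in samples rather than milliseconds, scipy's `find_peaks` stands in for the explicit prominence computation, and the synthetic trace and parameter values are ours:

```python
import numpy as np
from scipy.signal import find_peaks

def rt_candidates(lateral, win, prominence=0.1):
    """Fit a line on one window, evaluate R^2 on the adjacent window,
    and return local maxima of -R^2 (candidate deviation points)."""
    neg_r2 = np.full(len(lateral), -1.0)
    for t in range(win, len(lateral) - win):
        x_fit = np.arange(t - win, t)
        slope, icpt = np.polyfit(x_fit, lateral[t - win:t], 1)
        x_eval = np.arange(t, t + win)
        obs = lateral[t:t + win]
        pred = slope * x_eval + icpt
        ss_res = np.sum((obs - pred) ** 2)
        ss_tot = np.sum((obs - obs.mean()) ** 2)
        r2 = 1 - ss_res / ss_tot if ss_tot > 0 else 1.0
        neg_r2[t] = -r2
    peaks, _ = find_peaks(neg_r2, prominence=prominence)
    return peaks

def combine_candidates(cands_per_win, n, sigma=10.0):
    """Sum Gaussian bumps centered on each window's candidates and
    take the argmax as the RT sample index."""
    grid = np.arange(n)
    weight = np.zeros(n)
    for cands in cands_per_win:
        for c in cands:
            weight += np.exp(-(grid - c) ** 2 / (2 * sigma ** 2))
    return int(np.argmax(weight))

# Synthetic trace: flat, then a lateral turn starting at sample 200.
t = np.arange(400)
lateral = np.where(t < 200, 0.0, 0.05 * (t - 200))
cands = [rt_candidates(lateral, w) for w in (20, 30)]
rt_idx = combine_candidates(cands, len(lateral))
```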
Once the data streams were aligned, we computed the median location (x, y) of each facial point over the 250 ms window before the stimuli appeared on the dome. This time window was chosen to make sure that all of the facial expressions of the animals are due to internally generated processing, rather than stimulus processing. Different window sizes (specifically 200, 300 and 500 ms) did not yield any qualitative difference. In addition to the median location, we also computed the total velocity of each facial point.
For both species, we further computed the median pupil size over the same time window. Pupil size is a well-known indicator of arousal and cognitive load, and thus provides valuable information about the internal state of the animal.
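Per-trial feature summaries of this kind reduce to a few lines per key point; a minimal sketch (function name ours; the window length in frames is illustrative, roughly 15 frames ≈ 250 ms at 60 Hz):

```python
import numpy as np

def trial_summary(xy, stim_frame, win=15):
    """Summarize one facial key point for one trial: median (x, y)
    position and total velocity (path length) over the `win` frames
    before stimulus onset.

    xy : (n_frames, 2) array of tracked coordinates.
    """
    seg = xy[stim_frame - win:stim_frame]
    med = np.median(seg, axis=0)
    steps = np.diff(seg, axis=0)           # frame-to-frame displacement
    total_velocity = np.sum(np.linalg.norm(steps, axis=1))
    return med[0], med[1], total_velocity

# A stationary key point: zero total velocity, median at its location.
xy = np.tile([3.0, 4.0], (60, 1))
mx, my, v = trial_summary(xy, stim_frame=60)
```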
This resulted in a set of data points for each trial, corresponding to the median vertical and horizontal location, and total velocity, of each of the facial features. These data points serve as the predictors for the MSLR model.

Synthetic data and ground truth states. In order to validate the retrieval of states when we do not have access to ground-truth ones, we generated a time series of ground-truth emissions and states based on the given inputs (using the same input data as in the main text). To this end, we trained an MSLR model with a known number of states and sampled an emissions-and-states sequence from it. We aimed to recover the appropriate number of states with the correct temporal sequence, and to correctly predict the emissions. Figure S4A illustrates the input data (composed of session-concatenated mouse facial features, as described in the main text). In Figure S4C, we show that, once we have selected the appropriate number of states, the model's log-probability does peak at the ground-truth one (dashed vertical line). In Figure S4D, we show a comparison between the true and the inferred states, for some example trials. Although the temporal coincidence of the state transitions is very high, due to the stochastic nature of the model, some state labels might be permuted (i.e., state 1 in our model might correspond to state 0 in the ground-truth states). Therefore, in order to quantify state similarities and to account for state swapping, we one-hot encoded the true and predicted state sequences and correlated all pairs with each other (Figure S4D). There is an almost perfect match (ρ(s_true, s_pred) > 0.9) between the true and inferred states (the 99th percentile of the surrogate correlation distribution was 0.12).
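This label-matching step can be sketched in a few lines of numpy (function name ours):

```python
import numpy as np

def match_states(true_states, pred_states, n_states):
    """Correlate one-hot encodings of true and inferred state
    sequences to recover the label permutation."""
    eye = np.eye(n_states)
    t_oh = eye[np.asarray(true_states)]    # (n_trials, n_states)
    p_oh = eye[np.asarray(pred_states)]
    # corr[i, j]: correlation of true state i with predicted state j
    corr = np.corrcoef(t_oh.T, p_oh.T)[:n_states, n_states:]
    return corr.argmax(axis=1)   # best-matching predicted label per true state

true = [0, 0, 1, 1, 2, 2, 0, 1]
pred = [1, 1, 2, 2, 0, 0, 1, 2]   # identical sequence, labels rotated
mapping = match_states(true, pred, 3)
```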

Fig. S4. Synthetic states and emissions. A) Performance when varying the number of states. We are able to recover the number of states (vertical dashed line) that generated the ground-truth emissions. B) For the selected number of states, the log-probability of the fitted parameters converges to the ground-truth value (horizontal dashed line). C) Some example trials for the true and predicted states. State transitions are correctly captured, but state labels might be permuted. D) Temporal correlation between the one-hot encoded state arrays. There is an almost perfect match between the predicted and the true state arrays, up to a label permutation.
In this case, we used the same pipeline as detailed in the previous sections, but substituted the facial features at the current trial t with the RT of the previous trial (t − 1). As can be seen in Fig. S5, the facial-features model outperforms the ARHMM for all states, for both species, and for any number of internal states that we swept over. Nevertheless, the performance gap is smaller in mice than in macaques, consistent with the finding that mice are more history-dependent than macaques (see Fig. 5D).
Task performance and internal states. We were interested in investigating whether the inferred internal states were correlated with task performance, even though the model had not been trained on such information. We therefore used the predicted single-trial state probabilities to decode choice, using a simple logistic regression model with an L2 penalty term. After verifying that the model does indeed classify outcome beyond chance level (Fig. S6), we took the weight of each state as a proxy for how related it was to each outcome.

Fig. S7. Input variable correlation. Out of all of the original variables (in lighter colors), we end up discarding one per animal (Left Eyebrow [y], macaques; Eye movement, mice), given that they were highly multicollinear with some of the other predictors, as measured by the Variance Inflation Factor (VIF). After discarding them and recomputing the VIF, we did not find any alarming collinearity.
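The VIF screening used to discard multicollinear predictors can be reproduced with plain numpy; a sketch (VIF_j = 1/(1 − R²_j), where R²_j comes from regressing predictor j on all the others; function name and data are ours):

```python
import numpy as np

def vif(X):
    """Variance Inflation Factor for each column of X."""
    X = np.asarray(X, dtype=float)
    n, p = X.shape
    out = np.empty(p)
    for j in range(p):
        y = X[:, j]
        # regress column j on all remaining columns plus an intercept
        others = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        beta, *_ = np.linalg.lstsq(others, y, rcond=None)
        resid = y - others @ beta
        r2 = 1 - resid.var() / y.var()
        out[j] = 1.0 / max(1 - r2, 1e-12)
    return out

rng = np.random.default_rng(0)
a = rng.normal(size=200)
b = rng.normal(size=200)
# third column is nearly a copy of the first: strong multicollinearity
X = np.column_stack([a, b, a + 0.01 * rng.normal(size=200)])
vifs = vif(X)
```

Columns involved in a near-linear dependency receive very large VIFs, while an independent predictor stays close to 1.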

Supplementary figures
Results

A. Experimental set-up. To track and compare spontaneously occurring internal states of mice and macaques during the performance of the same naturalistic visual discrimination task, the animals were placed inside a custom-made spherical dome (Fig. 1A, top). On the inside of the dome, we projected a virtual reality (VR) environment using a custom-made toolbox called DomeVR (16). The monkeys navigated through the VR environment manually using a trackball; the mice ran on a spherical treadmill, the movements of which were translated into VR movements (for details, see Methods - Experimental Setup).

Two monkeys and seven mice were used in this study, comprising 18 and 29 experimental sessions (20459 and 12714 trials), respectively. The animals engaged in a simple, foraging-based two-choice perceptual decision task, in which they had to approach a target stimulus while avoiding a distractor stimulus, both of which were represented by natural leaf shapes integrated in a meadow landscape (Fig. 1A, bottom; see Methods - Behavioral paradigm and Behavioral Training). Their performance on this task was quantified first in terms of trial outcomes: hit (target stimulus reached), wrong (distractor stimulus reached), and miss (neither stimulus reached); as well as in reaction time (RT). For this, we identified turning points in the animals' running trajectories through the VR to define the moment when an animal decisively oriented itself towards one of the two potential targets (Fig. 1C; for details, see Methods - Reaction Time).

Fig. 1. Experimental setup and computational pipeline. A) Macaques and mice were seated inside a large dome, on the inside of which a VR was projected via a curved mirror (top). They were rewarded for moving towards a spike-shaped leaf rather than a round-shaped leaf (bottom). B) As the animals were engaged in the task, behavioural data were collected: movements of the trackball (top and bottom) and videos of their faces (middle). C) Trackball movements were translated into paths through the virtual environment (top and bottom), from which reaction times were determined (see Methods). Individual facial features were automatically detected from the videos and tracked over time (middle). D) Facial features entered two separate MSLR models (one for each species), which yielded, for every trial, a predicted reaction time and internal state probabilities.

identified hidden states more concretely to internal cognitive processing, we set out to investigate how each hidden state related to behavioural outcomes, starting with the RTs that the model was trained to predict. There are two potential scenarios for how the model might partition RT variability: on the one hand, it is possible that each hidden state covers the full range of RTs, but predicts them from a different constellation of facial features. Alternatively, each hidden state might 'specialize' in predicting specific ranges of RTs. For example, one hidden state might cover facial features that distinguish between fast and extremely fast RTs, while another state mainly predicts variations between slower RTs. This second scenario would make it more likely that the identified hidden states reflect genuinely distinct performance states. To distinguish between these scenarios, we plotted the overall state-specific RT distributions, pooling trials across all sessions and animals, for each hidden state (Fig. 4A; Fig. S15

shows the same plot for individual sessions and animals). The resulting distributions support the second scenario: while one hidden state (state B in both monkeys and mice) covered a rather broad range of RTs, all other states showed a distinct profile of response speeds. This implies that the hidden states relate to distinct performance regimes (in this case in terms of response speed), making them viable candidates for defining specific internal states of cognitive task processing. To further probe the possible link of our internal states to known cognitive processes, we related all hidden states to the three possible trial outcomes of the task (hit, wrong, and miss; see Methods - Task performance and internal states). Crucially, given that we trained the model to predict RTs, it never received any explicit information about trial outcome. Furthermore, RTs were only marginally related to trial outcomes (Fig. S1), so that trials with a specific RT would not be significantly more likely to result e.g. in a hit or a miss trial. Finally, as we only used information about facial features in the pre-stimulus phase of the trial to train the model, it cannot reflect stimulus features. Even though information about trial outcomes was not part of the MSLR model, the resulting hidden states were consistently predictive of specific trial outcomes (Fig. 4B

Fig. 4. Internal states and task performance. A) Splitting the RTs over internal states shows large diversity for both macaques (left) and mice (right), from fast reaction states to extremely slow ones. Individual dots reflect trials. B) Correlations of state probabilities with the three task outcomes (hit, wrong, miss), for macaques (left) and mice (right). Black boxes indicate the states most strongly associated with a certain task outcome. C) Conjunction of RT and excess likelihood of a hit outcome, for all states (blue circles, macaque; orange triangles, mouse).

S13 for a summarized visualization). One reason why hidden states can predict trial outcomes so accurately despite not being trained on them in any way might be that pre-trial facial features are mostly a trivial consequence of the animal's trial history. For example, facial features might mainly reflect an animal still drinking reward from the previous trial, which might in turn raise motivation to perform correctly in the upcoming trial. In this case, facial features would merely be a particularly convoluted way of quantifying the previous trial outcome and using it to predict upcoming performance, as has been achieved previously (57, 58). To account for this possibility, we trained an Auto-Regressive Hidden Markov Model (ARHMM) based on RTs (see Methods - ARHMM for details). As can be seen in Fig. S5, the facial-features model outperforms the ARHMM for all states, for both species.

As an extra control, we correlated each facial feature with the history of prominent task parameters: two related to the directly preceding trial (its outcome, which might affect motivation; and the location of its target, which might predict side biases), and two related to the overall session history (the cumulative amount of reward and the time passed since the start of the session, as proxies for satiety and fatigue).

Fig. 5. Informativeness of facial features. A) Predictor weights of the facial features for the macaque (top) and mouse (bottom) model in the hit, wrong and miss states (see black boxes in Fig. 4B). The central circle indicates a predictor weight of zero; inside this circle are negative predictor weights, outside are positive weights. Each state has its own characteristic facial-expression pattern. B) Variability of all facial features over states. Although some features contribute more than others, clearly all features contribute to the model distinguishing between the various internal states.

Fig. S1. Reaction Times. Distribution of reaction times for macaques (A) and mice (B), split by behavioural outcome; data are pooled over sessions (n = 18 and n = 28 for macaques and mice, respectively). The three distributions largely overlap.

Fig. S1 shows the distribution of RTs split by trial outcome over sessions, for both species; Fig. S2 shows example paths and detected RTs for both species.

Fig. S2. Example VR paths. Paths are colored according to the normalized running speed. A) Example paths for macaques, with the detected RT as circles. B) Same, but for mice.

Facial features. The extraction of the predictors for the MSLR model involves a multi-step process to go from continuous recording time (60 Hz for video data and 500 Hz for the macaque eye-tracker) to trial-based predictions. First, we chose several points of interest on the animals' faces, which are then automatically identified and tracked over time using DeepLabCut (17):

Fig. S5. Comparison of the MSLR face model and the reaction-time Auto-Regressive HMM. Both face-feature models outperform their autoregressive counterparts, for any number of internal states that we swept over. Nevertheless, the performance gap is smaller in mice than in macaques. This is consistent with the finding that mice are more history-dependent than macaques (see Fig. 5D).
Fig. S6. Inferred state probabilities decode outcome beyond chance. We used a normalized version of mutual information that takes chance level into account, setting chance to 0.

Fig. S15. RTs over states for all sessions, for the held-out test set. For mice, distributions are shown in orange; for macaques, in green-blue.

Fig. S16. Most likely states for all sessions, for the held-out test set. For mice, states are shown in orange; for macaques, states are shown in green-blue.

). For … space was comparable across species (Fig. 4C). Both mouse and monkey data seem to generate a hidden state (state A in mice, state C in monkeys) that is associated with fast RTs and largely successful trial outcomes - a performance regime that could be interpreted as globally attentive. Conversely, states C and A in mice and monkeys, respectively, reflect rather … engagement.

Table 1. Parameter values for the Bayesian parameter-optimization procedure. These are independently explored for each number of internal states of the HMM.
Markov-Switching Linear Regression. Markov-Switching Linear Regression (MSLR) models, which we ran using Dynamax (76), are a powerful tool for modeling time-series data that exhibit regime-switching behaviour, where the underlying