
Geometry of abstract learned knowledge in the hippocampus

Abstract

Hippocampal neurons encode physical variables [1–7] such as space [1] or auditory frequency [6] in cognitive maps [8]. In addition, functional magnetic resonance imaging studies in humans have shown that the hippocampus can also encode more abstract, learned variables [9–11]. However, their integration into existing neural representations of physical variables [12,13] is unknown. Here, using two-photon calcium imaging, we show that individual neurons in the dorsal hippocampus jointly encode accumulated evidence with spatial position in mice performing a decision-making task in virtual reality [14–16]. Nonlinear dimensionality reduction [13] showed that population activity was well described by approximately four to six latent variables, which suggests that neural activity is constrained to a low-dimensional manifold. Within this low-dimensional space, both physical and abstract variables were jointly mapped in an orderly manner, creating a geometric representation that we show is similar across mice. The existence of conjoined cognitive maps suggests that the hippocampus performs a general computation: the creation of task-specific low-dimensional manifolds that contain a geometric representation of learned knowledge.


Fig. 1: Imaging of CA1 neural activity in mice performing the accumulating towers task.
Fig. 2: CA1 neurons jointly encode the position of the mice and accumulated evidence in an evidence-accumulation task.
Fig. 3: Geometric representation of task variables on low-dimensional neural manifolds.
Fig. 4: Sequential activity of CA1 neurons in single trials is predictive of behaviour and explained by the manifold.


Data availability

The datasets generated in this study are available from the corresponding authors on reasonable request. Source data are provided with this paper.

Code availability

The code used for all analyses in this study is available on GitHub (https://github.com/BrainCOGS/HPC_manifolds). All other code is available on reasonable request.
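For readers who want a feel for the general approach before exploring the repository, the sketch below embeds a neuron × time activity matrix into a small number of latent dimensions. It is purely illustrative: it uses scikit-learn's Isomap [45] as a stand-in for the MIND algorithm [13] actually used in the paper, and the array names, sizes and parameters are assumptions rather than values from the repository.

```python
# Minimal sketch (not from the HPC_manifolds repository): embedding population
# activity into a few latent dimensions, with Isomap as a generic stand-in for
# the MIND algorithm used in the paper.
import numpy as np
from sklearn.manifold import Isomap

rng = np.random.default_rng(0)

# Hypothetical data: n_timepoints x n_neurons matrix of dF/F values.
n_timepoints, n_neurons = 5000, 300
dff = rng.poisson(0.2, size=(n_timepoints, n_neurons)).astype(float)

# Embed into d latent dimensions (the paper reports ~4-6 for the towers task).
embedding = Isomap(n_neighbors=15, n_components=5).fit_transform(dff)
print(embedding.shape)  # (5000, 5): one 5-D latent state per imaging frame
```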

References

1. O’Keefe, J. & Dostrovsky, J. The hippocampus as a spatial map. Preliminary evidence from unit activity in the freely-moving rat. Brain Res. 34, 171–175 (1971).
2. Frank, L. M., Brown, E. N. & Wilson, M. Trajectory encoding in the hippocampus and entorhinal cortex. Neuron 27, 169–178 (2000).
3. Wood, E. R., Dudchenko, P. A., Robitsek, R. J. & Eichenbaum, H. Hippocampal neurons encode information about different types of memory episodes occurring in the same location. Neuron 27, 623–633 (2000).
4. Eichenbaum, H., Kuperstein, M., Fagan, A. & Nagode, J. Cue-sampling and goal-approach correlates of hippocampal unit activity in rats performing an odor-discrimination task. J. Neurosci. 7, 716–732 (1987).
5. Herzog, L. E. et al. Interaction of taste and place coding in the hippocampus. J. Neurosci. 39, 3057–3069 (2019).
6. Aronov, D., Nevers, R. & Tank, D. W. Mapping of a non-spatial dimension by the hippocampal–entorhinal circuit. Nature 543, 719–722 (2017).
7. Taxidis, J. et al. Differential emergence and stability of sensory and temporal representations in context-specific hippocampal sequences. Neuron 108, 984–998.e9 (2020).
8. O’Keefe, J. & Nadel, L. The Hippocampus as a Cognitive Map (Clarendon, 1978).
9. Schuck, N. W. & Niv, Y. Sequential replay of nonspatial task states in the human hippocampus. Science 364, eaaw5181 (2019).
10. Tavares, R. M. et al. A map for social navigation in the human brain. Neuron 87, 231–243 (2015).
11. Park, S. A., Miller, D. S., Nili, H., Ranganath, C. & Boorman, E. D. Map making: constructing, combining, and inferring on abstract cognitive maps. Neuron 107, 1226–1238.e8 (2020).
12. Rubin, A. et al. Revealing neural correlates of behavior without behavioral measurements. Nat. Commun. 10, 4745 (2019).
13. Low, R. J., Lewallen, S., Aronov, D., Nevers, R. & Tank, D. W. Probing variability in a cognitive map using manifold inference from neural dynamics. Preprint at https://doi.org/10.1101/418939 (2018).
14. Pinto, L. et al. An accumulation-of-evidence task using visual pulses for mice navigating in virtual reality. Front. Behav. Neurosci. 12, 36 (2018).
15. Koay, S. A., Thiberge, S., Brody, C. D. & Tank, D. W. Amplitude modulations of cortical sensory responses in pulsatile evidence accumulation. eLife 9, e60628 (2020).
16. Engelhard, B. et al. Specialized coding of sensory, motor and cognitive variables in VTA dopamine neurons. Nature 570, 509–513 (2019).
17. MacDonald, C. J., Lepage, K. Q., Eden, U. T. & Eichenbaum, H. Hippocampal “time cells” bridge the gap in memory for discontiguous events. Neuron 71, 737–749 (2011).
18. Pastalkova, E., Itskov, V., Amarasingham, A. & Buzsáki, G. Internally generated cell assembly sequences in the rat hippocampus. Science 321, 1322–1327 (2008).
19. Tolman, E. C. Cognitive maps in rats and men. Psychol. Rev. 55, 189–208 (1948).
20. Bellmund, J. L. S., Gärdenfors, P., Moser, E. I. & Doeller, C. F. Navigating cognition: spatial codes for human thinking. Science 362, eaat6766 (2018).
21. Eichenbaum, H. in Behavioral Neuroscience of Learning and Memory (eds Clark, R. E. & Martin, S. J.) 101–117 (Springer, 2018).
22. Constantinescu, A. O., O’Reilly, J. X. & Behrens, T. E. J. Organizing conceptual knowledge in humans with a gridlike code. Science 352, 1464–1468 (2016).
23. Gallego, J. A., Perich, M. G., Miller, L. E. & Solla, S. A. Neural manifolds for the control of movement. Neuron 94, 978–984 (2017).
24. Russo, A. A. et al. Motor cortex embeds muscle-like commands in an untangled population response. Neuron 97, 953–966.e8 (2018).
25. Chaudhuri, R., Gerçek, B., Pandey, B., Peyrache, A. & Fiete, I. The intrinsic attractor manifold and population dynamics of a canonical cognitive circuit across waking and sleep. Nat. Neurosci. 22, 1512–1520 (2019).
26. Eichenbaum, H. & Cohen, N. J. Can we reconcile the declarative memory and spatial navigation views on hippocampal function? Neuron 83, 764–770 (2014).
27. Recanatesi, S. et al. Predictive learning as a network mechanism for extracting low-dimensional latent space representations. Nat. Commun. 12, 1417 (2021).
28. Aronov, D. & Tank, D. W. Engagement of neural circuits underlying 2D spatial navigation in a rodent virtual reality system. Neuron 84, 442–456 (2014).
29. Dombeck, D. A., Harvey, C. D., Tian, L., Looger, L. L. & Tank, D. W. Functional imaging of hippocampal place cells at cellular resolution during virtual navigation. Nat. Neurosci. 13, 1433–1440 (2010).
30. Harvey, C. D., Coen, P. & Tank, D. W. Choice-specific sequences in parietal cortex during a virtual-navigation decision task. Nature 484, 62–68 (2012).
31. Brunton, B. W., Botvinick, M. M. & Brody, C. D. Rats and humans can optimally accumulate evidence for decision-making. Science 340, 95–98 (2013).
32. Gold, J. I. & Shadlen, M. N. The neural basis of decision making. Annu. Rev. Neurosci. 30, 535–574 (2007).
33. Gill, P. R., Mizumori, S. J. Y. & Smith, D. M. Hippocampal episode fields develop with learning. Hippocampus 21, 1240–1249 (2011).
34. McKenzie, S. et al. Hippocampal representation of related and opposing memories develop within distinct, hierarchically organized neural schemas. Neuron 83, 202–215 (2014).
35. Howard, M. W., Luzardo, A. & Tiganj, Z. Evidence accumulation in a Laplace domain decision space. Comput. Brain Behav. 1, 237–251 (2018).
36. Pnevmatikakis, E. A. et al. Simultaneous denoising, deconvolution, and demixing of calcium imaging data. Neuron 89, 285–299 (2016).
37. Pnevmatikakis, E. A. & Giovannucci, A. NoRMCorre: an online algorithm for piecewise rigid motion correction of calcium imaging data. J. Neurosci. Methods 291, 83–94 (2017).
38. Skaggs, W. E., McNaughton, B. L. & Gothard, K. M. An information-theoretic approach to deciphering the hippocampal code. In Advances in Neural Information Processing Systems 5 (eds Hanson, S. J. et al.) 1030–1037 (Morgan-Kaufmann, 1993).
39. Grassberger, P. & Procaccia, I. Measuring the strangeness of strange attractors. Physica D 9, 189–208 (1983).
40. Stachenfeld, K. L., Botvinick, M. M. & Gershman, S. J. The hippocampus as a predictive map. Nat. Neurosci. 20, 1643–1653 (2017).
41. Gauthier, J. L. & Tank, D. W. A dedicated population for reward coding in the hippocampus. Neuron 99, 179–193.e7 (2018).
42. Pinto, L. et al. Task-dependent changes in the large-scale dynamics and necessity of cortical regions. Neuron 104, 810–824.e9 (2019).
43. Domnisoru, C., Kinkhabwala, A. A. & Tank, D. W. Membrane potential dynamics of grid cells. Nature 495, 199–204 (2013).
44. Rich, P. D., Liaw, H.-P. & Lee, A. K. Large environments reveal the statistical structure governing hippocampal representations. Science 345, 814–817 (2014).
45. Tenenbaum, J. B., de Silva, V. & Langford, J. C. A global geometric framework for nonlinear dimensionality reduction. Science 290, 2319–2323 (2000).
46. Yu, B. M. et al. Gaussian-process factor analysis for low-dimensional single-trial analysis of neural population activity. J. Neurophysiol. 102, 614–635 (2009).


Acknowledgements

We thank A. Song and S. Thiberge for assistance with two-photon imaging, S. Stein and S. Baptista for technical support with animal training, M. L. Ioffe for providing code and E. M. Diamanti and B. Engelhard for discussions. This work was supported by NIH grants U01NS090541, U19NS104648 and F32MH119749, as well as the Simons Collaboration on the Global Brain.

Author information


Contributions

E.H.N., C.D.B. and D.W.T. designed the experiments. E.H.N. and N.W.F. performed the experiments. E.H.N., M.S. and N.W.F. analysed the data. S.A.K. wrote the code for the processing of the calcium-imaging data. L.P. provided the code for behavioural analysis. J.L.G. provided the code for mutual information calculations. M.S., R.J.L. and S.L. developed the MIND algorithm. E.H.N., M.S., C.D.B. and D.W.T. wrote the paper.

Corresponding authors

Correspondence to Carlos D. Brody or David W. Tank.

Ethics declarations

Competing interests

The authors declare no competing interests.

Additional information

Peer review information: Nature thanks the anonymous reviewers for their contribution to the peer review of this work.

Publisher’s note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Extended data figures and tables

Extended Data Fig. 1 Characterization of CA1 neural variability in the accumulating towers task.

a, Each heat map represents one neuron and shows the trial-by-trial activity of that neuron in the towers task for left-choice trials. Each row in each heat map is the ΔF/F (normalized within each neuron) of the neuron in that trial. b, Same as in a, but for the alternation task. Note that the single-trial activity appears more variable in the towers task and more reliable in the alternation task, consistent with the finding that evidence is also represented by neurons in the towers task. c, Neural activity (ΔF/F normalized within each neuron) of cells significantly encoding evidence, sorted by activity in half the trials (top) and plotted using the same sorting in the other half of the trials (bottom).
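The cross-validated sorting in panel c can be illustrated with a short sketch. The example below assumes cells are ordered by the peak of their trial-averaged activity in one half of the trials and that the same order is then applied to the held-out half; the array names and sizes are illustrative, not taken from the paper's Methods.

```python
# Sketch of cross-validated sequence sorting: order cells by peak location in
# one half of the trials, then apply the same order to the held-out half.
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data: trials x neurons x position-bins array of normalized dF/F.
n_trials, n_neurons, n_bins = 60, 100, 50
activity = rng.random((n_trials, n_neurons, n_bins))

half1, half2 = activity[::2], activity[1::2]            # split trials in half
order = np.argsort(half1.mean(axis=0).argmax(axis=1))   # sort by peak position

sorted_half2 = half2.mean(axis=0)[order]  # held-out trial average, same order
print(sorted_half2.shape)                 # (100, 50): neurons x position bins
```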


Extended Data Fig. 2 Place fields in evidence-by-position space.

a, Each heat map shows the mean ΔF/F of a neuron with significant mutual information in E × Y space. b, Scatterplot of the mutual information in RE × Y space versus E × Y space for each cell with significant information in E × Y space (n = 917 neurons). RE is randomized evidence. c, Same as in b, but for E × RY space versus E × Y space. RY is randomized position. d, In total, 29% of imaged neurons had significant mutual information in E × Y space, whereas 16% had significant mutual information only for position and 6% had significant mutual information only for evidence. e, Of the cells with significant mutual information in E × Y space, 89.9% had significantly more information in E × Y space than just place or evidence information alone, whereas 9.8% could not be differentiated from place cells and 0.3% could not be differentiated from evidence cells (Methods). f, The probability of a cell having significant mutual information in E × Y space is significantly greater than the joint probability of a cell being a place cell and a cell being an evidence cell. Two-tailed Wilcoxon signed-rank test, *P = 0.016; n = 7 mice; data are mean ± s.e.m. g, Cells with significant mutual information in E × Y space had 1.7 ± 0.03 (mean ± s.e.m.) firing fields (n = 917 cells).
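The E × Y mutual information and its randomized-variable controls can be sketched as follows. This is an illustrative stand-in only: it uses a Skaggs-style information score [38] on coarse joint bins, with hypothetical data and without the smoothing and shuffle-based significance testing described in the Methods.

```python
# Sketch of a Skaggs-style information score over joint evidence x position
# (E x Y) bins, with a randomized-evidence control (RE x Y). Binning and the
# significance procedure from the paper's Methods are not reproduced here.
import numpy as np

def skaggs_info(activity, bin_ids, n_bins):
    """Bits per unit activity between a dF/F trace and a discrete bin label."""
    occupancy = np.bincount(bin_ids, minlength=n_bins) / len(bin_ids)
    mean_rate = activity.mean()
    info = 0.0
    for b in range(n_bins):
        in_bin = bin_ids == b
        if occupancy[b] == 0 or not in_bin.any():
            continue
        rate_b = activity[in_bin].mean()
        if rate_b > 0:
            info += occupancy[b] * (rate_b / mean_rate) * np.log2(rate_b / mean_rate)
    return info

rng = np.random.default_rng(2)
n_frames = 10000
dff = rng.gamma(1.0, 0.5, n_frames)          # hypothetical dF/F trace
pos_bin = rng.integers(0, 10, n_frames)      # position (Y) in 10 bins
ev_bin = rng.integers(0, 5, n_frames)        # evidence (E) in 5 bins
joint_bin = pos_bin * 5 + ev_bin             # E x Y joint bin index

info_exy = skaggs_info(dff, joint_bin, 50)
info_rexy = skaggs_info(dff, pos_bin * 5 + rng.permutation(ev_bin), 50)
print(info_exy, info_rexy)  # compare E x Y information to the randomized-E control
```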


Extended Data Fig. 3 Dimensionality of an earlier training stage.

During training on the towers task, mice proceed through several stages. In one of these stages, mice perform a task nearly identical to the towers task, except that visual cues appear on only one side of the maze. a, The intrinsic dimensionality of the one-side cues task is approximately 4.2 (4.0–4.5; 95% bootstrapped confidence interval). n = 4 mice; data are mean ± 95% bootstrapped confidence intervals. Grey dashed lines illustrate the slope expected for a four-dimensional manifold. b, The intrinsic dimensionality of the one-side cues task is significantly lower than the dimensionality of the towers task. Two-tailed Wilcoxon rank-sum test, *P = 0.042; n = 7 mice (towers task) and n = 4 mice (one-side cues task); data are mean ± s.e.m. c, Choice-specific place cell sequences in the one-side cues task, similar to Fig. 2a. Sequences are divided into left-choice-preferring (top), right-choice-preferring (middle) and non-preferring (bottom) cells. Data are split between left-choice trials (left) and right-choice trials (right). Cells are shown in the same order within each row group. ΔF/F was normalized within each neuron.
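A slope-based intrinsic dimensionality estimate of the kind hinted at in panel a (grey dashed lines) can be sketched with a correlation-dimension calculation in the spirit of ref. 39. The estimator, radii and bootstrap procedure used in the paper live in the Methods and the HPC_manifolds repository; everything below is an illustrative assumption on synthetic data.

```python
# Sketch of a correlation-dimension style estimate: the slope of log C(r)
# versus log r at small radii approximates the intrinsic dimensionality.
import numpy as np
from scipy.spatial.distance import pdist

rng = np.random.default_rng(3)

# Hypothetical latent states: points on a noisy 4-D manifold embedded in 100-D.
latent = rng.normal(size=(2000, 4))
points = latent @ rng.normal(size=(4, 100)) + 0.01 * rng.normal(size=(2000, 100))

dists = pdist(points)
radii = np.logspace(np.log10(np.percentile(dists, 1)),
                    np.log10(np.percentile(dists, 20)), 10)
corr_integral = np.array([(dists < r).mean() for r in radii])

# Local slope of log C(r) versus log r approximates the dimensionality.
slope = np.polyfit(np.log(radii), np.log(corr_integral), 1)[0]
print(round(slope, 1))
```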


Extended Data Fig. 4 Cross-validation methods and results demonstrating how neural activity from single neurons is captured by coordinated population activity.

a, Illustration of the cross-validation method to calculate the decoding index in Fig. 3c. Data are split for training (solid colours) and testing (shaded colours). With the training data, a map is obtained from ΔF/F to latent dimensions and back. This map is then evaluated on the test data. b, To assess the performance of the map, we concatenate the neuron × time data in the test block and the reconstructed test block into two vectors and calculate the correlation coefficient between them. The correlation coefficient was averaged across 10 individually held-out trials to yield the decoding index. c, Illustration of a similar analysis in which the activity of a single cell is decoded from a manifold fit to the rest of the neural population. One neuron (red) is removed before using MIND to obtain a set of latents. Next, in the training data (solid green), a map is calculated from the manifold to the activity of the held-out neuron. The map is then used to predict the test data (shaded green). The correlation coefficient is calculated as in b and averaged over five folds and 10 individually held-out neurons to yield the decoding index. d, Example of neural activity from 40 individually reconstructed neurons, in which the activity of each neuron was decoded from the five-dimensional manifold fit to the other cells following procedures in c (comparable to Fig. 3b, for which the method in a and b was used). ΔF/F is normalized to the maximum ΔF/F in the window shown. e, Cross-validated correlation coefficients between the activity of individual neurons in the real and reconstructed data, in which the reconstruction was accomplished with d-dimensional embeddings of the neural manifold. The decoding index is the correlation coefficient between the predicted and real ΔF/F of the held-out ROIs. n = 7 mice; data are mean ± s.e.m.
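The decoding index described in a and b can be illustrated with a short sketch. The example below uses PCA as a simple stand-in for MIND's forward and backward maps between ΔF/F and the latent space, with synthetic data and illustrative names; the paper's actual splitting by trial and mapping method are described in the Methods and the repository.

```python
# Sketch of the cross-validated reconstruction idea in panels a-b, with PCA as
# a stand-in for MIND's map between dF/F and latents.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(4)

n_train, n_test, n_neurons, d = 4000, 1000, 200, 5
latent = rng.normal(size=(n_train + n_test, d))
dff = latent @ rng.normal(size=(d, n_neurons)) \
      + 0.1 * rng.normal(size=(n_train + n_test, n_neurons))
train, test = dff[:n_train], dff[n_train:]

# Fit the map dF/F -> latents -> dF/F on training data, then apply to test data.
pca = PCA(n_components=d).fit(train)
reconstructed = pca.inverse_transform(pca.transform(test))

# Decoding index: correlation between the concatenated real and reconstructed
# test data (averaged over held-out trials in the paper).
decoding_index = np.corrcoef(test.ravel(), reconstructed.ravel())[0, 1]
print(round(decoding_index, 3))
```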


Extended Data Fig. 5 Task manifold and neural manifold encode different variables.

a, The visual space of the accumulating towers task across a representative session. Top, the mean luminance of the virtual-reality visual field as a function of position in the T-maze. Bottom, four example frames. Note the high variability of luminance during the cue period, where bright towers are randomly presented on the left and right walls. b, Performing dimensionality reduction on the time series of the pixel values in the raw video stream using MIND reveals a low-dimensional manifold, reflecting the visual sensory structure of the accumulating towers task. Plotting luminance (top) and evidence (bottom) on the manifold reveals that luminance is represented as a smooth gradient, whereas evidence requires memory and is thus absent on the task manifold. c, Same as in b, but showing the neural manifold obtained from the mouse that ran the session (Fig. 3f). Note the absence of a luminance representation, but the emergence of evidence.


Extended Data Fig. 6 Decoding other variables from the neural manifold.

a, Similar to Fig. 3f, the view angle is plotted as colour on the three-dimensional embedding of the manifold. b, The five latent variables of the neural manifold embedded in a five-dimensional Euclidean space are better predicted by GPR from view angle and evidence values than from view angle and shuffled evidence values. Two-tailed Wilcoxon signed-rank test, *P = 0.016; n = 7 mice; data are mean ± s.e.m. Decoding index is the correlation coefficient between the predicted manifold values and true manifold values, averaged over the five dimensions of the manifold. c, Same as in b, but for decoding manifold values using position and velocity. The addition of velocity to position information significantly improves the decoding of manifold values. Two-tailed Wilcoxon signed-rank test, *P = 0.016; n = 7 mice; data are mean ± s.e.m. d, Same as in b, but for decoding using position and time. The addition of time information does not significantly increase how well manifold values are decoded. Two-tailed Wilcoxon signed-rank test, P = 0.30 (ns, not significant); n = 7 mice; data are mean ± s.e.m. e, We used PCA to separate the correlated and orthogonal dimensions between evidence and view angle and decoded both PC1 (correlated) and PC2 (orthogonal) from the five-dimensional embedding of the manifold. n = 7 mice; data are mean ± s.e.m. The decoding index is the correlation coefficient between the predicted and true principal component values. f, The view angle is better decoded from the neural manifold (five-dimensional embedding) in the towers task (‘Tow’), when evidence is also present, than in the alternation task (‘Alt’) when evidence is not present. Two-tailed Wilcoxon rank-sum test, P = 0.07; n = 7 mice (towers task) and n = 7 mice (alternation task); data are mean ± s.e.m. The decoding index is the correlation coefficient between the predicted and true view angle values. g, Average view angle trajectories, separated between left- and right-choice trials, for the mice in the towers task (n = 7 mice; blue, thin lines) and the alternation task (n = 7 mice; red, thin lines). Thick lines represent averages across mice. h, Average view angle values in the towers task (n = 7 mice; blue, thin lines) and the alternation task (n = 7 mice; red, thin lines) over all trials. Thick lines and shaded area are mean ± 95% bootstrapped confidence interval. i, Accuracy in predicting the upcoming choice (left), the choice of the mouse in the previous trial (centre) and whether the previous trial was rewarded (right) from d-dimensional embeddings of the neural manifold. n = 7 mice; data are mean ± s.e.m.
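The GPR-based decoding comparison in panel b can be sketched as follows. This is an illustrative stand-in on synthetic data: the kernel choice, cross-validation scheme and per-mouse statistics are those of the paper's Methods, not of this sketch, and all variable names are assumptions.

```python
# Sketch of panel b: predict 5-D manifold coordinates from behavioural
# variables with Gaussian process regression (GPR), and compare against a
# control in which evidence is shuffled.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(5)

n_frames = 800
view_angle = rng.normal(size=n_frames)
evidence = rng.integers(-8, 9, n_frames).astype(float)

# Hypothetical 5-D manifold coordinates that depend on both variables.
behav = np.column_stack([view_angle, evidence])
manifold = behav @ rng.normal(size=(2, 5)) + 0.2 * rng.normal(size=(n_frames, 5))

def decoding_index(x, y, n_train=600):
    gpr = GaussianProcessRegressor(kernel=RBF(), normalize_y=True)
    gpr.fit(x[:n_train], y[:n_train])
    pred = gpr.predict(x[n_train:])
    # Correlation per manifold dimension, averaged (as in the paper's index).
    return np.mean([np.corrcoef(pred[:, i], y[n_train:, i])[0, 1]
                    for i in range(y.shape[1])])

real = decoding_index(np.column_stack([view_angle, evidence]), manifold)
shuf = decoding_index(np.column_stack([view_angle, rng.permutation(evidence)]), manifold)
print(round(real, 2), round(shuf, 2))
```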


Extended Data Fig. 7 Examples of sequences in CA1 neural activity.

a, Schematic to describe how doublets were defined. Orange and green are calcium traces of the cells under consideration. Grey is the calcium trace of a third cell. b, Twenty-five examples of doublets in a single session from one mouse. Each panel shows traces for trials in which the doublet was present. Orange traces are the neural activity from the first cell in the doublet; green traces are the neural activity from the second cell in the doublet. Heat maps represent the normalized neural activity of each cell across all trials in the session.
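A purely hypothetical sketch of counting ordered cell pairs (doublets) within trials is shown below, under the simplifying assumption that a doublet occurs whenever the first cell's transient onset precedes the second cell's onset within the same trial. The paper's actual criteria (panel a and Methods) impose additional conditions not reproduced here, and all data are synthetic.

```python
# Hypothetical sketch only: count trials in which cell a's transient onset
# precedes cell b's. The paper's doublet definition includes further criteria.
import numpy as np

rng = np.random.default_rng(6)

n_trials, n_neurons = 50, 20
# Hypothetical onset times (in frames) of the largest transient per trial;
# NaN marks trials in which the cell was not active.
onsets = rng.uniform(0, 100, size=(n_trials, n_neurons))
onsets[rng.random((n_trials, n_neurons)) < 0.5] = np.nan

def doublet_count(onsets, a, b):
    """Number of trials in which cell a fires before cell b (both active)."""
    both = ~np.isnan(onsets[:, a]) & ~np.isnan(onsets[:, b])
    return int(np.sum(onsets[both, a] < onsets[both, b]))

print(doublet_count(onsets, a=0, b=1))
```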


Extended Data Fig. 8 Neural activity generated by trajectories through the task.

a, Trajectories through evidence and position in one session of the task. Each thin line represents a fit with a cubic spline to a single trial; thick lines represent fits over all trials in which the mouse was supposed to turn left or right. b, The average change in position and evidence over time across trials in a single session for a set of representative states in evidence and position space. c, Conceptual diagram showing four trajectories through the neural manifold in right-choice trials. Two different doublets are activated because the trajectories pass through their firing fields. d, Shuffling trial IDs within right-choice trials will disrupt doublet activity while maintaining trial-averaged place and choice preferences of each cell.
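The single-trial spline fits in panel a and the trial-ID shuffle in panel d can be sketched as below. The data, spline settings and shuffle granularity are illustrative assumptions; the paper's exact procedures are in the Methods.

```python
# Sketch of panel a (cubic-spline fits of evidence versus position per trial)
# and panel d (shuffling trial IDs within a choice category).
import numpy as np
from scipy.interpolate import CubicSpline

rng = np.random.default_rng(7)

n_trials, n_samples = 30, 40
position = np.linspace(0, 300, n_samples)                 # cm along the maze
# Hypothetical accumulated-evidence traces for right-choice trials.
evidence = np.cumsum(rng.choice([-1, 0, 1], size=(n_trials, n_samples)), axis=1)

query = np.linspace(0, 300, 100)
splines = np.array([CubicSpline(position, ev)(query) for ev in evidence])
mean_trajectory = splines.mean(axis=0)                    # thick line in panel a

# Panel d-style control: independently permute trial IDs for each cell, which
# preserves each cell's trial-averaged tuning but breaks within-trial
# coordination (and hence doublets).
n_cells = 15
activity = rng.random((n_trials, n_cells, n_samples))     # trials x cells x bins
shuffled = np.stack([activity[rng.permutation(n_trials), c]
                     for c in range(n_cells)], axis=1)
print(mean_trajectory.shape, shuffled.shape)
```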


Extended Data Fig. 9 Choice-predictive sequences in CA1 neural activity.

a, Distribution of the values in Fig. 4b. b, Distribution of the values in Fig. 4c. c, Distribution of the values in Fig. 4f. d, ROC curves for sequential activity predicted from the five-dimensional embedding of the manifold compared to sequential activity in real data. n = 7 mice. e, Similar to a, but for triplets. Inset shows that triplets are significantly more likely to appear in the real data than in the shuffled dataset in which trial IDs were shuffled. Two-tailed paired Student’s t-test, real versus shuffled data, ****P < 0.0001; n = 34,737 triplets. f, Similar to c, but for triplets, showing that left- and right-choice-predictive triplets from real data are more predictive than triplets obtained from the shuffled dataset in which trial IDs were shuffled. Left inset, left-predictive, n = 1,135 triplets, two-tailed paired Student’s t-test, real versus shuffled data, ****P < 0.0001; right inset, right-predictive, n = 1,755 triplets, two-tailed paired Student’s t-test, real versus shuffled data, ****P < 0.0001. g, Left-choice-predictive triplets are significantly more predictive than instances in which the first two cells in the triplet fire, but the third does not, or when the third cell fires alone. n = 1,135 triplets; two-tailed paired Student’s t-tests, Bonferroni corrected; 1 → 2 → 3 versus 1 → 2 → not 3, ****P < 0.0001; 1 → 2 → 3 versus not 1 → not 2 → 3, ****P < 0.0001; 1 → 2 → not 3 versus not 1→ not 2 → 3, P = 0.78. h, Notably, for left-choice-predictive triplets, in trials in which cells 1 and 2 fire, but cell 3 does not, significantly more trials end with the mouse turning right than the same instances in the shuffled dataset. n = 1,135 triplets, two-tailed paired Student’s t-test, real versus shuffled data, ****P < 0.0001. i, Same as in g, but for right-choice-predictive triplets. n = 1,755 triplets; two-tailed paired Student’s t-tests, Bonferroni corrected; 1 → 2 → 3 versus 1 → 2 → not 3, ****P < 0.0001; 1 → 2 → 3 versus not 1 → not 2 → 3, ****P < 0.0001; 1 → 2 → not 3 versus not 1 → not 2 → 3, P = 1.0. j, Same as in h, but for right-choice-predictive triplets. n = 1,755 triplets; two-tailed paired Student’s t-test, real versus shuffled data, ****P < 0.0001. For box plots, boundaries: 25–75th percentiles; midline, median; whiskers, minimum–maximum.
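A panel d-style ROC analysis can be sketched by treating the real occurrence of a sequence in each trial as the label and a manifold-derived score as the prediction. The pipeline that generates predicted sequential activity from the five-dimensional embedding is not reproduced here; the scores below are synthetic and illustrative.

```python
# Sketch of an ROC comparison between real sequence occurrence and a
# hypothetical manifold-derived occurrence score.
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(8)

n_trials = 400
real_occurrence = rng.integers(0, 2, n_trials)                 # doublet present?
# Hypothetical manifold-based score, partially informative by construction.
predicted_score = 0.6 * real_occurrence + 0.4 * rng.random(n_trials)

fpr, tpr, _ = roc_curve(real_occurrence, predicted_score)
print(round(roc_auc_score(real_occurrence, predicted_score), 2))
```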


Supplementary information

Supplementary Information

This file contains the Supplementary Discussion and Supplementary References.

Reporting Summary

Video 1 Example trials from the mouse’s perspective

Videos from the viewpoint of a mouse showing behavioural trials from the accumulating towers task. The first trial shows an example of the navigational component of the task: the animal initially turns too early and has to correct itself before making it into the correct arm.

Video 2 Example manifold firing fields

Neural activity from 25 example cells recorded in the same imaging session, plotted on the three-dimensional embedding of the manifold. Similar to Fig. 3d, we show only ΔF/F values that are three standard deviations above the mean activity of each cell.

Video 3 Example left- and right-choice trajectories through manifold space

Two trajectories (left-choice trial, blue; right-choice trial, red) through manifold space in one imaging session from one animal. Position (left) and evidence (right) are plotted as colour on the three-dimensional embedding of the manifold. The bar at the bottom represents the time elapsed in the trial (cue and delay periods).



About this article


Cite this article

Nieh, E.H., Schottdorf, M., Freeman, N.W. et al. Geometry of abstract learned knowledge in the hippocampus. Nature 595, 80–84 (2021). https://doi.org/10.1038/s41586-021-03652-7

