Time cell encoding in deep reinforcement learning agents depends on mnemonic demands

Dongyan Lin, Blake A. Richards
doi: https://doi.org/10.1101/2021.07.15.452557
Dongyan Lin
1 Integrated Program in Neuroscience, McGill University, Montreal, Quebec, Canada
2 Mila, Montreal, Quebec, Canada

Blake A. Richards
1 Integrated Program in Neuroscience, McGill University, Montreal, Quebec, Canada
2 Mila, Montreal, Quebec, Canada
3 School of Computer Science, McGill University, Montreal, Quebec, Canada
4 Department of Neurology and Neurosurgery, Montreal Neurological Institute, McGill University, Montreal, Quebec, Canada
5 Learning in Machines and Brains Program, Canadian Institute for Advanced Research, Toronto, Ontario, Canada
For correspondence: blake.richards@mila.quebec

Abstract

The representation of “what happened when” is central to encoding episodic and working memories. Recently discovered hippocampal time cells are theorized to provide the neural substrate for such representations by forming distinct sequences that encode both elapsed time and sensory content. However, little work has directly addressed the extent to which the cognitive demands and temporal structure of experimental tasks affect the emergence and informativeness of these temporal representations. Here, we trained deep reinforcement learning (DRL) agents on a simulated trial-unique nonmatch-to-location (TUNL) task and analyzed the activity of their artificial recurrent units using neuroscience-based methods. We show that, after training, representations resembling both time cells and ramping cells (whose activity increases or decreases monotonically over time) emerged simultaneously in the same population of recurrent units. Furthermore, using simulated variations of the TUNL task that controlled for (1) memory demands during the delay period and (2) the temporal structure of the episodes, we show that memory demands are necessary for time cells to encode information about the sensory stimuli, whereas the temporal structure of the task had only a minimal effect on how time cells encode “what” and “when”. Our findings help to reconcile current discrepancies regarding the involvement of time cells in memory encoding by providing a normative framework. Our modelling results also provide concrete experimental predictions for future studies.
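
To make the abstract's analysis concrete, the sketch below shows one simple way a reader might screen recurrent units for time-cell-like tuning from their delay-period activations. This is a hypothetical illustration, not the authors' actual analysis pipeline: the array name `delay_activity`, its shape convention, and the peak-reliability criterion are all assumptions introduced here for demonstration.

```python
# Hypothetical sketch (not the authors' code): screen recurrent units for
# time-cell-like tuning during a delay period.
# Assumes `delay_activity` has shape (n_trials, n_timesteps, n_units),
# holding each unit's activation at every delay timestep of every trial.

import numpy as np

def peak_time_tuning(delay_activity, reliability_threshold=0.5):
    """Return each unit's trial-averaged temporal tuning curve, its peak
    timestep, and a crude 'time-cell-like' flag based on the trial-to-trial
    reliability of that peak."""
    n_trials, n_timesteps, n_units = delay_activity.shape

    # Trial-averaged tuning curve per unit: shape (n_timesteps, n_units)
    tuning = delay_activity.mean(axis=0)

    # Peak timestep of the averaged tuning curve for each unit
    peak_t = tuning.argmax(axis=0)

    # Reliability: fraction of single trials whose peak falls within
    # +/- 1 timestep of the trial-averaged peak.
    trial_peaks = delay_activity.argmax(axis=1)      # (n_trials, n_units)
    reliability = (np.abs(trial_peaks - peak_t) <= 1).mean(axis=0)

    is_time_cell_like = reliability >= reliability_threshold
    return tuning, peak_t, is_time_cell_like

# Example with random data standing in for recorded unit activations:
rng = np.random.default_rng(0)
fake_activity = rng.random((100, 40, 128))           # 100 trials, 40 steps, 128 units
tuning, peak_t, flags = peak_time_tuning(fake_activity)
print(f"{flags.sum()} of {flags.size} units pass the crude reliability screen")
```

Any serious analysis would additionally control for behavioral covariates and use statistical criteria (e.g., shuffle tests) rather than a fixed threshold; the sketch only conveys the basic idea of characterizing units by when, within the delay, they fire.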

Competing Interest Statement

The authors have declared no competing interest.

Copyright 
The copyright holder for this preprint is the author/funder, who has granted bioRxiv a license to display the preprint in perpetuity. It is made available under a CC-BY-NC-ND 4.0 International license.
Posted July 16, 2021.

Supplementary Material

Subject Area

  • Neuroscience