Revealing neuro-computational mechanisms of reinforcement learning and decision-making with the hBayesDM package

Woo-Young Ahn, Nathaniel Haines, Lei Zhang
doi: https://doi.org/10.1101/064287
1 Department of Psychology, The Ohio State University, Columbus, OH (Woo-Young Ahn, Nathaniel Haines)
2 Institute for Systems Neuroscience, University Medical Center Hamburg-Eppendorf, Hamburg, Germany (Lei Zhang)

Abstract

Reinforcement learning and decision-making (RLDM) provide a quantitative framework and computational theories with which we can disentangle psychiatric conditions into basic dimensions of neurocognitive functioning. RLDM offer a novel approach to assessing and potentially diagnosing psychiatric patients, and there is growing enthusiasm for RLDM and Computational Psychiatry among clinical researchers. Such a framework can also provide insights into the brain substrates of particular RLDM processes, as exemplified by model-based functional magnetic resonance imaging (fMRI) or electroencephalography (EEG). However, many researchers find the approach too technical and have difficulty adopting it for their research. Thus, there remains a critical need for a user-friendly tool that supports the wide dissemination of computational psychiatric methods. We introduce an R package called hBayesDM (hierarchical Bayesian modeling of Decision-Making tasks), which offers computational modeling of an array of RLDM tasks and social exchange games. The hBayesDM package offers state-of-the-art hierarchical Bayesian modeling, in which both individual and group parameters (i.e., posterior distributions) are estimated simultaneously in a mutually constraining fashion. At the same time, it is extremely user-friendly: users can perform computational modeling, output visualization, and Bayesian model comparison, each with a single line of code. Users can also extract the trial-by-trial latent variables (e.g., prediction errors) required for model-based fMRI/EEG. With the hBayesDM package, we anticipate that anyone with minimal knowledge of programming can take advantage of cutting-edge computational modeling approaches and investigate the underlying processes of, and interactions between, multiple decision-making systems (e.g., goal-directed, habitual, and Pavlovian). In this way, we expect that the hBayesDM package will contribute to the dissemination of advanced modeling approaches and enable a wide range of researchers to easily perform computational psychiatric research within their own populations.
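To illustrate the single-line workflow described in the abstract, the sketch below shows what a typical hBayesDM session might look like in R, using the go/no-go task functions as an example. The function names and arguments shown (e.g., gng_m1, gng_m2, data = "example", niter, nwarmup, nchain, ncore, rhat, printFit, allIndPars) follow the package documentation rather than the text above and may differ across package versions; this is a minimal sketch, not a canonical example from the paper, and should be checked against ?gng_m1 in the installed version.

# Load the package (install.packages("hBayesDM") if it is not yet installed)
library(hBayesDM)

# Fit model 1 of the orthogonalized go/no-go task to the bundled example data.
# Each task-model pair is a single function; MCMC settings are passed as arguments.
output1 <- gng_m1(data = "example", niter = 2000, nwarmup = 1000,
                  nchain = 4, ncore = 4)

# Visual and numerical convergence checks of the group-level posteriors
plot(output1)
rhat(output1)

# Fit a competing model on the same data and compare the two with LOOIC
output2 <- gng_m2(data = "example", niter = 2000, nwarmup = 1000,
                  nchain = 4, ncore = 4)
printFit(output1, output2)

# Individual-level posterior means, e.g., for correlating with symptom measures
head(output1$allIndPars)

Each model-fitting call, the diagnostic plot, and the model comparison is a single line, which is the sense in which the package is described as requiring one line of code per step.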

Copyright 
The copyright holder for this preprint is the author/funder, who has granted bioRxiv a license to display the preprint in perpetuity. It is made available under a CC-BY-NC-ND 4.0 International license.
Posted December 21, 2016.
Subject Area: Neuroscience