bioRxiv
Reaching to sounds in virtual reality: A multisensory-motor approach to re-learn sound localisation

Chiara Valzolgher, Grègoire Verdelet, Romeo Salemme, Luigi Lombardi, Valerie Gaveau, Alessandro Farné, Francesco Pavani
doi: https://doi.org/10.1101/2020.03.23.003533
Chiara Valzolgher (1, 2) — correspondence: chiara.valzolgher@inserm.fr
Grègoire Verdelet (1)
Romeo Salemme (1, 4)
Luigi Lombardi (3)
Valerie Gaveau (1)
Alessandro Farné (1, 2, 4)
Francesco Pavani (1, 2, 3)

1 IMPACT, Centre de Recherche en Neuroscience Lyon (CRNL), France
2 Centre for Mind/Brain Sciences (CIMeC), University of Trento, Italy
3 Department of Psychology and Cognitive Sciences (DiPSCo), University of Trento, Italy
4 Neuro-immersion, Centre de Recherche en Neuroscience Lyon (CRNL), France

ABSTRACT

When localising sounds in space, the brain relies on internal models that specify the correspondence between the auditory input reaching the ears, the initial head position, and coordinates in external space. These models can be updated throughout life, setting the basis for re-learning spatial hearing abilities in adulthood. This is particularly important for individuals who experience long-term auditory alterations (e.g., hearing loss, hearing aids, cochlear implants), as well as for individuals who have to adapt to novel auditory cues when listening in virtual auditory environments. Until now, several methodological constraints have limited our understanding of the mechanisms involved in spatial hearing re-learning. In particular, the potential roles of active listening and head movements have remained largely overlooked. Here, we overcome these limitations by using a novel methodology, based on virtual reality and real-time kinematic tracking, to study the role of active multisensory-motor interactions with sounds in the updating of sound-space correspondences. Participants were immersed in a virtual reality scenario showing 17 speakers at ear level. From each visible speaker, a free-field real sound could be generated. Two separate groups of participants localised the sound source either by reaching to it or by naming its perceived position, under binaural or monaural listening. Participants were free to move their head during the task and received audio-visual feedback on their performance. Results showed that both groups compensated rapidly for the short-term auditory alteration caused by monaural listening, improving sound localisation performance across trials. Crucially, compared to naming, reaching to the sounds induced faster and larger sound localisation improvements. Furthermore, more accurate sound localisation was accompanied by progressively wider head movements. These two measures were significantly correlated selectively for the Reaching group.
In conclusion, reaching to sounds in an immersive VR context proved most effective for updating altered spatial hearing. Head movements played an important role in this fast updating, pointing to the importance of active listening when implementing training protocols for improving spatial hearing.

HIGHLIGHTS

  • We studied spatial hearing re-learning using virtual reality and kinematic tracking
  • Audio-visual feedback combined with active listening improved monaural sound localisation
  • Reaching to sounds improved performance more than naming sounds
  • Monaural listening triggered compensatory head-movement behaviour
  • Head-movement behaviour correlated with re-learning only when reaching to sounds

Copyright: The copyright holder for this preprint is the author/funder, who has granted bioRxiv a license to display the preprint in perpetuity. It is made available under a CC-BY-NC-ND 4.0 International license.
Posted March 25, 2020.
Subject Area: Neuroscience