VoICE: A semi-automated pipeline for standardizing vocal analysis across models

Sci Rep. 2015 May 28;5:10237. doi: 10.1038/srep10237.

Abstract

The study of vocal communication in animal models provides key insight into the neurogenetic basis for speech and communication disorders. Current methods for vocal analysis suffer from a lack of standardization, creating ambiguity in cross-laboratory and cross-species comparisons. Here, we present VoICE (Vocal Inventory Clustering Engine), an approach to grouping vocal elements by creating a high-dimensionality dataset through scoring spectral similarity between all vocalizations within a recording session. This dataset is then subjected to hierarchical clustering, generating a dendrogram that is pruned into meaningful vocalization "types" by an automated algorithm. When applied to birdsong, a key model for vocal learning, VoICE captures the known deterioration in acoustic properties that follows deafening, including altered sequencing. In a mammalian neurodevelopmental model, we uncover a reduced vocal repertoire in mice lacking the autism susceptibility gene, Cntnap2. VoICE will be useful to the scientific community as it can standardize vocalization analyses across species and laboratories.
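To illustrate the clustering step described above, the sketch below (Python, not the authors' implementation) represents each vocal element by its vector of spectral-similarity scores against every other element in a session, then hierarchically clusters that all-vs-all matrix and cuts the dendrogram into vocalization "types". The similarity function shown here (correlation of time-averaged spectra) and the cut threshold are illustrative assumptions only, standing in for the paper's spectral similarity scoring and automated pruning.

import numpy as np
from scipy.signal import spectrogram
from scipy.cluster.hierarchy import linkage, fcluster

def spectral_similarity(x, y, fs=44100):
    # Crude similarity proxy: correlation of time-averaged spectra
    # (an assumption for this sketch, not the paper's metric).
    _, _, Sx = spectrogram(x, fs=fs)
    _, _, Sy = spectrogram(y, fs=fs)
    px, py = Sx.mean(axis=1), Sy.mean(axis=1)
    n = min(len(px), len(py))
    return float(np.corrcoef(px[:n], py[:n])[0, 1])

def cluster_vocalizations(elements, fs=44100, cut_fraction=0.7):
    # Score every pair of vocal elements; each row of the resulting matrix is
    # one element's high-dimensional "similarity profile".
    n = len(elements)
    sim = np.ones((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            s = spectral_similarity(elements[i], elements[j], fs)
            sim[i, j] = sim[j, i] = s
    # Hierarchically cluster the profiles and cut the dendrogram into "types".
    Z = linkage(sim, method="ward")
    labels = fcluster(Z, t=cut_fraction * Z[:, 2].max(), criterion="distance")
    return labels  # integer type label per vocal element

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    fs = 44100
    t = np.arange(int(0.1 * fs)) / fs
    # Two synthetic "syllable types": 2 kHz and 6 kHz tones with added noise.
    elements = [np.sin(2 * np.pi * f * t) + 0.1 * rng.standard_normal(t.size)
                for f in (2000, 2000, 6000, 6000, 6000)]
    print(cluster_vocalizations(elements, fs))  # e.g. [1 1 2 2 2]

In this toy run the two tone frequencies fall into separate clusters, mirroring how similarity-profile rows for acoustically similar vocalizations group together under hierarchical clustering.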

Publication types

  • Research Support, N.I.H., Extramural
  • Research Support, Non-U.S. Gov't

MeSH terms

  • Animals
  • Automation
  • Cluster Analysis
  • Finches / physiology
  • Membrane Proteins / deficiency
  • Membrane Proteins / genetics
  • Mice
  • Mice, Inbred C57BL
  • Mice, Knockout
  • Nerve Tissue Proteins / deficiency
  • Nerve Tissue Proteins / genetics
  • Phenotype
  • Speech Acoustics*
  • Vocalization, Animal*

Substances

  • CNTNAP2 protein, mouse
  • Membrane Proteins
  • Nerve Tissue Proteins