RT Journal Article
SR Electronic
T1 Optimal features for auditory categorization
JF bioRxiv
FD Cold Spring Harbor Laboratory
SP 411611
DO 10.1101/411611
A1 Shi Tong Liu
A1 Pilar Montes-Lourido
A1 Xiaoqin Wang
A1 Srivatsun Sadagopan
YR 2018
UL http://biorxiv.org/content/early/2018/12/16/411611.abstract
AB Humans and vocal animals use vocalizations (human speech or animal ‘calls’) to communicate with members of their species. A necessary function of auditory perception is to generalize across the high variability inherent in the production of these sounds and classify them into perceptually distinct categories (‘words’ or ‘call types’). Here, we demonstrate using an information-theoretic approach that production-invariant classification of calls can be achieved by detecting mid-level acoustic features. Starting from randomly chosen marmoset call features, we used a greedy search algorithm to determine the most informative and least redundant set of features necessary for call classification. Call classification at >95% accuracy could be accomplished using only 10–20 features per call type. Most importantly, predictions of the tuning properties of putative neurons selective for such features accurately matched some previously observed responses of superficial layer neurons in primary auditory cortex. Such a feature-based approach succeeded in categorizing calls of other species such as guinea pigs and macaque monkeys, and could also solve other complex classification tasks such as caller identification. Our results suggest that high-level neural representations of sounds are based on task-dependent features optimized for specific computational goals.