TY  - JOUR
T1  - Optimal features for auditory categorization
JF  - bioRxiv
DO  - 10.1101/411611
SP  - 411611
AU  - Liu, Shi Tong
AU  - Montes-Lourido, Pilar
AU  - Wang, Xiaoqin
AU  - Sadagopan, Srivatsun
Y1  - 2018/01/01
UR  - http://biorxiv.org/content/early/2018/12/16/411611.abstract
N2  - Humans and vocal animals use vocalizations (human speech or animal ‘calls’) to communicate with members of their species. A necessary function of auditory perception is to generalize across the high variability inherent in the production of these sounds and classify them into perceptually distinct categories (‘words’ or ‘call types’). Here, we demonstrate using an information-theoretic approach that production-invariant classification of calls can be achieved by detecting mid-level acoustic features. Starting from randomly chosen marmoset call features, we used a greedy search algorithm to determine the most informative and least redundant set of features necessary for call classification. Call classification at >95% accuracy could be accomplished using only 10–20 features per call type. Most importantly, predictions of the tuning properties of putative neurons selective for such features accurately matched some previously observed responses of superficial-layer neurons in primary auditory cortex. Such a feature-based approach succeeded in categorizing calls of other species, such as guinea pigs and macaque monkeys, and could also solve other complex classification tasks such as caller identification. Our results suggest that high-level neural representations of sounds are based on task-dependent features optimized for specific computational goals.
ER  - 