Neural decoding of discriminative auditory object features depends on their socio-affective valence
Abstract
Human voices consist of specific patterns of acoustic features that are considerably enhanced during affective vocalizations. These acoustic features are presumably used by listeners to accurately discriminate between acoustically or emotionally similar vocalizations. Here we used high-field 7T functional magnetic resonance imaging in human listeners together with an experimental 'feature elimination' approach to investigate neural decoding of three important voice features across two affective valence categories (i.e., aggressive and joyful vocalizations). We found valence-dependent sensitivity to vocal pitch (f0) dynamics and to spectral high-frequency cues as early as the auditory thalamus. Furthermore, pitch dynamics and harmonics-to-noise ratio (HNR) showed overlapping, but again valence-dependent, sensitivity in tonotopic cortical fields during the neural decoding of aggressive and joyful vocalizations, respectively. For joyful vocalizations, we additionally found sensitivity to the HNR and to pitch dynamics in the inferior frontal cortex (IFC). The data thus indicate that several auditory regions were sensitive to multiple, rather than single, discriminative voice features. Moreover, some regions showed a partly valence-dependent hypersensitivity to certain features, such as sensitivity to pitch dynamics in core auditory regions and in the IFC for aggressive vocalizations, and sensitivity to high-frequency cues in auditory belt and parabelt regions for joyful vocalizations.
Domains
Neuroscience