Adaptation to vocal expressions reveals multistep perception of auditory emotion

Research output: Contribution to journal › Article › peer-review

Standard

Adaptation to vocal expressions reveals multistep perception of auditory emotion. / Bestelmeyer, P.E.; Maurage, P.; Rouger, J. et al.
In: Journal of Neuroscience, Vol. 34, No. 24, 11.06.2014, p. 8098-105.

Research output: Contribution to journal › Article › peer-review

Harvard

Bestelmeyer, PE, Maurage, P, Rouger, J, Latinus, M & Belin, P 2014, 'Adaptation to vocal expressions reveals multistep perception of auditory emotion', Journal of Neuroscience, vol. 34, no. 24, pp. 8098-105. https://doi.org/10.1523/JNEUROSCI.4820-13.2014

APA

Bestelmeyer, P. E., Maurage, P., Rouger, J., Latinus, M., & Belin, P. (2014). Adaptation to vocal expressions reveals multistep perception of auditory emotion. Journal of Neuroscience, 34(24), 8098-105. https://doi.org/10.1523/JNEUROSCI.4820-13.2014

Vancouver

Bestelmeyer PE, Maurage P, Rouger J, Latinus M, Belin P. Adaptation to vocal expressions reveals multistep perception of auditory emotion. Journal of Neuroscience. 2014 Jun 11;34(24):8098-105. doi: 10.1523/JNEUROSCI.4820-13.2014

Author

Bestelmeyer, P.E. ; Maurage, P. ; Rouger, J. et al. / Adaptation to vocal expressions reveals multistep perception of auditory emotion. In: Journal of Neuroscience. 2014 ; Vol. 34, No. 24. pp. 8098-105.

RIS

TY - JOUR

T1 - Adaptation to vocal expressions reveals multistep perception of auditory emotion

AU - Bestelmeyer, P.E.

AU - Maurage, P.

AU - Rouger, J.

AU - Latinus, M.

AU - Belin, P.

PY - 2014/6/11

Y1 - 2014/6/11

N2 - The human voice carries speech as well as important nonlinguistic signals that influence our social interactions. Among these cues that impact our behavior and communication with other people is the perceived emotional state of the speaker. A theoretical framework for the neural processing stages of emotional prosody has suggested that auditory emotion is perceived in multiple steps (Schirmer and Kotz, 2006) involving low-level auditory analysis and integration of the acoustic information followed by higher-level cognition. Empirical evidence for this multistep processing chain, however, is still sparse. We examined this question using functional magnetic resonance imaging and a continuous carry-over design (Aguirre, 2007) to measure brain activity while volunteers listened to non-speech-affective vocalizations morphed on a continuum between anger and fear. Analyses dissociated neuronal adaptation effects induced by similarity in perceived emotional content between consecutive stimuli from those induced by their acoustic similarity. We found that bilateral voice-sensitive auditory regions as well as right amygdala coded the physical difference between consecutive stimuli. In contrast, activity in bilateral anterior insulae, medial superior frontal cortex, precuneus, and subcortical regions such as bilateral hippocampi depended predominantly on the perceptual difference between morphs. Our results suggest that the processing of vocal affect recognition is a multistep process involving largely distinct neural networks. Amygdala and auditory areas predominantly code emotion-related acoustic information while more anterior insular and prefrontal regions respond to the abstract, cognitive representation of vocal affect.

AB - The human voice carries speech as well as important nonlinguistic signals that influence our social interactions. Among these cues that impact our behavior and communication with other people is the perceived emotional state of the speaker. A theoretical framework for the neural processing stages of emotional prosody has suggested that auditory emotion is perceived in multiple steps (Schirmer and Kotz, 2006) involving low-level auditory analysis and integration of the acoustic information followed by higher-level cognition. Empirical evidence for this multistep processing chain, however, is still sparse. We examined this question using functional magnetic resonance imaging and a continuous carry-over design (Aguirre, 2007) to measure brain activity while volunteers listened to non-speech-affective vocalizations morphed on a continuum between anger and fear. Analyses dissociated neuronal adaptation effects induced by similarity in perceived emotional content between consecutive stimuli from those induced by their acoustic similarity. We found that bilateral voice-sensitive auditory regions as well as right amygdala coded the physical difference between consecutive stimuli. In contrast, activity in bilateral anterior insulae, medial superior frontal cortex, precuneus, and subcortical regions such as bilateral hippocampi depended predominantly on the perceptual difference between morphs. Our results suggest that the processing of vocal affect recognition is a multistep process involving largely distinct neural networks. Amygdala and auditory areas predominantly code emotion-related acoustic information while more anterior insular and prefrontal regions respond to the abstract, cognitive representation of vocal affect.

KW - NEUROIMAGING

KW - PSYCHOLOGY

KW - EXPERIMENTAL

U2 - 10.1523/JNEUROSCI.4820-13.2014

DO - 10.1523/JNEUROSCI.4820-13.2014

M3 - Article

VL - 34

SP - 8098

EP - 8105

JO - Journal of Neuroscience

JF - Journal of Neuroscience

SN - 0270-6474

IS - 24

ER -