My interests lie in the automatic analysis and understanding of verbal and nonverbal (vocal) behaviors in human-human and human-machine interaction, and in the design of socially interactive technology to support human needs. Using methods from affective computing and social signal processing, I aim to develop socially and affectively intelligent interfaces (e.g. virtual conversational agents, social robots) that can recognize and display social and affective signals, and to study how humans interact with this new kind of technology. Coming from a background in (computational) paralinguistics and speech analysis, my main focus is on analyzing vocal cues, in addition to visual cues (e.g. facial expressions, eye gaze) and physiological measurements (e.g. heart rate, galvanic skin response), in social interaction.
Prior to HMI, I was employed by TNO Human Factors in Soesterberg, where I worked towards my PhD on automatic emotion recognition in speech and automatic laughter detection. During my master's research at Radboud University in Nijmegen, I worked on automatic pronunciation error detection in second language learners' speech.
social signal processing / affective computing / multimodal interaction / paralinguistics / speech prosody / non-verbal vocalisations / laughter / dialogue / embodied conversational agents / human-human interaction / human-robot interaction / computer-assisted language learning / pronunciation error detection
Most recent news: