When searching online for information, it is becoming more and more common - especially among the younger generation - to do so via 'social agents'. Think of Amazon Alexa, Google Assistant or even robots. However, we still know little about how children experience this. Children are quick to see social agents and robots as their equals and are quick to trust them. This project investigates how we can shape child-robot interaction in a responsible way in this information-seeking context: the interaction not only focuses on informing children in a way they understand and like, but also pays attention to transparency (where does the information come from?), privacy (what personal data are used?), and awareness of the use of AI.
Emotional expression plays a crucial role in everyday functioning. It is a continuous process involving many features of behavioral, facial, vocal, and verbal modalities. Given this complexity, few psychological studies have addressed emotion recognition in an everyday context. Recent technological innovations in affective computing could result in a scientific breakthrough as they open up new possibilities for the ecological assessment of emotions. However, existing technologies still pose major challenges in the field of Big Data Analytics.
4TU NIRICT project on Empathic Lighting | 2016 - 2017
In this project, we collaborate on research at the intersection of technology and humans, combining insights from computational intelligence, user modeling, personalization, and human-computer interaction to create lighting installations that can adapt to and influence people’s affective states. We aim to develop affect-adaptive lighting interfaces for deployment in independent living for seniors. Seniors often experience negative affective states, such as gloominess due to distance from their families or anxiety when disoriented (e.g., due to dementia).
4TU Humans & Technology Smart Social Systems and Spaces for Living Well (S4) | 2015 - 2022
The 4TU research centre Humans & Technology brings together the social sciences, humanities, and technical sciences. Its goal is excellent research on innovative forms of human-technology interaction for smart social systems and spaces. The research program "Smart Social Systems and Spaces for Living Well" (S4) aims to combine knowledge available from different disciplines, such as computer science, psychology, and industrial design.
Development of a robot for children (4-10 years old) that can not only perform complex navigation, detection, and manipulation tasks in a cluttered environment, but is also affectively and socially intelligent, engaging, and fun in a collaborative task. Detection of children’s affective and social states (e.g., engagement, dominant behavior) in a multiparty robot-children scenario (1 robot and more than 1 child) through (non-verbal) speech analysis.
Development of a socially intelligent telepresence robot, e.g., one that navigates semi-autonomously among groups and adapts to the quality of the mediated human-human interaction (with elderly people). Detecting and monitoring the quality of the mediated human-human interaction (e.g., how well is the conversation going, are the interactants in sync, are they disagreeing, do they like each other?) through (non-verbal) speech analysis.
Integration of technology to sense, analyse, interpret, and motivate people who take part in sports and exercise (running) towards better wellbeing. Detection of the runner’s physical and mental state through speech analysis.
CroMe (Croatian Memories) | 2012 - 2013
Gathering and documenting testimonies on war-related experiences in Croatia’s past, and making these audiovisual testimonies publicly available and searchable through technology. Analysis of verbal and non-verbal behavior of interviewees, for example, by comparing word usage with prosodic speech parameters, and analysis of sighs in emotionally colored dialogs.
Development of a Sensitive Artificial Listener, a multimodal dialogue system that can sustain an interaction with a user for some time and that reacts appropriately to the user’s non-verbal behavior. Analysis of interruptive agents; analysis of the generation, detection, and timing of backchannels (listener responses).
EU-FP7 SSPNet (Social Signal Processing Network) | 2009 - 2013
Automatic (multimodal) analysis and detection of social signals, manifested through non-verbal cues, in interaction. Analysis of non-verbal vocalisations such as laughter and sighs in interaction, interruptions, synchrony/mimicry, listener responses in interaction.
BSIK MultimediaN N2 (Multimodal Interaction) | 2005 - 2009
Realizing an excellent user experience during human-machine interaction by attuning the interaction to the user’s intentions and emotions. Automatic emotion recognition in speech, automatic detection of laughter, multimodal sentiment analysis.
PhD students supervised (current and former)
- Ella Velner, PhD student Child-Robot-Media-Interaction (Oct 2019 - ...)
- Michel Jansen, PhD student 4TU Humans and Technology (Oct 2017 - ...)
- Deniece Nazareth, PhD student eScience project Emotion recognition in dementia (May 2017 - ...)
- Jaebok Kim, PhD student EU FP7 SQUIRREL (July 2014 - July 2018)
- Roelof de Vries, PhD student COMMIT P3 Sensor-based engagement for improved health (May 2013 - Nov 2018)
- Cristina Zaga, PhD student EU FP7 SQUIRREL (Oct 2014 - March 2021)
PhD dissertation committee
- Wei Xue (2023). Measuring the intelligibility of pathological speech through subjective and objective procedures. (21 March 2023, Radboud University)
- Phoebe Mui (2019). The many faces of smiling: Social and cultural factors in the display and perception of smiles. (18 Dec 2019, Tilburg University)
- Juliane Schmidt Kirsch (2018). Listening for the WHAT and the HOW: Older adults’ processing of semantic and affective information in speech. (5 July 2018, Radboud University Nijmegen)
- Selma Yilmazyildiz (2017). Semantic Free Affective Speech Framework For Social Human-Robot Interaction. (13 Sep 2017, Vrije Universiteit Brussel)
Member of Editorial Boards
- Member of Editorial Board Computer Speech and Language May 2021 - current
- Associate Editor IEEE Transactions on Affective Computing Feb 2019 - current
- ACM Transactions on Intelligent Interactive Systems (TiiS) (inaugural) Board of Distinguished Reviewers 2017-2018
Conference organization / TPC / PC / reviewing (a selection)
- Interspeech General Co-Chair 2025 | Lead Area Chair 2023 | Lead Area Chair 2022 | Lead Area Chair 2021 | Area Chair 2017 | Area Chair 2015 | reviewer since 2011
- ACM International Conference on Multimodal Interaction (ICMI) Program Co-Chair 2024 | Publicity Chair 2023 | Senior PC, Social Media Chair 2021 | General Chair 2020 | Senior PC 2016 | reviewer 2017, 2015, 2014, 2013, 2012
- IEEE International Conference on Affective Computing and Intelligent Interaction (ACII) Program Co-Chair 2022 | Senior PC 2021 | Senior PC, Tutorial Chair 2019 | Senior PC, Social Media Chair 2017 | reviewer 2015, 2013
- ACM/IEEE International Conference on Human-Robot Interaction (HRI) PC 2023 | reviewer 2023, 2021, 2019, 2016, 2015
- HRI Pioneers reviewer 2023, 2022, 2021, 2019, 2018
- IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP) reviewer since 2013
- Speech Prosody reviewer 2022, 2021, 2020, 2018, 2016, 2014
- ACM International Conference on Intelligent Virtual Agents (IVA) Senior PC 2016 | Doctoral Consortium Chair 2015 | reviewer 2023, 2022, 2021, 2019, 2018