When we search for information online, it is increasingly common, especially among the younger generation, to do so via 'social agents': think of Amazon Alexa, Google Assistant, or even robots. However, we still know little about how children experience this. Children are quick to see social agents and robots as their equals and to trust them. This project investigates how to shape child-robot interaction responsibly in this information-seeking context: the interaction not only focuses on informing children in a way they understand and enjoy, but also pays attention to transparency (where does the information come from?), privacy (what personal data are used?), and awareness of the use of AI.
Emotional expression plays a crucial role in everyday functioning. It is a continuous process involving many features of behavioral, facial, vocal, and verbal modalities. Given this complexity, few psychological studies have addressed emotion recognition in an everyday context, let alone in people with dementia. Recent technological innovations in affective computing could result in a scientific breakthrough as they open up new possibilities for the ecological assessment of emotions. However, existing technologies still pose major challenges in the field of Big Data Analytics, especially for special target groups such as older adults with dementia.
The 4TU research centre Humans & Technology (H&T) brings together the social sciences, humanities, and technical sciences. Its goal is excellent research on innovative forms of human-technology interaction for smart social systems and spaces. The research program “Smart Social Systems and Spaces for Living Well” (S4) aims to combine knowledge available from different disciplines, such as computer science, psychology, and industrial design.
In this project, we collaborate on research at the intersection of technology and humans, combining insights from computational intelligence, user modeling, personalization, and human-computer interaction for lighting installations that can adapt to and influence people's affective states. We aim to develop affect-adaptive lighting interfaces to be deployed in independent living for seniors. Seniors often experience negative affective states, such as gloominess due to distance from their families or anxiety when disoriented (e.g., due to dementia). For updates, check out our website!
Development of a robot for children (4-10 years old) that can not only perform complex navigation, detection, and manipulation tasks in a cluttered environment, but is also affectively and socially intelligent, engaging, and fun in a collaborative task. Detection of children's affective and social states (e.g., engagement, dominant behavior) in a multiparty child-robot scenario (one robot and more than one child) through (non-verbal) speech analysis.
Development of a socially intelligent telepresence robot that, for example, navigates semi-autonomously among groups and adapts to the quality of the mediated human-human interaction (with elderly people). Detecting and monitoring the quality of this mediated interaction (e.g., how well is the conversation going, are the interactants in sync, are they disagreeing, do they like each other) through (non-verbal) speech analysis.
Integration of technology to sense, analyse, interpret, and motivate people who take part in sports and exercise (running) towards better wellbeing. Detection of the runner's physical and mental state through speech analysis.
Gathering and documenting testimonies on war-related experiences in Croatia's past, and making these audiovisual testimonies publicly available and searchable through technology. Analysis of interviewees' verbal and non-verbal behavior, for example, by comparing word usage with prosodic speech parameters, and analysis of sighs in emotionally colored dialogs.
Development of a Sensitive Artificial Listener, a multimodal dialogue system that can sustain an interaction with a user for some time and that reacts appropriately to the user's non-verbal behavior. Analysis of interruptive agents; analysis of the generation, detection, and timing of backchannels (listener responses).
Automatic (multimodal) analysis and detection of social signals, manifested through non-verbal cues, in interaction. Analysis of non-verbal vocalisations such as laughter and sighs in interaction, interruptions, synchrony/mimicry, listener responses in interaction.
BSIK MultimediaN N2 (Multimodal Interaction) 2005 – 2009
Realizing an excellent user experience in human-machine interaction by attuning the interaction to the user's intentions and emotions. Automatic emotion recognition in speech, automatic detection of laughter, multimodal sentiment analysis.
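Several of the projects above rely on extracting frame-level acoustic features from speech (for emotion recognition, laughter detection, or prosody analysis). As an illustrative sketch only, not code from any of these projects, the snippet below computes two classic low-level descriptors, short-time energy and zero-crossing rate, using plain NumPy; the function name and parameter values are hypothetical defaults.

```python
import numpy as np

def frame_features(signal, frame_len=400, hop=160):
    """Compute two simple frame-level acoustic descriptors often used
    as a baseline in speech analysis: short-time energy and
    zero-crossing rate (ZCR)."""
    n_frames = 1 + (len(signal) - frame_len) // hop
    feats = np.empty((n_frames, 2))
    for i in range(n_frames):
        frame = signal[i * hop : i * hop + frame_len]
        feats[i, 0] = np.mean(frame ** 2)                           # energy
        feats[i, 1] = np.mean(np.abs(np.diff(np.sign(frame)))) / 2  # ZCR
    return feats

# Example on one second of synthetic 16 kHz audio (a 440 Hz tone plus noise).
sr = 16000
t = np.arange(sr) / sr
signal = 0.5 * np.sin(2 * np.pi * 440 * t) + 0.01 * np.random.randn(sr)
feats = frame_features(signal)
print(feats.shape)
```

In a typical pipeline, per-frame descriptors like these (or richer ones such as MFCCs and pitch) would be aggregated over an utterance and fed to a classifier trained on labeled emotional speech.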
- Ella Velner, PhD student Child-Robot-Media Interaction (Oct 2019 – …)
- Michel Jansen, PhD student 4TU Humans & Technology (Oct 2017 – …)
- Deniece Nazareth, PhD student eScience project (May 2017 – …)
- Jaebok Kim, PhD student SQUIRREL/TERESA (July 2014 – July 2018)
- Roelof de Vries, PhD student COMMIT P3 (May 2013 – Nov 2018)
- Cristina Zaga, PhD student SQUIRREL (Oct 2014 – …)
- Dr. Meiru Mu, postdoc COMMIT P3 (March 2015 – Dec 2016)