Projects

Responsible child-robot-media interaction (in collaboration with the Netherlands Institute for Sound and Vision, funded by SIDN fonds) 2019 – now

When we search online for information, it is becoming more and more common – especially for the younger generation – to do this via ‘social agents’. Think of Amazon Alexa, Google Assistant, or even robots. However, we still know little about how children experience this. Children are quick to see social agents and robots as their equals and are quick to trust them. This project investigates how we can shape child-robot interaction in a responsible way in this information-seeking context: the interaction not only focuses on informing children in a way they understand and like, but also pays attention to transparency (where does the information come from?), privacy (what personal data are used?), and awareness about the use of AI.


Multisensory Designs for People with PIMD 2019 – now

One of the biggest challenges for people with PIMD (profound intellectual and multiple disabilities) is communication: people with PIMD often have underdeveloped language ability and are not able to express themselves through spoken language, and they often have difficulty maintaining their awareness of environmental events. In collaboration with Sensoree and De Parabool, our goal is to co-design a collection of (wearable) multisensory objects that communicate through expressive biosignals. We will design in a responsible way – with and for these clients.


Netherlands eScience Center project on Advancing technology for multimodal analysis of emotion expression in dementia 2017 – now

Emotional expression plays a crucial role in everyday functioning. It is a continuous process involving many features of behavioral, facial, vocal, and verbal modalities. Given this complexity, few psychological studies have addressed emotion recognition in an everyday context, let alone in people with dementia. Recent technological innovations in affective computing could result in a scientific breakthrough as they open up new possibilities for the ecological assessment of emotions. However, existing technologies still pose major challenges in the field of Big Data Analytics, especially for special target groups such as older adults with dementia.


4TU.H&T (4TU Humans & Technology) Smart Social Systems and Spaces for Living Well (S4) 2015 – now

The 4TU research centre Humans & Technology (H&T) brings together the social sciences, humanities, and technical sciences. Its goal is excellent research on innovative forms of human-technology interaction for smart social systems and spaces. The research program “Smart Social Systems and Spaces for Living Well” (S4) aims to combine knowledge available from different disciplines, such as computer science, psychology, and industrial design.


4TU NIRICT on Empathic Lighting 2016 – 2017

In this project, we collaborate on research at the intersection of technology and humans, combining insights from computational intelligence, user modeling, personalization, and human-computer interaction to build lighting installations that can adapt to and influence people’s affective states. We aim to develop affect-adaptive lighting interfaces to be deployed in independent living for seniors. Seniors often experience negative affective states, such as gloominess due to distance from their families or anxiety when disoriented (e.g., due to dementia). For updates, check out our website!


EU-FP7 SQUIRREL (Clearing Clutter Bit by Bit) 2014 – 2018

Development of a robot for children (4-10 years old) that can not only perform complex navigation, detection, and manipulation tasks in a cluttered environment, but is also affectively and socially intelligent, engaging, and fun in a collaborative task. Detection of children’s affective and social states (e.g., engagement, dominant behavior) in a multiparty robot-children scenario (1 robot and more than 1 child) through (non-verbal) speech analysis.


EU-FP7 TERESA (Telepresence Reinforcement-learning Social Agent) 2014 – 2017

Development of a socially intelligent telepresence robot that, for example, navigates semi-autonomously among groups and adapts to the quality of the mediated human-human interaction (with elderly people). Detecting and monitoring the quality of the mediated human-human interaction (e.g., how well is the conversation going, are the interactants in sync, are they disagreeing, do they like each other?) through (non-verbal) speech analysis.


COMMIT P3 (SENSEI) 2013 – 2017

Integration of technology to sense, analyse, interpret, and motivate people who take part in sports and exercise (running), toward better wellbeing. Detection of the runner’s physical and mental state through speech analysis.


CroMe (Croatian Memories) 2012 – 2013

Gathering and documenting testimonies on war-related experiences in Croatia’s past, and making these audiovisual testimonies publicly available and searchable through technology. Analysis of the verbal and non-verbal behavior of interviewees, for example, by comparing word usage with prosodic speech parameters, and by analysing sighs in emotionally colored dialogs.


EU-FP7 SEMAINE (The Sensitive Agent project) 2009 – 2013

Development of a Sensitive Artificial Listener, a multimodal dialogue system that can sustain an interaction with a user for some time and that reacts appropriately to the user’s non-verbal behavior. Analysis of interruptive agents; analysis of the generation, detection, and timing of backchannels (listener responses).


EU-FP7 SSPNet (Social Signal Processing Network) 2009 – 2013

Automatic (multimodal) analysis and detection of social signals, manifested through non-verbal cues, in interaction. Analysis of non-verbal vocalisations such as laughter and sighs, interruptions, synchrony/mimicry, and listener responses in interaction.


BSIK MultimediaN N2 (Multimodal Interaction) 2005 – 2009

Realizing an excellent user experience during human-machine interaction by attuning the interaction to the user’s intentions and emotions. Automatic emotion recognition in speech, automatic detection of laughter, and multimodal sentiment analysis.


Supervision

  • Ella Velner, PhD student Child-Robot-Media Interaction (Oct 2019 – …)
  • Michel Jansen, PhD student 4TU Humans & Technology (Oct 2017 – …)
  • Deniece Nazareth, PhD student eScience project (May 2017 – …)
  • Jaebok Kim, PhD student SQUIRREL/TERESA (July 2014 – July 2018)
  • Roelof de Vries, PhD student COMMIT P3 (May 2013 – Nov 2018)
  • Cristina Zaga, PhD student SQUIRREL (Oct 2014 – …)
  • Dr. Meiru Mu, postdoc COMMIT P3 (March 2015 – Dec 2016)