
Virtual Humans and Social Robots

Embodied AI

Upcoming event: TBA
15:00-16:00 CET

Supported by Human-Centered AI at Utrecht University

AGENDA

UPCOMING EVENTS

Talk XIII 

TBA

PAST EVENTS

February 2nd, 2023, 15:00-16:00

December 15th, 2022, 10:00-11:00

November 24th, 2022, 15:00-16:00

October 27th, 2022, 15:00-16:00

October 19th, 2022, 15:30-17:00

September 29th, 2022, 15:00-16:00

May 25th, 2022, 15:00-16:00

April 14th, 2022, 15:00-16:00

March 24th, 2022, 15:00-16:00

March 3rd, 2022, 16:00-17:00

February 24th, 2022, 16:00-17:00

January 19th, 2022, 16:00-17:00

November 25th, 2021, 16:00-17:00

September 24th, 2021, 14:30-16:30

May 31st, 2021, 14:30-16:30

February 22nd, 2021, 14:30-16:30

Talk XII 

Dr. Tom Williams - Secret Agents: The Real and Imagined Inner Lives of Interactive Robots 

Abstract: Robots are Secret Agents. By this I mean that while computer scientists and roboticists may not view their robots as being fully autonomous, interactive, and adaptive, everyday users perceive them as such nonetheless. This creates a whole host of user expectations that may be hard to live up to, and that may cause problems when violated. In this talk, I'll begin by explaining how robot design choices impact how people perceive robots and expect them to behave. I'll then explain why robots are not simply agents, but are also moral and social agents, and describe the work the MIRRORLab has been doing to understand and address the unique perceptions and expectations that come along with these more nuanced types of agency. Finally, I'll discuss how these two types of agency interact, and the unique challenges that this imposes.

Talk XI 

Dr. Aniket Bera - Designing Emotionally-Intelligent Digital Humans that Move, Express, and Feel Like Us! 

Abstract: The creation of intelligent virtual agents (IVAs) or digital humans is vital for many virtual and augmented reality systems. As the world increasingly uses digital and virtual platforms for everyday communication and interactions, there is a heightened need to create human-like virtual avatars and agents endowed with social and emotional intelligence. Interactions between humans and virtual agents are being used in different areas including VR, games and storytelling, computer-aided design, social robotics, and healthcare. Designing and building intelligent agents that can communicate and connect with people is necessary but not sufficient. Researchers must also consider how these IVAs will inspire trust and desire among humans. Knowing the perceived affective states and social-psychological constructs (such as behavior, emotions, psychology, motivations, and beliefs) of humans in such scenarios allows the agents to make more informed decisions and navigate and interact in a socially intelligent manner.


Talk X 

Dr. Mike Ligthart - Growing Up Together: Long-Term Child-Robot Relationships

Abstract: Social robots have a lot of potential to support children long-term in the hospital, at school, or even at home. Key for facilitating a sustainable long-term interaction is enabling children to bond with the robot. It is this bond that keeps them coming back and allows them to benefit the most from the social support the robot has to offer. In my research so far, I have developed autonomous social robot behaviors and conversational content that enable child and robot to form a relationship and maintain it for a short while. Children grow up. If we want the robot to truly offer meaningful long-term support, so should the robot. Ideally the robot grows socially, cognitively, physically, and relationally together with the child. To achieve this, we need to develop novel theories, technologies, and methods to create more elaborate, long-term oriented, social capabilities for the robot and a feasible strategy to create personalized multimodal interaction content. In this talk I’ll reflect on my PhD research and look ahead to what’s next.

Talk IX

Dr. Sylvia Xueni Pan - Social Interaction in VR

Abstract: Amongst all human activities, social interaction is one of the most complex and mysterious. Those with better social skills easily thrive in society, whilst those suffering from social function deficits (e.g., social anxiety disorder) struggle with everyday activity. The goal of Xueni Pan's research is therefore to use VR to improve how we socially connect and communicate with each other in a face-to-face setting.

In this talk, she will give several examples of how we use VR for applications in medical communication training, social neuroscience research, and commercial narrative games. Xueni Pan will end her talk with a discussion of the future of social interactions in the Metaverse.


HAI colloquium

Bipin Indurkhya - Faking Emotions and a Therapeutic Role for Robots and Chatbots 

Abstract: In recent years, there has been a proliferation of social robots and chatbots that are designed so that users form an emotional attachment with them. Such robots and chatbots can also be used to provide psychotherapy. In this talk, we will start by presenting the first such chatbot, a program called Eliza designed by Joseph Weizenbaum in the mid-1960s. This program did not understand anything, but relied on keyword matching and a few simple heuristics to keep the conversation flowing and provide an illusion of understanding to the user. At that time, Weizenbaum was taken aback by the intensity of emotional attachment users felt towards this program, prompting him to highlight this negative aspect of technology in his thought-provoking book "Computer Power and Human Reason". Nowadays, Eliza-like systems and interfaces are often used in social robots and chatbots. We will look at some such systems and argue that they can have a positive and therapeutic effect on the user, and that in some situations at least this kind of robot-human interaction transcends human-human interaction. However, developing and deploying such systems raise a number of ethical issues, some of which we will discuss in this talk.

Speaker bio: Bipin Indurkhya received the master’s degree in electronics engineering from the Philips International Institute, Eindhoven, The Netherlands, in 1981, and the PhD degree in computer science from the University of Massachusetts at Amherst, Massachusetts, in 1985.

Indurkhya is currently a professor of cognitive science with the Jagiellonian University, Cracow, Poland. His main research interests include social robotics, usability engineering, affective computing, and creativity.


Talk VIII

Rebecca Stower - HRI: Interdisciplinary Research. A Psychologist’s Guide to Social Robotics

Abstract: The number of disciplines in HRI is continually increasing: originally composed predominantly of engineering and computer science, the field now encompasses psychology, communication science, linguistics, neuroscience, and philosophy, among many others. Alongside this expanding research, however, comes a new set of practical challenges and considerations when conducting both in-the-wild and laboratory research that involves both humans and robots. Concerns regarding both the lack of theory-driven research and questionable research practices are consistently cited as reasons why findings from HRI studies may be contradictory or inconsistent. In this talk, I will highlight different challenges that HRI researchers can encounter when designing and implementing HRI studies and discuss solutions for how to overcome them. In particular, I will focus on best research practices within the context of the replication crisis, and how the open science movement can be applied in the field of HRI.


Talk VII

Prof. Stefan Kopp 

Abstract: Embodied human-agent interaction has made great advances in the last decades, including interactive learning, dialogue-based communication, and collaborative physical interaction. At the same time, the field is still facing hard challenges in coping with individual differences between users, the task- and context-related complexities of cooperative behavior, and maintaining acceptance over the long term. This has led to a demand for ‘socially aware’ AI and robots that can understand others at the level of their mental states, make themselves understandable to others, and respond to the needs and expectations of humans during collaborative interaction.

 

In his talk, Kopp will discuss how this vision relates to recent trends (e.g., human-aware AI), and he will argue that, in addition to the currently predominant data-driven approaches, an embodied cognitive approach is needed that can account for the embodied-agentive as well as the cognitive-affective nature of interaction partners, and for the dynamic and continuous adaptation processes taking place in situ within and between interaction partners. Kopp will present work towards this vision, including work on multimodal behavior generation and the incremental coordination of mutual understanding in dialogue.

Talk VI

Dr. Joost Broekens - Affect & emotion in reinforcement learning

Abstract: Dr. Joost Broekens will give an overview of where emotion and affect can play a role in the reinforcement learning agent-environment interaction loop. He will also highlight some work on simulating emotions based on reinforcement learning and the temporal-difference signal.

Talk V
Prof. Eva Wiese - Robots as social agents: insights from neuroscience
Abstract: Social robots are increasingly becoming a reality as future cohabitants; they are already used as social companions for elderly people and in therapeutic interventions for children with autism spectrum disorder or patients with sensorimotor impairments. Social robots foster collaboration in the workplace, teach math and science in the classroom, and facilitate daily activities as friendly assistants in supermarkets and airports. While much progress has been made in the technical realization of social robots, their ability to interact with humans in a truly social way is still quite limited. The biggest challenge is to determine how to design social robots that can adapt to users’ cognitive and technical abilities but are also perceived as social companions that understand the needs, feelings, and intentions of their human partners. One way to achieve this goal is a systematic experimental approach that combines behavioral and physiological neuroscientific methods, such as eye tracking or electroencephalography (EEG), with realistic interaction scenarios involving physically embodied social robots. This approach requires an understanding of how humans interact with each other, how they perform tasks together, and how they develop feelings of social connection over time, and it uses these insights to formulate design principles that make robots attuned to the workings of the human brain. Such an approach adds significantly to the current literature, where subjective ratings are the main tool to assess a robot’s performance and socialness, as well as a user’s satisfaction with the interaction. Although subjective measures are suitable for capturing the quality of a given robot design, they can neither predict performance in human-robot interaction nor inform roboticists how to improve given designs in order to attune them to the human cognitive system.

Together with other scientists, Prof. Eva Wiese has put forward the argument that the likelihood of artificial agents being perceived as social companions can be increased by designing them as intentional agents that activate areas in the human brain involved in social-cognitive processing. She discusses how attributing mental states to robots can positively affect human-robot interaction by fostering feelings of social connection, empathy, and prosociality, and how neuroscientific methods can be used to identify design features that trigger such attributions. She will also present a series of experiments showing that mind attribution to robots can positively affect performance in human-robot interaction by enhancing low-level social-cognitive processes like gaze following or joint attention. Lastly, Prof. Eva Wiese will discuss circumstances under which mind attribution to robots might be disadvantageous, and the challenges associated with designing social robots inspired by neuroscientific insights.
Talk IV
Prof. Emily Cross - Examining how experience shapes our perceptions of and interactions with embodied robots

Abstract: The ability to perceive and interact with others occurs in an effortless manner, but is underpinned by complex cognitive and neural processes. In this talk, Prof. Emily Cross reviews recent evidence from behavioural and brain imaging studies that provides deeper insights into human social cognition and brain function by using social robots as research tools. Specifically, her team examines how prolonged physical and social interactions with embodied robots shape how we perceive and interact with these agents, and the extent to which we might be able to build social relationships with these machines and perceive them as truly social agents. By presenting work comparing social perception of humans with that of robots, Prof. Emily Cross aims to highlight the importance of examining how perception of and interaction with artificial agents in a social world can reveal fundamental insights about human social cognition.

Talk III
Dr. Eduard Fosch-Villaronga - Diversity observations in an exoskeleton experiment

Abstract: While robotics in medical care is becoming increasingly prevalent, direct interaction with users raises new ethical, legal, and social concerns. Among them is the problem of designing these robots to fit users who come in a wide variety of shapes, sizes, and genders. Although mentioned in the literature, these concerns have not yet been reflected in regulatory standards. ISO 13482:2014 on safety requirements for personal care robots, the leading technical standard to date, briefly acknowledges that future editions might include more complete data on different categories of people. More than seven years after its approval, and after a revision, those requirements are nonetheless still missing. Based on a week of experimentation with robotic exoskeletons aimed at improving the regulatory framework, we argue that being oblivious to differences in gender and medical condition, or following a one-size-fits-all approach, hides important distinctions and increases the exclusion of specific users. Our observations show that this type of robot operates intimately intertwined with users’ bodies, thus exemplifying how gender and medical condition might introduce dissimilarities in human-robot interaction that, as long as they remain ignored in regulation, may compromise the safety of specific users. We conclude by putting forward particular recommendations to update ISO 13482:2014 to better reflect the broad diversity of users of personal care robots.

Talk II
Dr. Sofía Seinfeld - Neural and behavioral impact of becoming a victim in VR

Abstract: Embodiment in an artificial virtual body can be evoked when certain multisensory principles are fulfilled. When participants see a life-size virtual body from a first-person perspective, they can experience the illusion that the artificial body is their own real body. It has been shown that the type of artificial body in which embodiment occurs can differently impact participants’ perceptions and cognition. In this talk, Dr. Sofía Seinfeld will discuss a series of studies that evaluate the behavioral and neural impact of embodying intimate partner violence perpetrators in the first-person perspective (1PP) of a victim in virtual reality. Specifically, she will explain studies showing how embodiment of male offenders in the body of a female virtual victim leads to changes in emotion recognition. Moreover, she discusses recent evidence from an fMRI study in which it was found that the 1PP of a virtual violent situation seems to influence emotion recognition through modifications in Default Mode Network (DMN) brain activity. Altogether, these results provide further evidence that embodiment in VR might influence socio-cognitive processing, and they highlight the potential use of VR to improve current rehabilitation programs for domestic violence.

Talk I
Prof. Ana Paiva - Engineering Sociality and Collaboration: Humans and Embodied Agents Together

Abstract: Embodied social agents, chatbots and social robots have the potential to change the way we interact with technology. As they become more affordable, they will enter our daily activities, perform different tasks, and thus partner with humans socially and collaboratively. However, how do we engineer sociality and collaboration? How can we build social robots that are able to “team up” with humans? To research these questions, we must seek inspiration in what it means to be a member of a team, and build the technology to support hybrid teams. That involves creating in our agents and robots the capabilities for social understanding, interpersonal communication, and social responsibility.

In this talk, Prof. Ana Paiva will discuss how to engineer social robots that act autonomously as members of a team, collaborating with both humans and other robots. Ana will use three case studies to discuss the challenges, recent results and future directions for the area of artificial embodied teammates.

Panel III
Theme: The international landscape of embodied agents: what can we learn, how can we foster international collaborations across disciplines?

Panelists:
  • Elisabeth André (Professor, Universität Augsburg)
  • Tony Belpaeme (Professor, Ghent University and Plymouth University)
  • Mary Ellen Foster (Senior Lecturer, University of Glasgow)
  • Astrid Weiss (Professor, TU Wien)
Panel II
Theme: Understanding the Dutch Landscape of Embodied Agents: what are the needs of the Dutch society? 
Panelists:
  • Mark Neerincx (Professor, TU Delft and TNO)
  • Khiet Truong (Assistant Professor, UTwente)
  • Pim Haselager (Professor, Donders Institute)
  • Tibor Bosse (Professor, Radboud University)
Panel I
Theme: Understanding the Utrecht Case: what are the different viewpoints, challenges and opportunities?
Panelists:
  • Sven Nyholm (Assistant Professor, UU)
  • Johan Jeuring (Professor, UU)
  • Maaike Bleeker (Professor, UU)
  • Ronald Poppe (Associate Professor, UU)
SPEAKERS
ABOUT US

More about the project

The main objective of this project is to unite researchers of Utrecht University in the field of Embodied AI. The activities concern the development, evaluation and societal impact of virtual humans and social robots that are capable of engaging in face-to-face social interactions with people using verbal and non-verbal behaviours. These characters have been a topic of interest in different communities, including AI, HCI, robotics and graphics, as well as the humanities and social sciences. Despite significant progress, we are only at the dawn of an emerging field.

 

We aim to increase visibility, build community and foster interdisciplinary collaborations, as well as encourage diversity and inclusion of perspectives and backgrounds. With this goal in mind, we started an initiative called Embodied AI: Virtual Humans and Social Robots in 2020, in connection with the Special Interest Groups (SIGs) Autonomous Intelligent Systems and Social and Cognitive Modelling, supported by Human-Centered Artificial Intelligence at Utrecht University. With this initiative, we would like to build a bridge between the communities of scholars addressing technical challenges and those focusing on human perception of embodied agents.

We have three perspectives:​​

Technical/algorithmic perspective: How can we automatically generate behavior for socially interactive agents (virtual and robotic)? The sensing, decision-making and acting loop.

Human-computer interaction perspective: How do people perceive, respond to, and collaborate with robots and anthropomorphic characters in everyday life?

Social sciences perspective: How can we understand human social cognition during real and long-term interactions with artificial agents?

More about the organizers

Zerrin Yumak, PhD
Assistant Professor at Dept. Information and Computing Sciences at Utrecht University
Ruud Hortensius, PhD
Assistant Professor at Dept. Social, Health & Organisational Psychology at Utrecht University
Maartje de Graaf, PhD
Assistant Professor at Dept. Information and Computing Sciences at Utrecht University
Sanderijn Kuijvenhoven
Master Student Artificial Intelligence at Utrecht University