Virtual Humans and Social Robots Interaction

The main goal of this research is the exploration, design and development of novel methods for real-time human-robot and human-virtual-human interaction in realistic 3D telepresence setups. The research aims to sustain the presence of distant real users even in their absence: for instance, it allows real people to play with distant partners (human or computer-generated), and it can provide support for learning a language or being coached.

More precisely, we focus on group interactions in flexible setups where virtual humans, robots and real users can interact with each other, which should increase the feeling of physical and social presence. We will implement real-life scenarios such as playing with a virtual human or robot, providing social support for the elderly, or training for medical situations.


The humanoid robot Nadine and the virtual human Nicole can offer sustainability and presence in the absence of real people.

The research conducted focuses on three main areas:

  • Multi-modal tracking and analysis of users: We will develop novel methods to track and analyse users’ state and actions from different viewpoints, under different lighting conditions and in noisy environments, combining inferences from the audio and vision modalities (a minimal fusion sketch follows this list).
  • Autonomous behaviours for group interactions between users, robots and virtual humans: We will develop novel methods to simulate group interactions for virtual humans and robots, making them more responsive and expressive.
  • Personalisation of virtual humans: We will develop novel methods to individualise the virtual humans’ appearance for specific users.
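
As a minimal illustration of the audio-visual combination mentioned in the first item, the sketch below fuses per-frame class scores from a vision pipeline and an audio pipeline by weighted late fusion. The scores, class count and weights are placeholders for illustration only, not the project's actual tracking method.

    import numpy as np

    # Hypothetical per-frame class scores from two independent modality pipelines
    # (e.g., a vision-based action classifier and an audio-based event classifier).
    # Shapes: (num_frames, num_classes); rows are assumed to be probability-like.
    vision_scores = np.array([[0.7, 0.2, 0.1],
                              [0.4, 0.5, 0.1]])
    audio_scores  = np.array([[0.6, 0.3, 0.1],
                              [0.2, 0.6, 0.2]])

    def late_fusion(vision, audio, w_vision=0.6, w_audio=0.4):
        """Weighted late fusion of per-frame class scores from two modalities.

        The weights are illustrative; in practice they could be tuned on a
        validation set or adapted to lighting and noise conditions.
        """
        fused = w_vision * vision + w_audio * audio
        # Renormalise so each frame's scores sum to one.
        return fused / fused.sum(axis=1, keepdims=True)

    fused = late_fusion(vision_scores, audio_scores)
    print(fused.argmax(axis=1))  # most likely user state per frame

Fusing at the score level keeps the two modality pipelines independent, so either one can be replaced or degraded (e.g., in poor lighting or heavy noise) without retraining the other.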

Task 1.1: Gesture analysis and understanding

Goal

Compared to conventional virtual reality setups, the immersive room provides a more attractive environment for interaction between users and virtual humans: the virtual humans are projected onto the large spherical screen that surrounds the user, so the user can communicate and interact with virtual humans located at different positions and orientations simultaneously. Robust tracking and analysis of users in the immersive room are therefore important research goals. We also plan to implement new methods to robustly detect users falling, and to capture hand poses from egocentric views, especially in immersive interaction scenarios. In particular, we focus on hand pose estimation from depth images captured with a wearable head-mounted depth camera. In contrast to a third-person camera, a first-person wearable camera has exclusive access to first-person activities and offers an ideal viewing perspective for analysing fine motor skills such as hand-object manipulation or hand-eye coordination.
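
As one concrete direction for the egocentric hand-pose work, the sketch below shows a minimal depth-based hand-pose regressor in PyTorch. The network layout, the 21-joint hand model and the 128x128 input crop are assumptions chosen for illustration, not the project's actual model.

    import torch
    import torch.nn as nn

    class DepthHandPoseNet(nn.Module):
        """Minimal CNN that regresses 3D hand-joint positions from a single
        depth crop, e.g. taken from a head-mounted depth camera."""

        def __init__(self, num_joints=21):
            super().__init__()
            self.num_joints = num_joints
            self.features = nn.Sequential(
                nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),   # 128 -> 64
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),  # 64 -> 32
                nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(), # 32 -> 16
                nn.AdaptiveAvgPool2d(1),
            )
            self.regressor = nn.Linear(128, num_joints * 3)

        def forward(self, depth):
            # depth: (batch, 1, H, W) normalised depth crop of the hand region
            x = self.features(depth).flatten(1)
            return self.regressor(x).view(-1, self.num_joints, 3)

    model = DepthHandPoseNet()
    depth_crop = torch.randn(1, 1, 128, 128)   # stand-in for a real depth crop
    joints = model(depth_crop)                 # (1, 21, 3) joint coordinates

Regressing joints directly from a pooled feature vector keeps the example short; heatmap-based or model-fitting approaches are common alternatives for this task.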

Task 1.2: Social state estimation

Goal

The goal of this project is to provide humanoid robots (e.g., the Nadine robot) and virtual humans with a sense of social awareness, so that they can respond to social situations in a natural and human-like manner. For instance, humanoid robots and virtual humans should understand group dynamics so that they can take part in a conversation politely, e.g., by avoiding frequent interruptions of others, avoiding long silences, taking turns naturally, or making use of backchannels. In addition, we intend to integrate sociofeedback into smart glasses and 3D glasses, where feedback about speaking and conversational manners is displayed on the glasses in real time during conversations. Such a system would augment reality by providing guidance about the social situation at hand: the smart glasses play the role of a “social dashboard” that informs users about potentially inappropriate social behaviours (e.g., speaking mannerisms).
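
As a minimal, rule-based illustration of such cues, the sketch below flags interruptions and long silences from a list of speech segments of the form (speaker, start, end). The data format and thresholds are assumptions for illustration; a real system would operate on live speaker-diarisation output.

    # Minimal sketch of rule-based sociofeedback cues from a conversation timeline.
    # Each segment is (speaker_id, start_s, end_s); the thresholds are illustrative.

    def sociofeedback_cues(segments, silence_threshold=4.0):
        """Return simple cues ('interruption', 'long_silence') from speech segments.

        segments: list of (speaker, start, end) tuples sorted by start time.
        """
        cues = []
        for prev, curr in zip(segments, segments[1:]):
            prev_spk, prev_start, prev_end = prev
            curr_spk, curr_start, curr_end = curr
            if curr_spk != prev_spk and curr_start < prev_end:
                # Overlapping speech by a different speaker: flag as interruption.
                cues.append(("interruption", curr_spk, curr_start))
            elif curr_start - prev_end > silence_threshold:
                # Nobody spoke for a while: flag a long silence.
                cues.append(("long_silence", None, prev_end))
        return cues

    # Example: speaker B starts before A finishes, then a 6-second pause follows.
    timeline = [("A", 0.0, 5.0), ("B", 4.2, 8.0), ("A", 14.0, 16.0)]
    print(sociofeedback_cues(timeline))
    # [('interruption', 'B', 4.2), ('long_silence', None, 8.0)]

Cues like these could drive either the agent's own turn-taking behaviour or the real-time feedback shown on the glasses.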

Task 2.1: Affective Interaction and Memory

Goal

An autonomous virtual human is designed to communicate with users in a human-like way during human-computer interaction. Just as people differ greatly in their personalities, thoughts and past experiences, a virtual human should also show its uniqueness to users. Our goal is to personify the virtual human and endow it with an affective system and an episodic memory.

Future Plan

To build a computational episodic memory model for virtual companions, we first need a metric on episodes, so that we can measure the distance between two episodes. To improve the efficiency of episode retrieval, episodes need to be better organised; we will therefore develop an algorithm to cluster episodes into groups. During retrieval, the retrieval cue is first compared with the representatives of the episode groups, and then with the episodes in the winning group. To enhance the story-telling ability of the virtual companions, they need to be able to summarise their past experience and report it to the users; we will therefore develop an algorithm to generate a summary report from past episodes.
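
A minimal sketch of the two-stage retrieval described above, assuming episodes are already encoded as fixed-length feature vectors (the encoding itself is out of scope here), Euclidean distance serves as the episode metric, and k-means centroids act as the group representatives:

    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    episodes = rng.normal(size=(200, 16))      # 200 stored episodes, 16-d features

    # Offline step: cluster episodes into groups for faster retrieval.
    kmeans = KMeans(n_clusters=8, n_init=10, random_state=0).fit(episodes)

    def retrieve(cue, episodes, kmeans, top_k=3):
        """Compare the cue with the cluster representatives first, then only
        with the episodes inside the winning cluster."""
        winner = int(kmeans.predict(cue[None, :])[0])
        members = np.where(kmeans.labels_ == winner)[0]
        dists = np.linalg.norm(episodes[members] - cue, axis=1)
        return members[np.argsort(dists)[:top_k]]   # indices of closest episodes

    cue = rng.normal(size=16)                   # retrieval cue (e.g. current context)
    print(retrieve(cue, episodes, kmeans))

Comparing the cue against a handful of group representatives before searching inside a single group avoids a linear scan over all stored episodes as the memory grows.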


Virtual Human Nicole Demo