Research Areas 
Nadine - Social Robot

Nadine is one of the world's most realistic female humanoid social robots. Modelled on Prof. Nadia Magnenat Thalmann, she looks and acts remarkably lifelike, with a realistic human appearance, natural-looking skin and hair, and highly realistic hands. Nadine is a socially intelligent robot: she is friendly, returns greetings, makes eye contact, and remembers the conversations you have had with her. She can answer questions in several languages and show emotions in both her gestures and her face, depending on the content of the interaction. She recognises anybody she has previously met, remembers facts and events related to each person, and engages in flowing conversation. Nadine is also fitted with a personality, meaning her mood can sour depending on what you say to her. She has a total of 27 degrees of freedom (DOF) for facial expressions and upper-body movements.

Nadine can serve as a companion when nobody else is there. She can assist people with special needs, read stories, show images, set up Skype sessions, send emails, and communicate with the family. She is part of the assistive technology that society urgently needs, since a full-time social worker cannot be afforded for every person with special needs, and she can play the role of a personal, private coach who is always available.

 


Professor Nadia Magnenat Thalmann and Nadine
Platform

Nadine’s platform is implemented as a classic Perception-Decision-Action architecture. The perception layer uses a Microsoft Kinect V2 and a microphone, and provides face recognition, gesture recognition, and some understanding of social situations. The decision layer includes emotion and memory models as well as social attention. Finally, the action layer consists of a dedicated robot controller that handles emotional expression, lip synchronisation, and online gaze generation.

Specifications

Degrees of freedom: 27 (7 in the head, 3 in the neck, 3 in the body, 7 in each arm)
Dimensions: approx. W551 × D886 × H1315 mm
Weight: approx. 35 kg
Rated input voltage: AC 100-240 V
Power consumption: approx. 500 W

Publications

 

  • A. Beck, Z. Zhang and N. Magnenat Thalmann, Motion Control for Social Behaviors, Context Aware Human-Robot and Human-Agent Interaction, Springer International Publishing, 237-256, 2015
  • Z.P. Bian, J. Hou, L.P. Chau and N. Magnenat Thalmann, Fall Detection Based on Body Part Tracking Using a Depth Camera, IEEE Journal of Biomedical and Health Informatics, Vol. 19, No. 2, Pp. 430-439, 2015
  • H. Liang, J. Yuan, D. Thalmann and N. Magnenat Thalmann, AR in Hand: Egocentric Palm Pose Tracking and Gesture Recognition for Augmented Reality Applications, ACM Multimedia Conference 2015 (ACMMM 2015), Brisbane, Australia, 2015
  • H. Liang, J. Yuan and D. Thalmann, Egocentric Hand Pose Estimation and Distance Recovery in a Single RGB Image, IEEE International Conference on Multimedia and Expo (ICME 2015), Italy, 2015
  • J. Ren, X. Jiang and J. Yuan, Quantized Fuzzy LBP for Face Recognition, 40th IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) 2015, Brisbane, Australia, 2015
  • J. Ren, X. Jiang and J. Yuan, Learning LBP Structure by Maximizing the Conditional Mutual Information, Pattern Recognition, Vol. 48, Issue 10, Pp. 3180-3190, 2015
  • J. Ren, X. Jiang and J. Yuan, A Chi-Squared-Transformed Subspace of LBP Histogram for Visual Recognition, IEEE Transactions on Image Processing, Vol. 24, Issue 6, Pp. 1893-1904, 2015
  • K. Wu and A.W.H. Khong, Sound Source Localization and Tracking, Context Aware Human-Robot and Human-Agent Interaction, Springer International Publishing, 55-78, 2015
  • Y. Xiao, et al, Body Movement Analysis and Recognition, Context Aware Human-Robot and Human-Agent Interaction, Springer International Publishing, 31-53, 2015
  • Z. Yumak and N. Magnenat Thalmann, Multimodal and Multi-party Social Interactions, Context Aware Human-Robot and Human-Agent Interaction, Springer International Publishing, 275-298, 2015
  • J. Zhang, J. Zheng and N. Magnenat Thalmann, PCMD: Personality-Characterized Mood Dynamics Model Towards Personalized Virtual Characters, Computer Animation and Virtual Worlds, 26(3-4): 237-245, 2015
  • J. Zhang, J. Zheng and N. Magnenat Thalmann, Modeling Personality, Mood, and Emotions, Context Aware Human-Robot and Human-Agent Interaction, Springer International Publishing, 211-236, 2015
  • Z. Zhang, A. Beck, and N. Magnenat Thalmann, Human-Like Behavior Generation Based on Head-Arms Model for Robot Tracking External Targets and Body Parts, IEEE Transactions on Cybernetics, Vol. 45, No. 8, Pp. 1390-1400, 2015
  • H. Liang, J. Yuan and D. Thalmann, Improved Hand Pose Estimation via Multimodal Prediction Fusion, Computer Graphics International (CGI 2014), Sydney, Australia, 2014
  • H. Liang, J. Yuan and D. Thalmann, Resolving Ambiguous Hand Pose Predictions by Exploiting Part Correlations, IEEE Transactions on Circuits and Systems for Video Technology, Pp. 1, Issue 99, 2014
  • H. Liang and J. Yuan, Hand Parsing and Gesture Recognition with a Commodity Depth Camera, Computer Vision and Machine Learning with RGB-D Sensors, Springer, Pp. 239-265, 2014
  • H. Liang, J. Yuan and D. Thalmann, Parsing the Hand in Depth Images, IEEE Transactions on Multimedia, Pp. 1241-1253, 2014
  • J. Ren, X. Jiang, J. Yuan and G. Wang, Optimizing LBP Structure For Visual Recognition Using Binary Quadratic Programming, IEEE Signal Processing Letters, Pp. 1346-1350, 2014
  • Y. Xiao, Z. Zhang, A. Beck, J. Yuan and D. Thalmann, Human-Robot Interaction by Understanding Upper Body Gestures, MIT Press Journals - Presence: Teleoperators and Virtual Environments, Vol. 23, No. 2, Pp. 133-154, 2014
  • Z. Yumak, J. Ren, N. Magnenat Thalmann, and J. Yuan, Modelling Multi-Party Interactions among Virtual Characters, Robots, and Humans, MIT Press Journals - Presence: Teleoperators and Virtual Environments, Vol. 23, No. 2, Pp. 172-190, 2014
  • Z. Yumak, J. Ren, N. Magnenat Thalmann and J. Yuan, Tracking and Fusion for Multiparty Interaction with a Virtual Character and a Social Robot, SIGGRAPH Asia 2014 Autonomous Virtual Humans and Social Robot for Telepresence, Shenzhen, China, 2014
  • H. Liang, J. Yuan and D. Thalmann, Model-based Hand Pose Estimation via Spatial-temporal Hand Parsing and 3D Fingertip Localization, The Visual Computer: International Journal of Computer Graphics, Vol. 29, Issue 6-8, Pp. 837-848, 2013
  • G. D. Liu, S. Choudhary, J.Z. Zhang and N. Magnenat Thalmann, Let's Keep in Touch Online: A Facebook Aware Virtual Human Interface, The Visual Computer, Vol. 29, Issue 9, Pp. 871-881, 2013
  • Rajan S. Rashobh and Andy W. H. Khong, A Multichannel Time-domain Subspace Approach Exploiting Multiple Time-delays for Acoustic Channel Equalization, The 38th IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2013
  • J. Ren, X. D. Jiang and J. Yuan, Relaxed Local Ternary Pattern for Face Recognition, IEEE International Conference on Image Processing (ICIP), Melbourne, Australia, 2013
  • J. Ren, X. Jiang and J. Yuan, Noise-Resistant Local Binary Pattern with an Embedded Error-correction Mechanism, IEEE Transactions on Image Processing, Vol. 22, Issue 10, Pp. 4049-4060, 2013
  • Z. Yumak and N. Magnenat Thalmann, Multi-party Interaction with a Virtual Character and Human-like Robot, The 19th ACM Symposium on Virtual Reality Software and Technology (VRST2013), Singapore, 2013
  • S. Dalibard, N. Magnenat Thalmann and D. Thalmann, Interactive Design of Expressive Locomotion Controllers for Humanoid Robots, The 21st IEEE International Symposium on Robot and Human Interactive Communication (2012 IEEE RO-MAN), Paris, France, 2012
  • H. Liang, J. Yuan and D. Thalmann, Hand Pose Estimation by Combining Fingertip Tracking and Articulated ICP, 11th ACM SIGGRAPH International Conference on Virtual-Reality Continuum and its Applications in Industry (VRCAI 2012), Singapore, 2012
  • H. Liang, J. Yuan and D. Thalmann, 3D Fingertip and Palm Tracking in Depth Image Sequences, ACM International Conference on Multimedia 2012, Nara, Japan, 2012