Research Areas


BeingTogether Centre

The BeingTogether Centre started on 1 October 2016 with SGD 6,000,000 in funding from the National Research Foundation, Singapore. The project duration is four years. The host institution is Nanyang Technological University (NTU), Singapore.

This research is co-directed by Prof Nadia Magnenat Thalmann from NTU and Prof Henry Fuchs from the University of North Carolina (UNC) at Chapel Hill.

The overall vision of this research is to enable real and virtual humans, social robots, and real and virtual objects to interact simultaneously in a space that is a geometrically consistent blend of different remote real spaces and synthetic environments.

BeingTogether means (1) occupying the same space; and (2) everyone being aware of everyone else and able to interact with each of them. Achieving this goal raises two key issues: building the blended space, and creating the mixed society that lives in this simulated space.

Project 1: A Fusion of Worlds – Immersive Telepresence

This project will achieve a geometrically consistent blend of different remote real spaces and synthetic environments by addressing three kinds of challenges:

(1) Real-time high fidelity scene acquisition;

(2) Practical, unencumbering wearable displays; and

(3) Immersive fusion across sites.

Project 2: Mixed Society of People, Virtual Humans and Social Robots in TelePresence

This project will build low-cost robots and achieve the smooth integration of a mixed society in the blended space, addressing four main challenges:

(1) From any body and face, create the outer shell of a humanoid robot through 3D virtual modelling and 3D printing;

(2) Each partner (real, virtual, telepresent, robot, autonomous or not) should be characterized by affects (including personality and emotion), memory, and interactions with gestures, facial expressions and dialogue;

(3) Each partner should be aware of all other partners and the environment, and able to collaborate and interact; and

(4) Each partner (except virtual humans) should be able to interact physically with the environment, especially reaching, grasping and exchanging objects.

The two projects culminate in two concrete scenarios that integrate the technical innovations of both. The first demonstrator features the humanoid robot Nadine working as a real receptionist, able to communicate with people elsewhere. In the second demonstrator, children will enjoy the companionship of a social robot to learn and discover new things together. The social robot will also monitor the children and communicate directly with each child's parents. The parents will be able to guide the robot remotely and interact with the children whenever they wish.


  • L. Ge, Y. Cai, J. Weng and J. Yuan, Hand PointNet: 3D Hand Pose Estimation using Point Sets, IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2018), Salt Lake City, Utah, USA, June 18-22, 2018

  • X. Xia, Y. Guan, A. State, T.-J. Cham and H. Fuchs, Towards Efficient 3D Calibration for Different Types of Multi-view Autostereoscopic 3D Displays, Proceedings of Computer Graphics International 2018, Bintan, Indonesia, June 11-14, 2018

  • Z. Fang, J. Yuan and N. Magnenat Thalmann, Understanding Human-Object Interaction in RGB-D videos for Human Robot Interaction, Proceedings of Computer Graphics International 2018, Bintan, Indonesia, June 11-14, 2018

  • L. Tian, N. Magnenat Thalmann, D. Thalmann and J. Zheng, A Methodology to Model and Simulate Customized Realistic Anthropomorphic Robotic Hands, Proceedings of Computer Graphics International 2018, Bintan, Indonesia, June 11-14, 2018

  • L. Ge, H. Liang, J. Yuan and D. Thalmann, Robust 3D Hand Pose Estimation From Single Depth Images Using Multi-View CNNs, IEEE Transactions on Image Processing, Vol. 27, Issue 9, pp. 4422-4436, DOI: 10.1109/TIP.2018.2834824, May 10, 2018

  • J. Zhang, J. Zheng and N. Magnenat Thalmann, MCAEM: mixed-correlation analysis-based episodic memory for companion–user interactions, The Visual Computer, Vol. 34, Issue 6-8, pp. 1129-1141, DOI: 10.1007/s00371-018-1537-3, May 10, 2018

  • Y. Tahir, J. Dauwels, D. Thalmann and N. Magnenat Thalmann, A User Study of a Humanoid Robot as a Social Mediator for Two-Person Conversations, International Journal of Social Robotics, pp. 1-14, DOI: 10.1007/s12369-018-0478-3, April 25, 2018

  • L. Ge, H. Liang, J. Yuan and D. Thalmann, Real-time 3D Hand Pose Estimation with 3D Convolutional Neural Networks, IEEE Transactions on Pattern Analysis and Machine Intelligence, pp. 1-1, DOI: 10.1109/TPAMI.2018.2827052, April 16, 2018

  • D. Chakraborty, Z. Yang, Y. Tahir, T. Maszczyk, J. Dauwels, N. Magnenat Thalmann, J. Zheng, Y. Maniam, N. Amirah, B.-L. Tan and J. Lee, Prediction of negative symptoms of schizophrenia from emotion related low-level speech signals, IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) 2018, Calgary, Alberta, Canada, April 15-20, 2018

  • T. Deng, J. Cai, T. J. Cham and J. Zheng, Multiple consumer-grade depth camera registration using everyday objects, Image and Vision Computing, Vol. 62, pp. 1-7, June 2017

  • C. Y. Wong and G. Seet, Workload, awareness and automation in multiple-robot supervision, International Journal of Advanced Robotic Systems, Vol. 14, Issue 3, DOI: 10.1177/1729881417710463, June 2017

  • P. K. Jayaraman, C. W. Fu, J. Zheng, X. Liu and T. T. Wong, Globally Consistent Wrinkle-Aware Shading of Line Drawings, IEEE Transactions on Visualization and Computer Graphics, Vol. PP, Issue 99, pp. 1-1, DOI: 10.1109/TVCG.2017.2705182, May 2017

  • D. Thalmann, Sensors and Actuators for HCI and VR: A Few Case Studies, Frontiers in Electronic Technologies, Springer, pp. 65-83, April 2017

  • N. Magnenat Thalmann, L. Tian and F. Yao, Nadine: A Social Robot that Can Localize Objects and Grasp Them in a Human Way, Frontiers in Electronic Technologies, Springer, pp. 1-23, April 2017

  • Y. Cai, R. Chiew, Z.T. Nay, C. Indhumathi and L. Huang, Design and development of VR learning environments for children with ASD, Interactive Learning Environments, Vol. 25, Issue 3, pp. 1-12, DOI: 10.1080/10494820.2017.1282877, March 2017

  • D. Xu, Q. Duan, J. Zheng, J. Zhang, J. Cai and T.J. Cham, Shading-based Surface Detail Recovery under General Unknown Illumination, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. PP, Issue 99, pp. 1-1, DOI: 10.1109/TPAMI.2017.2671458, February 2017