3D face tracking and 3D face avatars

3D face reconstruction from a single 2D image

We’re researching how to combine state-of-the-art deep learning with traditional, 3D model-based computer vision to achieve real-time face tracking, 3D avatar creation and face analysis from single images and monocular video streams.

This research is led by Dr. Patrik Huber from 4dface Ltd, in collaboration with the ViSiR research group.

The 4D Face Model (4DFM) base mesh

As part of this effort, Dr. Huber and 4dface are building a state-of-the-art 3D morphable model, the 4D Face Model (4DFM). It is a full-head model, enabling face analysis across the whole head, including the ears, neck and forehead. We are also building the model from a large variety of subjects and, especially, expressions, so that it can accurately model speech, asymmetric lip movements, and a wide range of expressions from the Facial Action Coding System (FACS). Together with the ViSiR research group, we use the model on various robots for human-computer interaction.
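To give a concrete picture of what a 3D morphable model represents, the sketch below shows the usual linear formulation: a face instance is the mean mesh plus a weighted combination of identity and expression basis vectors. The vertex count, basis sizes and random placeholder data here are assumptions for illustration only and do not reflect the actual 4DFM data.

```python
import numpy as np

# Illustrative linear 3D morphable model with an expression basis, in the spirit
# of full-head models like the 4DFM. All dimensions and data below are
# hypothetical placeholders, not the real model.

num_vertices = 3448       # hypothetical vertex count of the full-head mesh
num_id_coeffs = 63        # hypothetical number of identity (shape) components
num_expr_coeffs = 29      # hypothetical number of expression components

rng = np.random.default_rng(0)
mean_mesh = rng.standard_normal(3 * num_vertices)                     # flattened (x, y, z) per vertex
identity_basis = rng.standard_normal((3 * num_vertices, num_id_coeffs))
expression_basis = rng.standard_normal((3 * num_vertices, num_expr_coeffs))

def generate_face(identity_coeffs, expression_coeffs):
    """Return a face instance: mean mesh + identity offsets + expression offsets."""
    return (mean_mesh
            + identity_basis @ identity_coeffs
            + expression_basis @ expression_coeffs)

# A neutral face of one identity, and the same identity with an expression applied:
alpha = rng.standard_normal(num_id_coeffs)
neutral = generate_face(alpha, np.zeros(num_expr_coeffs))
expressive = generate_face(alpha, rng.standard_normal(num_expr_coeffs))
```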

Second, we’re actively working on improving our state-of-the-art 3D face fitting to single images, RGB videos, and real-time live streams. As part of this, we’re exploring ways to combine deep learning with classical, 3D model-based computer vision; a rough sketch of this idea follows below. This research is then used to provide real-time face tracking and face analysis on ViSiR’s robots in human-computer interaction scenarios.
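As an illustration of how a learned detector and a classical model-based fit can be combined, the sketch below assumes a CNN has already supplied 2D landmark detections and recovers the model’s shape coefficients by minimising the 2D reprojection error under a simple scaled orthographic camera. All names, dimensions and the synthetic data are assumptions for illustration; a real fitting pipeline would also estimate pose, camera and expression parameters.

```python
import numpy as np
from scipy.optimize import least_squares

# Sketch of model-based fitting: a learned detector supplies 2D landmark
# positions, and the 3D model's shape coefficients are recovered by minimising
# the reprojection error of the corresponding model vertices. The landmarks and
# basis below are synthetic placeholders.

num_landmarks = 68
num_coeffs = 20
rng = np.random.default_rng(1)

mean_landmarks = rng.standard_normal((num_landmarks, 3))           # 3D positions of landmark vertices
basis = rng.standard_normal((num_landmarks, 3, num_coeffs)) * 0.1  # per-landmark shape basis

def model_landmarks(coeffs):
    """3D landmark positions for a given set of shape coefficients."""
    return mean_landmarks + basis @ coeffs

def project(points_3d, scale=100.0):
    """Scaled orthographic projection onto the image plane (camera assumed fixed)."""
    return scale * points_3d[:, :2]

# Synthetic "detections": project a ground-truth face and treat the result as CNN output.
true_coeffs = rng.standard_normal(num_coeffs)
detected_2d = project(model_landmarks(true_coeffs))

def residuals(coeffs):
    """2D reprojection error between model landmarks and detected landmarks."""
    return (project(model_landmarks(coeffs)) - detected_2d).ravel()

result = least_squares(residuals, x0=np.zeros(num_coeffs))
print("max coefficient error:", np.abs(result.x - true_coeffs).max())
```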

More details and a list of our most recent related publications can be found on Patrik’s PEOPLE page.

Facial landmark detection results for a group of people

Partners: