Human Performance Capture

Performance capture brings digital characters to life by capturing the geometry and motion of real human performances.

Short Summary

Our research covers the full pipeline for data-driven modeling and animation of virtual avatars: real-time markerless acquisition, tracking in non-rigid partial data, deformable shape completion, motion retargeting, and automatic character rigging.


Facial Animation

Facial performance capture and animation is particularly challenging: because humans are so familiar with faces, viewers detect even subtle errors. In contrast to traditional marker-based motion capture, we therefore employ real-time 3D acquisition devices to capture facial expressions at high spatial and temporal resolution, and we investigate accurate non-rigid tracking algorithms to establish full correspondence between frames. We focus our research on real-time performance capture, as it enables exciting new applications such as live TV shows and interactive games. The real-time constraint poses additional challenges, which we tackle by employing a facial rig (a statistical model or blendshapes) to significantly reduce the tracking complexity. Our research prototype is one of the first systems to show that accurate real-time performance capture and facial retargeting are feasible.

One key challenge for performance capture is art-directability, i.e., allowing subsequent modification of the animations in a post-processing step. This is only possible if facial animations are described by semantically meaningful expressions such as mouth open, eyes closed, etc. Current approaches therefore require skilled artists to build an accurate facial rig, for instance using blendshapes. We have developed an automated rigging method that, given a set of example expressions, automatically builds a semantically meaningful facial rig.
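To make the blendshape idea concrete, here is a minimal sketch (not our actual system) of how a blendshape rig evaluates a facial expression: the tracked face is expressed as the neutral pose plus a weighted sum of expression offsets, so tracking reduces to estimating a small weight vector instead of thousands of vertex positions. The function name and array layout are illustrative assumptions.

```python
import numpy as np

def evaluate_blendshape_rig(neutral, blendshapes, weights):
    """Evaluate a linear blendshape rig.

    neutral:     (V, 3) array of neutral-pose vertex positions
    blendshapes: (K, V, 3) array of expression targets
                 (e.g. "mouth open", "eyes closed")
    weights:     (K,) array of expression weights, typically in [0, 1]
    """
    # Per-expression vertex offsets relative to the neutral pose.
    deltas = blendshapes - neutral
    # Weighted sum of offsets added back onto the neutral face.
    return neutral + np.tensordot(weights, deltas, axes=1)
```

Because the weights correspond to named expressions, the same vector that drives tracking can be edited by an artist in post-processing or retargeted to a different character's rig.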


Publications


Dynamic 3D Avatar Creation from Hand-held Video Input

Alexandru Eugen Ichim, Sofien Bouaziz, Mark Pauly
ACM Transactions on Graphics (Proceedings of SIGGRAPH) 2015

Webpage Paper


Online Modeling For Realtime Facial Animation

Sofien Bouaziz, Yangang Wang, Mark Pauly
ACM Transactions on Graphics (Proceedings of SIGGRAPH) 2013

Webpage Paper


Realtime Performance-Based Facial Animation

Thibaut Weise, Sofien Bouaziz, Hao Li, Mark Pauly
ACM Transactions on Graphics (Proceedings of SIGGRAPH) 2011

Webpage Paper


Example-Based Facial Rigging

Hao Li, Thibaut Weise, Mark Pauly
ACM Transactions on Graphics (Proceedings of SIGGRAPH) 2010

Webpage Paper


Face/Off: Live Facial Puppetry

Thibaut Weise, Hao Li, Luc Van Gool, Mark Pauly
ACM SIGGRAPH/Eurographics Symposium on Computer Animation 2009 (Best Paper Award)

Paper



Animation Reconstruction

The traditional data-driven approach to animating a character consists of building a geometric template model (e.g., using 3D scanning for static objects) and inferring its motion with a MoCap system. Instead of following the common procedure of capturing motion (typically at lower resolution than the geometry) as a separate process, we investigate markerless performance capture methods that require only a loose geometric template, or no template at all. In contrast to other methods, our approach captures and tracks the motion of dynamic surfaces at the same resolution as the geometry and therefore fully exploits recent progress in real-time scanning technology. A key motivation for our dense space-time reconstruction is the ability to capture highly complex deformations, such as faces, human performers wearing garments, and many other dynamic objects.

We address several unsolved problems, including drift-free tracking through time-varying, topology-changing partial data, and finding correspondences between humans of different shapes and poses. We explore this ambitious endeavor on several fronts, working with data captured from a single-view real-time structured-light scanner as well as multi-view data acquired with photometric stereo techniques. At the core of our findings is a fully automatic, robust, and accurate non-rigid registration algorithm that establishes correspondences through partial data densely captured in space and time. This robust alignment technique has enabled several applications, such as reconstructing a deforming model from a single view, automatic shrink-wrapping of generic face models to custom scans, and unsupervised pose alignment of human bodies.
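The core loop of such a non-rigid registration can be illustrated with a deliberately simplified toy sketch (it is not our published algorithm): alternate between finding closest-point correspondences and applying a regularized displacement update, where the regularizer here is just a blend with the mean displacement standing in for the smoothness and deformation priors used in real systems. All names and parameters below are illustrative assumptions.

```python
import numpy as np

def nonrigid_register(source, target, iters=10, step=0.5, smooth=0.5):
    """Toy non-rigid registration sketch.

    Alternates closest-point correspondence search with a smoothed
    (regularized) displacement update.

    source, target: (N, D) and (M, D) point arrays.
    Returns a deformed copy of `source` moved toward `target`.
    """
    pts = source.copy()
    for _ in range(iters):
        # Brute-force closest-point correspondences.
        d2 = ((pts[:, None, :] - target[None, :, :]) ** 2).sum(-1)
        corr = target[d2.argmin(axis=1)]
        disp = corr - pts
        # Regularize: blend each displacement with the mean displacement,
        # a crude stand-in for the smoothness terms of real methods.
        disp = (1 - smooth) * disp + smooth * disp.mean(axis=0)
        pts += step * disp
    return pts
```

Real systems replace the brute-force search with spatial acceleration structures and the mean-blend with proper deformation-graph or as-rigid-as-possible regularization, but the alternation between correspondence estimation and regularized alignment is the same.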


Publications


Dynamic 2D/3D Registration

Sofien Bouaziz, Andrea Tagliasacchi, Mark Pauly
Eurographics 2014 Tutorial Notes

Webpage Paper Code


Robust Single-View Geometry and Motion Reconstruction

Hao Li, Bart Adams, Leonidas Guibas, Mark Pauly
ACM Transactions on Graphics (Proceedings of SIGGRAPH Asia) 2009

Webpage Paper


Global Correspondence Optimization for Non-Rigid Registration of Depth Scans

Hao Li, Robert W. Sumner, Mark Pauly
Computer Graphics Forum (Proceedings of SGP) 2008

Webpage Paper


 

Faceshift is an LGG/EPFL spin-off based in Technopark Zurich. Its software analyzes the facial motion of an actor and describes it as a mixture of basic expressions, plus head orientation and gaze. This description is then used to animate virtual characters for movie or game production. The software combines real-time tracking with high-quality offline post-processing in a single, convenient application.

Webpage