DECA
Learning an Animatable Detailed 3D Face Model from In-The-Wild Images
Yao Feng*, Haiwen Feng*, Michael J. Black, and Timo Bolkart (*authors contributed equally)
SIGGRAPH 2021
Input image, detailed reconstruction, animation with various poses & expressions
Abstract
While current monocular 3D face reconstruction methods can recover fine geometric details, they suffer from several limitations. Some methods produce faces that cannot be realistically animated because they do not model how wrinkles vary with expression. Other methods are trained on high-quality face scans and do not generalize well to in-the-wild images. We present the first approach that regresses 3D face shape and animatable details that are specific to an individual but change with expression. Our model, DECA (Detailed Expression Capture and Animation), is trained to robustly produce a UV displacement map from a low-dimensional latent representation that consists of person-specific detail parameters and generic expression parameters, while a regressor is trained to predict detail, shape, albedo, expression, pose and illumination parameters from a single image. To enable this, we introduce a novel detail-consistency loss that disentangles person-specific details from expression-dependent wrinkles. This disentanglement allows us to synthesize realistic person-specific wrinkles by controlling expression parameters while keeping person-specific details unchanged. DECA is learned from in-the-wild images with no paired 3D supervision and achieves state-of-the-art shape reconstruction accuracy on two benchmarks. Qualitative results on in-the-wild data demonstrate DECA's robustness and its ability to disentangle identity- and expression-dependent details, enabling animation of reconstructed faces. The model and code are publicly available.
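The core idea of the detail-consistency loss can be illustrated with a toy sketch: for two images of the same person, swapping their person-specific detail codes should not change the reconstruction, because wrinkles should be determined by the person (detail code) and the expression parameters, not by the image the code came from. The decoder below is a hypothetical stand-in, not DECA's actual network; in the paper the consistency is enforced photometrically on rendered images.

```python
import numpy as np

def detail_decoder(delta, psi):
    # Hypothetical stand-in for DECA's detail decoder F_d: maps a
    # person-specific detail code (delta) and expression/jaw-pose
    # parameters (psi) to a UV displacement map.
    w = np.tanh(np.outer(delta, psi))      # toy interaction of the two codes
    return w.mean() * np.ones((8, 8))      # dummy 8x8 UV displacement map

def detail_consistency_loss(delta_a, delta_b, psi_a, psi_b):
    # Images A and B show the SAME person. Decoding A's expression with
    # B's detail code should reproduce A's displacement map, since both
    # detail codes describe the same identity.
    d_a_swapped = detail_decoder(delta_b, psi_a)   # B's details, A's expression
    d_a_orig    = detail_decoder(delta_a, psi_a)   # A's details, A's expression
    return np.mean((d_a_swapped - d_a_orig) ** 2)
```

Minimizing this term pushes the detail codes of the same identity together, which is what disentangles person-specific detail from expression-dependent wrinkling.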
Video
More Information
- pdf preprint
- supplemental
- code
- DECA Project page at MPI:IS
- For questions, please contact deca@tue.mpg.de
News
- 08/21: Released DECA training code
- 04/21: DECA accepted to SIGGRAPH 2021 (camera ready paper)
- 12/20: Released DECA inference code
- 12/20: Released DECA paper on arXiv
Referencing DECA
@article{Feng:SIGGRAPH:2021,
  title = {Learning an Animatable Detailed {3D} Face Model from In-the-Wild Images},
  author = {Feng, Yao and Feng, Haiwen and Black, Michael J. and Bolkart, Timo},
  journal = {ACM Transactions on Graphics (TOG), Proc. SIGGRAPH},
  volume = {40},
  number = {4},
  pages = {88:1--88:13},
  month = aug,
  year = {2021},
  month_numeric = {8}
}