Deep Graphics Encoder for Real-Time Video Makeup Synthesis from Example
Examples of lipstick transfer from example images using our proposed method.
Abstract:
While virtual makeup try-on is now widespread, parametrizing a computer graphics rendering engine to synthesize images of a given cosmetics product remains a challenging task. In this paper, we introduce an inverse computer graphics method for automatic makeup synthesis from a reference image, by learning a model that maps an example portrait image with makeup to the space of rendering parameters. This method can be used by artists to automatically create realistic virtual cosmetics image samples, or by consumers to virtually try on a makeup look extracted from their favourite reference image.
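As a rough illustration of this idea, the sketch below shows how such an inverse graphics encoder could be used at inference time: a reference image is mapped to a rendering parameter vector, which is then reused to render the same makeup on every frame of a video. The encoder, renderer, and tensor shapes are hypothetical stand-ins written in PyTorch for illustration, not the released implementation.

# Hypothetical inference sketch (PyTorch); names and shapes are illustrative only.
import torch

@torch.no_grad()
def extract_makeup_parameters(encoder: torch.nn.Module, reference_image: torch.Tensor) -> torch.Tensor:
    """Map a (1, 3, H, W) reference portrait with makeup to a graphics parameter vector."""
    encoder.eval()
    return encoder(reference_image)

def apply_to_video(renderer, frames, params):
    """Feed the estimated parameters to the rendering engine for each video frame."""
    return [renderer(frame, params) for frame in frames]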
Video examples:
Video presentation:
Available on the website of the AI for Content Creation CVPR Workshop
Paper:
Paper: CVPR Workshop proceedings
Supplementary Materials: CVPR Workshop supplementary
arXiv: https://arxiv.org/pdf/2105.06407.pdf
Bibtex:
@InProceedings{Kips_2021_CVPR,
author = {Kips, Robin and Jiang, Ruowei and Ba, Sileye and Phung, Edmund and Aarabi, Parham and Gori, Pietro and Perrot, Matthieu and Bloch, Isabelle},
title = {Deep Graphics Encoder for Real-Time Video Makeup Synthesis From Example},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
month = {June},
year = {2021},
pages = {3889-3893}
}
Model architecture:
Left: training procedure of our model. We sample a graphics parameter vector g_i and render a corresponding image using a renderer R and a random source image X_i. Then, the inverse graphics encoder E is trained to map the rendered image back to the space of graphics parameters with minimum error. Right: inference pipeline. A reference image X_ref is passed to the inverse graphics encoder to estimate the corresponding makeup graphics parameters. This parameter code can then be used as input to the rendering engine to render the reference makeup on videos in real time. To facilitate training and increase the proportion of relevant pixels in the image, E is trained on crops of the eyes and lips.
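A minimal training sketch of this procedure is given below, assuming the renderer R is only used offline to generate synthetic (image, parameter) pairs, as described above. The crop helper, parameter sampling, and loss are hypothetical stand-ins, not the authors' released code.

# Minimal training sketch (PyTorch); all names are illustrative.
import torch
import torch.nn.functional as F

def crop_eyes_and_lips(image):
    """Placeholder for the eye/lip crop step; a real pipeline would rely on facial landmarks."""
    _, _, h, w = image.shape
    return image[:, :, h // 2:, w // 4: 3 * w // 4]  # rough lower-face crop as a stand-in

def train_inverse_graphics_encoder(encoder, renderer, source_images, param_dim, num_steps=10000, lr=1e-4):
    optimizer = torch.optim.Adam(encoder.parameters(), lr=lr)
    for _ in range(num_steps):
        g = torch.rand(1, param_dim)                                        # sample graphics parameters g_i
        x = source_images[torch.randint(len(source_images), (1,)).item()]   # random source portrait X_i
        rendered = renderer(x, g)                                           # render the sampled makeup with R
        g_hat = encoder(crop_eyes_and_lips(rendered))                       # estimate parameters from the crop
        loss = F.mse_loss(g_hat, g)                                         # minimize the parameter error
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return encoder

At inference, a single forward pass of the trained encoder on the reference crop yields the parameter vector that is passed to the rendering engine for every video frame.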
Lipstick and eye shadow synthesis from example images
Qualitative evaluation
Qualitative comparison of lipstick and eye-shadow synthesis against state-of-the-art makeup transfer methods. Our method reproduces realistic, high-resolution renderings for makeup with various colors and textures. The eye-shadow application zone and intensity are not part of the estimated graphics parameters, but are set by the user at rendering time according to their preferences.