CA-GAN: Weakly Supervised Color Aware GAN for Controllable Makeup Transfer

Robin Kips, Pietro Gori, Matthieu Perrot, Isabelle Bloch



While existing makeup style transfer models perform image synthesis whose results cannot be explicitly controlled, the ability to modify makeup color continuously is a desirable property for virtual try-on applications. We propose a new formulation of the makeup style transfer task, with the objective of learning a color-controllable makeup style synthesis. We introduce CA-GAN, a generative model that learns to modify the color of specific objects (e.g. lips or eyes) in an image to an arbitrary target color while preserving the background. Since color labels are rare and costly to acquire, our method leverages weakly supervised learning for conditional GANs. This makes it possible to learn a controllable synthesis of complex objects, requiring only a weak proxy of the image attribute we wish to modify. Finally, we present for the first time a quantitative analysis of makeup style transfer and color control performance.

Video presentation:

Our presentation at the AIM20 workshop, in conjunction with ECCV.

Video examples:


Paper: ECCV Workshop proceedings
ArXiv:
Supplementary Materials: ECCV Workshop supplementary

R. Kips, P. Gori, M. Perrot, I. Bloch
CA-GAN: Weakly Supervised Color Aware GAN for Controllable Makeup Transfer
AIM20 (ECCV20 Workshop) [arXiv] [BibTeX] [Supplementary Materials]

Model Architecture:

The training procedure of our CA-GAN model. First (a), the generator G synthesizes an image from a source image and a target makeup color. Second (b), the discriminator D estimates the makeup color, the skin color, and a real/fake classification from the generated image; these are used to compute the color regression loss L_color, the background consistency loss L_bg, and the adversarial loss L_adv, respectively. Third (c), the source image is reconstructed from the generated one, using its original makeup color as the target; the reconstruction is used to compute the cycle consistency loss L_cycle.
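The three steps above can be sketched as a single loss-computation routine. The following is a minimal PyTorch illustration, not the authors' code: the toy Generator and Discriminator modules and all names are hypothetical stand-ins, and the choice of MSE/L1/BCE for the individual losses is an assumption for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Generator(nn.Module):
    """Toy stand-in for G: maps (source image, target color) -> image.
    The target color is broadcast as three extra input channels."""
    def __init__(self):
        super().__init__()
        self.net = nn.Conv2d(3 + 3, 3, kernel_size=3, padding=1)

    def forward(self, img, color):
        b, _, h, w = img.shape
        color_map = color.view(b, 3, 1, 1).expand(b, 3, h, w)
        return torch.tanh(self.net(torch.cat([img, color_map], dim=1)))

class Discriminator(nn.Module):
    """Toy stand-in for D: one feature trunk, three heads
    (real/fake score, estimated makeup color, estimated skin color)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 8, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.adv_head = nn.Linear(8, 1)
        self.makeup_head = nn.Linear(8, 3)
        self.skin_head = nn.Linear(8, 3)

    def forward(self, img):
        f = self.features(img)
        return self.adv_head(f), self.makeup_head(f), self.skin_head(f)

def ca_gan_losses(G, D, src, src_color, target_color):
    # (a) Generate an image with the target makeup color.
    fake = G(src, target_color)
    # (b) Discriminator heads on the generated image.
    adv, makeup_c, skin_c = D(fake)
    _, _, skin_src = D(src)
    l_color = F.mse_loss(makeup_c, target_color)          # L_color
    l_bg = F.mse_loss(skin_c, skin_src.detach())          # L_bg: keep skin color
    l_adv = F.binary_cross_entropy_with_logits(
        adv, torch.ones_like(adv))                        # L_adv: fool D
    # (c) Cycle: reconstruct the source with its original makeup color.
    rec = G(fake, src_color)
    l_cycle = F.l1_loss(rec, src)                         # L_cycle
    return l_color, l_bg, l_adv, l_cycle

# Usage with dummy data (batch of two 32x32 images).
src = torch.rand(2, 3, 32, 32)
src_color = torch.rand(2, 3)
target_color = torch.rand(2, 3)
losses = ca_gan_losses(Generator(), Discriminator(), src, src_color, target_color)
```

In a real training loop these losses would be weighted, summed, and minimized for G while the adversarial and regression heads of D are trained on the opposite objective.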


Modification of makeup color along each dimension of the CIE Lab color space, using images from our social media dataset. The color patch on the bottom right of each image illustrates the target color passed to the model. Our approach generalizes to lip and eye images with various makeup textures and facial poses.
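For reference, the CIE Lab space used above separates lightness (L) from the two chromatic axes (a: green–red, b: blue–yellow), which is what makes per-dimension color edits perceptually meaningful. The following is the standard sRGB-to-Lab conversion (D65 white point), not the authors' implementation; the example color is illustrative.

```python
def srgb_to_lab(r, g, b):
    """Convert an sRGB color (components in [0, 1]) to CIE Lab (D65)."""
    # 1. sRGB gamma expansion to linear RGB.
    def linearize(c):
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
    rl, gl, bl = linearize(r), linearize(g), linearize(b)

    # 2. Linear RGB -> XYZ (sRGB matrix, D65 illuminant).
    x = 0.4124 * rl + 0.3576 * gl + 0.1805 * bl
    y = 0.2126 * rl + 0.7152 * gl + 0.0722 * bl
    z = 0.0193 * rl + 0.1192 * gl + 0.9505 * bl

    # 3. XYZ -> Lab, normalized by the D65 reference white.
    xn, yn, zn = 0.95047, 1.0, 1.08883
    def f(t):
        delta = 6 / 29
        return t ** (1 / 3) if t > delta ** 3 else t / (3 * delta ** 2) + 4 / 29
    fx, fy, fz = f(x / xn), f(y / yn), f(z / zn)
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)

# Example: a saturated red lipstick shade.
L, a, b_ = srgb_to_lab(0.8, 0.1, 0.2)
```

Shifting only `L` of the target color brightens or darkens the makeup without changing its hue, while shifts along `a` or `b` move it toward red/green or yellow/blue.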


Our model achieves makeup style transfer performance equivalent to state-of-the-art models, while better preserving the skin color of the source subject.


Other works

Check out our other paper presented at AIM (ECCV 2020):