S. Khodadadeh, Saeid Motiian, Zhe Lin, L. Bölöni, and S. Ghadar

Automatic Object Recoloring Using Adversarial Learning


Cite as:

S. Khodadadeh, Saeid Motiian, Zhe Lin, L. Bölöni, and S. Ghadar. Automatic Object Recoloring Using Adversarial Learning. In IEEE Workshop on Applications of Computer Vision (WACV-2021), pp. 1488–1496, January 2021.

Download:

(unavailable)

Abstract:

We propose a novel method for automatic object recoloring based on Generative Adversarial Networks (GANs). The user simply gives a command of the form ``recolor object to color'', which is executed without any need for manual editing. Our approach takes advantage of pre-trained object detectors and saliency mask segmentation networks. The segmented mask of the given object, along with the target color and the original image, forms the input to the GAN. The use of a cycle consistency loss ensures the realistic look of the results. To the best of our knowledge, this is the first algorithm where automatic recoloring is limited only by the ability of the mask extractor to map a natural language tag to a specific object in the image (several hundred object types at the time of this writing). For a performance comparison, we also adapted other state-of-the-art methods to perform this task. We found that our method consistently yielded qualitatively better recoloring results.
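The abstract states that the segmented mask, the target color, and the original image together form the input to the GAN, but it does not specify the tensor layout. As a minimal sketch of one plausible arrangement (channel-wise concatenation of the RGB image, a binary mask plane, and a solid target-color plane; the function name and layout are hypothetical, not taken from the paper):

```python
import numpy as np

def build_generator_input(image, mask, target_color):
    """Combine the conditioning signals into one (H, W, 7) tensor.

    image        -- (H, W, 3) float array, the original image
    mask         -- (H, W) float array, segmentation mask of the object
    target_color -- length-3 RGB triple for the desired color

    The layout here (3 image channels + 1 mask channel + 3 color
    channels) is an illustrative assumption, not the paper's spec.
    """
    h, w, _ = image.shape
    # Spread the single target color over a full-resolution plane.
    color_plane = np.broadcast_to(
        np.asarray(target_color, dtype=image.dtype), (h, w, 3)
    )
    # Concatenate along the channel axis: (H, W, 3+1+3).
    return np.concatenate([image, mask[..., None], color_plane], axis=-1)
```

A generator conditioned this way can read the mask channel to localize the edit and the color plane to know the recoloring target, while the image channels supply the content to preserve.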

BibTeX:

@inproceedings{Khodadadeh-2021-WACV,
   author = "S. Khodadadeh and Saeid Motiian and Zhe Lin and L. B{\"o}l{\"o}ni and S. Ghadar",
   title = "Automatic Object Recoloring Using Adversarial Learning",
   booktitle = "IEEE Workshop on Applications of Computer Vision (WACV-2021)",
   year = "2021",
   month = "January",
   pages = "1488--1496",
   abstract = {
   We propose a novel method for automatic object recoloring based on Generative Adversarial Networks (GANs). The user simply gives a command of the form ``recolor object to color'', which is executed without any need for manual editing. Our approach takes advantage of pre-trained object detectors and saliency mask segmentation networks. The segmented mask of the given object, along with the target color and the original image, forms the input to the GAN. The use of a cycle consistency loss ensures the realistic look of the results.
   To the best of our knowledge, this is the first algorithm where automatic recoloring is limited only by the ability of the mask extractor to map a natural language tag to a specific object in the image (several hundred object types at the time of this writing).
   For a performance comparison, we also adapted other state-of-the-art methods to perform this task. We found that our method consistently yielded qualitatively better recoloring results.
   }
}

Generated by bib2html.pl (written by Patrick Riley, Lotzi Boloni) on Fri Jan 29, 2021 20:15:22