Unsupervised image-to-image translation by semantics consistency and self-attention
Author: ZHANG Zhibin, XUE Wanli, FU Guokai
Affiliation: The Key Laboratory of Computer Vision and System of Ministry of Education, Tianjin Key Laboratory of Intelligence Computing and Novel Software Technology, Tianjin University of Technology, Tianjin 300384, China

Abstract:

Unsupervised image-to-image translation is a challenging task in computer vision. The goal of image translation is to learn a mapping between two domains without corresponding image pairs. Many previous works focused only on image-level translation and ignored feature-level processing, which leads to a loss of semantics in the generated image, such as background changes and partial transformations. In this work, we propose an image-to-image translation method based on generative adversarial nets (GANs). We use an autoencoder structure to extract image features in the generator and add a semantic consistency loss on the extracted features to keep the semantics of the generated image consistent. A self-attention mechanism at the end of the generator captures long-distance dependencies in the image and, by expanding the convolutional receptive field, improves the quality of the generated image. Quantitative experiments show that our method significantly outperforms previous works; in particular, our model achieves an impressive improvement on images with a salient foreground.
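To make the two components concrete, the sketch below shows a SAGAN-style self-attention block of the kind typically placed near the end of a generator, together with an L1 semantic consistency loss computed on encoder features. This is a minimal PyTorch sketch under our own assumptions: the module names, the choice of L1 distance, and the channels // 8 bottleneck are illustrative and are not taken from the paper.

```python
# Illustrative sketch only: a SAGAN-style self-attention block and an
# L1 semantic consistency loss on encoder features. Names and design
# choices are assumptions, not the authors' actual implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SelfAttention(nn.Module):
    """Self-attention over all spatial positions (SAGAN-style)."""

    def __init__(self, channels):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // 8, kernel_size=1)
        self.key = nn.Conv2d(channels, channels // 8, kernel_size=1)
        self.value = nn.Conv2d(channels, channels, kernel_size=1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learned residual weight

    def forward(self, x):
        b, c, h, w = x.shape
        n = h * w
        q = self.query(x).view(b, -1, n).permute(0, 2, 1)  # (b, n, c//8)
        k = self.key(x).view(b, -1, n)                     # (b, c//8, n)
        attn = F.softmax(torch.bmm(q, k), dim=-1)          # (b, n, n)
        v = self.value(x).view(b, c, n)                    # (b, c, n)
        out = torch.bmm(v, attn.permute(0, 2, 1)).view(b, c, h, w)
        return self.gamma * out + x  # residual: starts as identity


def semantic_consistency_loss(encoder, real, translated):
    """L1 distance between encoder features of the input image and its
    translation, encouraging the mapping to preserve semantics."""
    return F.l1_loss(encoder(translated), encoder(real))
```

In a CycleGAN-like training loop, such a term would simply be added to the adversarial and cycle-consistency objectives with its own weight, e.g. loss = adv_loss + lambda_sem * semantic_consistency_loss(encoder, real_A, fake_B); the weight lambda_sem here is a hypothetical hyperparameter, not a value reported by the paper.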

Citation

ZHANG Zhibin, XUE Wanli, FU Guokai. Unsupervised image-to-image translation by semantics consistency and self-attention[J]. Optoelectronics Letters, 2022, 18(3): 175-180.

History
  • Received: October 26, 2020
  • Revised: September 17, 2021
  • Online: April 27, 2022