Invertible autoencoder for domain adaptation

Yunfei Teng, Anna Choromanska

Research output: Contribution to journal › Article

Abstract

Unsupervised image-to-image translation aims to find a mapping between a source image domain (A) and a target image domain (B) when aligned image pairs are not available at training time, as is the case in many applications. This is an ill-posed learning problem, since it requires inferring a joint probability distribution from its marginals. State-of-the-art methods such as CycleGAN learn this translation by jointly training coupled mappings F_AB: A → B and F_BA: B → A under a cycle-consistency requirement, i.e., F_AB(F_BA(B)) ≈ B and F_BA(F_AB(A)) ≈ A. Cycle consistency encourages the preservation of mutual information between input and translated images, but it does not explicitly enforce F_BA to be an inverse operation to F_AB. We propose a new deep architecture, which we call the invertible autoencoder (InvAuto), that enforces this relation explicitly: the encoder is an inverted version of the decoder, where corresponding layers perform opposite mappings and share parameters, and the mappings are constrained to be orthonormal. The resulting architecture reduces the number of trainable parameters by up to a factor of two. We present image translation results on benchmark datasets and demonstrate the state-of-the-art performance of our approach. Finally, we test the proposed domain adaptation method on the task of road video conversion. We demonstrate that the videos converted with InvAuto have high quality, and show that PilotNet, the NVIDIA neural-network-based end-to-end learning system for autonomous driving, trained on real road videos, performs well when tested on the converted ones.
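
To make the tied-weight idea concrete, the following is a minimal PyTorch sketch of one encoder/decoder layer pair of the kind the abstract describes; it is illustrative only, not the authors' released implementation, and the names (`InvertibleLayer`, `orthonormality_penalty`) and the choice of a leaky-ReLU nonlinearity are our assumptions for the example. Each encoder layer computes y = φ(Wx); the paired decoder layer applies the inverse nonlinearity and the transpose of the same weight matrix, so decode(encode(x)) ≈ x whenever W is close to orthonormal, which a soft penalty encourages during training.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class InvertibleLayer(nn.Module):
    """One encoder/decoder layer pair sharing a single weight matrix."""
    def __init__(self, dim, slope=0.2):
        super().__init__()
        self.weight = nn.Parameter(torch.empty(dim, dim))
        nn.init.orthogonal_(self.weight)  # start near orthonormal
        self.slope = slope

    def encode(self, x):
        # y = LeakyReLU(W x); F.linear(x, W) applies W to each row of x
        return F.leaky_relu(F.linear(x, self.weight), self.slope)

    def decode(self, y):
        # Exact inverse of encode when W^T W = I: first invert the
        # leaky ReLU (reciprocal slope), then apply W^T.
        return F.linear(F.leaky_relu(y, 1.0 / self.slope), self.weight.t())

def orthonormality_penalty(layer):
    """Soft constraint ||W^T W - I||_F^2 pushing W toward orthonormality."""
    w = layer.weight
    eye = torch.eye(w.shape[1], device=w.device)
    return ((w.t() @ w - eye) ** 2).sum()

# Sanity check: reconstruction is exact up to the orthonormality error.
layer = InvertibleLayer(64)
x = torch.randn(8, 64)
print((layer.decode(layer.encode(x)) - x).abs().max())
```

Because each decoder layer reuses the transpose of its encoder counterpart's weights rather than learning its own, the parameter count is roughly halved relative to an untied autoencoder, consistent with the up-to-2x reduction stated in the abstract.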

Original language: English (US)
Article number: 20
Journal: Computation
Volume: 7
Issue number: 2
DOI: 10.3390/computation7020020
ISSN: 2079-3197
Publisher: Multidisciplinary Digital Publishing Institute (MDPI)
State: Published - Jun 1 2019

Keywords

  • Autoencoder
  • Image-to-image translation
  • Invertible autoencoder

ASJC Scopus subject areas

  • Theoretical Computer Science
  • Computer Science (all)
  • Modeling and Simulation
  • Applied Mathematics

Cite this

Teng, Y., & Choromanska, A. (2019). Invertible autoencoder for domain adaptation. Computation, 7(2), 20. https://doi.org/10.3390/computation7020020

