Bregman learning for generative adversarial networks

Jian Gao, Tembine Hamidou

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

In this paper, we develop a game-theoretic learning framework for deep generative models. First, the problem of minimizing the dissimilarity between the generator distribution and the real data distribution is formulated using f-divergence. Second, the optimization problem is transformed into a zero-sum game between two adversarial players, and the existence of a Nash equilibrium is established in the quasi-concave-convex case under suitable conditions. Third, a general Bregman-based learning algorithm is proposed to find the Nash equilibria. The algorithm is proved to have a doubly logarithmic convergence time with respect to the precision of the minimax value in potential convex games. Finally, our methodology is implemented in three application scenarios and compared with several existing optimization algorithms. Both qualitative and quantitative evaluations show that the generative model trained by our algorithm achieves state-of-the-art performance.
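The core idea of Bregman-based learning in a zero-sum game can be illustrated with a toy sketch (an assumed illustration, not the paper's exact algorithm): with the entropic Bregman divergence (KL), mirror descent reduces to multiplicative-weights updates, and in a finite zero-sum matrix game the time-averaged strategies of the two players approach the Nash equilibrium.

```python
import numpy as np

def bregman_zero_sum(A, x0, y0, steps=20000, eta=0.01):
    """Entropic mirror descent/ascent on the bilinear payoff x^T A y.

    The row player (x) minimizes and the column player (y) maximizes;
    the KL Bregman divergence turns each mirror step into a
    multiplicative-weights update on the probability simplex.
    """
    x, y = x0.copy(), y0.copy()
    x_avg, y_avg = np.zeros_like(x), np.zeros_like(y)
    for _ in range(steps):
        gx = A @ y            # payoff gradient for the minimizing row player
        gy = A.T @ x          # payoff gradient for the maximizing column player
        x = x * np.exp(-eta * gx)   # multiplicative-weights (descent) step
        x /= x.sum()
        y = y * np.exp(eta * gy)    # multiplicative-weights (ascent) step
        y /= y.sum()
        x_avg += x
        y_avg += y
    # Last iterates may cycle; the no-regret time averages converge.
    return x_avg / steps, y_avg / steps

# Matching pennies: unique mixed equilibrium (1/2, 1/2) for both, value 0.
A = np.array([[1.0, -1.0], [-1.0, 1.0]])
x_bar, y_bar = bregman_zero_sum(A, np.array([0.9, 0.1]), np.array([0.2, 0.8]))
print(x_bar, y_bar)  # both approach [0.5, 0.5]
```

Averaging matters here: the individual iterates of simultaneous multiplicative-weights play orbit the equilibrium rather than settling on it, while the running averages inherit the no-regret guarantee and converge to the minimax solution.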

Original language: English (US)
Title of host publication: Proceedings of the 30th Chinese Control and Decision Conference, CCDC 2018
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 82-89
Number of pages: 8
ISBN (Electronic): 9781538612439
DOI: 10.1109/CCDC.2018.8407110
State: Published - Jul 6, 2018
Event: 30th Chinese Control and Decision Conference, CCDC 2018 - Shenyang, China
Duration: Jun 9, 2018 to Jun 11, 2018



Keywords

  • Bregman learning
  • Convex Optimization
  • Deep Neural Network
  • Game Theory
  • GAN

ASJC Scopus subject areas

  • Artificial Intelligence
  • Computer Science Applications
  • Control and Optimization
  • Decision Sciences (miscellaneous)

Cite this

Gao, J., & Hamidou, T. (2018). Bregman learning for generative adversarial networks. In Proceedings of the 30th Chinese Control and Decision Conference, CCDC 2018 (pp. 82-89). Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.1109/CCDC.2018.8407110

