Stop Thinking, Just Do!

Sungsoo Kim's Blog

Generalization and Equilibrium in Generative Adversarial Networks


15 April 2017


Article Source



Abstract

This paper makes progress on several open theoretical issues related to Generative Adversarial Networks. A definition is provided for what it means for training to generalize, and it is shown that generalization is not guaranteed for popular distances between distributions such as the Jensen-Shannon divergence or the Wasserstein distance. We introduce a new metric, called the neural net distance, for which generalization does occur. We also show that an approximate pure equilibrium in the two-player game exists for a natural training objective (Wasserstein). Establishing such a result had been an open problem (for any training objective).
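The neural net distance mentioned above measures how far apart two distributions look to a bounded-capacity class of discriminators: the largest gap in average discriminator output, maximized over the class. A minimal sketch, using a hypothetical toy class of five fixed one-neuron nets (not the paper's actual discriminator class), might look like this:

```python
import numpy as np

rng = np.random.default_rng(0)

def nn_distance(x, y, discriminators):
    """Empirical neural net distance between two sample sets:
    the largest gap in mean discriminator output, maximized over
    a (finite, bounded-capacity) class of discriminators."""
    return max(abs(d(x).mean() - d(y).mean()) for d in discriminators)

# Toy discriminator class F: five fixed one-neuron nets, each a tanh
# of a random linear projection. This stands in for a real neural
# net class; any bounded-capacity class plays the same role.
projections = rng.normal(size=(5, 2))
discriminators = [lambda x, w=w: np.tanh(x @ w) for w in projections]

real = rng.normal(loc=0.0, size=(1000, 2))  # data distribution
fake = rng.normal(loc=2.0, size=(1000, 2))  # a poorly matched generator
same = rng.normal(loc=0.0, size=(1000, 2))  # fresh data samples

d_far = nn_distance(real, fake, discriminators)
d_near = nn_distance(real, same, discriminators)
```

Because the supremum is taken over a class of bounded capacity, the empirical distance computed from finitely many samples concentrates around its population value, which is the mechanism behind the generalization guarantee: `d_near` stays small while `d_far` detects the mismatch.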

Finally, the above theoretical ideas lead us to propose a new training protocol, MIX+GAN, which can be combined with any existing method. We present experiments showing that it stabilizes and improves some existing methods.
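The core idea of MIX+GAN is to train a small mixture of generators (and discriminators) rather than a single pair, with trainable mixture weights. A minimal sketch of sampling from such a mixture, using hypothetical one-parameter linear "generators" in place of real networks, could be:

```python
import numpy as np

rng = np.random.default_rng(1)

# A mixture of k toy generators. Each (scale, shift) pair stands in
# for an independently parameterized generator network; the logits
# would be trained jointly with the generators in MIX+GAN.
k = 3
gen_params = [(rng.normal(), rng.normal()) for _ in range(k)]  # (scale, shift)
logits = rng.normal(size=k)                                    # trainable mixture logits
weights = np.exp(logits) / np.exp(logits).sum()                # softmax mixture weights

def sample_mixture(n):
    """Draw n samples: pick a generator by its mixture weight,
    then pass fresh noise through the chosen generator."""
    idx = rng.choice(k, size=n, p=weights)
    z = rng.normal(size=n)
    scale = np.array([gen_params[i][0] for i in idx])
    shift = np.array([gen_params[i][1] for i in idx])
    return scale * z + shift

samples = sample_mixture(10000)
```

The design point is that a mixture can represent distributions no single generator in the class can, which is what makes an approximate equilibrium attainable; in practice the paper keeps the mixture small so the protocol can wrap any existing GAN training method.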

Joint work with Rong Ge, Yingyu Liang, Tengyu Ma, Yi Zhang.

Download Video [2.2 GB .mp4]

