Many people have recommended the InfoGAN paper to me, but I hadn't taken the time to read it until recently. It is actually quite cool.
The InfoGAN idea is pretty simple. The paper presents an extension to the GAN objective. A new term encourages high mutual information between generated samples and a small subset of latent variables c. The hope is that by forcing high information content, we cram the most interesting aspects of the representation into c.
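Concretely, the paper's information-regularised minimax objective takes the usual GAN value function V(D, G) and subtracts a weighted mutual information term between the code c and the generator's output:

$$\min_G \max_D \; V(D, G) \;-\; \lambda\, I\big(c;\, G(z, c)\big).$$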
If this works, c ends up representing the most salient and meaningful sources of variation in the data, while the remaining noise variables z account for additional, meaningless sources of variation and can essentially be dismissed as incompressible noise.
In order to maximise the mutual information, the authors make use of a variational lower bound. Conveniently, this results in a recognition model, similar to the one we see in variational autoencoders, which infers the latent representation c from data.
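To make the recognition-model idea concrete, here is a minimal sketch of how that bound might be implemented (PyTorch, assuming a categorical code c; the network shapes and names below are illustrative placeholders, not the architecture from the paper):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

K = 10          # number of categories for the code c (illustrative)
NOISE_DIM = 62  # dimensionality of the unstructured noise z (illustrative)
X_DIM = 784     # dimensionality of a generated sample, e.g. flattened MNIST

# Toy generator G(z, c): maps noise plus a one-hot code to a sample.
generator = nn.Sequential(
    nn.Linear(NOISE_DIM + K, 256), nn.ReLU(),
    nn.Linear(256, X_DIM), nn.Tanh(),
)

# Recognition model Q(c | x): tries to recover the code c from a generated sample.
recognition = nn.Sequential(
    nn.Linear(X_DIM, 256), nn.ReLU(),
    nn.Linear(256, K),  # logits over the K categories of c
)

def mi_lower_bound(batch_size):
    """Monte Carlo estimate of E[log Q(c | G(z, c))]; adding the constant H(c)
    gives the variational lower bound on I(c; G(z, c))."""
    c = torch.randint(0, K, (batch_size,))           # c ~ uniform categorical
    z = torch.randn(batch_size, NOISE_DIM)           # unstructured noise
    x = generator(torch.cat([z, F.one_hot(c, K).float()], dim=1))
    logits = recognition(x)
    return -F.cross_entropy(logits, c)               # = E[log Q(c | x)]

# During training, -lambda * mi_lower_bound(...) would be added to the generator
# loss, and both G and Q would be updated so as to maximise the bound.
```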
The paper is pretty cool and the results are convincing, but I found the notation and derivation a bit confusing, so here is my mini-review:
The central object is the variational lower bound on mutual information, Eq. 5 in the paper.
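In the paper's notation, with Q(c|x) the recognition model and H(c) the entropy of the code prior, that bound reads

$$I\big(c;\, G(z, c)\big) \;\geq\; \mathbb{E}_{c \sim P(c),\; x \sim G(z, c)}\big[\log Q(c \mid x)\big] + H(c) \;=\; L_I(G, Q),$$

and since H(c) is fixed, maximising the bound amounts to maximising the expected log-likelihood that Q assigns to the true code.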
I think there is an interesting connection that the authors did not mention (frankly, it probably would have overcomplicated the presentation). The connection is that the original GAN objective itself can be derived from mutual information, and in fact the discriminator D can be thought of as a variational auxiliary variable, playing exactly the same role as the recognition model q(c|x) in the InfoGAN paper.
The connection relies on the interpretation of Jensen-Shannon divergence as mutual information (see e.g. Yingzen's blog post: GANs, mutual information, and possibly algorithm selection?). Here is my graphical model view on InfoGANs that may put things in a slightly different light:
Let's consider the joint distribution of a bunch of variables:
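Roughly, the construction goes like this (a sketch in my own notation: y is a binary label for whether a sample x came from the data or from the generator, and p_G is the generator's distribution):

$$y \sim \mathrm{Bernoulli}(1/2), \qquad x \mid y = 1 \sim p_{\mathrm{data}}, \qquad x \mid y = 0 \sim p_G.$$

Under this joint distribution the mutual information between the sample and the label is exactly the Jensen-Shannon divergence between data and model,

$$I(x; y) = \mathrm{JSD}\big(p_{\mathrm{data}} \,\|\, p_G\big),$$

and the variational lower bound on this mutual information,

$$I(x; y) \;\geq\; H(y) + \mathbb{E}\big[\log q(y \mid x)\big],$$

recovers the familiar GAN discriminator objective (up to additive constants) once the auxiliary distribution q(y|x) is identified with the discriminator D. In other words, ℓ_GAN is, up to constants, a variational lower bound on I(x; y).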
Now, the main problem with this derivation is that we are supposed to minimise ℓ_GAN, so we would really like an upper bound instead of a lower bound. But the variational method only provides a lower bound. Therefore,
GANs minimise a lower bound, which I believe accounts for some of their unstable behaviour.
Recall that the idealised InfoGAN objective is the weighted difference of two mutual information terms.
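In the notation above, with y the real-vs-generated label, this idealised objective would read something like

$$\ell_{\mathrm{InfoGAN}}(G) \;=\; I(x; y) \;-\; \lambda\, I(c; x),$$

to be minimised over G: make the generated samples indistinguishable from data while keeping them highly informative about the code c.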
To arrive at the algorithm the authors actually use, one applies the variational lower bound to both mutual information terms.
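Substituting the variational bound for each term gives, if I read the paper correctly, the objective that is actually optimised:

$$\min_{G,\, Q}\; \max_{D}\; V(D, G) \;-\; \lambda\, L_I(G, Q),$$

where the discriminator D implements the bound on I(x; y) and the recognition model Q implements the bound on I(c; x): the same variational trick used twice.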