Abstract
Deep neural networks have been shown to be very successful at learning feature hierarchies in supervised learning tasks. Generative models, on the other hand, have benefited less from hierarchical models with multiple layers of latent variables. In this paper, we prove that certain classes of hierarchical latent variable models do not take advantage of the hierarchical structure when trained with existing variational methods, and provide some limitations on the kind of features existing models can learn. Finally we propose an alternative flat architecture that learns meaningful and disentangled features on natural images.
Code: https://github.com/ShengjiaZhao/Variational-Ladder-Autoencoder