
Resources | New Advances in Generative Adversarial Networks and a Complete Collection of Papers

机器之心 | Published 2018-05-07

Selected from GitHub

Contributors: 蒋思源 (Jiang Siyuan), 吴攀 (Wu Pan)

Generative adversarial networks (GANs) have recently become one of the machine learning methods attracting the most attention from researchers, and deep learning pioneer Yann LeCun has spoken repeatedly about the great value and future promise of this idea. In this article, Machine Heart summarizes two GAN resources hosted on GitHub: one covers some notable new theory and practice around GANs (such as the Wasserstein GAN), and the other collects a large number of GAN-related papers.

Links to the two original resources:

  • New advances in GAN theory & practice: https://casmls.github.io/general/2017/04/13/gan.html
  • The GAN paper list project: https://github.com/nightrome/really-awesome-gan

New Advances in GAN Theory & Practice

Let us first look at the post published by Liping Liu on github.io, which introduces recent advances in GAN theory and practice. The post discusses two GAN-related papers. The first is "Generalization and Equilibrium in Generative Adversarial Nets" by Arora et al., a theoretical study of GANs; the second is "Improved Training of Wasserstein GANs" by Gulrajani et al., which introduces a new training method for the Wasserstein GAN, a model recently proposed by Facebook that has attracted wide attention. The video below gives a good introduction to the first paper:

[Video: an introduction to the first paper]

GANs and the Wasserstein GAN

GAN training is a two-player game: the generator's goal is to minimize the difference between the distribution it generates and the data distribution, while the discriminator's job is to distinguish samples from the generator's distribution from samples of the true data distribution as well as it can. The generator is said to have "won" when the discriminator can do no better than random guessing.
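As a concrete illustration of this alternating game, here is a minimal PyTorch sketch on a toy one-dimensional data distribution; the tiny architectures, the toy data, and all hyperparameters are assumptions made for the example, not taken from the papers discussed here.

```python
# Minimal two-player GAN training loop on toy 1-D data (illustrative only).
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))                # generator
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())  # discriminator
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()
ones, zeros = torch.ones(64, 1), torch.zeros(64, 1)

for step in range(2000):
    real = 2.0 + 0.5 * torch.randn(64, 1)   # samples from the data distribution p_data
    fake = G(torch.randn(64, 8))            # samples from the generator distribution G(h), h ~ p_Normal

    # Discriminator step: tell real samples apart from generated ones.
    d_loss = bce(D(real), ones) + bce(D(fake.detach()), zeros)
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator step: push the discriminator's output on generated samples toward "real".
    g_loss = bce(D(fake), ones)
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```

When the two losses settle roughly around log 4 ≈ 1.386 and log 2 ≈ 0.693 respectively, the discriminator is close to guessing at random, which is the sense in which the generator "wins".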

The optimization problem of the basic GAN is a min-max problem:
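In the standard formulation (as introduced by Goodfellow et al.), this objective can be written as

$$\min_G \max_D \; \mathbb{E}_{x \sim p_{\text{data}}}[\log D(x)] + \mathbb{E}_{h \sim p_{\text{Normal}}}\big[\log\big(1 - D(G(h))\big)\big]$$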

Briefly, the optimal discriminator provides a measure of the difference between the generator's distribution G(h), h ∼ p_Normal, and the data distribution p_data. If we have access to p_data(x) and the discriminator can be an arbitrary function, then the generator's objective amounts to minimizing the Jensen-Shannon divergence between p_data and G(h).

In practice, the Wasserstein distance is already being used to measure the difference between two distributions. See the following post:

  • "Modified GANs" by Robin Winstanley: https://casmls.github.io/general/2017/02/23/modified-gans.html

The Wasserstein distance between the data distribution and the generated distribution is:
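In its Kantorovich-Rubinstein dual form, this distance can be written as

$$W(p_{\text{data}}, p_G) = \sup_{f \in L_1} \; \mathbb{E}_{x \sim p_{\text{data}}}[f(x)] - \mathbb{E}_{\bar{x} \sim p_G}[f(\bar{x})]$$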

where L_1 denotes the set of 1-Lipschitz functions, and f is the discriminator, which takes the form of a neural network and is learned through GAN training. The goal is to minimize the Wasserstein distance between the two distributions.

The first paper addresses the following questions:

1. The divergence measure is defined over distributions, but what can we say when the objective is computed from a finite number of samples?

2. Can training reach an equilibrium?

3. What exactly does "reaching an equilibrium" mean?

The second paper studies how to penalize the optimizer so that it approximately finds the optimal discriminator within the space of 1-Lipschitz functions.

Paper 1: Generalization and Equilibrium in Generative Adversarial Nets

Paper: https://arxiv.org/abs/1703.00573

Generalization of the distance measure

Arora et al. introduce a new distance measure, the neural network divergence, which is defined over distributions produced by neural networks.

Theorem: when the sample sets are large enough, the distance between two distributions can be approximated by the distance between their respective samples.

Equilibrium

Intuition: a sufficiently powerful generator can always win the game, because with infinitely many mixture components it can approximate the data distribution. A weaker generator with a finite but sufficiently large number of mixture components can also approximately win the game.

Setting up the game: u and v denote pure strategies of the generator and the discriminator, and the payoff function F(u, v) of the game is the GAN objective:
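Reading u and v as indexing a particular generator G_u and discriminator D_v, the payoff takes the familiar form

$$F(u, v) = \mathbb{E}_{x \sim p_{\text{data}}}[\log D_v(x)] + \mathbb{E}_{h \sim p_{\text{Normal}}}\big[\log\big(1 - D_v(G_u(h))\big)\big]$$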

By a theorem of von Neumann, an equilibrium always exists in mixed strategies, but in that idealized setting both the generator and the discriminator would need to consider infinitely many pure strategies. This paper instead proposes an ε-approximate equilibrium over finitely many pure strategies.

Theorem: given enough mixture components for the generator and the discriminator, the generator can approximately win the game.

MIX+GAN: a mixture of generators and discriminators

Motivated by this analysis, the paper suggests using a mixture of generators and a mixture of discriminators. The objective is minimized over the T generators and their mixture weights and maximized over the T discriminators and their mixture weights, where the weights are given by w = softmax(α), as in the sketch below.
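For illustration only, here is a hypothetical sketch of such a softmax-weighted mixture objective; the names `component_payoffs`, `alpha_g`, and `alpha_d` are invented for the example, and the entropy regularizer used in the paper is omitted.

```python
# Hypothetical sketch of a MIX+GAN-style mixed payoff with T generators and T discriminators.
import torch

T = 3
alpha_g = torch.zeros(T, requires_grad=True)   # generator mixture logits
alpha_d = torch.zeros(T, requires_grad=True)   # discriminator mixture logits

def mixed_payoff(component_payoffs: torch.Tensor) -> torch.Tensor:
    """component_payoffs[i, j] is the GAN payoff of generator i played against discriminator j."""
    w_g = torch.softmax(alpha_g, dim=0)        # w = softmax(alpha), as in the text above
    w_d = torch.softmax(alpha_d, dim=0)
    # Expected payoff under both mixtures; minimized on the generator side,
    # maximized on the discriminator side.
    return w_g @ component_payoffs @ w_d
```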

The paper uses DCGAN as the base model and shows that MIX+DCGAN generates more realistic images and achieves a higher Inception score than DCGAN. See "Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks" by Radford, Alec, et al.: https://arxiv.org/abs/1511.06434

Figure 4: training curves of MIX+DCGAN and DCGAN

Paper 2: Wasserstein GAN training with gradient penalty

This paper builds on a useful result: the optimal discriminator (called the critic in the paper) has a gradient of norm 1 almost everywhere. Here the gradient is taken with respect to x, not with respect to the discriminator's parameters.

Weight clipping does not work very well, for the following reasons:

1. With weight clipping, the optimizer searches for the discriminator within a space smaller than the set of 1-Lipschitz functions, so it biases the discriminator toward overly simple functions.

2. Under clipping, gradients also vanish or explode as they are back-propagated through the network's layers.

The theoretical result about the gradient, together with the drawbacks of clipping, motivates the new approach: the gradient penalty. The discriminator is penalized whenever its gradient norm is not equal to one. The objective function is:
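With f denoting the critic as above and λ the penalty coefficient, this objective is usually written as

$$L = \mathbb{E}_{\bar{x} \sim p_G}[f(\bar{x})] - \mathbb{E}_{x \sim p_{\text{data}}}[f(x)] + \lambda\, \mathbb{E}_{\hat{x}}\Big[\big(\lVert \nabla_{\hat{x}} f(\hat{x}) \rVert_2 - 1\big)^2\Big]$$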

Here x_hat is a random point on the straight line between a real sample x and a generated sample x_bar.
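As an illustration, a minimal PyTorch-style sketch of this penalty term might look as follows; the critic `critic`, the 2-D batch shapes, and the coefficient `lam` are assumptions made for the example rather than code from the paper.

```python
# Sketch of the gradient penalty term for batches of shape (batch, features).
import torch

def gradient_penalty(critic, real, fake, lam=10.0):
    eps = torch.rand(real.size(0), 1)                       # one interpolation factor per sample
    x_hat = eps * real + (1.0 - eps) * fake.detach()        # random point on the line between x and x_bar
    x_hat.requires_grad_(True)
    grads, = torch.autograd.grad(outputs=critic(x_hat).sum(),
                                 inputs=x_hat,
                                 create_graph=True)         # keep the graph so the penalty trains the critic
    # Penalize the squared deviation of the per-sample gradient norm from 1.
    return lam * ((grads.norm(2, dim=1) - 1.0) ** 2).mean()
```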

In experiments, GAN training with the gradient penalty converges faster than training with weight clipping. On image generation and language modeling tasks, models trained with the method proposed in this paper often achieve better results than other models.

Having looked at these recent advances in generative adversarial networks, we now turn to the GAN resource list compiled by GitHub user Holger Caesar.

Workshops

  • NIPS 2016 Workshop on Adversarial Training [https://sites.google.com/site/nips2016adversarial/] [http://www.inference.vc/my-summary-of-adversarial-training-nips-workshop/]

Tutorials and technical blogs

  • How to Train a GAN? Tips and tricks to make GANs work [https://github.com/soumith/ganhacks]
  • NIPS 2016 Tutorial: Generative Adversarial Networks [https://arxiv.org/abs/1701.00160]
  • On the intuition behind deep learning & GANs—towards a fundamental understanding [https://blog.waya.ai/introduction-to-gans-a-boxing-match-b-w-neural-nets-b4e5319cc935]
  • OpenAI - Generative Models [https://blog.openai.com/generative-models/]
  • SimGANs - a game changer in unsupervised learning, self driving cars, and more [https://blog.waya.ai/simgans-applied-to-autonomous-driving-5a8c6676e36b]

Papers

Theory and machine learning

  • A Connection between Generative Adversarial Networks, Inverse Reinforcement Learning, and Energy-Based Models [https://arxiv.org/abs/1611.03852]
  • A General Retraining Framework for Scalable Adversarial Classification [https://c4209155-a-62cb3a1a-s-sites.googlegroups.com/site/nips2016adversarial/WAT16_paper_2.pdf]
  • Adversarial Autoencoders [https://arxiv.org/abs/1511.05644]
  • Adversarial Discriminative Domain Adaptation [https://arxiv.org/abs/1702.05464]
  • Adversarial Generator-Encoder Networks [https://arxiv.org/pdf/1704.02304.pdf]
  • Adversarial Feature Learning [https://arxiv.org/abs/1605.09782]
  • Adversarially Learned Inference [https://arxiv.org/abs/1606.00704]
  • An Adversarial Regularisation for Semi-Supervised Training of Structured Output Neural Networks [https://arxiv.org/abs/1702.02382]
  • Associative Adversarial Networks [https://arxiv.org/abs/1611.06953]
  • b-GAN: New Framework of Generative Adversarial Networks [https://c4209155-a-62cb3a1a-s-sites.googlegroups.com/site/nips2016adversarial/WAT16_paper_4.pdf]
  • Boundary-Seeking Generative Adversarial Networks [https://arxiv.org/abs/1702.08431]
  • Conditional Generative Adversarial Nets [https://arxiv.org/abs/1411.1784]
  • Connecting Generative Adversarial Networks and Actor-Critic Methods [https://c4209155-a-62cb3a1a-s-sites.googlegroups.com/site/nips2016adversarial/WAT16_paper_1.pdf]
  • Cooperative Training of Descriptor and Generator Networks [https://arxiv.org/abs/1609.09408]
  • Explaining and Harnessing Adversarial Examples [https://arxiv.org/abs/1412.6572]
  • f-GAN: Training Generative Neural Samplers using Variational Divergence Minimization [https://arxiv.org/abs/1606.00709]
  • Generating images with recurrent adversarial networks [https://arxiv.org/abs/1602.05110]
  • Generative Adversarial Nets with Labeled Data by Activation Maximization [https://arxiv.org/abs/1703.02000]
  • Generative Adversarial Networks [https://arxiv.org/abs/1406.2661] [https://github.com/goodfeli/adversarial]
  • Generative Adversarial Residual Pairwise Networks for One Shot Learning [https://arxiv.org/abs/1703.08033]
  • Generative Adversarial Structured Networks [https://c4209155-a-62cb3a1a-s-sites.googlegroups.com/site/nips2016adversarial/WAT16_paper_14.pdf]
  • Generative Moment Matching Networks [https://arxiv.org/abs/1502.02761] [https://github.com/yujiali/gmmn]
  • Improved Techniques for Training GANs [https://arxiv.org/abs/1606.03498] [https://github.com/openai/improved-gan]
  • Inverting The Generator Of A Generative Adversarial Network [https://c4209155-a-62cb3a1a-s-sites.googlegroups.com/site/nips2016adversarial/WAT16_paper_9.pdf]
  • Learning in Implicit Generative Models [https://c4209155-a-62cb3a1a-s-sites.googlegroups.com/site/nips2016adversarial/WAT16_paper_10.pdf]
  • Learning to Discover Cross-Domain Relations with Generative Adversarial Networks [https://arxiv.org/abs/1703.05192]
  • Least Squares Generative Adversarial Networks [https://arxiv.org/abs/1611.04076]
  • Loss-Sensitive Generative Adversarial Networks on Lipschitz Densities [https://arxiv.org/abs/1701.06264]
  • LR-GAN: Layered Recursive Generative Adversarial Networks for Image Generation [https://arxiv.org/abs/1703.01560]
  • Maximum-Likelihood Augmented Discrete Generative Adversarial Networks [https://arxiv.org/abs/1702.07983]
  • Mode Regularized Generative Adversarial Networks [https://arxiv.org/abs/1612.02136]
  • On the Quantitative Analysis of Decoder-Based Generative Models [https://arxiv.org/abs/1611.04273]
  • SeqGAN: Sequence Generative Adversarial Nets with Policy Gradient [https://arxiv.org/abs/1609.05473]
  • Simple Black-Box Adversarial Perturbations for Deep Networks [https://c4209155-a-62cb3a1a-s-sites.googlegroups.com/site/nips2016adversarial/WAT16_paper_11.pdf]
  • Stacked Generative Adversarial Networks [https://arxiv.org/abs/1612.04357]
  • Training generative neural networks via Maximum Mean Discrepancy optimization [https://arxiv.org/abs/1505.03906]
  • Triple Generative Adversarial Nets [https://arxiv.org/abs/1703.02291]
  • Unrolled Generative Adversarial Networks [https://arxiv.org/abs/1611.02163]
  • Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks [https://arxiv.org/abs/1511.06434] [https://github.com/Newmu/dcgan_code] [https://github.com/pytorch/examples/tree/master/dcgan] [https://github.com/carpedm20/DCGAN-tensorflow] [https://github.com/soumith/dcgan.torch] [https://github.com/jacobgil/keras-dcgan]
  • Wasserstein GAN [https://arxiv.org/abs/1701.07875] [https://github.com/martinarjovsky/WassersteinGAN]

Vision applications

  • Adversarial Networks for the Detection of Aggressive Prostate Cancer [https://arxiv.org/abs/1702.08014]
  • Age Progression / Regression by Conditional Adversarial Autoencoder [https://arxiv.org/abs/1702.08423]
  • ArtGAN: Artwork Synthesis with Conditional Categorial GANs [https://arxiv.org/abs/1702.03410]
  • Conditional generative adversarial nets for convolutional face generation [http://www.foldl.me/uploads/2015/conditional-gans-face-generation/paper.pdf]
  • Conditional Image Synthesis with Auxiliary Classifier GANs [https://arxiv.org/abs/1610.09585]
  • Deep Generative Image Models using a Laplacian Pyramid of Adversarial Networks [https://arxiv.org/abs/1506.05751] [https://github.com/facebook/eyescream] [http://soumith.ch/eyescream/]
  • Deep multi-scale video prediction beyond mean square error [https://arxiv.org/abs/1511.05440] [https://github.com/dyelax/Adversarial_Video_Generation]
  • Full Resolution Image Compression with Recurrent Neural Networks [https://arxiv.org/abs/1608.05148]
  • Generate To Adapt: Aligning Domains using Generative Adversarial Networks [https://arxiv.org/pdf/1704.01705.pdf]
  • Generative Adversarial Text to Image Synthesis [https://arxiv.org/abs/1605.05396] [https://github.com/paarthneekhara/text-to-image]
  • Generative Visual Manipulation on the Natural Image Manifold [http://www.eecs.berkeley.edu/~junyanz/projects/gvm/] [https://youtu.be/9c4z6YsBGQ0] [https://arxiv.org/abs/1609.03552] [https://github.com/junyanz/iGAN]
  • Image De-raining Using a Conditional Generative Adversarial Network [https://arxiv.org/abs/1701.05957]
  • Image Generation and Editing with Variational Info Generative Adversarial Networks [https://arxiv.org/abs/1701.04568]
  • Image-to-Image Translation with Conditional Adversarial Networks [https://arxiv.org/abs/1611.07004] [https://github.com/phillipi/pix2pix]
  • Imitating Driver Behavior with Generative Adversarial Networks [https://arxiv.org/abs/1701.06699]
  • Invertible Conditional GANs for image editing [https://arxiv.org/abs/1611.06355]
  • Multi-view Generative Adversarial Networks [https://c4209155-a-62cb3a1a-s-sites.googlegroups.com/site/nips2016adversarial/WAT16_paper_13.pdf]
  • Neural Photo Editing with Introspective Adversarial Networks [https://c4209155-a-62cb3a1a-s-sites.googlegroups.com/site/nips2016adversarial/WAT16_paper_15.pdf]
  • Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network [https://arxiv.org/abs/1609.04802]
  • Recurrent Topic-Transition GAN for Visual Paragraph Generation [https://arxiv.org/abs/1703.07022]
  • RenderGAN: Generating Realistic Labeled Data [https://arxiv.org/abs/1611.01331]
  • SeGAN: Segmenting and Generating the Invisible [https://arxiv.org/abs/1703.10239]
  • Semantic Segmentation using Adversarial Networks [https://arxiv.org/abs/1611.08408]
  • Semi-Latent GAN: Learning to generate and modify facial images from attributes [https://arxiv.org/pdf/1704.02166.pdf]
  • TAC-GAN - Text Conditioned Auxiliary Classifier Generative Adversarial Network [https://arxiv.org/abs/1703.06412]
  • Towards Diverse and Natural Image Descriptions via a Conditional GAN [https://arxiv.org/abs/1703.06029]
  • Unlabeled Samples Generated by GAN Improve the Person Re-identification Baseline in vitro [https://arxiv.org/abs/1701.07717]
  • Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks [https://arxiv.org/abs/1703.10593]
  • Unsupervised Anomaly Detection with Generative Adversarial Networks to Guide Marker Discovery [https://arxiv.org/abs/1703.05921]
  • Unsupervised Cross-Domain Image Generation [https://arxiv.org/abs/1611.02200]
  • WaterGAN: Unsupervised Generative Network to Enable Real-time Color Correction of Monocular Underwater Images [https://arxiv.org/abs/1702.07392]

Other applications

  • Adversarial Training Methods for Semi-Supervised Text Classification [https://arxiv.org/abs/1605.07725]
  • Learning to Protect Communications with Adversarial Neural Cryptography [https://arxiv.org/abs/1610.06918] [https://blog.acolyer.org/2017/02/10/learning-to-protect-communications-with-adversarial-neural-cryptography/]
  • MidiNet: A Convolutional Generative Adversarial Network for Symbolic-domain Music Generation using 1D and 2D Conditions [https://arxiv.org/abs/1703.10847]
  • Semi-supervised Learning of Compact Document Representations with Deep Networks [http://www.cs.nyu.edu/~ranzato/publications/ranzato-icml08.pdf]
  • Steganographic Generative Adversarial Networks [https://arxiv.org/abs/1703.05502]

Videos

  • Generative Adversarial Networks by Ian Goodfellow [https://channel9.msdn.com/Events/Neural-Information-Processing-Systems-Conference/Neural-Information-Processing-Systems-Conference-NIPS-2016/Generative-Adversarial-Networks]
  • Tutorial on Generative Adversarial Networks by Mark Chang [https://www.youtube.com/playlist?list=PLeeHDpwX2Kj5Ugx6c9EfDLDojuQxnmxmU]

Code

  • Cleverhans: A library for benchmarking vulnerability to adversarial examples [https://github.com/openai/cleverhans] [http://cleverhans.io/]
  • Generative Adversarial Networks (GANs) in 50 lines of code (PyTorch) [https://medium.com/@devnag/generative-adversarial-networks-gans-in-50-lines-of-code-pytorch-e81b79659e3f] [https://github.com/devnag/pytorch-generative-adversarial-networks]

GAN coverage from Machine Heart

Exclusive | Live from the father of GANs' NIPS 2016 talk: a comprehensive look at the principles and future of generative adversarial networks (slides included)

Profile | Ian Goodfellow on the brief history of GANs: AI cannot understand what it cannot create

In depth | An intuitive look at the principle behind GANs, illustrated with face image generation

Column | Seeing through the black box of machine learning (the W-GAN model)

Research | Least squares GAN: more stable than regular GANs, faster to converge than WGAN

Resources | A TensorFlow implementation of the Wasserstein GAN

Survey | One article to help you discover the many outstanding GAN variants

Papers of the week | Research progress on GANs (Generative Adversarial Nets)

Research | FAIR proposes WGAN, an alternative to common GAN training methods

This article was compiled by Machine Heart (机器之心). Please contact this official account for authorization before republishing.
