[Paper Recommendations] Six Recent Papers on Adversarial Autoencoders: Multi-scale Network Embedding, Generative Adversarial Autoencoders, Inverse Mappings, Wasserstein, Conditional Adversarial, and Denoising

[Overview] The Zhuanzhi content team has compiled six recent papers on Adversarial Autoencoders and introduces them below. Enjoy!

1. AAANE: Attention-based Adversarial Autoencoder for Multi-scale Network Embedding



Authors: Lei Sang, Min Xu, Shengsheng Qian, Xindong Wu

Abstract: Network embedding represents nodes in a continuous vector space and preserves structural information from the network. Existing methods usually adopt a "one-size-fits-all" approach to multi-scale structural information, such as first- and second-order proximity of nodes, ignoring the fact that different scales play different roles in embedding learning. In this paper, we propose an Attention-based Adversarial Autoencoder Network Embedding (AAANE) framework, which promotes the collaboration of different scales and lets them vote for robust representations. The proposed AAANE consists of two components: 1) an attention-based autoencoder that effectively captures the highly non-linear network structure and can de-emphasize irrelevant scales during training; and 2) an adversarial regularization that guides the autoencoder to learn robust representations by matching the posterior distribution of the latent embeddings to a given prior distribution. This is the first attempt to introduce attention mechanisms to multi-scale network embedding. Experimental results on real-world networks show that the learned attention parameters differ for every network and that the proposed approach outperforms existing state-of-the-art approaches for network embedding.

Venue: arXiv, March 24, 2018

URL:

http://www.zhuanzhi.ai/document/2eff4f2b546ad1be81c27bc8436fe570
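
The AAANE recipe above combines attention weights over multiple proximity scales with an adversarially regularized autoencoder. Below is a minimal, hypothetical PyTorch sketch of that structure; the layer sizes, the random stand-in for multi-scale node features, and the Gaussian prior are illustrative assumptions rather than the paper's exact architecture.

```python
# Hypothetical sketch: attention over K scale-specific node feature vectors,
# followed by an autoencoder whose latent codes are pushed toward a prior.
import torch
import torch.nn as nn

K, D, Z = 4, 128, 32  # number of scales, per-scale feature dim, latent dim

class AttentionEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.scale_score = nn.Linear(D, 1)   # one attention logit per scale
        self.to_latent = nn.Linear(D, Z)

    def forward(self, x):                    # x: (batch, K, D) multi-scale features
        a = torch.softmax(self.scale_score(x), dim=1)   # weights over scales
        pooled = (a * x).sum(dim=1)          # attention-weighted combination
        return self.to_latent(pooled), a

encoder = AttentionEncoder()
decoder = nn.Linear(Z, K * D)
discriminator = nn.Sequential(nn.Linear(Z, 64), nn.ReLU(), nn.Linear(64, 1))

x = torch.randn(16, K, D)                    # stand-in for multi-scale proximities
z, attn = encoder(x)
recon_loss = nn.functional.mse_loss(decoder(z), x.reshape(16, -1))

# Adversarial regularization: the discriminator separates prior samples
# (labelled 1) from encoded codes (labelled 0), as in a standard AAE.
z_prior = torch.randn(16, Z)
d_loss = nn.functional.binary_cross_entropy_with_logits(
    discriminator(z_prior), torch.ones(16, 1)) + \
    nn.functional.binary_cross_entropy_with_logits(
    discriminator(z.detach()), torch.zeros(16, 1))
```

In a full pipeline the reconstruction, generator and discriminator losses would be optimized alternately, as in a standard adversarial autoencoder.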

2. Generative Adversarial Autoencoder Networks



Authors: Ngoc-Trung Tran, Tuan-Anh Bui, Ngai-Man Cheung

Abstract: We introduce an effective model to overcome the problem of mode collapse when training Generative Adversarial Networks (GAN). Firstly, we propose a new generator objective that better tackles mode collapse, and we apply an independent Autoencoder (AE) to constrain the generator, treating its reconstructed samples as "real" samples to slow the convergence of the discriminator, which reduces the gradient vanishing problem and stabilizes the model. Secondly, using the mappings between latent and data spaces provided by the AE, we further regularize the AE by the relative distance between latent and data samples to explicitly prevent the generator from falling into mode collapse. This idea arose from a new way we found to visualize mode collapse on the MNIST dataset. To the best of our knowledge, our method is the first to propose and successfully apply the relative distance of latent and data samples for stabilizing GAN training. Thirdly, our proposed model, namely Generative Adversarial Autoencoder Networks (GAAN), is stable and suffers from neither gradient vanishing nor mode collapse, as empirically demonstrated on synthetic, MNIST, MNIST-1K, CelebA and CIFAR-10 datasets. Experimental results show that our method approximates multi-modal distributions well and achieves better results than state-of-the-art methods on these benchmark datasets. Our model implementation is published here: https://github.com/tntrung/gaan

Venue: arXiv, March 24, 2018

URL:

http://www.zhuanzhi.ai/document/457bfc0e3182e83d292caf940b5a4a17
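
Two ingredients named in the GAAN abstract lend themselves to a short sketch: autoencoder reconstructions counted as "real" by the discriminator, and a regularizer tying distances in latent space to distances in data space. The hypothetical PyTorch fragment below illustrates both under simplified assumptions; in particular, the exact form of the relative-distance regularizer here is a guess, not the paper's formulation.

```python
# Hypothetical sketch of the two GAAN ingredients described above.
import torch
import torch.nn as nn
import torch.nn.functional as F

Z, X = 16, 64
G = nn.Sequential(nn.Linear(Z, 128), nn.ReLU(), nn.Linear(128, X))   # generator
E = nn.Sequential(nn.Linear(X, 128), nn.ReLU(), nn.Linear(128, Z))   # encoder
D = nn.Sequential(nn.Linear(X, 128), nn.ReLU(), nn.Linear(128, 1))   # discriminator

real = torch.randn(32, X)      # stand-in data batch
z = torch.randn(32, Z)
fake = G(z)
recon = G(E(real))             # AE reconstruction of real data

# (1) Reconstructions are treated as "real", slowing the discriminator's
# convergence and easing gradient vanishing for the generator.
ones, zeros = torch.ones(32, 1), torch.zeros(32, 1)
d_loss = F.binary_cross_entropy_with_logits(D(real), ones) \
       + F.binary_cross_entropy_with_logits(D(recon.detach()), ones) \
       + F.binary_cross_entropy_with_logits(D(fake.detach()), zeros)

# (2) Relative-distance regularizer (illustrative form): pairwise gaps in
# data space should track pairwise gaps in latent space, so distinct codes
# cannot all collapse onto the same output mode.
z1, z2 = z.chunk(2)
ratio = (G(z1) - G(z2)).norm(dim=1) / (z1 - z2).norm(dim=1).clamp(min=1e-6)
reg = ratio.var()              # penalize wildly uneven latent-to-data scaling
```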

3. Learning Inverse Mappings with Adversarial Criterion



Authors: Jiyi Zhang, Hung Dang, Hwee Kuan Lee, Ee-Chien Chang

Abstract: We propose a flipped-Adversarial AutoEncoder (FAAE) that simultaneously trains a generative model G, which maps an arbitrary latent code distribution to a data distribution, and an encoder E that embodies an "inverse mapping", encoding a data sample into a latent code vector. Unlike previous hybrid approaches that leverage an adversarial training criterion in constructing autoencoders, FAAE minimizes re-encoding errors in the latent space and exploits the adversarial criterion in the data space. Experimental evaluations demonstrate that the proposed framework produces sharper reconstructed images while at the same time enabling inference that captures a rich semantic representation of the data.

Venue: arXiv, March 21, 2018

URL:

http://www.zhuanzhi.ai/document/8f4c39a2488948d49d9d6e074019fa83
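
The key inversion in FAAE is where each training signal lives: the adversarial game is played in data space, while the encoder is fit by a re-encoding error in latent space. A minimal hypothetical PyTorch sketch of that split, with placeholder architectures:

```python
# Hypothetical sketch: adversarial criterion in data space, re-encoding
# error in latent space, as the FAAE abstract describes.
import torch
import torch.nn as nn
import torch.nn.functional as F

Z, X = 16, 64
G = nn.Sequential(nn.Linear(Z, 128), nn.ReLU(), nn.Linear(128, X))   # generator
E = nn.Sequential(nn.Linear(X, 128), nn.ReLU(), nn.Linear(128, Z))   # encoder
D = nn.Sequential(nn.Linear(X, 128), nn.ReLU(), nn.Linear(128, 1))   # discriminator

z = torch.randn(32, Z)
fake = G(z)
real = torch.randn(32, X)      # stand-in data batch

# Adversarial criterion in data space (standard GAN losses).
d_loss = F.binary_cross_entropy_with_logits(D(real), torch.ones(32, 1)) \
       + F.binary_cross_entropy_with_logits(D(fake.detach()), torch.zeros(32, 1))
g_loss = F.binary_cross_entropy_with_logits(D(fake), torch.ones(32, 1))

# Re-encoding error in latent space: E should invert G on generated pairs,
# so E(G(z)) is pulled back toward the code z that produced the sample.
reencode_loss = F.mse_loss(E(fake.detach()), z)
```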

4. Wasserstein Auto-Encoders



Authors: Ilya Tolstikhin, Olivier Bousquet, Sylvain Gelly, Bernhard Schoelkopf

Abstract: We propose the Wasserstein Auto-Encoder (WAE) --- a new algorithm for building a generative model of the data distribution. WAE minimizes a penalized form of the Wasserstein distance between the model distribution and the target distribution, which leads to a different regularizer than the one used by the Variational Auto-Encoder (VAE). This regularizer encourages the encoded training distribution to match the prior. We compare our algorithm with several other techniques and show that it is a generalization of adversarial auto-encoders (AAE). Our experiments show that WAE shares many of the properties of VAEs (stable training, encoder-decoder architecture, nice latent manifold structure) while generating samples of better quality, as measured by the FID score.

Venue: arXiv, March 12, 2018

URL:

http://www.zhuanzhi.ai/document/5a47a71e6da5b52ed1c88b55a5d724a0
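
The WAE objective is a reconstruction cost plus a penalty that matches the aggregate encoded distribution to the prior; the paper derives both a GAN-based and an MMD-based penalty. Below is a small illustrative PyTorch sketch of the MMD variant with an RBF kernel; the kernel bandwidth, network shapes and random data are placeholder assumptions.

```python
# Hypothetical sketch of a WAE-MMD style objective: reconstruction loss
# plus an MMD penalty between encoded codes and prior samples.
import torch
import torch.nn as nn

Z, X, LAM = 8, 64, 10.0        # latent dim, data dim, penalty weight
enc = nn.Sequential(nn.Linear(X, 128), nn.ReLU(), nn.Linear(128, Z))
dec = nn.Sequential(nn.Linear(Z, 128), nn.ReLU(), nn.Linear(128, X))

def rbf_mmd2(a, b, sigma2=2.0 * Z):
    """Simple (biased) MMD^2 estimate between samples a and b, RBF kernel."""
    def k(u, v):
        return torch.exp(-torch.cdist(u, v).pow(2) / sigma2)
    return k(a, a).mean() + k(b, b).mean() - 2.0 * k(a, b).mean()

x = torch.randn(64, X)          # stand-in data batch
z = enc(x)
loss = nn.functional.mse_loss(dec(z), x) + LAM * rbf_mmd2(z, torch.randn(64, Z))
```

Note how the penalty compares the whole batch of codes with prior samples at once, matching the aggregate encoded distribution rather than each per-sample posterior as the VAE's KL term does.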

5. Sounderfeit: Cloning a Physical Model with Conditional Adversarial Autoencoders



Author: Stephen Sinclair

Abstract: An adversarial autoencoder conditioned on known parameters of a physical-modeling bowed-string synthesizer is evaluated for use in parameter estimation and resynthesis tasks. Latent dimensions are provided to capture variance not explained by the conditional parameters. Results are compared with and without adversarial training, and a system capable of "copying" a given bidirectional parameter-signal relationship is examined. A real-time synthesis system built on a generative, conditioned and regularized neural network is presented, making it possible to construct engaging sound synthesizers based purely on recorded data.

Venue: arXiv, February 22, 2018

URL:

http://www.zhuanzhi.ai/document/0e9ec08b2ee6bdfe86c3207ffeaabe16
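
The layout the abstract describes is a conditional adversarial autoencoder: the decoder receives the known synthesizer parameters plus a small latent vector that absorbs the variance those parameters do not explain, and the decoder then doubles as a resynthesis engine. A hypothetical PyTorch sketch of that wiring, with invented dimensions and a random stand-in for audio frames:

```python
# Hypothetical sketch of a conditional AAE for parameter-signal cloning.
import torch
import torch.nn as nn
import torch.nn.functional as F

C, Z, FRAME = 3, 2, 256   # known synth parameters, residual latent, frame size
enc = nn.Sequential(nn.Linear(FRAME, 128), nn.ReLU(), nn.Linear(128, Z))
dec = nn.Sequential(nn.Linear(C + Z, 128), nn.ReLU(), nn.Linear(128, FRAME))
disc = nn.Sequential(nn.Linear(Z, 64), nn.ReLU(), nn.Linear(64, 1))

x = torch.randn(32, FRAME)    # stand-in for recorded synthesizer frames
c = torch.rand(32, C)         # known physical-model parameters (conditioning)

z = enc(x)                                  # residual variance goes here
recon = dec(torch.cat([c, z], dim=1))       # decoder = resynthesis network
recon_loss = F.mse_loss(recon, x)

# Adversarial regularization keeps the residual latent close to its prior,
# so z stays a small, well-behaved control space alongside c.
d_loss = F.binary_cross_entropy_with_logits(disc(torch.randn(32, Z)), torch.ones(32, 1)) \
       + F.binary_cross_entropy_with_logits(disc(z.detach()), torch.zeros(32, 1))
```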

6. Denoising Adversarial Autoencoders



Authors: Antonia Creswell, Anil Anthony Bharath

Affiliation: Imperial College London

Abstract: Unsupervised learning is of growing interest because it unlocks the potential held in vast amounts of unlabelled data to learn useful representations for inference. Autoencoders, a form of generative model, may be trained by learning to reconstruct unlabelled input data from a latent representation space. More robust representations may be produced by an autoencoder if it learns to recover clean input samples from corrupted ones. Representations may be further improved by introducing regularisation during training to shape the distribution of the encoded data in latent space. We suggest denoising adversarial autoencoders, which combine denoising and regularisation, shaping the distribution of latent space using adversarial training. We introduce a novel analysis that shows how denoising may be incorporated into the training and sampling of adversarial autoencoders. Experiments are performed to assess the contributions that denoising makes to the learning of representations for classification and sample synthesis. Our results suggest that autoencoders trained using a denoising criterion achieve higher classification performance, and can synthesise samples that are more consistent with the input data than those trained without a corruption process.

Venue: arXiv, January 5, 2018

URL:

http://www.zhuanzhi.ai/document/ebadc455b2ed3cd9d83f6c1365f25449
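
The denoising adversarial autoencoder recipe combines two of the ideas above: encode a corrupted copy of the input, decode back to the clean sample, and shape the latent distribution with adversarial training. A minimal hypothetical PyTorch sketch, with an illustrative additive-Gaussian corruption process:

```python
# Hypothetical sketch: denoising criterion plus adversarial latent shaping.
import torch
import torch.nn as nn
import torch.nn.functional as F

Z, X = 10, 784                 # latent dim; e.g. flattened 28x28 images
enc = nn.Sequential(nn.Linear(X, 256), nn.ReLU(), nn.Linear(256, Z))
dec = nn.Sequential(nn.Linear(Z, 256), nn.ReLU(), nn.Linear(256, X))
disc = nn.Sequential(nn.Linear(Z, 64), nn.ReLU(), nn.Linear(64, 1))

x = torch.rand(32, X)          # stand-in for clean inputs in [0, 1]
x_noisy = (x + 0.3 * torch.randn_like(x)).clamp(0, 1)   # corruption process

z = enc(x_noisy)
denoise_loss = F.mse_loss(dec(z), x)   # reconstruct the *clean* sample

# Adversarial regularization matches the distribution of codes computed
# from corrupted inputs to the chosen prior.
d_loss = F.binary_cross_entropy_with_logits(disc(torch.randn(32, Z)), torch.ones(32, 1)) \
       + F.binary_cross_entropy_with_logits(disc(z.detach()), torch.zeros(32, 1))
```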

Originally published on the WeChat public account Zhuanzhi (Quan_Zhuanzhi)

Original publication date: 2018-04-07
