
Notes on Predictive Coding

用户1908973
Published 2018-08-20 14:55:12
Collected in the column: CreateAMind

https://arxiv.org/pdf/1807.03748.pdf

Learn features that are task-general or transferable across tasks; supervised learning only learns features for the supervised task.

The high-dimensional signal is compressed into a latent space, and predictions are made multiple steps ahead.
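A minimal numpy sketch of that setup: encode each input into a latent z_t, summarize the past into a context c_t, then predict several future latents from c_t. All shapes and the random linear maps (W_enc, W_ar, W_k) are illustrative assumptions; the paper uses a strided convolutional encoder and a GRU.

```python
import numpy as np

rng = np.random.default_rng(0)

T, x_dim, z_dim, c_dim, K = 16, 64, 8, 8, 4  # sequence length, dims, prediction steps

# Toy stand-ins for the learned maps (assumed shapes, random weights):
W_enc = rng.normal(size=(x_dim, z_dim)) / np.sqrt(x_dim)                  # g_enc: x_t -> z_t
W_ar = rng.normal(size=(z_dim + c_dim, c_dim)) / np.sqrt(z_dim + c_dim)   # recurrent step
W_k = rng.normal(size=(K, c_dim, z_dim)) / np.sqrt(c_dim)                 # one W_k per step k

x = rng.normal(size=(T, x_dim))   # high-dimensional input sequence
z = np.tanh(x @ W_enc)            # compressed latent sequence z_t

# Simple recurrent summary c_t of z_{<=t} (the paper uses a GRU here)
c = np.zeros((T, c_dim))
h = np.zeros(c_dim)
for t in range(T):
    h = np.tanh(np.concatenate([z[t], h]) @ W_ar)
    c[t] = h

# Multi-step prediction: for each k, predict z_{t+k} linearly from c_t
t = 5
preds = np.stack([c[t] @ W_k[k] for k in range(K)])
print(preds.shape)  # (4, 8): one predicted latent per future step
```

The point of the sketch is only the data flow: prediction happens in the compressed latent space, never back in the high-dimensional input space.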

The main intuition behind our model is to learn the representations that encode the underlying shared information between different parts of the (high-dimensional) signal.

The shared information behind different high-dimensional signals; similar to multi-sensor partitioning.

One of the challenges of predicting high-dimensional data is that unimodal losses such as mean-squared error and cross-entropy are not very useful, and powerful conditional generative models which need to reconstruct every detail in the data are usually required. But these models are computationally intense, and waste capacity at modeling the complex relationships in the data x, often ignoring the context c. For example, images may contain thousands of bits of information while the high-level latent variables such as the class label contain much less information (10 bits for 1,024 categories).

Abstract only the key information rather than reconstructing everything, which wastes compute.

maximally preserves the mutual information of the original signals x and c, defined as

Mutual information.
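The equation elided after "defined as" is the standard mutual information between x and c, which in the paper reads:

```latex
I(x; c) = \sum_{x, c} p(x, c) \log \frac{p(x \mid c)}{p(x)}
```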

Slow feature analysis is covered in the Deep Learning book, Section 13.3.

The simplicity and low computational requirements to train the model, together with the encouraging results in challenging reinforcement learning domains when used in conjunction with the main loss, are exciting developments towards useful unsupervised learning that applies universally to many more data modalities.
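The "main loss" the paper trains with is its InfoNCE objective: score the true future latent (positive) against latents from other sequences (negatives), then apply categorical cross-entropy on identifying the positive. A minimal numpy sketch, where the dot-product score and batch sizes are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
z_dim, N = 8, 10                              # latent dim; 1 positive + N-1 negatives

pred = rng.normal(size=z_dim)                 # prediction W_k c_t for step t+k
z_pos = pred + 0.1 * rng.normal(size=z_dim)   # the true z_{t+k} (positive sample)
z_neg = rng.normal(size=(N - 1, z_dim))       # latents drawn from other sequences

samples = np.vstack([z_pos[None, :], z_neg])  # positive sits at index 0
scores = samples @ pred                       # score of each candidate future

# InfoNCE: cross-entropy of picking the positive among the N candidates
log_softmax = scores - np.log(np.exp(scores).sum())
loss = -log_softmax[0]
print(loss >= 0.0)  # True: -log of a probability is non-negative
```

No pixel-level reconstruction appears anywhere: the model only has to rank the correct future above the negatives, which is what lets it ignore low-level detail.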

https://github.com/danielhomola/mifs (mutual information)
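Alongside tools like mifs, discrete mutual information can be estimated directly from empirical counts, matching the definition quoted above. A small self-contained sketch (this is not the mifs API):

```python
import numpy as np

def mutual_information(x, y):
    """Discrete mutual information in nats from empirical joint counts."""
    xs, x_idx = np.unique(x, return_inverse=True)
    ys, y_idx = np.unique(y, return_inverse=True)
    joint = np.zeros((len(xs), len(ys)))
    for i, j in zip(x_idx, y_idx):
        joint[i, j] += 1
    p_xy = joint / joint.sum()                 # empirical joint p(x, y)
    p_x = p_xy.sum(axis=1, keepdims=True)      # marginal p(x)
    p_y = p_xy.sum(axis=0, keepdims=True)      # marginal p(y)
    nz = p_xy > 0                              # skip zero cells (0 log 0 = 0)
    return float(np.sum(p_xy[nz] * np.log(p_xy[nz] / (p_x @ p_y)[nz])))

a = [0, 0, 1, 1, 2, 2]
b = [1, 1, 0, 0, 2, 2]          # a bijective relabeling of a, so I(a;b) = H(a)
print(round(mutual_information(a, b), 4))  # 1.0986, i.e. log(3) nats
```

A relabeling carries full information (MI equals the entropy of the labels), while independent variables give MI of zero.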

Structured Disentangled Representations, appendix: https://arxiv.org/abs/1804.02086 (mutual information)

Originally published 2018-08-15; shared from the CreateAMind WeChat public account.
