
Results of Having an AI Attempt True/False and Fill-in-the-Blank Questions

Author: 用户1908973
Published: 2018-11-30 10:48:28
Column: CreateAMind

A small step forward: giving CPC the ability to do fill-in-the-blank questions -- cpc-pred-gan

code: https://github.com/createamind/keras-cpcgan

CPC implementation: https://github.com/davidtellez/contrastive-predictive-coding

Consider ordered digit sequences: 123 is followed by 456, 456 by 789, 345 by 678. CPC can judge whether a given pair of sequences satisfies this ordering rule.

Note: what the model is actually given is a sequence of digit *images* (e.g. images of 1, 2, 3); it learns the underlying number pattern from the images.

Since CPC can answer true/false questions, can it be extended to fill-in-the-blank? That is: given 123, fill in 456; or given 135, fill in 791.

Note: the "123" (or "135") given to the model is a sequence of digit images, so it must learn the abstract ordering rule of the numbers from the images alone.
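To make the fill-in-the-blank task concrete, here is a minimal label-level sketch of its "answer key" (not code from the repository): sequences are assumed to advance by a constant step, wrapping mod 10, which reproduces both 123→456 and 135→791. The model itself only ever sees digit images, never these labels.

```python
def fill_in_blank(context, pred=3):
    """Given an ordered digit context such as [1, 2, 3] or [1, 3, 5], return
    the next `pred` digits continuing the same arithmetic step, mod 10.
    This is only the label-level answer key; CPC must recover the rule
    from sequences of digit images."""
    step = context[1] - context[0]   # constant step of the arithmetic run
    last = context[-1]
    return [(last + (i + 1) * step) % 10 for i in range(pred)]
```

Under this reading, `fill_in_blank([1, 3, 5])` yields 7, 9, 1 (11 mod 10), matching the article's "791" example.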

Experimental results (shown as images in the original post):

More progress is in the works; we look forward to AI enthusiasts joining us! Resumes: hr@createamind.com, WeChat: zdx3578.

ref: https://github.com/davidtellez/contrastive-predictive-coding

Representation Learning with Contrastive Predictive Coding

This repository contains a Keras implementation of the algorithm presented in the paper Representation Learning with Contrastive Predictive Coding.

The goal of unsupervised representation learning is to capture semantic information about the world, recognizing patterns in the data without using annotations. This paper presents a new method called Contrastive Predictive Coding (CPC) that can do so across multiple applications. The main ideas of the paper are:

  • Contrastive: it is trained using a contrastive approach, that is, the main model has to discern between right and wrong data sequences.
  • Predictive: the model has to predict future patterns given the current context.
  • Coding: the model performs this prediction in a latent space, transforming code vectors into other code vectors (in contrast with predicting high-dimensional data directly).

CPC has to predict the next item in a sequence using only an embedded representation of the data, provided by an encoder. In order to solve the task, this encoder has to learn a meaningful representation of the data space. After training, this encoder can be used for other downstream tasks like supervised classification.
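The contrastive prediction step can be sketched as an InfoNCE-style loss. The following is an illustrative numpy sketch, not the repository's actual code: a linear map `W` (an assumption standing in for the model's prediction head) transforms the context embedding into a predicted future code, and candidate codes are scored against it by dot product.

```python
import numpy as np

def info_nce_loss(context, candidates, W, positive_idx):
    """Score each candidate code against the code predicted from the context
    (pred = W @ context) and return the softmax cross-entropy loss, where
    the candidate at `positive_idx` is the true future code.
    context: (d,) context embedding; candidates: (n, d) candidate codes."""
    pred = W @ context                      # predicted future code vector
    scores = candidates @ pred              # dot-product similarity per candidate
    scores = scores - scores.max()          # shift for numerical stability
    log_probs = scores - np.log(np.exp(scores).sum())
    return -log_probs[positive_idx]
```

Minimizing this loss pushes the true future code to score higher than the negatives, which is exactly the "discern right from wrong sequences" objective described above, carried out entirely in latent space.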

To train the CPC algorithm, I have created a toy dataset. This dataset consists of sequences of modified MNIST numbers (64x64 RGB). Positive sequence samples contain sorted numbers, and negative ones random numbers. For example, let's assume that the context sequence length is S=4, and CPC is asked to predict the next P=2 numbers. A positive sample could look like [2, 3, 4, 5]->[6, 7], whereas a negative one could be [1, 2, 3, 4]->[0, 8]. Of course CPC will only see the patches, not the actual numbers.
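The sampling scheme for this toy dataset can be sketched at the label level as follows (a hypothetical stand-in, not the repository's data pipeline); in the real dataset each label is then rendered as a 64x64 MNIST image.

```python
import numpy as np

def sample_contrastive(S=4, P=2, positive=True, rng=None):
    """Sample one toy label sequence: S sorted context digits plus P targets.
    Positive samples continue the sorted run mod 10 (e.g. [2,3,4,5] -> [6,7]);
    negative samples draw random targets (e.g. [1,2,3,4] -> [0,8]).
    Returns (context, targets, label) with label 1 for positive pairs."""
    rng = rng or np.random.default_rng()
    start = int(rng.integers(0, 10))
    context = [(start + i) % 10 for i in range(S)]
    if positive:
        targets = [(start + S + i) % 10 for i in range(P)]
    else:
        targets = rng.integers(0, 10, size=P).tolist()
    return context, targets, int(positive)
```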

Disclaimer: this code is provided as is; if you encounter a bug, please report it as an issue. Your help will be much appreciated!

Results

After 10 training epochs, CPC reaches 99% accuracy on the contrastive task. After training, I froze the encoder and trained an MLP on top of it to perform supervised digit classification on the same MNIST data. It achieved 90% accuracy after 10 epochs, demonstrating the effectiveness of CPC for unsupervised feature extraction.
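The frozen-encoder evaluation can be sketched as follows. The repository uses a Keras MLP; this self-contained numpy stand-in has the same structure: the encoder acts only as a fixed feature extractor (`encoder_fn` is an assumed callable, no gradients flow into it), and a linear softmax head is trained on its features.

```python
import numpy as np

def train_frozen_head(encoder_fn, X, y, num_classes=10, lr=0.1, epochs=50, rng=None):
    """Train a linear softmax classifier on frozen encoder features.
    `encoder_fn` maps raw inputs to a (n, d) feature matrix and is never
    updated, mirroring the frozen CPC encoder in the article."""
    rng = rng or np.random.default_rng(0)
    feats = encoder_fn(X)                         # fixed features; encoder untouched
    W = rng.normal(scale=0.01, size=(feats.shape[1], num_classes))
    b = np.zeros(num_classes)
    onehot = np.eye(num_classes)[y]
    for _ in range(epochs):
        logits = feats @ W + b
        logits -= logits.max(axis=1, keepdims=True)
        probs = np.exp(logits)
        probs /= probs.sum(axis=1, keepdims=True)
        grad = (probs - onehot) / len(X)          # softmax cross-entropy gradient
        W -= lr * feats.T @ grad                  # only the head is updated
        b -= lr * grad.sum(axis=0)
    return W, b
```

If the encoder has learned a meaningful representation, even this linear head separates the classes well, which is the point of the downstream evaluation.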

Originally published 2018-10-27 on the CreateAMind WeChat official account; shared via the Tencent Cloud self-media program.
