Building Agents with Imagination

Author: 用户1908973 · Published 2018-07-20 16:49:18 · 5070 views
From the column: CreateAMind

https://github.com/createamind/Imagination-Augmented-Agents

Intelligent agents must have the capability to ‘imagine’ and reason about the future. Beyond that, they must be able to construct a plan using this knowledge. [1] This tutorial presents a new family of approaches for imagination-based planning:

  • Imagination-Augmented Agents for Deep Reinforcement Learning [arxiv]
  • Learning and Querying Fast Generative Models for Reinforcement Learning [arxiv]

The tutorial consists of 4 parts:

1. MiniPacman Environment

MiniPacman is played in a 15 × 19 grid-world. Characters, the ghosts and Pacman, move through a maze. The environment was written by @sracaniere from DeepMind. [minipacman.ipynb]
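As a rough illustration of the gym-style interface such a grid-world exposes, here is a toy stand-in (the class name, reward scheme, and dynamics are simplified placeholders, not MiniPacman's actual rules):

```python
import numpy as np

class TinyGridEnv:
    """Toy grid-world with a MiniPacman-like 15 x 19 board.
    reset() returns an observation; step(action) returns (obs, reward, done)."""

    def __init__(self, height=15, width=19):
        self.h, self.w = height, width
        self.pos = None

    def reset(self):
        # Start the agent in the middle of the board.
        self.pos = np.array([self.h // 2, self.w // 2])
        return self._obs()

    def step(self, action):
        # Actions: 0=up, 1=down, 2=left, 3=right; walls clip movement.
        moves = {0: (-1, 0), 1: (1, 0), 2: (0, -1), 3: (0, 1)}
        self.pos = np.clip(self.pos + moves[action],
                           (0, 0), (self.h - 1, self.w - 1))
        reward = 0.1   # placeholder, e.g. for eating a pellet
        done = False
        return self._obs(), reward, done

    def _obs(self):
        # One-hot grid marking the agent's position.
        grid = np.zeros((self.h, self.w), dtype=np.float32)
        grid[tuple(self.pos)] = 1.0
        return grid
```

The real environment additionally tracks ghosts, pellets, and several game modes; only the observation/step contract is meant to carry over.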

2. Actor Critic

Training a standard model-free agent to play MiniPacman with advantage actor-critic (A2C). [actor-critic.ipynb]
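The core of an A2C update is turning a rollout into discounted returns and advantages, which then weight the policy-gradient and value losses. A minimal sketch (function name and signature are illustrative, not taken from the notebook):

```python
import numpy as np

def a2c_targets(rewards, values, bootstrap_value, gamma=0.99):
    """Compute discounted returns and advantages for an A2C update.

    rewards: per-step rewards r_t from a rollout
    values: critic estimates V(s_t) for the same steps
    bootstrap_value: V(s_T) for the state after the last step
    """
    returns = []
    R = bootstrap_value
    for r in reversed(rewards):
        R = r + gamma * R          # discounted return, computed backwards
        returns.append(R)
    returns = np.array(returns[::-1])
    advantages = returns - np.array(values)   # A_t = R_t - V(s_t)
    return returns, advantages
```

The advantages multiply the log-probabilities of the taken actions in the policy loss, while the returns serve as regression targets for the critic.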

3. Environment Model

The environment model is a recurrent neural network that can be trained in an unsupervised fashion from agent trajectories: given a past state and the current action, it predicts the next state and reward. [environment-model.ipynb]
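One transition of such a recurrent model can be sketched as follows (the weight shapes, the two output heads, and all names are illustrative assumptions, not the notebook's architecture):

```python
import numpy as np

def env_model_step(state_vec, action_onehot, hidden, params):
    """One step of a toy recurrent environment model: given current state
    features, a one-hot action, and the recurrent hidden state, predict
    next-state features and a scalar reward."""
    Wh, Wx, Ws, Wr = params  # recurrent, input, state-head, reward-head weights
    x = np.concatenate([state_vec, action_onehot])
    hidden = np.tanh(Wh @ hidden + Wx @ x)   # recurrent update
    next_state = Ws @ hidden                 # predicted next-state features
    reward = float(Wr @ hidden)              # predicted reward
    return next_state, reward, hidden
```

Training is self-supervised: the targets (next observation and reward) come for free from recorded trajectories, with no labels required.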

4. Imagination Augmented Agent [in progress]

The I2A learns to combine information from its model-free and imagination-augmented paths. The environment model is rolled out over multiple time steps into the future: the imagined trajectory is initialized with the present real observation, and simulated observations are subsequently fed back into the model. A rollout encoder then processes the imagined trajectories as a whole and learns to interpret them, e.g. by extracting any information useful for the agent’s decision, or even ignoring them when necessary. This allows the agent to benefit from model-based imagination without the pitfalls of conventional model-based planning. [imagination-augmented agent.ipynb]
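The imagination rollout described above can be sketched as a simple loop (the `policy` and `env_model` callables stand in for the trained rollout policy and environment model; names are illustrative):

```python
def imagine_rollout(obs, policy, env_model, n_steps=5):
    """Unroll a learned environment model from the current real observation:
    at each step, pick an action with the rollout policy, predict the next
    observation and reward with the model, and feed the prediction back in.
    Returns the imagined trajectory for a rollout encoder to summarise."""
    trajectory = []
    for _ in range(n_steps):
        action = policy(obs)
        obs, reward = env_model(obs, action)  # simulated, not real, transition
        trajectory.append((obs, reward))
    return trajectory
```

Only the first observation is real; every later input to the model is its own prediction, which is why the rollout encoder must learn how much to trust each imagined trajectory.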

More materials on model based + model free RL

  • The Predictron: End-To-End Learning and Planning [arxiv] [https://github.com/zhongwen/predictron]
  • Model-Based Planning in Discrete Action Spaces [arxiv]
  • Schema Networks: Zero-shot Transfer with a Generative Causal Model of Intuitive Physics [arxiv]
  • Model-Based Value Expansion for Efficient Model-Free Reinforcement Learning [arxiv]
  • TEMPORAL DIFFERENCE MODELS: MODEL-FREE DEEP RL FOR MODEL-BASED CONTROL [arxiv][https://github.com/vitchyr/rlkit]
  • Universal Planning Networks [arxiv]
  • World Models [arxiv] [https://github.com/AppliedDataSciencePartners/WorldModels]
  • Recall Traces: Backtracking Models for Efficient Reinforcement Learning [arxiv]
  • Learning by Playing – Solving Sparse Reward Tasks from Scratch [https://zhuanlan.zhihu.com/p/34222231] [https://github.com/HugoCMU/pySACQ]
  • Hindsight Experience Replay [https://github.com/openai/baselines/tree/master/baselines/her]
  • Zero-Shot Visual Imitation [https://github.com/pathak22/zeroshot-imitation]


Originally published 2018-05-10; shared from the CreateAMind WeChat public account via the Tencent Cloud self-media sharing program.
