Imitation Learning: A Comparison Chart

CreateAMind
Published 2018-12-28 15:17:45

One-Shot High-Fidelity Imitation: Training Large-Scale Deep Nets with RL

Tom Le Paine∗, Sergio Gómez Colmenarejo∗, Ziyu Wang, Scott Reed, Yusuf Aytar, Tobias Pfaff, Matt Hoffman, Gabriel Barth-Maron, Serkan Cabi, David Budden, Nando de Freitas
DeepMind, London, UK
{tpaine,sergomez,ziyu,reedscot,yusufaytar,tpfaff,mwhoffman,gabrielbm,cabi,budden,nandodefreitas}@google.com

ABSTRACT

Humans are experts at high-fidelity imitation – closely mimicking a demonstration, often in one attempt. Humans use this ability to quickly solve a task instance, and to bootstrap learning of new tasks. Achieving these abilities in autonomous agents is an open problem. In this paper, we introduce an off-policy RL algorithm (MetaMimic) to narrow this gap. MetaMimic can learn both (i) policies for high-fidelity one-shot imitation of diverse novel skills, and (ii) policies that enable the agent to solve tasks more efficiently than the demonstrators. MetaMimic relies on the principle of storing all experiences in a memory and replaying these to learn massive deep neural network policies by off-policy RL. This paper introduces, to the best of our knowledge, the largest existing neural networks for deep RL and shows that larger networks with normalization are needed to achieve one-shot high-fidelity imitation on a challenging manipulation task. The results also show that both types of policy can be learned from vision, in spite of the task rewards being sparse, and without access to demonstrator actions.

https://arxiv.org/pdf/1810.05017.pdf
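The paper itself does not ship code, but its core loop — rewarding the agent for tracking a demonstration step by step, without ever seeing the demonstrator's actions, and replaying all experience for off-policy learning — is easy to sketch. Below is a minimal, hypothetical Python sketch of that loop. The similarity-based reward, the toy dimensions, and all names (`imitation_reward`, `ReplayMemory`, `rollout`) are illustrative assumptions, not the paper's implementation, which trains a large residual network from pixels with D4PG.

```python
import numpy as np

# Toy dimensions for illustration only; the paper learns from pixel
# observations with a large residual network trained by off-policy RL (D4PG).
OBS_DIM, ACT_DIM = 8, 2

def imitation_reward(agent_obs, demo_obs, sigma=1.0):
    """Similarity-based imitation reward: highest when the agent's
    observation closely tracks the demonstration at the same timestep.
    A simplified stand-in for the paper's per-step imitation reward."""
    return float(np.exp(-np.sum((agent_obs - demo_obs) ** 2) / (2 * sigma ** 2)))

class ReplayMemory:
    """Store *all* experience and replay it for off-policy learning."""
    def __init__(self):
        self.transitions = []

    def add(self, obs, act, rew, next_obs):
        self.transitions.append((obs, act, rew, next_obs))

    def sample(self, batch_size, rng):
        idx = rng.integers(len(self.transitions), size=batch_size)
        return [self.transitions[i] for i in idx]

def rollout(policy, demo, env_step, memory, rng):
    """Follow one demonstration, rewarding each step by closeness to the
    demo frame. Demonstrator actions are never used, matching the paper's
    setting of imitation from observations alone."""
    obs = demo[0] + rng.normal(0.0, 0.1, OBS_DIM)  # reset near the demo start
    for t in range(1, len(demo)):
        act = policy(obs, demo[t])                 # policy is demo-conditioned
        next_obs = env_step(obs, act)
        memory.add(obs, act, imitation_reward(next_obs, demo[t]), next_obs)
        obs = next_obs

# Toy usage with placeholder dynamics and policy (illustrative only).
rng = np.random.default_rng(0)
demo = np.cumsum(rng.normal(size=(20, OBS_DIM)), axis=0)       # fake demo trajectory
policy = lambda obs, frame: (frame - obs)[:ACT_DIM]            # step toward the demo
env_step = lambda obs, act: obs + np.pad(act, (0, OBS_DIM - ACT_DIM))
memory = ReplayMemory()
rollout(policy, demo, env_step, memory, rng)
batch = memory.sample(4, rng)  # an off-policy learner would train on such batches
```

In the full method, the same replay memory also feeds a second, task-reward policy that can outperform the demonstrator; the sketch above covers only the imitation-reward collection loop.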

This article is shared from the CreateAMind WeChat public account as part of the Tencent Cloud self-media syndication program. Originally published 2018-12-18. For any infringement concerns, contact cloudcommunity@tencent.com for removal.
