
Two Model-Based Papers

用户1908973
Published 2018-12-28 15:17:06
Published in the column: CreateAMind

Papers by the author of the concept-learning work

Plan Online, Learn Offline: Efficient Learning and Exploration via Model-Based Control

Kendall Lowrey∗1 Aravind Rajeswaran∗1 Sham Kakade1 Emanuel Todorov1,2 Igor Mordatch3 ∗ Equal contributions 1 University of Washington 2 Roboti LLC 3 OpenAI { klowrey, aravraj } @ cs.washington.edu

Abstract: We propose a "plan online and learn offline" framework for the setting where an agent, with an internal model, needs to continually act and learn in the world. Our work builds on the synergistic relationship between local model-based control, global value function learning, and exploration. We study how local trajectory optimization can cope with approximation errors in the value function, and can stabilize and accelerate value function learning. Conversely, we also study how approximate value functions can help reduce the planning horizon and allow for better policies beyond local solutions. Finally, we also demonstrate how trajectory optimization can be used to perform temporally coordinated exploration in conjunction with estimating uncertainty in value function approximation. This exploration is critical for fast and stable learning of the value function. Combining these components enables solutions to complex control tasks, like humanoid locomotion and dexterous in-hand manipulation, in the equivalent of a few minutes of experience in the real world.

https://sites.google.com/view/polo-mpc

https://arxiv.org/pdf/1811.01848.pdf
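The core POLO loop described in the abstract can be sketched as model-predictive control whose terminal cost is an ensemble of learned value functions, with the ensemble's disagreement added as an optimism bonus to drive temporally coordinated exploration. The sketch below is a hedged illustration of that idea in plain numpy, not the authors' implementation: `polo_plan` and all its parameter names are hypothetical, and random shooting stands in for the paper's trajectory optimizer.

```python
import numpy as np

def polo_plan(state, dynamics, reward, value_ensemble,
              horizon=10, n_candidates=64, kappa=1.0, rng=None):
    """Random-shooting MPC with an ensemble value function as terminal cost.

    Illustrative sketch of the POLO idea: score each sampled action sequence
    by the sum of rewards over a short horizon plus an optimistic terminal
    value, mean(V_i(s_H)) + kappa * std(V_i(s_H)). Disagreement among the
    value functions raises the score of poorly understood states, so the
    planner explores them in a temporally coordinated way.
    """
    rng = rng or np.random.default_rng(0)
    best_score, best_first_action = -np.inf, None
    for _ in range(n_candidates):
        s, total = state, 0.0
        actions = rng.uniform(-1.0, 1.0, size=(horizon,))  # 1-D actions
        for a in actions:
            total += reward(s, a)
            s = dynamics(s, a)
        vals = np.array([v(s) for v in value_ensemble])
        total += vals.mean() + kappa * vals.std()  # optimism bonus
        if total > best_score:
            best_score, best_first_action = total, actions[0]
    return best_first_action  # execute first action, then re-plan
```

On a toy 1-D system (dynamics `s' = s + 0.1a`, reward `-s^2`, a small value ensemble), calling `polo_plan(1.0, ...)` returns the first action of the best-scoring candidate sequence; in POLO this would be executed, and planning repeated from the next state while the value ensemble is updated offline.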

IMPROVING MODEL-BASED CONTROL AND ACTIVE EXPLORATION WITH RECONSTRUCTION UNCERTAINTY OPTIMIZATION

A PREPRINT

Norman Di Palo∗ Sapienza University of Rome Rome, Italy normandipalo@gmail.com Harri Valpola Curious AI Helsinki, Finland December 11, 2018

Abstract: Model-based predictions of future trajectories of a dynamical system often suffer from inaccuracies, forcing model-based control algorithms to re-plan often, which makes them computationally expensive, sub-optimal, and unreliable. In this work, we propose a model-agnostic method for estimating the uncertainty of a model's predictions based on reconstruction error, and use it in control and exploration. As our experiments show, this uncertainty estimate can improve control performance on a wide variety of environments by choosing predictions of which the model is confident. It can also be used for active learning, exploring the environment more efficiently by planning trajectories with high uncertainty and thereby learning the model faster.

https://arxiv.org/pdf/1812.03955.pdf
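The abstract's key mechanism is model-agnostic: score a predicted state by how well it can be reconstructed by a model fit on the training distribution, and treat high reconstruction error as high uncertainty. The sketch below is a hedged toy version (the class name and the use of a linear autoencoder via SVD are my assumptions, not the paper's architecture): a planner could discard high-uncertainty predictions for control, or seek them out for active exploration.

```python
import numpy as np

class ReconstructionUncertainty:
    """Uncertainty from reconstruction error (illustrative sketch).

    Fits a linear autoencoder (principal subspace via SVD) on states seen
    during training. The uncertainty of a predicted state is its distance
    to that subspace: predictions far from the training manifold
    reconstruct poorly and therefore get a high uncertainty score.
    """
    def __init__(self, n_components=2):
        self.n_components = n_components

    def fit(self, states):  # states: array of shape (N, d)
        self.mean_ = states.mean(axis=0)
        _, _, vt = np.linalg.svd(states - self.mean_, full_matrices=False)
        self.components_ = vt[:self.n_components]  # (n_components, d)
        return self

    def uncertainty(self, state):
        z = (state - self.mean_) @ self.components_.T  # encode
        recon = self.mean_ + z @ self.components_      # decode
        return float(np.linalg.norm(state - recon))    # reconstruction error
```

For example, after fitting on states that lie on a plane in 3-D, an in-distribution point reconstructs almost exactly (uncertainty near zero), while a point off the plane has uncertainty roughly equal to its distance from it; a control loop would trust the former prediction and re-plan (or, for active learning, deliberately explore) at the latter.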

Originally published 2018-12-18; shared from the CreateAMind WeChat public account.
