A key challenge in complex visuomotor control is learning abstract representations that are effective for specifying goals, planning, and generalization. To this end, we introduce universal planning networks (UPN). UPNs embed differentiable planning within a goal-directed policy. This planning computation unrolls a forward model in a latent space and infers an optimal action plan through gradient descent trajectory optimization. The plan-by-gradient-descent process and its underlying representations are learned end-to-end to directly optimize a supervised imitation learning objective. We find that the representations learned are not only effective for goal-directed visual imitation via gradient-based trajectory optimization, but can also provide a metric for specifying goals using images. The learned representations can be leveraged to specify distance-based rewards to reach new target states for model-free reinforcement learning, resulting in substantially more effective learning when solving new tasks described via image-based goals. We were able to achieve successful transfer of visuomotor planning strategies across robots with significantly different morphologies and actuation capabilities.
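The planning computation described above can be sketched compactly: an inner loop that optimizes an action sequence by gradient descent on a latent-space goal distance, wrapped in an outer imitation loss that is differentiated end-to-end through the planner. The module sizes, network shapes, and hyperparameters below are illustrative assumptions, not the paper's actual architecture; `encoder` and `dynamics` stand in for the learned encoder and latent forward model.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
LATENT, ACT, HORIZON, PLAN_STEPS, PLAN_LR = 8, 2, 5, 10, 0.1

# Hypothetical stand-ins for the learned modules: an observation
# encoder and a latent-space forward model.
encoder = nn.Linear(16, LATENT)             # observation -> latent
dynamics = nn.Linear(LATENT + ACT, LATENT)  # (latent, action) -> next latent

def plan(obs, goal):
    """Inner loop: gradient-descent trajectory optimization in latent space."""
    z0, zg = encoder(obs), encoder(goal)
    actions = torch.zeros(HORIZON, ACT, requires_grad=True)
    for _ in range(PLAN_STEPS):
        z = z0
        for t in range(HORIZON):                 # unroll the forward model
            z = dynamics(torch.cat([z, actions[t]]))
        plan_loss = ((z - zg) ** 2).sum()        # latent distance to goal
        # create_graph=True keeps the planning updates differentiable,
        # so the outer imitation loss can backprop through them.
        (grad,) = torch.autograd.grad(plan_loss, actions, create_graph=True)
        actions = actions - PLAN_LR * grad
    return actions

obs, goal = torch.randn(16), torch.randn(16)
expert_actions = torch.randn(HORIZON, ACT)

# Outer loop: supervised imitation loss on the planned actions,
# optimized end-to-end through the plan-by-gradient-descent process.
planned = plan(obs, goal)
imitation_loss = ((planned - expert_actions) ** 2).mean()
imitation_loss.backward()

# The learned latent metric can also serve as a distance-based reward
# for model-free RL on new image-specified goals:
reward = -torch.norm(encoder(obs) - encoder(goal)).item()
```

Note that gradients from the imitation loss reach the encoder and dynamics weights only because the inner planning updates are kept in the autograd graph (`create_graph=True`); detaching them would reduce the representation to whatever the random initialization provides.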
This article is shared from the WeChat public account CreateAMind (createamind).