
Code: Zero-Shot Visual Imitation

Author: 用户1908973
Published: 2018-07-20 16:51:20
From the CreateAMind column.

https://github.com/pathak22/zeroshot-imitation

Zero-Shot Visual Imitation

In ICLR 2018 [Project Website] [Videos]

Deepak Pathak*, Parsa Mahmoudieh*, Guanghao Luo*, Pulkit Agrawal*, Dian Chen, Yide Shentu, Evan Shelhamer, Jitendra Malik, Alexei A. Efros, Trevor Darrell University of California, Berkeley

This is the implementation for the ICLR 2018 paper Zero-Shot Visual Imitation. We propose an alternative paradigm in which an agent first explores the world without any expert supervision and then distills its experience into a goal-conditioned skill policy with a novel forward consistency loss. The key insight is that, for most tasks, reaching the goal is more important than how it is reached.
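To make the forward consistency idea concrete, here is a minimal numpy sketch. It is not the paper's network (the actual model in this repo works on AlexNet image features with discrete rope actions); the linear `inverse_model`/`forward_model` and their weight matrices are hypothetical stand-ins used only to show where the loss is applied: the predicted action is scored by the *outcome* it produces under the forward model, not by matching the ground-truth action.

```python
import numpy as np

# Toy sketch of the forward-consistency loss. The real implementation uses
# AlexNet features and discrete actions; these tiny linear models are
# hypothetical stand-ins for illustration only.
rng = np.random.default_rng(0)
W_inv = rng.normal(size=(2, 4))  # inverse model weights: (s_t, s_t1) -> action
W_fwd = rng.normal(size=(2, 4))  # forward model weights: (s_t, a) -> s_t1

def inverse_model(s_t, s_t1):
    """Predict the action that moved the state from s_t to s_t1."""
    return W_inv @ np.concatenate([s_t, s_t1])

def forward_model(s_t, a):
    """Predict the next state reached by taking action a in state s_t."""
    return W_fwd @ np.concatenate([s_t, a])

def forward_consistency_loss(s_t, s_t1):
    """Penalize the outcome of the predicted action, not the action itself:
    feed the predicted action through the forward model and compare the
    predicted next state with the observed one."""
    a_hat = inverse_model(s_t, s_t1)
    s_hat = forward_model(s_t, a_hat)
    return float(np.sum((s_hat - s_t1) ** 2))
```

Training the inverse model through this loss leaves it free to pick any action whose predicted outcome matches the observed transition, which is exactly the "reaching the goal matters more than how" intuition above.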

@inproceedings{pathakICLR18zeroshot,
    Author = {Pathak, Deepak and
    Mahmoudieh, Parsa and Luo, Guanghao and
    Agrawal, Pulkit and Chen, Dian and
    Shentu, Yide and Shelhamer, Evan and
    Malik, Jitendra and Efros, Alexei A. and
    Darrell, Trevor},
    Title = {Zero-Shot Visual Imitation},
    Booktitle = {ICLR},
    Year = {2018}
}

1) Installation and Usage

Requirements
git clone -b master --single-branch https://github.com/pathak22/zeroshot-imitation.git
cd zeroshot-imitation/

# (1) Install requirements:
sudo apt-get install python-tk
virtualenv venv
source $PWD/venv/bin/activate
pip install --upgrade pip
pip install numpy
pip install -r src/requirements.txt

# (2) Install Caffe: http://caffe.berkeleyvision.org/install_apt.html
git clone https://github.com/BVLC/caffe.git
sudo apt-get install libprotobuf-dev libleveldb-dev libsnappy-dev libopencv-dev libhdf5-serial-dev protobuf-compiler
sudo apt-get install libatlas-base-dev
sudo apt-get install libgflags-dev libgoogle-glog-dev liblmdb-dev
sudo apt-get install --no-install-recommends libboost-all-dev
cd caffe/  # edit Makefile.config
make all -j
make pycaffe
make test -j
make runtest -j

# Note: If you are using conda, then it's easy:
# $ conda install -c conda-forge caffe
# $ conda install -c conda-forge opencv=3.2.0
Data setup

Data can be downloaded at google drive link. This is the same data as used in Combining Self-Supervised Learning and Imitation for Vision-Based Rope Manipulation.

You will need the rope9 dataset and img_mean.npy from this download.

Then, download the AlexNet weights, bvlc_alexnet.npy from here

  • put rope9 data in data/datasets/rope9
  • put img_mean.npy in data/img_mean.npy
  • put bvlc_alexnet.npy in nets/bvlc_alexnet.npy
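As a quick sanity check on the weight file, the sketch below shows how a `bvlc_alexnet.npy`-style file is typically read: `.npy` pickles of this kind usually hold a dict mapping layer names to `[weights, biases]` arrays, but inspect `weights.keys()` if your copy differs. A tiny synthetic stand-in is written to a temp directory so the snippet runs without the real download; substitute `nets/bvlc_alexnet.npy` in practice.

```python
import os
import tempfile
import numpy as np

# Synthetic stand-in for nets/bvlc_alexnet.npy, so this sketch runs without
# the real file. The assumed format is a pickled dict: layer name -> [W, b].
fake = {'conv1': [np.zeros((11, 11, 3, 96), np.float32),
                  np.zeros(96, np.float32)]}
path = os.path.join(tempfile.mkdtemp(), 'bvlc_alexnet.npy')
np.save(path, fake)

# Loading: allow_pickle/encoding are needed because the file stores a
# Python-2-era pickled dict, and .item() unwraps the 0-d object array.
weights = np.load(path, encoding='latin1', allow_pickle=True).item()
for name, (w, b) in weights.items():
    print(name, w.shape, b.shape)
```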
Training
python -i train.py

# fwd_consist=True to turn the forward consistency loss on,
# or leave it False to just learn the inverse model
r = RopeImitator('name', fwd_consist=True)

# to train the baseline, set baseline_reg=True. Note that fwd_consist
# should be turned on as well (historical accident)
r = RopeImitator('name', fwd_consist=True, baseline_reg=True)

# restore old models, if any. The default model_name is the current model name
r.restore(iteration, model_name='name of old model')

# training
r.train(num_iters)

Note that the accuracies presented are not a good measure of real-world performance. The purpose of forward consistency is to learn actions consistent with state transitions, which do not necessarily have to be the ground-truth actions.
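A tiny worked example of why action accuracy can understate performance: under a hypothetical one-dimensional dynamics (invented here for illustration) where two actions are redundant, a predicted action can score zero on action-matching accuracy yet reproduce the observed transition exactly.

```python
# Toy dynamics, invented for illustration: actions 0 and 1 both move the
# state right by one, action 2 moves it left. Actions 0 and 1 are redundant.
def step(s, a):
    return s + (1 if a in (0, 1) else -1)

s_t, a_true = 5, 0
s_t1 = step(s_t, a_true)       # observed transition: 5 -> 6

a_pred = 1                     # the model picks the redundant action instead
action_match = (a_pred == a_true)         # counted as an error by accuracy
consistent = (step(s_t, a_pred) == s_t1)  # but the outcome is identical
print(action_match, consistent)  # -> False True
```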

2) Other resources

  • Paper
  • Project Website
  • Videos
Originally published 2018-05-07. Shared from the CreateAMind WeChat public account.
