
Coach: a reinforcement learning framework that includes a CARLA Env

Author: 用户1908973
Published: 2018-07-24 14:13:29
From the CreateAMind column

https://github.com/NervanaSystems/coach

and

https://github.com/18605973470/rl-with-carla/blob/master/gym_carla.py

Curiosity:

https://pathak22.github.io/noreward-rl/
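The curiosity work linked above (Pathak et al.'s no-reward RL) augments or replaces the environment reward with an intrinsic bonus: the error of a learned forward model at predicting the next state's feature encoding. A minimal numpy sketch of just that bonus computation (the feature vectors are assumed given by an encoder; `eta` is the scaling hyperparameter):

```python
import numpy as np

def intrinsic_reward(pred_next_features, next_features, eta=0.01):
    """Curiosity bonus from the ICM forward model: half the scaled squared
    error between predicted and actual feature encodings of s_{t+1}.
    Larger prediction error => more 'surprising' transition => more reward."""
    diff = pred_next_features - next_features
    return 0.5 * eta * float(np.dot(diff, diff))
```

Transitions the forward model already predicts well yield little bonus, which drives the agent toward unexplored parts of the state space.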

Coach

Overview

Coach is a python reinforcement learning research framework containing implementations of many state-of-the-art algorithms.

It exposes a set of easy-to-use APIs for experimenting with new RL algorithms, and allows simple integration of new environments to solve. Basic RL components (algorithms, environments, neural network architectures, exploration policies, ...) are well decoupled, so that extending and reusing existing components is fairly painless.
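That decoupling can be pictured with a toy sketch (illustrative only, not Coach's actual class hierarchy): an exploration policy written as a standalone component can be handed to any agent that produces action values:

```python
import numpy as np

class EGreedy:
    """Standalone exploration component: with probability epsilon pick a
    random action, otherwise the greedy (highest-value) one."""
    def __init__(self, epsilon, rng=None):
        self.epsilon = epsilon
        self.rng = rng or np.random.default_rng(0)

    def select(self, q_values):
        if self.rng.random() < self.epsilon:
            return int(self.rng.integers(len(q_values)))
        return int(np.argmax(q_values))

class Agent:
    """Any value-based agent can reuse the same exploration component,
    so swapping exploration strategies touches one constructor argument."""
    def __init__(self, exploration):
        self.exploration = exploration

    def act(self, q_values):
        return self.exploration.select(q_values)
```

For example, `Agent(EGreedy(0.1)).act([0.2, 0.9, 0.1])` usually returns 1, and occasionally a random action.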

Training an agent to solve an environment is as easy as running:

python3 coach.py -p CartPole_DQN -r

Blog posts from the Intel® AI website:

  • Release 0.8.0 (initial release)
  • Release 0.9.0

The Coach development team can also be contacted by email at coach@intel.com.

Table of Contents

  • Coach
    • Overview
    • Documentation
    • Installation
      • Coach Installer
      • TensorFlow GPU Support
    • Usage
      • Running Coach
      • Running Coach Dashboard (Visualization)
      • Parallelizing an Algorithm
    • Supported Environments
    • Supported Algorithms
    • Citation
    • Disclaimer

Documentation

Framework documentation, algorithm description and instructions on how to contribute a new agent/environment can be found here.

Installation

Note: Coach has only been tested on Ubuntu 16.04 LTS, and with Python 3.5.

Coach Installer

Coach's installer will set up all the basics needed to get the user going with running Coach on top of OpenAI Gym environments. This can be done by running the following command and then following the on-screen printed instructions:

./install.sh

Coach creates a virtual environment and installs into it, to avoid changes to the user's system.

To activate and deactivate Coach's virtual environment:

source coach_env/bin/activate
deactivate

In addition to OpenAI Gym, several other environments were tested and are supported. Please follow the instructions in the Supported Environments section below to install additional environments.

TensorFlow GPU Support

By default, Coach's installer installs Intel-Optimized TensorFlow, which does not support GPU. To run Coach with GPU, a GPU-enabled TensorFlow build must be installed. This can be done by overriding the TensorFlow version:

pip3 install tensorflow-gpu

Usage

Running Coach

Coach supports both TensorFlow and neon deep learning frameworks.

Switching between TensorFlow and neon backends is possible by using the -f flag.

Using TensorFlow (default): -f tensorflow

Using neon: -f neon

There are several available presets in presets.py. To list all the available presets use the -l flag.

To run a preset, use:

python3 coach.py -r -p <preset_name>

For example:

  1. CartPole environment using Policy Gradients:
python3 coach.py -r -p CartPole_PG
  2. Pendulum using Clipped PPO:
python3 coach.py -r -p Pendulum_ClippedPPO -n 8
  3. MountainCar using A3C:
python3 coach.py -r -p MountainCar_A3C -n 8
  4. Doom basic level using Dueling network and Double DQN algorithm:
python3 coach.py -r -p Doom_Basic_Dueling_DDQN
  5. Doom health gathering level using Mixed Monte Carlo:
python3 coach.py -r -p Doom_Health_MMC

It is easy to create new presets for different levels or environments by following the same pattern as in presets.py.
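The pattern can be pictured as a named binding of agent type, environment level, and hyperparameters. A schematic sketch only (Coach's real presets.py defines its own Preset classes; the names and fields here are illustrative):

```python
from dataclasses import dataclass

@dataclass
class Preset:
    """Schematic stand-in for a Coach preset: which agent, which
    environment level, and the tuning knobs that differ from defaults."""
    agent: str
    level: str
    params: dict

PRESETS = {
    "CartPole_DQN": Preset("DQN", "CartPole-v0", {"learning_rate": 0.00025}),
    # a new preset for another level follows the same pattern:
    "MountainCar_DQN": Preset("DQN", "MountainCar-v0",
                              {"learning_rate": 0.00025}),
}
```

Adding a preset is then just another entry in the table, which is what makes `-p <preset_name>` selection possible.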

More usage examples can be found here.

Running Coach Dashboard (Visualization)

Training an agent to solve an environment can be tricky at times.

To help debug the training process, Coach outputs several signals per trained algorithm, so that algorithmic performance can be tracked.

While Coach trains an agent, a csv file containing the relevant training signals will be saved to the 'experiments' directory. Coach's dashboard can then be used to dynamically visualize the training signals, and track algorithmic behavior.
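That csv can also be inspected without the dashboard. A small stdlib-only sketch (the column name "Training Reward" is illustrative; the actual signal names depend on the algorithm):

```python
import csv
import io

def smoothed_signal(csv_text, column, window=3):
    """Read one training signal from an experiment csv and return its
    simple moving average, similar to the smoothing a dashboard applies
    when plotting noisy training curves."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    values = [float(r[column]) for r in rows]
    out = []
    for i in range(len(values)):
        chunk = values[max(0, i - window + 1):i + 1]
        out.append(sum(chunk) / len(chunk))
    return out
```

For example, for a signal `10, 20, 30` and `window=2` this yields `10.0, 15.0, 25.0`.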

To use it, run:

python3 dashboard.py

Parallelizing an Algorithm

Since the introduction of A3C in 2016, many algorithms have been shown to benefit from running multiple instances in parallel on many CPU cores. So far, these algorithms include A3C, DDPG, PPO, and NAF, and this is most probably only the beginning.

Parallelizing an algorithm using Coach is straightforward.

The following method of NetworkWrapper parallelizes an algorithm seamlessly:

network.train_and_sync_networks(current_states, targets)

Once a parallelized run is started, the train_and_sync_networks API will apply gradients from each local worker's network to the main global network, allowing for parallel training to take place.
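The effect of that call can be sketched with plain numpy (a toy model of the A3C-style update pattern, not Coach's internals): each worker applies its locally computed gradients to the shared global parameters, then refreshes its local copy:

```python
import numpy as np

class GlobalNetwork:
    """Shared parameters that all workers update."""
    def __init__(self, n_params):
        self.theta = np.zeros(n_params)

    def apply_gradients(self, grads, lr=0.1):
        self.theta -= lr * grads

class Worker:
    """Local copy of the network; trains against the shared one."""
    def __init__(self, global_net):
        self.global_net = global_net
        self.local_theta = global_net.theta.copy()

    def train_and_sync(self, grads):
        # apply this worker's gradients to the shared global network ...
        self.global_net.apply_gradients(grads)
        # ... then pull the updated parameters back (the "sync" step)
        self.local_theta = self.global_net.theta.copy()
```

Workers never exchange parameters with each other directly; all updates flow through the global network, which is what keeps the scheme simple to scale with `-n`.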

Then, it merely requires running Coach with the -n flag and with the number of workers to run with. For instance, the following command will set 16 workers to work together to train a MuJoCo Hopper:

python3 coach.py -p Hopper_A3C -n 16

Supported Environments

  • OpenAI Gym: Installed by default by Coach's installer.
  • ViZDoom: Follow the instructions in the ViZDoom repository - https://github.com/mwydmuch/ViZDoom. Additionally, Coach assumes that the environment variable VIZDOOM_ROOT points to the ViZDoom installation directory.
  • Roboschool: Follow the instructions in the roboschool repository - https://github.com/openai/roboschool
  • GymExtensions: Follow the instructions in the GymExtensions repository - https://github.com/Breakend/gym-extensions. Additionally, add the installation directory to the PYTHONPATH environment variable.
  • PyBullet: Follow the instructions in the Quick Start Guide (basically just 'pip install pybullet').
  • CARLA: Download release 0.7 from the CARLA repository - https://github.com/carla-simulator/carla/releases. Create a new CARLA_ROOT environment variable pointing to CARLA's installation directory. A simple CARLA settings file (CarlaSettings.ini) is supplied with Coach, and is located in the environments directory.
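The gym_carla.py wrapper linked at the top of this post exposes CARLA through the standard Gym interface. A skeletal sketch with the simulator stubbed out (the actual CARLA client calls are omitted and marked as placeholders; only the reset/step contract is shown):

```python
class CarlaEnvSketch:
    """Gym-style wrapper skeleton: reset() returns an observation,
    step(action) returns (observation, reward, done, info). The parts
    that would talk to the CARLA server are placeholders here."""
    def __init__(self, max_steps=100):
        self.max_steps = max_steps
        self.t = 0

    def reset(self):
        self.t = 0
        # placeholder: start a new CARLA episode and read initial sensors
        return {"camera": None, "speed": 0.0}

    def step(self, action):
        self.t += 1
        # placeholder: send (steer, throttle) to CARLA, read measurements
        obs = {"camera": None, "speed": 0.0}
        reward = 0.0  # e.g. forward progress minus collision penalties
        done = self.t >= self.max_steps
        return obs, reward, done, {}
```

Because any agent in the framework only sees reset/step, the same algorithms that solve Gym levels can drive the CARLA wrapper unchanged.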

Supported Algorithms

  • Deep Q Network (DQN) (code)
  • Double Deep Q Network (DDQN) (code)
  • Dueling Q Network
  • Mixed Monte Carlo (MMC) (code)
  • Persistent Advantage Learning (PAL) (code)
  • Categorical Deep Q Network (C51) (code)
  • Quantile Regression Deep Q Network (QR-DQN) (code)
  • Bootstrapped Deep Q Network (code)
  • N-Step Q Learning | Distributed (code)
  • Neural Episodic Control (NEC) (code)
  • Normalized Advantage Functions (NAF) | Distributed (code)
  • Policy Gradients (PG) | Distributed (code)
  • Asynchronous Advantage Actor-Critic (A3C) | Distributed (code)
  • Deep Deterministic Policy Gradients (DDPG) | Distributed (code)
  • Proximal Policy Optimization (PPO) (code)
  • Clipped Proximal Policy Optimization | Distributed (code)
  • Direct Future Prediction (DFP) | Distributed (code)
  • Behavioral Cloning (BC) (code)

Citation

If you used Coach for your work, please use the following citation:

@misc{caspi_itai_2017_1134899,
  author       = {Caspi, Itai and
                  Leibovich, Gal and
                  Novik, Gal},
  title        = {Reinforcement Learning Coach},
  month        = dec,
  year         = 2017,
  doi          = {10.5281/zenodo.1134899},
  url          = {https://doi.org/10.5281/zenodo.1134899}
}

Disclaimer

Coach is released as a reference code for research purposes. It is not an official Intel product, and the level of quality and support may not be as expected from an official product. Additional algorithms and environments are planned to be added to the framework. Feedback and contributions from the open source and RL research communities are more than welcome.

Originally published 2018-03-14, shared from the CreateAMind WeChat public account as part of the Tencent Cloud self-media sharing program.
