I have been trying to use the custom OpenAI Gym fixed-wing UAV environment from https://github.com/eivindeb/fixed-wing-gym by testing it with the OpenAI stable-baselines algorithms, but I have been stuck on this problem for several days. My starting point is the CartPole example "Multiprocessing: Unleashing the Power of Vectorized Environments" from https://stable-baselines.readthedocs.io/en/master/guide/examples.html#multiprocessing-unleashing-the-power-of-vectorized-environments. Since I need to pass arguments to the environment and want to use multiprocessing, I believe this example is exactly what I need.
I have modified the baseline example as follows:
import gym
import numpy as np
from stable_baselines.common.policies import MlpPolicy
from stable_baselines.common.vec_env import SubprocVecEnv
from stable_baselines.common import set_global_seeds
from stable_baselines import ACKTR, PPO2
from gym_fixed_wing.fixed_wing import FixedWingAircraft
def make_env(env_id, rank, seed=0):
    """
    Utility function for multiprocessed env.
    :param env_id: (str) the environment ID
    :param rank: (int) index of the subprocess
    :param seed: (int) the initial seed for RNG
    """
    def _init():
        env = FixedWingAircraft("fixed_wing_config.json")
        #env = gym.make(env_id)
        env.seed(seed + rank)
        return env
    set_global_seeds(seed)
    return _init
if __name__ == '__main__':
    env_id = "fixed_wing"
    #env_id = "CartPole-v1"
    num_cpu = 4  # Number of processes to use
    # Create the vectorized environment
    env = SubprocVecEnv([lambda: FixedWingAircraft for i in range(num_cpu)])
    #env = SubprocVecEnv([make_env(env_id, i) for i in range(num_cpu)])
    model = PPO2(MlpPolicy, env, verbose=1)
    model.learn(total_timesteps=25000)
    obs = env.reset()
    for _ in range(1000):
        action, _states = model.predict(obs)
        obs, rewards, dones, info = env.step(action)
        env.render()

The error I keep getting is:
Traceback (most recent call last):
  File "/home/bonie/PycharmProjects/deepRL_fixedwing/fixed-wing-gym/gym_fixed_wing/ACKTR_fixedwing.py", line 38, in <module>
    model = PPO2(MlpPolicy, env, verbose=1)
  File "/home/bonie/PycharmProjects/deepRL_fixedwing/stable-baselines/stable_baselines/ppo2/ppo2.py", line 104, in __init__
    self.setup_model()
  File "/home/bonie/PycharmProjects/deepRL_fixedwing/stable-baselines/stable_baselines/ppo2/ppo2.py", line 134, in setup_model
    n_batch_step, reuse=False, **self.policy_kwargs)
  File "/home/bonie/PycharmProjects/deepRL_fixedwing/stable-baselines/stable_baselines/common/policies.py", line 660, in __init__
    feature_extraction="mlp", **_kwargs)
  File "/home/bonie/PycharmProjects/deepRL_fixedwing/stable-baselines/stable_baselines/common/policies.py", line 540, in __init__
    scale=(feature_extraction == "cnn"))
  File "/home/bonie/PycharmProjects/deepRL_fixedwing/stable-baselines/stable_baselines/common/policies.py", line 221, in __init__
    scale=scale)
  File "/home/bonie/PycharmProjects/deepRL_fixedwing/stable-baselines/stable_baselines/common/policies.py", line 117, in __init__
    self._obs_ph, self._processed_obs = observation_input(ob_space, n_batch, scale=scale)
  File "/home/bonie/PycharmProjects/deepRL_fixedwing/stable-baselines/stable_baselines/common/input.py", line 51, in observation_input
    type(ob_space).__name__))
NotImplementedError: Error: the model does not support input space of type NoneType

I don't know what I am really supposed to pass in as env_id, or to the def make_env(env_id, rank, seed=0) function. I also think the SubprocVecEnv call that sets up the parallel processes is not configured correctly.
I am writing the code with Python 3.6 in the PyCharm IDE on Ubuntu 18.04.

Any suggestions at this point would be very helpful!

Thank you.
Posted on 2019-11-21 13:50:26
You created a custom environment, but you did not register it with the OpenAI Gym interface. That is what env_id refers to: all environments in gym are instantiated by calling them up under their registered name.
So basically what you need to do is follow the setup instructions here, create the appropriate __init__.py and setup.py scripts, and follow the same file structure (a sketch of both files follows).
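A minimal sketch of what those two files might contain, assuming the package directory is gym_fixed_wing/ and the environment class lives in gym_fixed_wing/fixed_wing.py; the id string "fixed-wing-v0", the version number, and the kwargs keyword name are illustrative assumptions, not values taken from the actual repository:

# gym_fixed_wing/__init__.py
from gym.envs.registration import register

register(
    id="fixed-wing-v0",  # illustrative id; pick your own name-v0 string
    entry_point="gym_fixed_wing.fixed_wing:FixedWingAircraft",
    # The constructor takes a config file path; constructor arguments can
    # be supplied through register()'s kwargs parameter. The keyword name
    # "config_path" is a guess -- check the FixedWingAircraft signature.
    kwargs={"config_path": "fixed_wing_config.json"},
)

# setup.py (one directory above gym_fixed_wing/)
from setuptools import setup, find_packages

setup(
    name="gym_fixed_wing",
    version="0.0.1",
    packages=find_packages(),
    install_requires=["gym"],
)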
Finally, install the package locally with pip install -e . from your environment's directory.
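Once the package is installed, the commented-out lines in your script become the working path: make_env can build each subprocess environment through gym.make, and the list passed to SubprocVecEnv should call make_env, so that each element is a callable returning an environment instance rather than a lambda returning the class itself. A sketch, using the illustrative id registered above:

env_id = "fixed-wing-v0"  # whatever id you registered

def make_env(env_id, rank, seed=0):
    def _init():
        env = gym.make(env_id)  # resolvable now that the env is registered
        env.seed(seed + rank)
        return env
    set_global_seeds(seed)
    return _init

# Each list element is a callable that returns an env instance.
# lambda: FixedWingAircraft returned the class object itself, which is
# likely why the model saw an observation space of NoneType.
env = SubprocVecEnv([make_env(env_id, i) for i in range(num_cpu)])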
https://stackoverflow.com/questions/58941164