
Training a neural network with Gym

Stack Overflow user
Asked on 2018-03-11 17:15:33
Answers: 1 · Views: 1.7K · Followers: 0 · Votes: 3

I have some problems with the code provided below. I am working with Python 3.6. I have already reinstalled Python and all the modules needed to run the code. In general, everything I did is based on this tutorial.

Problem description:

When I run this code, I get the following warnings and no output at all. I don't understand what these warnings mean or how I can fix them. I would appreciate any help.

Warning (from warnings module):

  File "D:\Users\Rafal\AppData\Local\Programs\Python\Python36\lib\site-packages\h5py\__init__.py", line 36
    from ._conv import register_converters as _register_converters
FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.

And:

WARN: gym.spaces.Box autodetected dtype as <class 'numpy.float32'>. Please provide explicit dtype.
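For context, this warning comes from inside gym itself: CartPole's observation space is a Box that older gym versions built without an explicit dtype. It is cosmetic and does not prevent the code from running. A small sketch showing the space the warning refers to (the printed values are typical for gym of that era, not guaranteed across versions):

import gym

env = gym.make("CartPole-v0")
# CartPole's observation space is a 4-dimensional Box; the warning
# fires while gym constructs this space without an explicit dtype
print(env.observation_space)        # e.g. Box(4,)
print(env.observation_space.dtype)  # the autodetected dtype, e.g. float32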

The code I am running:

import gym
import random
import numpy as np
import tflearn
from tflearn.layers.core import input_data, dropout, fully_connected
from tflearn.layers.estimator import regression
from statistics import median, mean
from collections import Counter

LR = 1e-3
env = gym.make("CartPole-v0")
env.reset()
goal_steps = 500
score_requirement = 50
initial_games = 10000


def initial_population():
    # [OBS, MOVES]
    training_data = []
    # all scores:
    scores = []
    # just the scores that met our threshold:
    accepted_scores = []
    # iterate through however many games we want:
    for _ in range(initial_games):
        score = 0
        # moves specifically from this environment:
        game_memory = []
        # previous observation that we saw
        prev_observation = []
        # for each frame (CartPole-v0 itself ends an episode after at most 200 steps)
        for _ in range(goal_steps):
            # choose random action (0 or 1)
            action = random.randrange(0,2)
            # do it!
            observation, reward, done, info = env.step(action)

            # notice that the observation is returned FROM the action
            # so we'll store the previous observation here, pairing
            # the prev observation to the action we'll take.
            if len(prev_observation) > 0:
                game_memory.append([prev_observation, action])
            prev_observation = observation
            score += reward
            if done: break

        # IF our score is higher than our threshold, we'd like to save
        # every move we made
        # NOTE the reinforcement methodology here. 
        # all we're doing is reinforcing the score, we're not trying 
        # to influence the machine in any way as to HOW that score is 
        # reached.
        if score >= score_requirement:
            accepted_scores.append(score)
            for data in game_memory:
                # convert to one-hot (this is the output layer for our neural network)
                if data[1] == 1:
                    output = [0,1]
                elif data[1] == 0:
                    output = [1,0]

                # saving our training data
                training_data.append([data[0], output])

        # reset env to play again
        env.reset()
        # save overall scores
        scores.append(score)

    # just in case you wanted to reference later
    training_data_save = np.array(training_data)
    np.save('saved.npy',training_data_save)

    # some stats here, to further illustrate the neural network magic!
    print('Average accepted score:',mean(accepted_scores))
    print('Median score for accepted scores:',median(accepted_scores))
    print(Counter(accepted_scores))

    return training_data
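Worth noting: the snippet as posted only defines initial_population(); nothing ever calls it, which by itself would explain seeing no output. A minimal way to run it and inspect the result (a sketch; the tutorial goes on to feed training_data into a tflearn model):

if __name__ == "__main__":
    training_data = initial_population()
    # each entry pairs an observation with a one-hot encoded action
    print(len(training_data), "training samples collected")
    print("example pair:", training_data[0])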

1 Answer

Stack Overflow user

Answered on 2018-06-05 07:08:35

To answer the second question, regarding this error:

gym.spaces.Box autodetected dtype as <class 'numpy.float32'>

Go to the directory where your downloaded gym files live. Then go into gym/spaces/ and open the file "box.py".

Somewhere around line 12, you should see:

def __init__(self, low=None, high=None, shape=None, dtype=None):

Change dtype=None to dtype=np.float32.

This fixed the error for me.
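One caveat worth adding: edits under site-packages are lost whenever the package is reinstalled. If upgrading is an option, later gym releases pass an explicit dtype themselves, which removes the warning without patching (a sketch, assuming pip manages the environment; the explicit-dtype construction also applies to any Box you build in your own code):

# Run in a shell, not inside Python:
#   pip install --upgrade gym

# Passing dtype explicitly keeps Box's autodetection from running at all:
import numpy as np
from gym.spaces import Box

my_space = Box(low=0.0, high=1.0, shape=(2,), dtype=np.float32)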

Votes: 3
Original page content provided by Stack Overflow; translation supported by Tencent Cloud Xiaowei's IT-domain engine.
Original link:

https://stackoverflow.com/questions/49218443
