Training a Neural Network with Gym

Content sourced from Stack Overflow, translated and used under the CC BY-SA 3.0 license.


I have a few problems with the code provided below. I am working with Python 3.6 and have already reinstalled Python along with all the modules the code needs.

Problem description:

When I run this code, I get the following warnings and no output at all. I don't understand what these warnings mean or how to fix the problem.

Warning (from warnings module):
  File "D:\Users\Rafal\AppData\Local\Programs\Python\Python36\lib\site-packages\h5py\__init__.py", line 36
    from ._conv import register_converters as _register_converters
FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.

And:

WARN: gym.spaces.Box autodetected dtype as <class 'numpy.float32'>. Please provide explicit dtype.
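
(For reference: the first message is the well-known FutureWarning that older h5py builds emit under newer NumPy; it is harmless and normally disappears after upgrading h5py. If needed, it can also be silenced before h5py/tflearn is imported — a minimal sketch, not part of the original question:)

import warnings

# Suppress the harmless h5py/NumPy FutureWarning about np.issubdtype;
# the underlying issue was fixed upstream in later h5py releases.
warnings.filterwarnings("ignore", category=FutureWarning)

import tflearn  # this import pulls in h5py, which used to raise the warning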

My code is:

import gym
import random
import numpy as np
import tflearn
from tflearn.layers.core import input_data, dropout, fully_connected
from tflearn.layers.estimator import regression
from statistics import median, mean
from collections import Counter

LR = 1e-3
env = gym.make("CartPole-v0")
env.reset()
goal_steps = 500
score_requirement = 50
initial_games = 10000


def initial_population():
    # [OBS, MOVES]
    training_data = []
    # all scores:
    scores = []
    # just the scores that met our threshold:
    accepted_scores = []
    # iterate through however many games we want:
    for _ in range(initial_games):
        score = 0
        # moves specifically from this environment:
        game_memory = []
        # previous observation that we saw
        prev_observation = []
        # for each frame, up to goal_steps
        for _ in range(goal_steps):
            # choose random action (0 or 1)
            action = random.randrange(0,2)
            # do it!
            observation, reward, done, info = env.step(action)

            # notice that the observation is returned FROM the action
            # so we'll store the previous observation here, pairing
            # the prev observation to the action we'll take.
            if len(prev_observation) > 0 :
                game_memory.append([prev_observation, action])
            prev_observation = observation
            score+=reward
            if done: break

        # IF our score is higher than our threshold, we'd like to save
        # every move we made
        # NOTE the reinforcement methodology here. 
        # all we're doing is reinforcing the score, we're not trying 
        # to influence the machine in any way as to HOW that score is 
        # reached.
        if score >= score_requirement:
            accepted_scores.append(score)
            for data in game_memory:
                # convert to one-hot (this is the output layer for our neural network)
                if data[1] == 1:
                    output = [0,1]
                elif data[1] == 0:
                    output = [1,0]

                # saving our training data
                training_data.append([data[0], output])

        # reset env to play again
        env.reset()
        # save overall scores
        scores.append(score)

    # just in case you wanted to reference later
    training_data_save = np.array(training_data)
    np.save('saved.npy',training_data_save)

    # some stats here, to further illustrate the neural network magic!
    print('Average accepted score:',mean(accepted_scores))
    print('Median score for accepted scores:',median(accepted_scores))
    print(Counter(accepted_scores))

    return training_data
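
As posted, the snippet only defines initial_population() and never calls it, so on its own it prints nothing; a minimal invocation (assumed here, not shown in the original post) would be:

if __name__ == "__main__":
    training_data = initial_population()
    print(len(training_data), "training samples collected")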
Answer:

To answer the second issue, the one with this error:

gym.spaces.Box autodetected dtype as <class 'numpy.float32'>

Go to the directory where your gym files were downloaded and open the file gym/spaces/box.py.

Near line 12, you should see:

def __init__(self, low=None, high=None, shape=None, dtype=None):

Change dtype=None to dtype=np.float32.
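
Applied, the line would read as follows (a sketch against the gym sources of that era; later gym releases already ship a default dtype):

def __init__(self, low=None, high=None, shape=None, dtype=np.float32):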

This fixed the error for me.
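
As a quick sanity check (assuming a gym version from around that time), constructing a Box with an explicit dtype avoids the autodetection warning entirely:

import numpy as np
from gym.spaces import Box

# dtype is passed explicitly, so gym has nothing to autodetect
box = Box(low=-1.0, high=1.0, shape=(4,), dtype=np.float32)
print(box.dtype)  # float32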
