
Multi-Goal Reinforcement Learning Tutorials: Two Articles, Both With Code

Author: 用户1908973 · Column: CreateAMind · Published 2018-07-20

1. https://flyyufelix.github.io/2017/11/17/direct-future-prediction.html (with code)

Direct Future Prediction - Supervised Learning for Reinforcement Learning

2. Original article: https://www.oreilly.com/ideas/reinforcement-learning-for-complex-goals-using-tensorflow

For the following passage, reading the original article is recommended:

This new formulation changes our neural network in several ways. Instead of just a state, we will also provide as input to the network the current measurements and goal. Instead of Q-values, our network will now output a prediction tensor of the form [Measurements X Actions X Offsets]. Taking the product of the summed predicted future changes and our goals, we can pick actions that best satisfy our goals over time:
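A minimal numpy sketch of this action-selection rule, assuming a prediction tensor of shape [measurements x actions x offsets] (all sizes and names below are illustrative, not taken from the article's code):

import numpy as np

# Illustrative sizes: 3 measurements, 4 actions, 6 temporal offsets.
num_measurements, num_actions, num_offsets = 3, 4, 6

# Prediction tensor from the network: predicted future changes in each
# measurement, for each action, at each temporal offset.
pred = np.random.randn(num_measurements, num_actions, num_offsets)

# Goal vector: relative importance of each measurement.
goal = np.array([1.0, 0.5, -0.5])

# Sum predicted changes over the temporal offsets, weight by the goal,
# and pick the action that best satisfies the goal over time.
summed = pred.sum(axis=2)        # [measurements, actions]
objective = goal @ summed        # one score per action
best_action = int(np.argmax(objective))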

QbitAI (量子位) has published a Chinese translation, also with code: https://mp.weixin.qq.com/s/XHdaoOWBgOWX7SrOemY4jw

From the paper (Dosovitskiy & Koltun, "Learning to Act by Predicting the Future"):

The output of the network is a prediction of future measurements for each action, composed by summing the output of the expectation stream and the normalized action-conditional output of the action stream:

$$p = \left\langle p^{a_1}, \ldots, p^{a_w} \right\rangle = \left\langle A^1(j) + E(j), \ldots, A^w(j) + E(j) \right\rangle$$
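Here the "normalized" action-conditional output refers to subtracting the action stream's mean across actions before adding the expectation stream, in the spirit of the duelling-network decomposition. A minimal numpy sketch of this composition (sizes are illustrative assumptions):

import numpy as np

w, pred_size = 4, 18  # e.g. 4 actions, 3 measurements x 6 offsets (assumed)

E = np.random.randn(pred_size)     # expectation stream E(j)
A = np.random.randn(w, pred_size)  # action stream A^i(j), one row per action

# Normalize so the action stream averages to zero across actions, then add
# the shared expectation stream to obtain p^{a_i} for every action.
A_bar = A - A.mean(axis=0, keepdims=True)
p = A_bar + E                      # shape [w, pred_size]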

In code (Python, Keras):
# Legacy Keras 1.x API: one Dense action stream per action, each summed with
# the shared expectation stream to form that action's prediction.
for i in range(action_size):
    action_stream = Dense(measurement_pred_size, activation='relu')(concat_feat)
    prediction_list.append(merge([action_stream, expectation_stream], mode='sum'))
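For context, here is a sketch of how these streams could sit inside the full network, written with the current tf.keras functional API rather than the legacy merge; all input shapes and layer sizes are assumptions, and, like the snippet above, it leaves out the paper's mean-subtraction:

import tensorflow as tf
from tensorflow.keras import layers, Model

action_size = 3
measurement_pred_size = 18  # e.g. 3 measurements x 6 offsets (assumed)

state_in = layers.Input(shape=(84, 84, 4))  # stacked frames (assumed shape)
meas_in = layers.Input(shape=(3,))          # current measurements
goal_in = layers.Input(shape=(18,))         # goal vector

# Perception, measurement, and goal modules feed one joint feature vector.
x = layers.Conv2D(32, 8, strides=4, activation='relu')(state_in)
x = layers.Conv2D(64, 4, strides=2, activation='relu')(x)
x = layers.Flatten()(x)
m = layers.Dense(128, activation='relu')(meas_in)
g = layers.Dense(128, activation='relu')(goal_in)
concat_feat = layers.Concatenate()([x, m, g])

# One shared expectation stream plus one action stream per action.
expectation_stream = layers.Dense(measurement_pred_size)(concat_feat)
prediction_list = []
for _ in range(action_size):
    action_stream = layers.Dense(measurement_pred_size)(concat_feat)
    prediction_list.append(layers.Add()([action_stream, expectation_stream]))

model = Model([state_in, meas_in, goal_in], prediction_list)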
Action selection is epsilon-greedy over the goal-weighted predictions:
def get_action(self, state, measurement, goal, inference_goal):
    """
    Get action from model using epsilon-greedy policy
    """
    if np.random.rand() <= self.epsilon:
        # Explore: uniformly random action.
        action_idx = random.randrange(self.action_size)
    else:
        measurement = np.expand_dims(measurement, axis=0)
        goal = np.expand_dims(goal, axis=0)
        # One prediction per action, e.g. [1x6, 1x6, 1x6] for three actions.
        f = self.model.predict([state, measurement, goal])
        f_pred = np.vstack(f)  # stack into [num_actions x pred_size], e.g. 3x6
        # Goal-weighted sum of predicted measurement changes, one score per action.
        obj = np.sum(np.multiply(f_pred, inference_goal), axis=1)
        # Exploit: action with the highest goal-weighted predicted change.
        action_idx = np.argmax(obj)
    return action_idx
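A hypothetical usage sketch, matching the shape comments above (six-dimensional predictions per action); agent, state, and measurement are assumed to come from the surrounding training loop, and the goal values are made up:

import numpy as np

goal = np.array([0.0, 0.0, 0.0, 0.5, 0.5, 1.0])  # weight later offsets more
inference_goal = goal                            # same objective at test time
action_idx = agent.get_action(state, measurement, goal, inference_goal)

Note that the network is conditioned on goal as an input, while inference_goal only weights the predictions when scoring actions, so the two can in principle differ at test time.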
Originally published 2018-02-26 on the CreateAMind WeChat public account.
