# Multi-Goal Reinforcement Learning Tutorials (Both Include Code)

1. https://flyyufelix.github.io/2017/11/17/direct-future-prediction.html (includes code)

Direct Future Prediction - Supervised Learning for Reinforcement Learning

2. Original article: https://www.oreilly.com/ideas/reinforcement-learning-for-complex-goals-using-tensorflow

This new formulation changes our neural network in several ways. Instead of just a state, we will also provide as input to the network the current measurements and goal. Instead of Q-values, our network will now output a prediction tensor of the form [Measurements X Actions X Offsets]. Taking the product of the summed predicted future changes and our goals, we can pick actions that best satisfy our goals over time:
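The action-selection rule described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the tutorial's actual code: the sizes, the random `predictions` tensor standing in for the network output, and the example `goal` vector are all hypothetical.

```python
import numpy as np

# Hypothetical sizes: 3 measurements, 4 actions, 6 temporal offsets
num_measurements, num_actions, num_offsets = 3, 4, 6

# Stand-in for the network's prediction tensor [Measurements x Actions x Offsets]:
# predicted future changes of each measurement, per action, per time offset
rng = np.random.default_rng(0)
predictions = rng.standard_normal((num_measurements, num_actions, num_offsets))

# Goal vector weights each measurement (e.g. value health, partially value ammo)
goal = np.array([1.0, 0.5, 0.0])

# Sum predicted changes over temporal offsets, then weight by the goal
summed = predictions.sum(axis=2)   # [Measurements x Actions]
objective = goal @ summed          # one scalar score per action
best_action = int(np.argmax(objective))
```

The key point is that the goal enters only at action-selection time, so the same trained network can serve different goals by swapping the `goal` vector.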

https://mp.weixin.qq.com/s/XHdaoOWBgOWX7SrOemY4jw (includes code)

The output of the network is a prediction of future measurements for each action, composed by summing the output of the expectation stream and the normalized action-conditional output of the action stream:

$$p = \left\langle p^{a_1}, \ldots, p^{a_w} \right\rangle = \left\langle A^1(j) + E(j), \ldots, A^w(j) + E(j) \right\rangle$$
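The decomposition above can be illustrated numerically. In the DFP paper the action streams are normalized by subtracting their mean across actions, so they encode only action-relative differences while the expectation stream carries the action-agnostic part. This is a minimal sketch with made-up shapes and random values, not the tutorial's code:

```python
import numpy as np

# Hypothetical shapes: w actions, each predicting m future-measurement values
w, m = 4, 6
rng = np.random.default_rng(1)
E = rng.standard_normal(m)        # expectation stream E(j): action-agnostic
A = rng.standard_normal((w, m))   # raw action streams A^i(j)

# Normalize action streams by subtracting their mean across actions,
# so each stream only encodes how action i differs from the average action
A_norm = A - A.mean(axis=0, keepdims=True)

# Final prediction for each action: p^{a_i} = A^i(j) + E(j)
p = A_norm + E                    # shape (w, m)
```

A consequence of the normalization is that averaging the predictions over all actions recovers the expectation stream exactly.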

```python
# Build one prediction head per action: each action-conditional stream is
# summed with the shared expectation stream. Add() replaces the deprecated
# Keras 1 API merge([...], mode='sum') used in the original snippet.
prediction_list = []
for i in range(action_size):
    action_stream = Dense(measurement_pred_size, activation='relu')(concat_feat)
    prediction_list.append(Add()([action_stream, expectation_stream]))
```
```python
def get_action(self, state, measurement, goal, inference_goal):
    """
    Get action from model using epsilon-greedy policy
    """
    if np.random.rand() <= self.epsilon:
        # Explore: pick a random action
        action_idx = random.randrange(self.action_size)
    else:
        measurement = np.expand_dims(measurement, axis=0)
        goal = np.expand_dims(goal, axis=0)
        f = self.model.predict([state, measurement, goal])  # [1x6, 1x6, 1x6]
        f_pred = np.vstack(f)  # 3x6
        # Weight predicted future measurements by the goal; one score per action
        obj = np.sum(np.multiply(f_pred, inference_goal), axis=1)  # num_action
        action_idx = np.argmax(obj)
    return action_idx
```
