# Three Improvements to DQN (Part 1): Double DQN

Double DQN paper: https://arxiv.org/pdf/1509.06461v3.pdf
Code: https://github.com/princewen/tensorflow_practice/tree/master/Double-DQN-demo

# 1. Background

DQN relies on two key techniques: experience replay and a dual-network structure (an evaluation network plus a periodically synchronized target network).
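The replay-memory idea can be sketched on its own in NumPy before looking at the TensorFlow code. The class name and interface below are my own for illustration; the post's actual implementation follows in the next section:

```python
import numpy as np

class ReplayBuffer:
    """Minimal cyclic buffer storing (s, a, r, s') rows."""
    def __init__(self, capacity, n_features):
        self.capacity = capacity
        self.memory = np.zeros((capacity, n_features * 2 + 2))
        self.counter = 0

    def store(self, s, a, r, s_):
        row = np.hstack((s, [a, r], s_))
        # Oldest transitions are overwritten once the buffer is full.
        self.memory[self.counter % self.capacity] = row
        self.counter += 1

    def sample(self, batch_size):
        # Only sample from slots that have actually been written.
        high = min(self.counter, self.capacity)
        idx = np.random.choice(high, size=batch_size)
        return self.memory[idx]

buf = ReplayBuffer(capacity=4, n_features=2)
for t in range(6):  # wraps around after 4 transitions
    buf.store([t, t], a=0, r=1.0, s_=[t + 1, t + 1])
batch = buf.sample(3)
```

Sampling uniformly from this buffer breaks the temporal correlation between consecutive transitions, which is what makes the gradient updates stable.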

The loss function in DQN is the mean squared error between the target Q-value and the evaluation network's output:

L(θ) = E[(q_target − Q(s, a; θ))²]

How is q_target computed? Standard DQN uses the following formula, where θ⁻ denotes the parameters of the target network:

q_target = r + γ · max_a' Q(s', a'; θ⁻)

Because the same max operator both selects and evaluates the action, standard DQN tends to overestimate Q-values. Double DQN decouples the two steps: the evaluation network selects the action, and the target network evaluates it:

q_target = r + γ · Q(s', argmax_a' Q(s', a'; θ); θ⁻)
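The difference between the two targets can be seen with a small NumPy sketch (the Q-values are made up for illustration):

```python
import numpy as np

gamma = 0.9
reward = 1.0

# Hypothetical Q-values for the next state s' (3 actions):
q_target_net = np.array([0.5, 2.0, 1.0])  # target network Q(s', a; theta^-)
q_eval_net   = np.array([1.5, 0.2, 0.8])  # evaluation network Q(s', a; theta)

# Natural DQN: the target network both selects and evaluates the action.
y_dqn = reward + gamma * np.max(q_target_net)     # 1 + 0.9 * 2.0 = 2.8

# Double DQN: the evaluation network selects the action,
# the target network evaluates it.
a_star = np.argmax(q_eval_net)                    # action 0
y_double = reward + gamma * q_target_net[a_star]  # 1 + 0.9 * 0.5 = 1.45
```

Note that the Double DQN target is never larger than the DQN target for the same Q-values, which is exactly how it curbs overestimation.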

# 2. Code Implementation

First, the placeholders for the current state, the training target, and the next state:

```
# ------------------------ input ------------------------
self.s = tf.placeholder(tf.float32, [None, self.n_features], name='s')
self.q_target = tf.placeholder(tf.float32, [None, self.n_actions], name='Q-target')
self.s_ = tf.placeholder(tf.float32, [None, self.n_features], name='s_')
```

A shared function builds the two-layer network used by both the evaluation net and the target net:

```
def build_layers(s, c_name, n_l1, w_initializer, b_initializer):
    with tf.variable_scope('l1'):
        w1 = tf.get_variable(name='w1', shape=[self.n_features, n_l1], initializer=w_initializer, collections=c_name)
        b1 = tf.get_variable(name='b1', shape=[1, n_l1], initializer=b_initializer, collections=c_name)
        l1 = tf.nn.relu(tf.matmul(s, w1) + b1)
    with tf.variable_scope('l2'):
        w2 = tf.get_variable(name='w2', shape=[n_l1, self.n_actions], initializer=w_initializer, collections=c_name)
        b2 = tf.get_variable(name='b2', shape=[1, self.n_actions], initializer=b_initializer, collections=c_name)
        out = tf.matmul(l1, w2) + b2
    return out
```

```
# ------------------ build evaluate_net ------------------
with tf.variable_scope('eval_net'):
    c_names = ['eval_net_params', tf.GraphKeys.GLOBAL_VARIABLES]
    n_l1 = 20
    w_initializer = tf.random_normal_initializer(0, 0.3)
    b_initializer = tf.constant_initializer(0.1)
    self.q_eval = build_layers(self.s, c_names, n_l1, w_initializer, b_initializer)

# ------------------ build target_net ------------------
with tf.variable_scope('target_net'):
    c_names = ['target_net_params', tf.GraphKeys.GLOBAL_VARIABLES]
    self.q_next = build_layers(self.s_, c_names, n_l1, w_initializer, b_initializer)
```

The loss and the training operation:

```
with tf.variable_scope('loss'):
    self.loss = tf.reduce_mean(tf.squared_difference(self.q_target, self.q_eval))

with tf.variable_scope('train'):
    self.train_op = tf.train.RMSPropOptimizer(self.lr).minimize(self.loss)
```

Transitions are written into a cyclic memory, overwriting the oldest entries once it is full:

```
def store_transition(self, s, a, r, s_):
    if not hasattr(self, 'memory_counter'):
        self.memory_counter = 0
    transition = np.hstack((s, [a, r], s_))
    index = self.memory_counter % self.memory_size
    self.memory[index, :] = transition
    self.memory_counter += 1
```

Actions are chosen ε-greedily from the evaluation network (note the convention here: epsilon is the probability of acting greedily, not of exploring):

```
def choose_action(self, observation):
    observation = observation[np.newaxis, :]
    actions_value = self.sess.run(self.q_eval, feed_dict={self.s: observation})
    action = np.argmax(actions_value)
    if np.random.random() > self.epsilon:  # explore with probability 1 - epsilon
        action = np.random.randint(0, self.n_actions)
    return action
```

A batch is sampled from memory, restricted to the slots actually filled before the buffer wraps:

```
if self.memory_counter > self.memory_size:
    sample_index = np.random.choice(self.memory_size, size=self.batch_size)
else:
    sample_index = np.random.choice(self.memory_counter, size=self.batch_size)

batch_memory = self.memory[sample_index, :]
```

Every replace_target_iter learning steps, the target network's parameters are copied from the evaluation network:

```
t_params = tf.get_collection('target_net_params')
e_params = tf.get_collection('eval_net_params')
self.replace_target_op = [tf.assign(t, e) for t, e in zip(t_params, e_params)]

if self.learn_step_counter % self.replace_target_iter == 0:
    self.sess.run(self.replace_target_op)
    print('\ntarget_params_replaced\n')
```

```
q_next, q_eval4next = self.sess.run(
    [self.q_next, self.q_eval],
    feed_dict={self.s_: batch_memory[:, -self.n_features:],  # next state s' into the target net
               self.s: batch_memory[:, -self.n_features:]})  # next state s' into the eval net
```

q_next is the Q-value obtained by feeding the next state s' from the replay memory into the target network, while q_eval4next is the Q-value obtained by feeding the same next state s' into the evaluation network; the latter is used only to select the action.

The actions taken and the rewards received are recovered from the sampled batch:

```
batch_index = np.arange(self.batch_size, dtype=np.int32)
eval_act_index = batch_memory[:, self.n_features].astype(int)
reward = batch_memory[:, self.n_features + 1]
```

The core of Double DQN is these few lines:

```
if self.double_q:
    max_act4next = np.argmax(q_eval4next, axis=1)        # action selected by the eval net
    selected_q_next = q_next[batch_index, max_act4next]  # Double DQN: evaluate it with the target net
else:
    selected_q_next = np.max(q_next, axis=1)             # natural DQN
```
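The batched indexing used for selected_q_next can be checked with concrete numbers (made-up Q-values, a batch of 2 transitions, 3 actions):

```python
import numpy as np

batch_size = 2
q_next      = np.array([[0.1, 0.5, 0.3],
                        [1.0, 0.2, 0.4]])   # target net on s'
q_eval4next = np.array([[0.9, 0.1, 0.2],
                        [0.3, 0.8, 0.1]])   # eval net on s'

batch_index = np.arange(batch_size)
# Per row: the eval net picks the action, the target net scores it.
max_act4next = np.argmax(q_eval4next, axis=1)        # [0, 1]
selected_q_next = q_next[batch_index, max_act4next]  # [0.1, 0.2]
```

For the first transition the eval net prefers action 0, so q_next[0, 0] = 0.1 is used even though the target net's own maximum in that row is 0.5; this is precisely the decoupling of selection and evaluation.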

The training target starts as a copy of the evaluation network's output on the current states, so that only the entries for the actions actually taken are changed and all other actions contribute zero loss:

```
# eval net on the current states (this line is missing from the excerpt above)
q_eval = self.sess.run(self.q_eval, feed_dict={self.s: batch_memory[:, :self.n_features]})

q_target = q_eval.copy()
q_target[batch_index, eval_act_index] = reward + self.gamma * selected_q_next
```

Finally, one gradient step is taken and epsilon is annealed toward its maximum:

```
_, self.cost = self.sess.run([self.train_op, self.loss],
                             feed_dict={self.s: batch_memory[:, :self.n_features],
                                        self.q_target: q_target})
self.cost_his.append(self.cost)
self.epsilon = self.epsilon + self.epsilon_increment if self.epsilon < self.epsilon_max else self.epsilon_max
self.learn_step_counter += 1
```

# 3. References

1. Double DQN paper: https://arxiv.org/pdf/1509.06461v3.pdf
2. An analysis of DeepMind's Double Q-Learning deep reinforcement learning technique: https://www.jianshu.com/p/193ca0106aa5
