# ExponentialMovingAverage

"Some training algorithms, such as GradientDescent and Momentum often benefit from maintaining a moving average of variables during optimization. Using the moving averages for evaluations often improve results significantly." That is how the official `tensorflow` documentation introduces this feature: training with `GradientDescent` or `Momentum` can both benefit from `ExponentialMovingAverage`.

Writing `a_t` for the value of the variable at step `t` and `mv_t` for its moving average, the update rule is

`mv_t = decay * mv_{t-1} + (1 - decay) * a_t`

Unrolling the recursion (with `mv_0 = 0`) gives:

`mv_t = (1 - decay) * (a_t + decay * a_{t-1} + decay^2 * a_{t-2} + ... + decay^{t-1} * a_1)`

That is, the value of `mv_t` depends only on the history of `a`, weighted by powers of `decay`, so older values are forgotten exponentially fast.
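The recursion `mv_t = decay * mv_{t-1} + (1 - decay) * a_t` and its unrolled form can be checked with a few lines of plain Python (no TensorFlow required; the function names are just for illustration). The `mv0` term generalizes the unrolled formula to a nonzero initial shadow value, matching how TensorFlow initializes the shadow with the variable's initial value:

```python
def ema_recursive(values, decay, mv0):
    """Apply mv_t = decay * mv_{t-1} + (1 - decay) * a_t step by step."""
    mv = mv0
    out = []
    for a in values:
        mv = decay * mv + (1 - decay) * a
        out.append(mv)
    return out

def ema_unrolled(values, decay, mv0):
    """Closed form: mv_t = decay^t * mv_0 + (1 - decay) * sum_i decay^(t-i) * a_i."""
    out = []
    for t in range(1, len(values) + 1):
        mv = decay ** t * mv0
        mv += (1 - decay) * sum(decay ** (t - i) * values[i - 1] for i in range(1, t + 1))
        out.append(mv)
    return out

decay, mv0 = 0.9, 1.0
values = [2.0, 3.0, 4.0]
print(ema_recursive(values, decay, mv0))  # 1.1, 1.29, 1.561 (up to float rounding)
print(ema_unrolled(values, decay, mv0))   # same values
```

Both functions produce the same sequence, confirming that each `mv_t` is just an exponentially weighted sum of the history.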

## tensorflow 中的 ExponentialMovingAverage

In TensorFlow the method is `tf.train.ExponentialMovingAverage(decay, num_updates=None)`; the parameter names already tell you what each one means. Calling `apply` on a list of variables creates a shadow variable for each one; as the documentation notes, `shadow variables are created with trainable=False`, and these shadow variables are what hold the EMA values.

```python
import tensorflow as tf

w = tf.Variable(1.0)
ema = tf.train.ExponentialMovingAverage(0.9)
update = tf.assign_add(w, 1.0)  # produces the sequence 2, 3, 4, ...

with tf.control_dependencies([update]):
    # These two statements must stay in this order.
    ema_op = ema.apply([w])
# Look up the shadow value, using w as the key.
ema_val = ema.average(w)  # the argument cannot be a list, annoyingly

with tf.Session() as sess:
    tf.global_variables_initializer().run()
    for i in range(3):
        sess.run(ema_op)
        print(sess.run(ema_val))

# Output:
# 1.1    = 0.9 * 1    + 0.1 * 2
# 1.29   = 0.9 * 1.1  + 0.1 * 3
# 1.561  = 0.9 * 1.29 + 0.1 * 4
```

When `apply` is called, the shadow variable is initialized with the variable's initial value; if `w` is a `Tensor` rather than a `Variable`, it will be initialized with `0.0`.

The typical training-loop usage, from the official documentation:

```python
# Create variables.
var0 = tf.Variable(...)
var1 = tf.Variable(...)
# ... use the variables to build a training model...
...
# Create an op that applies the optimizer.  This is what we usually
# would use as a training op.
opt_op = opt.minimize(my_loss, [var0, var1])

# Create an ExponentialMovingAverage object
ema = tf.train.ExponentialMovingAverage(decay=0.9999)

# Create the shadow variables, and add ops to maintain moving averages
# of var0 and var1.
maintain_averages_op = ema.apply([var0, var1])

# Create an op that will update the moving averages after each training
# step.  This is what we will use in place of the usual training op.
with tf.control_dependencies([opt_op]):
    training_op = tf.group(maintain_averages_op)

# Run this op to fetch the ema value at the current step.
get_var0_average_op = ema.average(var0)
```
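The `apply`/`average` pair amounts to keeping one extra "shadow" slot per tracked variable and folding new values into it with the decay rule after each training step. As a rough mental model, here is a toy sketch in plain Python (the `ShadowTracker` class is made up for illustration, not the real TensorFlow implementation):

```python
class ShadowTracker:
    """Toy stand-in for tf.train.ExponentialMovingAverage's bookkeeping.

    apply() registers variables and creates their shadow copies;
    update() plays the role of running the op returned by apply();
    average() looks up the current shadow value. Illustrative only.
    """

    def __init__(self, decay):
        self.decay = decay
        self.shadow = {}  # variable name -> shadow value

    def apply(self, variables):
        # Shadow values start at the variable's current value,
        # mirroring how TF initializes shadows for Variables.
        for name, value in variables.items():
            self.shadow.setdefault(name, value)

    def update(self, variables):
        # shadow = decay * shadow + (1 - decay) * value
        for name, value in variables.items():
            s = self.shadow[name]
            self.shadow[name] = self.decay * s + (1 - self.decay) * value

    def average(self, name):
        return self.shadow[name]

tracker = ShadowTracker(0.9)
params = {"w": 1.0}
tracker.apply(params)
for step in range(3):
    params["w"] += 1.0      # stand-in for opt_op updating the weights
    tracker.update(params)  # stand-in for maintain_averages_op
print(tracker.average("w"))  # ~1.561, matching the earlier example
```

The `control_dependencies` block in the real code enforces exactly this ordering: the optimizer step runs first, then the shadow update folds in the freshly updated weights.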

## Using the ExponentialMovingAveraged parameters

At evaluation time you usually want the shadow values instead of the raw weights. One way is to build a `Saver` whose name map points each variable at its shadow's checkpoint name, obtained via `ema.average_name`:

```python
# Create a Saver that loads variables from their saved shadow values.
shadow_var0_name = ema.average_name(var0)
shadow_var1_name = ema.average_name(var1)
saver = tf.train.Saver({shadow_var0_name: var0, shadow_var1_name: var1})
saver.restore(...checkpoint filename...)
# var0 and var1 now hold the moving average values
```

Alternatively, `variables_to_restore` builds that name map for you:

```python
# Returns a map of names to Variables to restore.
variables_to_restore = ema.variables_to_restore()
saver = tf.train.Saver(variables_to_restore)
...
saver.restore(...checkpoint filename...)
```
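`variables_to_restore` points each tracked variable at its shadow's checkpoint entry and leaves every other variable mapped to its own name. A hypothetical plain-Python sketch of that mapping (TF names shadow variables with the `/ExponentialMovingAverage` suffix; the function bodies here are made up for illustration, not the real implementation):

```python
def average_name(var_name):
    # TF shadow variables are conventionally named "<var>/ExponentialMovingAverage".
    return var_name + "/ExponentialMovingAverage"

def variables_to_restore(all_vars, tracked):
    """Map checkpoint names to the variables that should be restored from them."""
    mapping = {}
    for name in all_vars:
        if name in tracked:
            # Tracked variables are restored from their shadow's entry.
            mapping[average_name(name)] = name
        else:
            # Everything else (e.g. global_step) restores from its own name.
            mapping[name] = name
    return mapping

print(variables_to_restore(["var0", "var1", "global_step"], {"var0", "var1"}))
```

Passing this map to `tf.train.Saver` is what makes `restore` load the moving averages into `var0` and `var1` while leaving untracked variables untouched.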

# References

https://www.tensorflow.org/versions/master/api_docs/python/train/moving_averages
