Improving Deep Neural Networks Study Notes (2)

4. Optimization algorithms

4.1 Mini-batch gradient descent

$x^{\{t\}}, y^{\{t\}}$ index different mini-batches. Square brackets, as in $z^{[l]}, a^{[l]}$, index different layers. $x^{(i)}, y^{(i)}$ index different training examples.

Batch gradient descent processes the entire training set at once. Mini-batch gradient descent processes a single mini-batch $x^{\{t\}}, y^{\{t\}}$ at a time.

Running forward propagation and back propagation once on a mini-batch is called one iteration.

Mini-batch gradient descent runs much faster than batch gradient descent.
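A minimal sketch of one epoch of mini-batch gradient descent in NumPy. The training examples are assumed to be stacked as columns of `X`; `forward_backward` and `update_parameters` are hypothetical helpers standing in for a full network implementation.

```python
import numpy as np

def random_mini_batches(X, Y, batch_size=64, seed=0):
    """Shuffle the columns of (X, Y) together and split them into mini-batches."""
    rng = np.random.default_rng(seed)
    m = X.shape[1]                           # number of training examples
    perm = rng.permutation(m)
    X_shuf, Y_shuf = X[:, perm], Y[:, perm]
    return [(X_shuf[:, k:k + batch_size], Y_shuf[:, k:k + batch_size])
            for k in range(0, m, batch_size)]

# One epoch: each (X_t, Y_t) is one mini-batch x^{t}, y^{t}, and each pass
# through the loop body is one iteration (forward prop + back prop + update).
# for X_t, Y_t in random_mini_batches(X_train, Y_train, batch_size=64):
#     grads = forward_backward(params, X_t, Y_t)       # hypothetical helper
#     params = update_parameters(params, grads, 0.01)  # hypothetical helper
```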

4.2 Understanding mini-batch gradient descent

If the mini-batch size = m, it's batch gradient descent. If the mini-batch size = 1, it's stochastic gradient descent. In practice, the mini-batch size is chosen between 1 and m.

Batch gradient descent: each iteration takes too long. Stochastic gradient descent: loses the speed-up from vectorization. Mini-batch gradient descent: fastest learning, because it (1) keeps the speed-up from vectorization and (2) makes progress without waiting to process the entire training set.

Choosing mini-batch size:

If the training set is small (m ≤ 2000), use batch gradient descent. Otherwise, typical mini-batch sizes are 64, 128, 256, 512, or (rarely) 1024; powers of 2 tend to run faster, and the mini-batch should fit in CPU/GPU memory.

4.3 Exponentially weighted averages

$$V_t = \beta V_{t-1} + (1-\beta)\theta_t$$

View $V_t$ as approximately averaging over the last $\frac{1}{1 - \beta}$ values of $\theta$.

It's called a moving average in the statistics literature.

Plot: the exponentially weighted average with $\beta = 0.9$ (red).

Plot: exponentially weighted averages with $\beta = 0.9$ (red), $\beta = 0.98$ (green), and $\beta = 0.5$ (yellow).

4.4 Understanding exponentially weighted averages

$\theta_t$ is the temperature on day $t$.

$$v_{100} = 0.9 v_{99} + 0.1 \theta_{100}$$

$$v_{99} = 0.9 v_{98} + 0.1 \theta_{99}$$

...

So

$$v_{100} = 0.1\, \theta_{100} + 0.1 \cdot 0.9\, \theta_{99} + \dots + 0.1 \cdot 0.9^{i}\, \theta_{100-i} + \dots$$

The sum of the coefficients is

$$0.1 + 0.1 \cdot 0.9 + 0.1 \cdot 0.9^2 + \dots$$

All of these coefficients add up to one, or very close to one, up to a detail called bias correction (see section 4.5). To see how many days the average effectively covers, note that with $\epsilon = 1 - \beta$:

$$(1 - \epsilon)^{\frac{1}{\epsilon}} \approx \frac{1}{e}$$

$$\frac{1}{e} \approx 0.3679$$
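For example, with $\beta = 0.9$ (so $\epsilon = 1 - \beta = 0.1$):

$$0.9^{10} = (1 - 0.1)^{\frac{1}{0.1}} \approx 0.349 \approx \frac{1}{e}$$

After about $\frac{1}{1 - \beta} = 10$ days the weight on a past temperature has decayed to roughly a third of its peak value, which is why $v_t$ behaves like an average over roughly the last 10 days.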

Implement exponentially weighted average:

$$v_0 = 0$$

$$v_1 = \beta v_0 + (1 - \beta)\theta_1$$

$$v_2 = \beta v_1 + (1 - \beta)\theta_2$$

...

The exponentially weighted average takes very little memory: only the single value $v$ needs to be stored and overwritten.
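A minimal sketch of this update loop in Python; the list of temperatures is made-up example data.

```python
def exponentially_weighted_average(thetas, beta=0.9):
    """Return the running averages v_1, ..., v_n for a sequence of readings."""
    v = 0.0                                  # v_0 = 0; the only stored state
    averages = []
    for theta in thetas:
        v = beta * v + (1 - beta) * theta    # v_t = beta * v_{t-1} + (1 - beta) * theta_t
        averages.append(v)
    return averages

# Example with made-up daily temperatures:
print(exponentially_weighted_average([40, 49, 45, 55, 52], beta=0.9))
```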

4.5 Bias correction in exponentially weighted averages

With $v_0 = 0$, the average is not a very good estimate of the temperature during the first several days. Bias correction is used to modify this estimate so that it is much better in that initial phase. The formula is:

$$v_t = \beta v_{t-1} + (1 - \beta)\theta_t, \qquad v_t^{\text{corrected}} = \frac{v_t}{1 - \beta^t}$$

As $t$ grows, $\beta^t \to 0$, so the correction has almost no effect on later estimates.
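A minimal sketch extending the previous function with bias correction; same made-up data.

```python
def bias_corrected_average(thetas, beta=0.9):
    """Return the bias-corrected running averages v_t / (1 - beta^t)."""
    v = 0.0
    corrected = []
    for t, theta in enumerate(thetas, start=1):
        v = beta * v + (1 - beta) * theta
        corrected.append(v / (1 - beta ** t))  # correction matters mainly for small t
    return corrected

print(bias_corrected_average([40, 49, 45, 55, 52], beta=0.9))
```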

4.6 Gradient descent with momentum

Gradient descent with momentum almost always works faster than the standard gradient descent algorithm. The basic idea is to compute an exponentially weighted average of the gradients, and then use that average to update the weights instead.

On iteration t:

  1. Compute $dw$, $db$ on the current mini-batch.
  2. Compute $v_{dw} = \beta v_{dw} + (1 - \beta)dw$ and $v_{db} = \beta v_{db} + (1 - \beta)db$.
  3. Update $w = w - \alpha v_{dw}$ and $b = b - \alpha v_{db}$.

There are two hyperparameters, $\alpha$ and $\beta$; the most common value for $\beta$ is 0.9.

Another formulation is $v_{dw} = \beta v_{dw} + dw$ (dropping the $1 - \beta$ factor); if you use it, the corresponding $\alpha$ needs to be rescaled.
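A minimal sketch of one momentum step for a single layer's parameters; `dw` and `db` are assumed to come from backprop on the current mini-batch, which is not shown.

```python
import numpy as np

def momentum_update(w, b, dw, db, v_dw, v_db, alpha=0.01, beta=0.9):
    """One momentum step: exponentially weighted average of gradients, then update."""
    v_dw = beta * v_dw + (1 - beta) * dw
    v_db = beta * v_db + (1 - beta) * db
    w = w - alpha * v_dw
    b = b - alpha * v_db
    return w, b, v_dw, v_db

# v_dw and v_db start at zero, with the same shapes as w and b:
# v_dw, v_db = np.zeros_like(w), np.zeros_like(b)
```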

4.7 RMSprop

RMSprop stands for root mean square prop; it can also speed up gradient descent.

On iteration t:

  1. Compute $dw$, $db$ on the current mini-batch.
  2. Compute $s_{dw} = \beta s_{dw} + (1 - \beta)dw^2$ and $s_{db} = \beta s_{db} + (1 - \beta)db^2$ (the squares are element-wise).
  3. Update $w = w - \alpha \frac{dw}{\sqrt{s_{dw}}}$ and $b = b - \alpha \frac{db}{\sqrt{s_{db}}}$.

In practice, to keep the denominator $\sqrt{s_{dw}}$ from being very close to zero, a small $\epsilon$ is added:

$$w = w - \alpha \frac{dw}{\sqrt{s_{dw}} + \epsilon}$$

$$b = b - \alpha \frac{db}{\sqrt{s_{db}} + \epsilon}$$

Usually

$$\epsilon = 10^{-8}$$
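A minimal sketch of one RMSprop step, in the same style as the momentum sketch above; the gradients are again assumed to come from backprop on the current mini-batch.

```python
import numpy as np

def rmsprop_update(w, b, dw, db, s_dw, s_db, alpha=0.01, beta=0.999, epsilon=1e-8):
    """One RMSprop step: weighted average of squared gradients, then scaled update."""
    s_dw = beta * s_dw + (1 - beta) * dw ** 2     # element-wise square
    s_db = beta * s_db + (1 - beta) * db ** 2
    w = w - alpha * dw / (np.sqrt(s_dw) + epsilon)
    b = b - alpha * db / (np.sqrt(s_db) + epsilon)
    return w, b, s_dw, s_db
```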

4.8 Adam optimization algorithm

Initialize:

$$v_{dw} = 0, \quad s_{dw} = 0, \quad v_{db} = 0, \quad s_{db} = 0$$

On iteration t:

$$v_{dw} = \beta_1 v_{dw} + (1 - \beta_1)dw$$

$$v_{db} = \beta_1 v_{db} + (1 - \beta_1)db$$

$$s_{dw} = \beta_2 s_{dw} + (1 - \beta_2)dw^2$$

$$s_{db} = \beta_2 s_{db} + (1 - \beta_2)db^2$$

Bias correction:

$$v_{dw}^{bc} = \frac{v_{dw}}{1 - \beta_1^t}, \quad v_{db}^{bc} = \frac{v_{db}}{1 - \beta_1^t}$$

$$s_{dw}^{bc} = \frac{s_{dw}}{1 - \beta_2^t}, \quad s_{db}^{bc} = \frac{s_{db}}{1 - \beta_2^t}$$

Update the weights:

$$w = w - \alpha \frac{v_{dw}^{bc}}{\sqrt{s_{dw}^{bc}} + \epsilon}$$

$$b = b - \alpha \frac{v_{db}^{bc}}{\sqrt{s_{db}^{bc}} + \epsilon}$$

Adam combines the effect of gradient descent with momentum with the effect of RMSprop. It's a commonly used learning algorithm that has proven very effective for neural networks across a wide variety of architectures.

$\alpha$ needs to be tuned. The recommended defaults are $\beta_1 = 0.9$, $\beta_2 = 0.999$, $\epsilon = 10^{-8}$.

Adam stands for Adaptive Moment Estimation.
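A minimal sketch of a full Adam step for a single parameter (the update for $b$ is identical), using the recommended defaults; `t` is the iteration count starting at 1, and `dw` again comes from backprop.

```python
import numpy as np

def adam_update(w, dw, v_dw, s_dw, t, alpha=0.001,
                beta1=0.9, beta2=0.999, epsilon=1e-8):
    """One Adam step: momentum term, RMSprop term, bias correction, update."""
    v_dw = beta1 * v_dw + (1 - beta1) * dw          # momentum-style average
    s_dw = beta2 * s_dw + (1 - beta2) * dw ** 2     # RMSprop-style average
    v_bc = v_dw / (1 - beta1 ** t)                  # bias correction
    s_bc = s_dw / (1 - beta2 ** t)
    w = w - alpha * v_bc / (np.sqrt(s_bc) + epsilon)
    return w, v_dw, s_dw
```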

4.9 Learning rate decay

Learning rate decay means slowly reducing the learning rate over time, typically once per epoch.

$$\alpha = \frac{1}{1 + \text{decay rate} \cdot \text{epoch num}}\, \alpha_0$$

$\alpha_0$ is the initial learning rate.

Other learning rate decay methods:

$\alpha = 0.95^{\text{epoch num}}\, \alpha_0$; this is called exponential decay.

$\alpha = \frac{k}{\sqrt{\text{epoch num}}}\, \alpha_0$ or $\alpha = \frac{k}{\sqrt{t}}\, \alpha_0$, where $t$ is the mini-batch number.

$\alpha = \left(\frac{1}{2}\right)^{\text{epoch num}} \alpha_0$; this is called a discrete staircase.
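A minimal sketch of the first decay schedule above, with made-up hyperparameter values:

```python
def decayed_learning_rate(alpha0, decay_rate, epoch_num):
    """Inverse-time decay: alpha = alpha0 / (1 + decay_rate * epoch_num)."""
    return alpha0 / (1 + decay_rate * epoch_num)

# With alpha0 = 0.2 and decay_rate = 1.0:
for epoch in range(1, 5):
    print(epoch, decayed_learning_rate(0.2, 1.0, epoch))
# 1 -> 0.1, 2 -> 0.0667, 3 -> 0.05, 4 -> 0.04
```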

4.10 The problem of local optima

In very high-dimensional spaces you're actually much more likely to run into a saddle point than a local optimum.

  • Unlikely to get stuck in a bad local optimum.
  • Plateaus can make learning slow.
