Improving Deep Neural Networks Study Notes (Part 2)

4. Optimization algorithms

4.1 Mini-batch gradient descent

$x^{\{t\}}, y^{\{t\}}$ are used to index different mini-batches. Superscripts in square brackets, e.g. $x^{[l]}, y^{[l]}$, are used to index different layers. $x^{(i)}, y^{(i)}$ are used to index different training examples.

Batch gradient descent processes the entire training set at once. Mini-batch gradient descent processes a single mini-batch $x^{\{t\}}, y^{\{t\}}$ at a time.

Running forward propagation and backpropagation once on a mini-batch is called one iteration.

On a large training set, mini-batch gradient descent runs much faster than batch gradient descent.
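A minimal sketch of how the training set could be split into mini-batches (assuming NumPy arrays `X` of shape (n_x, m) and `Y` of shape (1, m); the function and the placeholder training step are illustrative, not from the course):

```python
import numpy as np

def random_mini_batches(X, Y, batch_size=64, seed=0):
    """Shuffle the columns of (X, Y) and split them into mini-batches x^{t}, y^{t}."""
    rng = np.random.default_rng(seed)
    m = X.shape[1]                          # number of training examples
    perm = rng.permutation(m)
    X_shuf, Y_shuf = X[:, perm], Y[:, perm]
    return [(X_shuf[:, k:k + batch_size], Y_shuf[:, k:k + batch_size])
            for k in range(0, m, batch_size)]

# One pass over all mini-batches is one epoch; each mini-batch is one iteration
# of forward propagation, backpropagation and a parameter update:
# for X_t, Y_t in random_mini_batches(X, Y, batch_size=64):
#     forward_backward_update(X_t, Y_t)    # placeholder for the usual training step
```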

4.2 Understanding mini-batch gradient descent

If the mini-batch size is m, it is batch gradient descent. If the mini-batch size is 1, it is stochastic gradient descent. In practice, the mini-batch size is chosen somewhere between 1 and m.

Batch gradient descent: each iteration takes too long. Stochastic gradient descent: loses the speedup from vectorization. Mini-batch gradient descent: fastest learning, because it 1. keeps the speedup from vectorization and 2. makes progress without waiting to process the whole training set.

Choosing mini-batch size:

If the training set is small (m <= 2000), use batch gradient descent. Typical mini-batch sizes are powers of two: 64, 128, 256, 512, 1024 (rare).

4.3 Exponentially weighted averages

$$v_t = \beta v_{t-1} + (1-\beta)\theta_t$$

View $v_t$ as approximately averaging over the last $\frac{1}{1-\beta}$ days of data.

In the statistics literature, this is called an exponentially weighted moving average.

$\beta = 0.9$: [plot of daily temperatures with their exponentially weighted average overlaid]

$\beta = 0.9$ (red), $\beta = 0.98$ (green), $\beta = 0.5$ (yellow): [plot comparing the three exponentially weighted averages]

4.4 Understanding exponentially weighted averages

$\theta_t$ is the temperature on day $t$.

$$v_{100} = 0.9\,v_{99} + 0.1\,\theta_{100}$$

$$v_{99} = 0.9\,v_{98} + 0.1\,\theta_{99}$$

...

So

$$v_{100} = 0.1\,\theta_{100} + 0.1 \times 0.9\,\theta_{99} + \dots + 0.1 \times 0.9^{\,i}\,\theta_{100-i} + \dots$$

The sum of the coefficients is

$$0.1 + 0.1 \times 0.9 + 0.1 \times 0.9^2 + \dots$$

All of these coefficients add up to one, or very close to one, up to a detail called bias correction (see section 4.5).

$$(1 - \epsilon)^{\frac{1}{\epsilon}} \approx \frac{1}{e}$$

$$\frac{1}{e} \approx 0.3679$$

With $\epsilon = 1 - \beta$ (for example $\beta = 0.9$, $\epsilon = 0.1$, and $0.9^{10} \approx 0.35$), the weight on a day's temperature decays to about $\frac{1}{e}$ after roughly $\frac{1}{1-\beta}$ days, which is why $v_t$ behaves like an average over about that many days.
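A quick numeric check of this approximation (my own illustration, not part of the original notes):

```python
import math

# (1 - eps)^(1/eps) approaches 1/e as eps shrinks
for eps in (0.1, 0.02):                     # i.e. beta = 0.9 and beta = 0.98
    print(eps, (1 - eps) ** (1 / eps))      # ~0.3487 and ~0.3642
print(1 / math.e)                           # ~0.3679
```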

Implementing the exponentially weighted average:

$$v_0 = 0$$

$$v_1 = \beta v_0 + (1-\beta)\theta_1$$

$$v_2 = \beta v_1 + (1-\beta)\theta_2$$

...

The exponentially weighted average takes very little memory: only a single value $v$ has to be stored and overwritten on each step.
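A minimal sketch of this low-memory implementation in Python (assuming `thetas` is a list of daily temperatures; the names are illustrative, not from the course):

```python
def exp_weighted_average(thetas, beta=0.9):
    """Return the running exponentially weighted averages v_1, v_2, ..."""
    v, vs = 0.0, []
    for theta in thetas:                    # v_t = beta * v_{t-1} + (1 - beta) * theta_t
        v = beta * v + (1 - beta) * theta
        vs.append(v)
    return vs                               # only the single scalar v is kept while iterating
```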

4.5 Bias correction in exponentially weighted averages

Because $v_0 = 0$, the estimate is not very good during the first several days. Bias correction modifies this estimate to make it much better. The formula is:

$$v_t = \beta v_{t-1} + (1-\beta)\theta_t, \qquad v_t^{\text{corrected}} = \frac{v_t}{1 - \beta^t}$$

As $t$ grows, $\beta^t$ goes to zero and the correction has almost no effect; it mainly matters during the initial phase.
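Extending the earlier sketch with bias correction (again an illustrative sketch with hypothetical names, not reference code from the course):

```python
def exp_weighted_average_corrected(thetas, beta=0.9):
    """Exponentially weighted average with bias correction v_t / (1 - beta^t)."""
    v, out = 0.0, []
    for t, theta in enumerate(thetas, start=1):
        v = beta * v + (1 - beta) * theta   # same recurrence as before
        out.append(v / (1 - beta ** t))     # divide by (1 - beta^t) to remove the early bias
    return out
```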

4.6 Gradient descent with momentum

Gradient descent with momentum almost always works faster than the standard gradient descent algorithm. The basic idea is to compute an exponentially weighted average of the gradients and then use that average, instead of the raw gradient, to update the weights.

On iteration t:

  1. Compute $dw$, $db$ on the current mini-batch.
  2. Compute $v_{dw} = \beta v_{dw} + (1 - \beta)\,dw$ and $v_{db} = \beta v_{db} + (1 - \beta)\,db$.
  3. Update the parameters: $w = w - \alpha v_{dw}$, $b = b - \alpha v_{db}$.

There are two hyperparameters, $\alpha$ and $\beta$; the most common value for $\beta$ is 0.9.

Another formulation omits the $(1-\beta)$ factor: $v_{dw} = \beta v_{dw} + dw$. If you use it, the corresponding $\alpha$ needs to be rescaled.
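A minimal sketch of one momentum update step using the $(1-\beta)$ version above (hypothetical function and variable names; `dw`, `db` are the gradients from the current mini-batch and `v_dw`, `v_db` start at zero):

```python
def momentum_update(w, b, dw, db, v_dw, v_db, alpha=0.01, beta=0.9):
    """One gradient-descent-with-momentum step on parameters w and b."""
    v_dw = beta * v_dw + (1 - beta) * dw    # exponentially weighted average of dw
    v_db = beta * v_db + (1 - beta) * db    # exponentially weighted average of db
    w = w - alpha * v_dw                    # move along the smoothed gradient
    b = b - alpha * v_db
    return w, b, v_dw, v_db
```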

4.7 RMSprop

RMSprop stands for root mean square prop, and it can also speed up gradient descent.

On iteration t:

  1. Compute $dw$, $db$ on the current mini-batch.
  2. Compute $s_{dw} = \beta s_{dw} + (1 - \beta)\,dw^2$ and $s_{db} = \beta s_{db} + (1 - \beta)\,db^2$ (the squares are element-wise).
  3. Update the parameters: $w = w - \alpha \frac{dw}{\sqrt{s_{dw}}}$, $b = b - \alpha \frac{db}{\sqrt{s_{db}}}$.

In practice, to avoid dividing by a $\sqrt{s_{dw}}$ that is very close to zero, a small $\epsilon$ is added to the denominator:

$$w = w - \alpha \frac{dw}{\sqrt{s_{dw}} + \epsilon}$$

$$b = b - \alpha \frac{db}{\sqrt{s_{db}} + \epsilon}$$

Usually $\epsilon = 10^{-8}$.
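A minimal sketch of one RMSprop step (hypothetical names; `eps` plays the role of $\epsilon$ above):

```python
def rmsprop_update(w, b, dw, db, s_dw, s_db, alpha=0.001, beta=0.9, eps=1e-8):
    """One RMSprop step: divide each gradient by the root of its running squared average."""
    s_dw = beta * s_dw + (1 - beta) * dw ** 2
    s_db = beta * s_db + (1 - beta) * db ** 2
    w = w - alpha * dw / (s_dw ** 0.5 + eps)   # eps keeps the denominator away from zero
    b = b - alpha * db / (s_db ** 0.5 + eps)
    return w, b, s_dw, s_db
```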

4.8 Adam optimization algorithm

$$v_{dw}=0,\quad s_{dw}=0,\quad v_{db}=0,\quad s_{db}=0$$

On iteration t (first compute $dw$, $db$ on the current mini-batch):

$$v_{dw} = \beta_1 v_{dw} + (1 - \beta_1)\,dw$$

$$v_{db} = \beta_1 v_{db} + (1 - \beta_1)\,db$$

$$s_{dw} = \beta_2 s_{dw} + (1 - \beta_2)\,dw^2$$

$$s_{db} = \beta_2 s_{db} + (1 - \beta_2)\,db^2$$

Bias correction:

$$v_{dw}^{bc} = \frac{v_{dw}}{1 - \beta_1^t}, \qquad v_{db}^{bc} = \frac{v_{db}}{1 - \beta_1^t}$$

$$s_{dw}^{bc} = \frac{s_{dw}}{1 - \beta_2^t}, \qquad s_{db}^{bc} = \frac{s_{db}}{1 - \beta_2^t}$$

Update the weights:

$$w = w - \alpha \frac{v_{dw}^{bc}}{\sqrt{s_{dw}^{bc}} + \epsilon}$$

$$b = b - \alpha \frac{v_{db}^{bc}}{\sqrt{s_{db}^{bc}} + \epsilon}$$

Adam combines the effect of gradient descent with momentum with that of RMSprop. It is a commonly used learning algorithm that has proven very effective across a wide variety of neural network architectures.

$\alpha$ needs to be tuned. Common defaults for the other hyperparameters are $\beta_1 = 0.9$, $\beta_2 = 0.999$, $\epsilon = 10^{-8}$.

Adam stands for Adaptive Moment Estimation.
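A minimal sketch of one Adam step, putting the pieces above together (hypothetical names; `v` and `s` are dicts holding the moment estimates, and `t` is the 1-based iteration count):

```python
def adam_update(w, b, dw, db, v, s, t,
                alpha=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam step; v and s hold the momentum and RMSprop estimates, t starts at 1."""
    v["dw"] = beta1 * v["dw"] + (1 - beta1) * dw        # momentum (first moment)
    v["db"] = beta1 * v["db"] + (1 - beta1) * db
    s["dw"] = beta2 * s["dw"] + (1 - beta2) * dw ** 2   # RMSprop (second moment)
    s["db"] = beta2 * s["db"] + (1 - beta2) * db ** 2
    v_dw_bc = v["dw"] / (1 - beta1 ** t)                # bias correction
    v_db_bc = v["db"] / (1 - beta1 ** t)
    s_dw_bc = s["dw"] / (1 - beta2 ** t)
    s_db_bc = s["db"] / (1 - beta2 ** t)
    w = w - alpha * v_dw_bc / (s_dw_bc ** 0.5 + eps)    # combined update
    b = b - alpha * v_db_bc / (s_db_bc ** 0.5 + eps)
    return w, b, v, s
```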

4.9 Learning rate decay

Learning rate decay means slowly reducing the learning rate over time as training proceeds.

$$\alpha = \frac{1}{1 + \text{decay rate} \times \text{epochs}}\;\alpha_0$$

$\alpha_0$ is the initial learning rate, and "epochs" is the current epoch number.

Other learning rate decay methods:

$\alpha = 0.95^{\text{epochs}}\,\alpha_0$; this is called exponential decay.

$\alpha = \frac{k}{\sqrt{\text{epochs}}}\,\alpha_0$ or $\alpha = \frac{k}{\sqrt{t}}\,\alpha_0$, where $t$ is the mini-batch number.

$\alpha = \left(\frac{1}{2}\right)^{\text{epochs}}\alpha_0$; a schedule like this, which cuts the learning rate in discrete steps, is called a discrete staircase.
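A small sketch collecting these schedules as a Python helper (the function name and the `method` switch are my own; `decay_rate` doubles as $k$ for the square-root schedule):

```python
def decayed_learning_rate(alpha0, epoch, decay_rate=1.0, method="inverse"):
    """Return the learning rate for a given epoch under one of the schedules above."""
    if method == "inverse":                 # alpha = alpha0 / (1 + decay_rate * epoch)
        return alpha0 / (1 + decay_rate * epoch)
    if method == "exponential":             # alpha = 0.95^epoch * alpha0
        return 0.95 ** epoch * alpha0
    if method == "sqrt":                    # alpha = k / sqrt(epoch) * alpha0, with k = decay_rate
        return decay_rate / epoch ** 0.5 * alpha0
    if method == "staircase":               # alpha = (1/2)^epoch * alpha0
        return 0.5 ** epoch * alpha0
    raise ValueError(f"unknown method: {method}")
```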

4.10 The problem of local optima

In very high-dimensional spaces you are much more likely to run into a saddle point than a local optimum.

  • Unlikely to get stuck in a bad local optimum.
  • Plateaus can make learning slow.
