Improving Deep Neural Networks Study Notes (Part 2)

4. Optimization algorithms

4.1 Mini-batch gradient descent

$x^{\{t\}}, y^{\{t\}}$ is used to index different mini-batches. $x^{[l]}, y^{[l]}$ is used to index different layers. $x^{(i)}, y^{(i)}$ is used to index different training examples.

Batch gradient descent processes the entire training set at once. Mini-batch gradient descent processes a single mini-batch $x^{\{t\}}, y^{\{t\}}$ at a time.

Running forward propagation and back propagation once on one mini-batch is called one iteration.

On a large training set, mini-batch gradient descent runs much faster than batch gradient descent.
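As a rough illustration, here is a minimal NumPy sketch of one epoch of mini-batch gradient descent. The `forward_backward` function and the parameter dictionary are hypothetical placeholders standing in for your own forward/back propagation, not the course's exact code.

```python
import numpy as np

def get_mini_batches(X, Y, batch_size=64, seed=0):
    """Shuffle the (n_x, m) data and split it into mini-batches along axis 1."""
    rng = np.random.default_rng(seed)
    m = X.shape[1]
    perm = rng.permutation(m)
    X, Y = X[:, perm], Y[:, perm]
    return [(X[:, k:k + batch_size], Y[:, k:k + batch_size])
            for k in range(0, m, batch_size)]

def run_one_epoch(X, Y, params, forward_backward, alpha=0.01):
    """One epoch = one pass over all mini-batches; each mini-batch is one iteration."""
    for X_t, Y_t in get_mini_batches(X, Y):          # X_t, Y_t play the role of x^{t}, y^{t}
        grads = forward_backward(X_t, Y_t, params)   # one forward + backward pass (placeholder)
        for key in params:                           # plain gradient descent step
            params[key] -= alpha * grads["d" + key]
    return params
```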

4.2 Understanding mini-batch gradient descent

If the mini-batch size = m, it is batch gradient descent. If the mini-batch size = 1, it is stochastic gradient descent. In practice, the mini-batch size is chosen between 1 and m.

Batch gradient descent: each iteration takes too long. Stochastic gradient descent: you lose the speed-up from vectorization. Mini-batch gradient descent learns fastest: (1) you still benefit from vectorization, and (2) you make progress without waiting to process the entire training set.

Choosing mini-batch size:

If the training set is small (m <= 2000), use batch gradient descent. Typical mini-batch sizes: 64, 128, 256, 512, and occasionally 1024.

4.3 Exponentially weighted averages

V_t = \beta V_{t-1} + (1-\beta)\theta_t

View $V_t$ as approximately averaging over the last $\frac{1}{1-\beta}$ values of $\theta$.

It’s called moving average in the statistics literature.

[Plot: exponentially weighted average with $\beta = 0.9$ (red)]

[Plot: $\beta = 0.9$ (red), $\beta = 0.98$ (green), $\beta = 0.5$ (yellow)]

4.4 Understanding exponentially weighted averages

$\theta_t$ is the temperature on day $t$.

v_{100} = 0.9 v_{99} + 0.1 \theta_{100}

v_{99} = 0.9 v_{98} + 0.1 \theta_{99}

...

So

v_{100} = 0.1 \theta_{100} + 0.1 \times 0.9 \, \theta_{99} + \dots + 0.1 \times 0.9^{i} \, \theta_{100-i} + \dots

The sum of the coefficients is

0.1 + 0.1 \times 0.9 + 0.1 \times 0.9^2 + \dots

All of these coefficients add up to one, or very close to one, up to a detail called bias correction (see section 4.5).

With $\beta = 1 - \epsilon$, the weight on a value that is $\frac{1}{\epsilon}$ days old has decayed to about $\frac{1}{e}$ of the weight on the current day, which is why $V_t$ is roughly an average over the last $\frac{1}{1-\beta}$ days:

(1 - \epsilon)^{\frac{1}{\epsilon}} \approx \frac{1}{e}

\frac{1}{e} \approx 0.3679
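A quick numeric check of this approximation (my own illustration, not from the notes): with $\epsilon = 0.1$, i.e. $\beta = 0.9$, the weight on a value 10 days old has decayed to roughly $1/e$.

```python
import math

epsilon = 0.1                            # beta = 1 - epsilon = 0.9
print((1 - epsilon) ** (1 / epsilon))    # 0.9**10 ≈ 0.3487
print(1 / math.e)                        # ≈ 0.3679
```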

To implement the exponentially weighted average:

v_0 = 0

v_1 = \beta v_0 + (1 - \beta)\theta_1

v_2 = \beta v_1 + (1 - \beta)\theta_2

...

The exponentially weighted average takes very little memory: only a single value $v$ is kept and overwritten at each step.
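A minimal sketch of this implementation in Python (the `temperatures` list is a made-up example):

```python
def exp_weighted_average(thetas, beta=0.9):
    """Return the exponentially weighted averages v_1..v_n of a sequence."""
    v = 0.0                                  # v_0 = 0; the only value kept in memory
    averages = []
    for theta in thetas:
        v = beta * v + (1 - beta) * theta    # v_t = beta*v_{t-1} + (1-beta)*theta_t
        averages.append(v)
    return averages

temperatures = [40, 49, 45, 55, 52, 60, 58, 62]   # hypothetical daily temperatures
print(exp_weighted_average(temperatures))
```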

4.5 Bias correction in exponentially weighted averages

The estimate is not very good for the first several days' temperature, because $v$ starts at $0$ and is therefore biased low. Bias correction modifies the estimate and makes it much better. Compute $v_t$ as before, then divide by $1 - \beta^t$:

v_t = \beta v_{t-1} + (1 - \beta)\theta_t, \quad v_t^{corrected} = \frac{v_t}{1 - \beta^t}
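A small extension of the sketch above showing bias correction (again an illustrative sketch, not the course's exact code): the corrected value divides the running average by $1 - \beta^t$.

```python
def exp_weighted_average_corrected(thetas, beta=0.9):
    """Exponentially weighted average with bias correction v_t / (1 - beta^t)."""
    v = 0.0
    corrected = []
    for t, theta in enumerate(thetas, start=1):
        v = beta * v + (1 - beta) * theta
        corrected.append(v / (1 - beta ** t))   # bias-corrected estimate for day t
    return corrected

print(exp_weighted_average_corrected([40, 49, 45, 55], beta=0.98))
```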

4.6 Gradient descent with momentum

Gradient descent with momentum almost always works faster than the standard gradient descent algorithm. The basic idea is to compute an exponentially weighted average of gradients, and then use that gradient to update weights instead.

On iteration t:

  1. Compute $dw$, $db$ on the current mini-batch.
  2. Compute $v_{dw} = \beta v_{dw} + (1 - \beta)dw$ and $v_{db} = \beta v_{db} + (1 - \beta)db$.
  3. Update the parameters: $w = w - \alpha v_{dw}$, $b = b - \alpha v_{db}$.

There are two hyperparameters, $\alpha$ and $\beta$; the most common value for $\beta$ is 0.9.

Another formulation is $v_{dw} = \beta v_{dw} + dw$ (omitting the $1-\beta$ factor); in that case $\alpha$ needs to be rescaled correspondingly.
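A minimal sketch of one momentum step for a single parameter pair (`dw` and `db` are assumed to come from your own backprop on the current mini-batch; the toy usage values are made up):

```python
def momentum_step(w, b, dw, db, v_dw, v_db, alpha=0.01, beta=0.9):
    """One gradient-descent-with-momentum step; v_dw and v_db start at 0."""
    v_dw = beta * v_dw + (1 - beta) * dw     # moving average of dw
    v_db = beta * v_db + (1 - beta) * db     # moving average of db
    w = w - alpha * v_dw                     # update with the averaged gradient
    b = b - alpha * v_db
    return w, b, v_dw, v_db

# Toy usage with scalar parameters and made-up gradients:
w, b, v_dw, v_db = 1.0, 0.0, 0.0, 0.0
w, b, v_dw, v_db = momentum_step(w, b, dw=0.5, db=-0.2, v_dw=v_dw, v_db=v_db)
print(w, b)   # 0.9995 0.0002
```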

4.7 RMSprop

RMSprop stands for root mean square prop; it can also speed up gradient descent.

On iteration t:

  1. Compute $dw$, $db$ on the current mini-batch.
  2. Compute $s_{dw} = \beta s_{dw} + (1 - \beta)dw^2$ and $s_{db} = \beta s_{db} + (1 - \beta)db^2$ (the squares are element-wise).
  3. Update the parameters: $w = w - \alpha \frac{dw}{\sqrt{s_{dw}}}$, $b = b - \alpha \frac{db}{\sqrt{s_{db}}}$.

In practice, to avoid $\sqrt{s_{dw}}$ being very close to zero, a small $\epsilon$ is added to the denominator:

w = w - \alpha \frac{dw}{\sqrt{s_{dw}} + \epsilon}

b = b - \alpha \frac{db}{\sqrt{s_{db}} + \epsilon}

Usually

\epsilon = 10^{-8}
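A corresponding sketch of one RMSprop step (same assumptions as the momentum sketch above; `dw` and `db` come from backprop on the current mini-batch):

```python
import numpy as np

def rmsprop_step(w, b, dw, db, s_dw, s_db, alpha=0.001, beta=0.9, eps=1e-8):
    """One RMSprop step; s_dw and s_db start at 0 and track squared gradients."""
    s_dw = beta * s_dw + (1 - beta) * dw ** 2      # element-wise squared gradient
    s_db = beta * s_db + (1 - beta) * db ** 2
    w = w - alpha * dw / (np.sqrt(s_dw) + eps)     # eps avoids dividing by ~0
    b = b - alpha * db / (np.sqrt(s_db) + eps)
    return w, b, s_dw, s_db
```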

4.8 Adam optimization algorithm

Initialize:

v_{dw} = 0, \quad s_{dw} = 0, \quad v_{db} = 0, \quad s_{db} = 0

On iteration t:

v_{dw} = \beta_1 v_{dw} + (1 - \beta_1)dw

v_{db} = \beta_1 v_{db} + (1 - \beta_1)db

s_{dw} = \beta_2 s_{dw} + (1 - \beta_2)dw^2

s_{db} = \beta_2 s_{db} + (1 - \beta_2)db^2

Bias correction:

v_{dw}^{bc} = \frac{v_{dw}}{1 - \beta_1^t}, \quad v_{db}^{bc} = \frac{v_{db}}{1 - \beta_1^t}

s_{dw}^{bc} = \frac{s_{dw}}{1 - \beta_2^t}, \quad s_{db}^{bc} = \frac{s_{db}}{1 - \beta_2^t}

Update the weights:

w = w - \alpha \frac{v_{dw}^{bc}}{\sqrt{s_{dw}^{bc}} + \epsilon}

b = b - \alpha \frac{v_{db}^{bc}}{\sqrt{s_{db}^{bc}} + \epsilon}

Adam combines the effect of gradient descent with momentum with that of RMSprop. It is a commonly used optimization algorithm that has proven very effective for neural networks with a very wide variety of architectures.

$\alpha$ needs to be tuned. Common default values: $\beta_1 = 0.9$, $\beta_2 = 0.999$, $\epsilon = 10^{-8}$.

Adam stands for Adaptive Moment Estimation.
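Putting the pieces together, a minimal sketch of one Adam step with bias correction for a single parameter (illustrative only; `t` is the iteration count starting at 1, and the same update applies to $b$):

```python
import numpy as np

def adam_step(w, dw, v_dw, s_dw, t, alpha=0.001,
              beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam step for parameter w; v_dw and s_dw start at 0, t >= 1."""
    v_dw = beta1 * v_dw + (1 - beta1) * dw            # momentum-style average
    s_dw = beta2 * s_dw + (1 - beta2) * dw ** 2       # RMSprop-style average
    v_bc = v_dw / (1 - beta1 ** t)                    # bias-corrected first moment
    s_bc = s_dw / (1 - beta2 ** t)                    # bias-corrected second moment
    w = w - alpha * v_bc / (np.sqrt(s_bc) + eps)
    return w, v_dw, s_dw
```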

4.9 Learning rate decay

Learning rate decay means slowly reducing the learning rate as training proceeds.

\alpha = \frac{1}{1 + \text{decay rate} \times \text{epoch num}} \, \alpha_0

$\alpha_0$ is the initial learning rate.

Other learning rate decay methods:

$\alpha = 0.95^{\text{epoch num}} \alpha_0$; this is called exponential decay.

$\alpha = \frac{k}{\sqrt{\text{epoch num}}} \alpha_0$ or $\alpha = \frac{k}{\sqrt{t}} \alpha_0$, where $t$ is the mini-batch number.

$\alpha = \left(\frac{1}{2}\right)^{\text{epoch num}} \alpha_0$, or more generally cutting $\alpha$ in half at fixed intervals; this is called a discrete staircase.
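A small sketch comparing these schedules (the hyperparameter values are illustrative, not recommendations):

```python
def lr_inverse_decay(alpha0, decay_rate, epoch):
    """alpha = alpha0 / (1 + decay_rate * epoch_num)"""
    return alpha0 / (1 + decay_rate * epoch)

def lr_exponential_decay(alpha0, epoch, base=0.95):
    """alpha = base^epoch_num * alpha0"""
    return alpha0 * base ** epoch

def lr_sqrt_decay(alpha0, epoch, k=1.0):
    """alpha = k / sqrt(epoch_num) * alpha0"""
    return alpha0 * k / (epoch ** 0.5)

alpha0, decay_rate = 0.2, 1.0
for epoch in range(1, 5):
    print(epoch,
          round(lr_inverse_decay(alpha0, decay_rate, epoch), 4),
          round(lr_exponential_decay(alpha0, epoch), 4),
          round(lr_sqrt_decay(alpha0, epoch), 4))
```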

4.10 The problem of local optima

In very high-dimensional spaces, you are much more likely to run into a saddle point than a local optimum.

  • You are unlikely to get stuck in a bad local optimum.
  • Plateaus can make learning slow.
