
Improving Deep Neural Networks Study Notes (Part 2)

Author: Tyan · Published 2017-12-29 · Column: SnailTyan

4. Optimization algorithms

4.1 Mini-batch gradient descent

Curly braces, as in $x^{\{t\}}, y^{\{t\}}$, index different mini-batches; square brackets, as in $z^{[l]}$, index different layers; parentheses, as in $x^{(i)}, y^{(i)}$, index individual training examples.

Batch gradient descent processes the entire training set at once. Mini-batch gradient descent processes a single mini-batch $x^{\{t\}}, y^{\{t\}}$ at a time.

Running forward propagation and backpropagation once on a mini-batch is called one iteration.

Mini-batch gradient descent runs much faster than batch gradient descent.
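
As a concrete illustration, here is a minimal NumPy sketch of one epoch of mini-batch gradient descent. The `forward_backward` callback and the parameter-dictionary layout are hypothetical placeholders, not code from the course:

```python
import numpy as np

def random_mini_batches(X, Y, batch_size=64, seed=0):
    """Shuffle the training set and partition it into mini-batches x{t}, y{t}."""
    rng = np.random.default_rng(seed)
    m = X.shape[1]                                  # number of training examples
    perm = rng.permutation(m)
    X_shuf, Y_shuf = X[:, perm], Y[:, perm]
    return [(X_shuf[:, k:k + batch_size], Y_shuf[:, k:k + batch_size])
            for k in range(0, m, batch_size)]

def run_epoch(params, X, Y, forward_backward, alpha=0.01):
    """One pass over all mini-batches; each mini-batch is one iteration."""
    for X_t, Y_t in random_mini_batches(X, Y):
        grads = forward_backward(params, X_t, Y_t)  # forward prop + back prop
        for name in params:                         # gradient descent step
            params[name] -= alpha * grads["d" + name]
    return params
```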

4.2 Understanding mini-batch gradient descent

If the mini-batch size is m, it is batch gradient descent. If the mini-batch size is 1, it is stochastic gradient descent. In practice, the mini-batch size is chosen between 1 and m.

Batch gradient descent: each iteration takes too long. Stochastic gradient descent: loses the speedup from vectorization. Mini-batch gradient descent: fastest learning, because it (1) keeps the speedup from vectorization and (2) makes progress without waiting to process the entire training set.

Choosing mini-batch size:

If the training set is small (m ≤ 2000), use batch gradient descent. Otherwise, typical mini-batch sizes are 64, 128, 256, and 512 (1024 is rare); powers of 2 are preferred.

4.3 Exponentially weighted averages

V_t = \beta V_{t-1} + (1 - \beta)\theta_t

View $V_t$ as approximately averaging over the last $\frac{1}{1 - \beta}$ values.

It’s called moving average in the statistics literature.

$\beta = 0.9$:

Figure 1

$\beta = 0.9$ (red), $\beta = 0.98$ (green), $\beta = 0.5$ (yellow):

Figure 2
4.4 Understanding exponentially weighted averages

$\theta_t$ is the temperature on day $t$.

v_{100} = 0.9 v_{99} + 0.1 \theta_{100}

v_{99} = 0.9 v_{98} + 0.1 \theta_{99}

...

So

v_{100} = 0.1 \cdot \theta_{100} + 0.1 \cdot 0.9 \cdot \theta_{99} + \dots + 0.1 \cdot 0.9^{i} \cdot \theta_{100-i} + \dots

The sum of the coefficients is

0.1 + 0.1 \cdot 0.9 + 0.1 \cdot 0.9^2 + \dots

All of these coefficients add up to one, or very close to one, up to a detail called bias correction (see section 4.5).

With $\epsilon = 1 - \beta$,

(1 - \epsilon)^{\frac {1} {\epsilon}} \approx \frac {1} {e} \approx 0.3679

so after about $\frac{1}{1 - \beta}$ days the weight on a reading has decayed to roughly $\frac{1}{e}$ of its initial value, which is why $V_t$ behaves like an average over the last $\frac{1}{1 - \beta}$ days.

To implement the exponentially weighted average:

v_0 = 0

v_1 = \beta v_0 + (1 - \beta) \theta_1

v_2 = \beta v_1 + (1 - \beta) \theta_2

...

The exponentially weighted average takes very little memory: only the latest value of $v$ needs to be kept, overwritten on each step.
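
A minimal sketch of this recurrence in Python, assuming `thetas` is a sequence of daily temperature readings; only the running value `v` is stored, which is why the memory cost is so low:

```python
def exp_weighted_average(thetas, beta=0.9):
    """Return the exponentially weighted averages v_t of a sequence."""
    v = 0.0                                  # v_0 = 0
    averages = []
    for theta in thetas:
        v = beta * v + (1 - beta) * theta    # v_t = beta*v_{t-1} + (1-beta)*theta_t
        averages.append(v)
    return averages
```

With `beta = 0.9` this tracks roughly the last 10 readings; `beta = 0.98` gives a smoother curve that adapts more slowly, matching Figure 2.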

4.5 Bias correction in exponentially weighted averages

The running average is not a very good estimate of the temperature during the first several days. Bias correction modifies this estimate to make it much more accurate. The formula is:

v_t = \beta v_{t-1} + (1 - \beta) \theta_t, \qquad v_t^{corrected} = \frac {v_t} {1 - \beta^t}
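
A sketch of the corrected estimate, extending the function above; note that the stored `v` is left unchanged and only the reported value is divided by $1 - \beta^t$:

```python
def exp_weighted_average_corrected(thetas, beta=0.9):
    """Exponentially weighted averages with bias correction."""
    v = 0.0
    corrected = []
    for t, theta in enumerate(thetas, start=1):
        v = beta * v + (1 - beta) * theta
        corrected.append(v / (1 - beta ** t))  # divide by (1 - beta^t)
    return corrected
```

For $t = 1$ the divisor is $1 - \beta = 0.1$, so $v_1 / 0.1 = \theta_1$, exactly the first reading; as $t$ grows, $\beta^t \to 0$ and the correction vanishes.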

4.6 Gradient descent with momentum

Gradient descent with momentum almost always works faster than the standard gradient descent algorithm. The basic idea is to compute an exponentially weighted average of gradients, and then use that gradient to update weights instead.

On iteration t:

  1. Compute $dw$, $db$ on the current mini-batch.
  2. Compute $v_{dw} = \beta v_{dw} + (1 - \beta)dw$ and $v_{db} = \beta v_{db} + (1 - \beta)db$.
  3. Update $w = w - \alpha v_{dw}$ and $b = b - \alpha v_{db}$.

There are two hyperparameters, $\alpha$ and $\beta$; the most common value for $\beta$ is 0.9.

Another formulation is $v_{dw} = \beta v_{dw} + dw$; if you use it, you need to rescale $\alpha$ correspondingly.
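
A minimal sketch of one momentum update, assuming `dw`, `db` were computed on the current mini-batch and `v_dw`, `v_db` were initialized to zeros of the same shape as `w`, `b`:

```python
def momentum_step(w, b, dw, db, v_dw, v_db, alpha=0.01, beta=0.9):
    """One update of gradient descent with momentum."""
    v_dw = beta * v_dw + (1 - beta) * dw   # exponentially weighted average of dw
    v_db = beta * v_db + (1 - beta) * db   # exponentially weighted average of db
    w = w - alpha * v_dw                   # step in the smoothed direction
    b = b - alpha * v_db
    return w, b, v_dw, v_db
```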

4.7 RMSprop

RMSprop stands for root mean square prop; it can also speed up gradient descent.

On iteration t:

  1. Compute $dw$, $db$ on the current mini-batch.
  2. Compute $s_{dw} = \beta s_{dw} + (1 - \beta){dw}^2$ and $s_{db} = \beta s_{db} + (1 - \beta){db}^2$ (the squaring is element-wise).
  3. Update $w = w - \alpha \frac {dw} {\sqrt {s_{dw}}}$ and $b = b - \alpha \frac {db} {\sqrt {s_{db}}}$.

In practice, to prevent $\sqrt {s_{dw}}$ from being very close to zero, a small $\epsilon$ is added to the denominator:

w = w - \alpha \frac {dw} {\sqrt {s_{dw}} + \epsilon}

b = b - \alpha \frac {db} {\sqrt {s_{db}} + \epsilon}

Usually $\epsilon = 10^{-8}$.
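
A minimal sketch of one RMSprop update under the same assumptions as the momentum sketch (`s_dw`, `s_db` initialized to zeros):

```python
import numpy as np

def rmsprop_step(w, b, dw, db, s_dw, s_db, alpha=0.01, beta=0.999, epsilon=1e-8):
    """One RMSprop update; the squares and divisions are element-wise."""
    s_dw = beta * s_dw + (1 - beta) * dw ** 2
    s_db = beta * s_db + (1 - beta) * db ** 2
    w = w - alpha * dw / (np.sqrt(s_dw) + epsilon)  # damp oscillating directions
    b = b - alpha * db / (np.sqrt(s_db) + epsilon)
    return w, b, s_dw, s_db
```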

4.8 Adam optimization algorithm

Initialize: $v_{dw} = 0$, $s_{dw} = 0$, $v_{db} = 0$, $s_{db} = 0$.

On iteration t:

v_{dw} = \beta_1 v_{dw} + (1 - \beta_1)dw

v_{db} = \beta_1 v_{db} + (1 - \beta_1)db

s_{dw} = \beta_2 s_{dw} + (1 - \beta_2){dw}^2

s_{db} = \beta_2 s_{db} + (1 - \beta_2){db}^2

Bias correction:

v_{dw}^{bc} = \frac {v_{dw}} {1 - \beta_1^t}, \quad v_{db}^{bc} = \frac {v_{db}} {1 - \beta_1^t}

s_{dw}^{bc} = \frac {s_{dw}} {1 - \beta_2^t}, \quad s_{db}^{bc} = \frac {s_{db}} {1 - \beta_2^t}

Update weight:

w = w - \alpha \frac {v_{dw}^{bc}} {\sqrt {s_{dw}^{bc}} + \epsilon}

b = b - \alpha \frac {v_{db}^{bc}} {\sqrt {s_{db}^{bc}} + \epsilon}

Adam combines the effect of gradient descent with momentum with that of RMSprop. It is a commonly used learning algorithm that has proven effective for many different neural networks across a very wide variety of architectures.

$\alpha$ needs to be tuned. Common defaults: $\beta_1 = 0.9$, $\beta_2 = 0.999$, $\epsilon = 10^{-8}$.

Adam stands for Adaptive Moment Estimation.
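
Putting the four moving averages, bias correction, and the update together, here is a minimal sketch of one Adam step for a single parameter (the bias term is handled identically; `t` counts update steps starting from 1):

```python
import numpy as np

def adam_step(w, dw, v_dw, s_dw, t, alpha=0.001,
              beta1=0.9, beta2=0.999, epsilon=1e-8):
    """One Adam update for parameter w."""
    v_dw = beta1 * v_dw + (1 - beta1) * dw        # momentum (first moment)
    s_dw = beta2 * s_dw + (1 - beta2) * dw ** 2   # RMSprop (second moment)
    v_bc = v_dw / (1 - beta1 ** t)                # bias-corrected first moment
    s_bc = s_dw / (1 - beta2 ** t)                # bias-corrected second moment
    w = w - alpha * v_bc / (np.sqrt(s_bc) + epsilon)
    return w, v_dw, s_dw
```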

4.9 Learning rate decay

Learning rate decay means slowly reducing the learning rate over time. One common formula:

\alpha = \frac {1} {1 + \text{decay\_rate} \cdot \text{epoch\_num}} \, \alpha_0

where $\alpha_0$ is the initial learning rate.

Other learning rate decay methods:

$\alpha = 0.95^{\text{epoch\_num}} \, \alpha_0$; this is called exponential decay.

$\alpha = \frac {k} {\sqrt {\text{epoch\_num}}} \, \alpha_0$ or $\alpha = \frac {k} {\sqrt t} \, \alpha_0$.

$\alpha = \left(\frac {1} {2}\right)^{\text{epoch\_num}} \alpha_0$; this is called a discrete staircase.
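
A small sketch gathering the schedules above into one helper (the function name and `method` strings are illustrative, not from the course):

```python
def decayed_learning_rate(alpha0, epoch_num, decay_rate=1.0, method="inverse"):
    """Learning rate for a given epoch under the decay schemes above."""
    if method == "inverse":        # alpha0 / (1 + decay_rate * epoch_num)
        return alpha0 / (1 + decay_rate * epoch_num)
    if method == "exponential":    # 0.95 ** epoch_num * alpha0
        return 0.95 ** epoch_num * alpha0
    if method == "staircase":      # (1/2) ** epoch_num * alpha0
        return 0.5 ** epoch_num * alpha0
    raise ValueError(f"unknown method: {method}")
```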

4.10 The problem of local optima

In very high-dimensional spaces you are actually much more likely to run into a saddle point than a local optimum.

  • Unlikely to get stuck in a bad local optimum.
  • Plateaus can make learning slow.