Improving Deep Neural Networks Study Notes (2)

4. Optimization algorithms

4.1 Mini-batch gradient descent

$x^{\{t\}}, y^{\{t\}}$ (curly braces) index different mini-batches. $z^{[l]}$ (square brackets) indexes different layers. $x^{(i)}, y^{(i)}$ (parentheses) index different training examples.

Batch gradient descent processes the entire training set at once. Mini-batch gradient descent processes a single mini-batch $x^{\{t\}}, y^{\{t\}}$ at a time.

Running forward propagation and back propagation once on a mini-batch is called one iteration.

Mini-batch gradient descent runs much faster than batch gradient descent.
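A minimal sketch of one epoch of this procedure in Python/NumPy. The helpers `forward_backward` (returns the cost and gradients for one mini-batch) and `update_parameters` are hypothetical placeholders, not from the original notes:

```python
import numpy as np

def mini_batch_gd_epoch(X, Y, parameters, learning_rate=0.01, batch_size=64):
    """One epoch of mini-batch gradient descent.
    X has shape (n_x, m) and Y has shape (1, m); columns are examples."""
    m = X.shape[1]
    permutation = np.random.permutation(m)            # shuffle the examples
    X_shuffled, Y_shuffled = X[:, permutation], Y[:, permutation]

    for t in range(0, m, batch_size):
        # the t-th mini-batch x^{t}, y^{t}
        X_t = X_shuffled[:, t:t + batch_size]
        Y_t = Y_shuffled[:, t:t + batch_size]

        # one iteration = forward prop + back prop on this mini-batch
        cost, grads = forward_backward(X_t, Y_t, parameters)              # hypothetical helper
        parameters = update_parameters(parameters, grads, learning_rate)  # hypothetical helper
    return parameters
```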

4.2 Understanding mini-batch gradient descent

If the mini-batch size is m, it's batch gradient descent. If the mini-batch size is 1, it's stochastic gradient descent. In practice, the mini-batch size is somewhere between 1 and m.

Batch gradient descent: each iteration takes too long. Stochastic gradient descent: you lose the speedup from vectorization. Mini-batch gradient descent: fastest learning, because you (1) keep the speedup from vectorization and (2) make progress without waiting to process the whole training set.

Choosing mini-batch size:

If the training set is small (m <= 2000), use batch gradient descent. Typical mini-batch sizes: 64, 128, 256, 512, 1024 (rare).

4.3 Exponentially weighted averages

$$V_t = \beta V_{t-1} + (1-\beta)\theta_t$$

View $V_t$ as approximately averaging over the last $\frac{1}{1-\beta}$ days' temperature.

It’s called moving average in the statistics literature.

(Figure: exponentially weighted average of the daily temperature with $\beta = 0.9$.)

(Figure: $\beta = 0.9$ in red, $\beta = 0.98$ in green, $\beta = 0.5$ in yellow.)
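A quick numeric check of the $\frac{1}{1-\beta}$ rule for these values of $\beta$:

```python
for beta in (0.9, 0.98, 0.5):
    print(f"beta = {beta}: averages over roughly {1 / (1 - beta):.0f} days")
# beta = 0.9 -> ~10 days, beta = 0.98 -> ~50 days, beta = 0.5 -> ~2 days
```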

4.4 Understanding exponentially weighted averages

$\theta_t$ is the temperature on day $t$.

$$v_{100} = 0.9 v_{99} + 0.1\, \theta_{100}$$

$$v_{99} = 0.9 v_{98} + 0.1\, \theta_{99}$$

...

So

$$v_{100} = 0.1\, \theta_{100} + 0.1 \times 0.9\, \theta_{99} + \dots + 0.1 \times 0.9^{i}\, \theta_{100-i} + \dots$$

The sum of the coefficients is

$$0.1 + 0.1 \times 0.9 + 0.1 \times 0.9^2 + \dots$$

All of these coefficients add up to one, or very close to one, up to a detail called bias correction (see 4.5).

Why $\frac{1}{1-\beta}$? With $\epsilon = 1 - \beta$,

$$(1 - \epsilon)^{\frac{1}{\epsilon}} \approx \frac{1}{e}, \qquad \frac{1}{e} \approx 0.3679$$

so after about $\frac{1}{1-\beta}$ days, the weight on a day's temperature has decayed to roughly $\frac{1}{e}$ of its peak. For example, with $\beta = 0.9$, $0.9^{10} \approx 0.35 \approx \frac{1}{e}$, which is why $v_t$ behaves like an average over roughly the last 10 days.

Implement exponentially weighted average:

$$v_0 = 0$$

$$v_1 = \beta v_0 + (1 - \beta)\theta_1$$

$$v_2 = \beta v_1 + (1 - \beta)\theta_2$$

...

The exponentially weighted average takes very little memory: only the single value $v$ is kept and overwritten on each step.
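A minimal sketch of this recurrence in Python; `temperatures` is a hypothetical list of daily readings $\theta_1, \theta_2, \dots$:

```python
def exponentially_weighted_average(temperatures, beta=0.9):
    """Return the running averages v_1, v_2, ...; only one value v is kept in memory."""
    v = 0.0                                  # v_0 = 0
    averages = []
    for theta in temperatures:               # theta_1, theta_2, ...
        v = beta * v + (1 - beta) * theta    # v_t = beta * v_{t-1} + (1 - beta) * theta_t
        averages.append(v)
    return averages
```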

4.5 Bias correction in exponentially weighted averages

The estimate is not very good for the first several days' temperature, because starting from $v_0 = 0$ biases the early values toward zero. Bias correction modifies the estimate to make it much better. The formula is:

$$v_t = \beta v_{t-1} + (1 - \beta)\theta_t, \qquad v_t^{bc} = \frac{v_t}{1 - \beta^t}$$
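A sketch of the same recurrence with bias correction applied to the value that is read out; the stored $v$ is updated exactly as before:

```python
def ewa_with_bias_correction(temperatures, beta=0.9):
    """Return bias-corrected averages v_t / (1 - beta^t)."""
    v = 0.0
    corrected = []
    for t, theta in enumerate(temperatures, start=1):
        v = beta * v + (1 - beta) * theta        # same update as before
        corrected.append(v / (1 - beta ** t))    # correction matters most for small t
    return corrected
```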

4.6 Gradient descent with momentum

Gradient descent with momentum almost always works faster than the standard gradient descent algorithm. The basic idea is to compute an exponentially weighted average of the gradients, and then use that average to update the weights instead.

On iteration t:

  1. Compute $dw$, $db$ on the current mini-batch.
  2. Compute $v_{dw}$, $v_{db}$: $v_{dw} = \beta v_{dw} + (1 - \beta)dw$, $v_{db} = \beta v_{db} + (1 - \beta)db$
  3. Update $w$, $b$: $w = w - \alpha v_{dw}$, $b = b - \alpha v_{db}$

There are two hyperparameters, $\alpha$ and $\beta$; the most common value for $\beta$ is 0.9.

Another formulation is $v_{dw} = \beta v_{dw} + dw$ (omitting the $1 - \beta$ factor); if you use it, $\alpha$ needs to be rescaled accordingly.
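A minimal sketch of one momentum step in Python, assuming `dw` and `db` were just computed on the current mini-batch (scalars or NumPy arrays both work):

```python
def momentum_step(w, b, dw, db, v_dw, v_db, alpha=0.01, beta=0.9):
    """One gradient-descent-with-momentum update; v_dw and v_db start at zeros."""
    v_dw = beta * v_dw + (1 - beta) * dw    # exponentially weighted average of dw
    v_db = beta * v_db + (1 - beta) * db    # exponentially weighted average of db
    w = w - alpha * v_dw                    # update the parameters with the averaged gradients
    b = b - alpha * v_db
    return w, b, v_dw, v_db
```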

4.7 RMSprop

RMSprop stands for root mean square prop, and it can also speed up gradient descent.

On iteration t:

  1. Compute $dw$, $db$ on the current mini-batch.
  2. Compute $s_{dw}$, $s_{db}$: $s_{dw} = \beta s_{dw} + (1 - \beta)dw^2$, $s_{db} = \beta s_{db} + (1 - \beta)db^2$
  3. Update $w$, $b$: $w = w - \alpha \frac{dw}{\sqrt{s_{dw}}}$, $b = b - \alpha \frac{db}{\sqrt{s_{db}}}$

In practice, to keep the denominator from getting very close to zero when $\sqrt{s_{dw}}$ is tiny, a small $\epsilon$ is added:

$$w = w - \alpha \frac{dw}{\sqrt{s_{dw}} + \epsilon}$$

$$b = b - \alpha \frac{db}{\sqrt{s_{db}} + \epsilon}$$

Usually $\epsilon = 10^{-8}$.
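A minimal sketch of one RMSprop step in Python/NumPy, under the same assumptions as the momentum sketch above:

```python
import numpy as np

def rmsprop_step(w, b, dw, db, s_dw, s_db, alpha=0.001, beta=0.9, epsilon=1e-8):
    """One RMSprop update; s_dw and s_db start at zeros."""
    s_dw = beta * s_dw + (1 - beta) * dw ** 2        # EWA of the squared gradients
    s_db = beta * s_db + (1 - beta) * db ** 2
    w = w - alpha * dw / (np.sqrt(s_dw) + epsilon)   # damp directions with large gradients
    b = b - alpha * db / (np.sqrt(s_db) + epsilon)
    return w, b, s_dw, s_db
```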

4.8 Adam optimization algorithm

Initialize $v_{dw} = 0$, $s_{dw} = 0$, $v_{db} = 0$, $s_{db} = 0$.

On iteration t:

$$v_{dw} = \beta_1 v_{dw} + (1 - \beta_1)dw$$

$$v_{db} = \beta_1 v_{db} + (1 - \beta_1)db$$

$$s_{dw} = \beta_2 s_{dw} + (1 - \beta_2)dw^2$$

$$s_{db} = \beta_2 s_{db} + (1 - \beta_2)db^2$$

Bias correction:

$$v_{dw}^{bc} = \frac{v_{dw}}{1 - \beta_1^t}, \qquad v_{db}^{bc} = \frac{v_{db}}{1 - \beta_1^t}$$

$$s_{dw}^{bc} = \frac{s_{dw}}{1 - \beta_2^t}, \qquad s_{db}^{bc} = \frac{s_{db}}{1 - \beta_2^t}$$

Update weight:

$$w = w - \alpha \frac{v_{dw}^{bc}}{\sqrt{s_{dw}^{bc}} + \epsilon}$$

$$b = b - \alpha \frac{v_{db}^{bc}}{\sqrt{s_{db}^{bc}} + \epsilon}$$

Adam combines the effect of gradient descent with momentum with that of RMSprop. It's a commonly used learning algorithm that has proven very effective for many different neural networks across a wide variety of architectures.

$\alpha$ needs to be tuned. Common defaults: $\beta_1 = 0.9$, $\beta_2 = 0.999$, $\epsilon = 10^{-8}$.

Adam stands for Adaptive Moment Estimation.
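Putting the pieces together, a minimal sketch of one Adam step for a single parameter $w$ in Python/NumPy ($b$ is handled identically):

```python
import numpy as np

def adam_step(w, dw, v_dw, s_dw, t, alpha=0.001, beta1=0.9, beta2=0.999, epsilon=1e-8):
    """One Adam update; t is the iteration number starting at 1, v_dw and s_dw start at zeros."""
    v_dw = beta1 * v_dw + (1 - beta1) * dw          # momentum term
    s_dw = beta2 * s_dw + (1 - beta2) * dw ** 2     # RMSprop term
    v_bc = v_dw / (1 - beta1 ** t)                  # bias correction
    s_bc = s_dw / (1 - beta2 ** t)
    w = w - alpha * v_bc / (np.sqrt(s_bc) + epsilon)
    return w, v_dw, s_dw
```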

4.9 Learning rate decay

Learning rate decay means slowly reducing the learning rate over time.

$$\alpha = \frac{1}{1 + \text{decay\_rate} \times \text{epoch\_num}}\, \alpha_0$$

$\alpha_0$ is the initial learning rate.

Other learning rate decay methods:

$\alpha = 0.95^{\text{epoch\_num}} \alpha_0$; this is called exponential decay.

$\alpha = \frac{k}{\sqrt{\text{epoch\_num}}} \alpha_0$, or $\alpha = \frac{k}{\sqrt{t}} \alpha_0$ where $t$ is the mini-batch number.

$\alpha = \left(\frac{1}{2}\right)^{\text{epoch\_num}} \alpha_0$; this is called a discrete staircase.
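A sketch of these schedules in Python; the staircase interval of 5 epochs is an arbitrary choice for illustration:

```python
def decayed_learning_rate(alpha0, epoch_num, decay_rate=1.0, method="inverse"):
    """Return the learning rate for a given epoch (epoch_num counts from 1)."""
    if method == "inverse":        # alpha = alpha0 / (1 + decay_rate * epoch_num)
        return alpha0 / (1 + decay_rate * epoch_num)
    if method == "exponential":    # alpha = 0.95^epoch_num * alpha0
        return 0.95 ** epoch_num * alpha0
    if method == "sqrt":           # alpha = k / sqrt(epoch_num) * alpha0, with k = 1 here
        return 1.0 / epoch_num ** 0.5 * alpha0
    if method == "staircase":      # halve the learning rate every 5 epochs
        return alpha0 * 0.5 ** (epoch_num // 5)
    raise ValueError(f"unknown method: {method}")
```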

4.10 The problem of local optima

In very high-dimensional spaces you're actually much more likely to run into a saddle point than a local optimum.

  • You are unlikely to get stuck in a bad local optimum.
  • Plateaus can make learning slow.
