Improving Deep Neural Networks Study Notes (3)

5. Hyperparameter tuning

5.1 Tuning process

Hyperparameters:

$\alpha$, $\beta$, $\beta_1, \beta_2, \epsilon$, layers, hidden units, learning rate decay, mini-batch size.

The learning rate $\alpha$ is the most important hyperparameter to tune. $\beta$, the mini-batch size, and the number of hidden units are second in importance.

Try random values: don't use a grid. Go coarse to fine.

5.2 Using an appropriate scale to pick hyperparameters

Picking hyperparameters on an appropriate scale:

For $\alpha \in [0.0001, 1]$: r = -4 * np.random.rand(), then $\alpha = 10^r$.

In general, if $\alpha \in [10^a, 10^b]$, pick $r$ uniformly at random from $[a, b]$ and set $\alpha = 10^r$, as in the sketch below.
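A minimal numpy sketch of this log-scale sampling; the endpoints a and b are whatever range you choose (here the $[0.0001, 1]$ example from above):

```python
import numpy as np

# Sample the learning rate uniformly on a log scale between 10^a and 10^b.
a, b = -4, 0                          # example: alpha in [0.0001, 1]
r = a + (b - a) * np.random.rand()    # r is uniform in [a, b)
alpha = 10 ** r
```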

Hyperparameters for exponentially weighted average

For $\beta \in [0.9, 0.999]$, don't sample uniformly from $[0.9, 0.999]$. Instead work with $1 - \beta \in [0.001, 0.1]$ and use the same log-scale method as for $\alpha$.

Why not sample linearly? Because when $\beta$ is close to 1, even a tiny change has a huge impact on the algorithm: the number of samples effectively being averaged, roughly $\frac{1}{1-\beta}$, changes dramatically.
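A similar sketch for $\beta$, sampling $1 - \beta$ on a log scale over $[0.001, 0.1]$:

```python
import numpy as np

# Sample beta by drawing 1 - beta on a log scale.
r = -3 + 2 * np.random.rand()   # r is uniform in [-3, -1), so 1 - beta is in [0.001, 0.1)
beta = 1 - 10 ** r              # beta ends up in (0.9, 0.999]
```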

5.3 Hyperparameters tuning in practice: Pandas vs Caviar

  • Re-test hyperparameters occasionally
  • Babysitting one model (Pandas)
  • Training many models in parallel (Caviar)

6. Batch Normalization

6.1 Normalizing activations in a network

In logistic regression, normalizing the inputs speeds up learning:

  1. Compute the means: $\mu = \frac{1}{m} \sum_{i=1}^{m} x^{(i)}$
  2. Subtract the means from the training set: $x = x - \mu$
  3. Compute the variances: $\sigma^2 = \frac{1}{m} \sum_{i=1}^{m} (x^{(i)})^2$
  4. Normalize the training set: $x = \frac{x}{\sigma}$
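A minimal numpy sketch of these four steps, assuming X has shape (n_x, m) with one example per column; at test time the same $\mu$ and $\sigma^2$ from the training set should be reused:

```python
import numpy as np

def normalize_inputs(X):
    """Zero-center and scale the training set X of shape (n_x, m)."""
    mu = np.mean(X, axis=1, keepdims=True)            # per-feature means
    X = X - mu                                        # subtract the means
    sigma2 = np.mean(X ** 2, axis=1, keepdims=True)   # per-feature variances
    X = X / np.sqrt(sigma2)                           # scale to unit variance
    return X, mu, sigma2
```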

Similarly, to speed up training of a neural network, we can normalize intermediate values in the layers (the $z$ values in the hidden layers). This is called Batch Normalization, or Batch Norm.

Implementing Batch Norm

  1. Given some intermediate values in the neural network, $z^{(1)}, z^{(2)}, \ldots, z^{(m)}$
  2. Compute the mean: $\mu = \frac{1}{m} \sum_{i=1}^{m} z^{(i)}$
  3. Compute the variance: $\sigma^2 = \frac{1}{m} \sum_{i=1}^{m} (z^{(i)} - \mu)^2$
  4. Normalize $z$: $z^{(i)} = \frac{z^{(i)} - \mu}{\sqrt{\sigma^2 + \epsilon}}$
  5. Compute $\hat z$: $\hat z^{(i)} = \gamma z^{(i)} + \beta$
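A numpy sketch of these steps for one layer's pre-activations Z of shape (n^{[l]}, m); the function name and the small default $\epsilon$ are illustrative choices:

```python
import numpy as np

def batch_norm_forward(Z, gamma, beta, eps=1e-8):
    """Batch-normalize pre-activations Z (shape (n_l, m)) over the mini-batch.
    gamma and beta have shape (n_l, 1)."""
    mu = np.mean(Z, axis=1, keepdims=True)        # per-unit mean over the batch
    sigma2 = np.var(Z, axis=1, keepdims=True)     # per-unit variance over the batch
    Z_norm = (Z - mu) / np.sqrt(sigma2 + eps)     # zero mean, unit variance
    Z_hat = gamma * Z_norm + beta                 # learnable scale and shift
    return Z_hat, mu, sigma2
```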

Now $z$ has zero mean and unit variance. But maybe it makes sense for the hidden units to have a different distribution, so we use $\hat z$ instead of $z$; $\gamma$ and $\beta$ are learnable parameters of your model.

6.2 Fitting Batch Norm into a neural network

Add Batch Norm to a network

$X \rightarrow Z^{[1]} \rightarrow \hat Z^{[1]} \rightarrow a^{[1]} \rightarrow Z^{[2]} \rightarrow \hat Z^{[2]} \rightarrow a^{[2]} \rightarrow \cdots$

Parameters: $W^{[1]}, b^{[1]}, W^{[2]}, b^{[2]}, \ldots$ and $\gamma^{[1]}, \beta^{[1]}, \gamma^{[2]}, \beta^{[2]}, \ldots$

If you use Batch Norm, the mean is computed and subtracted from $z^{[l]}$, so any constant bias $b^{[l]}$ gets canceled out. It is therefore useless, and we can set $b^{[l]} = 0$ permanently; its role is taken over by $\beta^{[l]}$, as in the sketch below.
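A sketch of a single hidden layer's forward step with Batch Norm folded in, showing where $b^{[l]}$ drops out; the helper name and the ReLU choice are illustrative:

```python
import numpy as np

def bn_layer_forward(A_prev, W, gamma, beta, eps=1e-8):
    """Forward step for one hidden layer with Batch Norm; note there is no bias b."""
    Z = np.dot(W, A_prev)                        # b omitted: the mean subtraction below
                                                 # would cancel any constant anyway
    mu = np.mean(Z, axis=1, keepdims=True)
    sigma2 = np.var(Z, axis=1, keepdims=True)
    Z_norm = (Z - mu) / np.sqrt(sigma2 + eps)
    Z_hat = gamma * Z_norm + beta                # beta now plays the role b used to play
    return np.maximum(0, Z_hat)                  # e.g. a ReLU activation
```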

6.3 Why does Batch Norm work?

Covariate shift: you have learned a function mapping $x \rightarrow y$ that works well. If the distribution of $x$ changes, you may need to relearn the function for it to keep working well.

Hidden unit values change all the time during training, so later layers suffer from the problem of covariate shift; Batch Norm limits how much their distribution shifts around.

Batch Norm as regularization

  • Each mini-batch is scaled by the mean/variance computed on just that mini-batch.
  • This adds some noise to the values $z^{[l]}$ within that mini-batch. So, similar to dropout, it adds some noise to each hidden layer's activations.
  • This has a slight regularization effect.

6.4 Batch Norm at test time

In order to apply the neural network at test time, come up with a separate estimate of $\mu$ and $\sigma^2$, typically an exponentially weighted average of the values computed across mini-batches during training.
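A rough sketch of those running estimates, assuming the exponentially weighted average approach; the momentum value of 0.9 is an illustrative choice:

```python
import numpy as np

# During training: update running estimates after each mini-batch.
def update_running_stats(run_mu, run_var, mu_batch, var_batch, momentum=0.9):
    run_mu = momentum * run_mu + (1 - momentum) * mu_batch
    run_var = momentum * run_var + (1 - momentum) * var_batch
    return run_mu, run_var

# At test time: normalize with the running estimates instead of batch statistics.
def batch_norm_test(Z, gamma, beta, run_mu, run_var, eps=1e-8):
    Z_norm = (Z - run_mu) / np.sqrt(run_var + eps)
    return gamma * Z_norm + beta
```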

7. Multi-class classification

7.1 Softmax regression

7.2 Training a softmax classifier

  • Hard max: puts a 1 in the position of the largest element of $z^{[L]}$ and 0 elsewhere, whereas softmax outputs a smoother probability distribution.
  • Loss function: $\mathcal{L}(\hat y, y) = -\sum_{j=1}^{C} y_j \log \hat y_j$, averaged over the training set to get the cost.
  • Gradient descent with softmax: backprop through the output layer simplifies to $dz^{[L]} = \hat y - y$.
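A numpy sketch of the softmax activation, the cross-entropy cost, and the simplified output-layer gradient; Y is assumed to be one-hot with shape (C, m):

```python
import numpy as np

def softmax(Z):
    """Softmax over C classes; Z has shape (C, m)."""
    t = np.exp(Z - np.max(Z, axis=0, keepdims=True))   # shift for numerical stability
    return t / np.sum(t, axis=0, keepdims=True)

def cross_entropy_cost(Y_hat, Y):
    """Cost J = -1/m * sum over examples and classes of y * log(y_hat)."""
    m = Y.shape[1]
    return -np.sum(Y * np.log(Y_hat + 1e-12)) / m

# Backprop through the softmax output layer simplifies to:
# dZ_L = Y_hat - Y
```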

8. Programming Frameworks

8.1 Deep Learning frameworks

  • Caffe/Caffe2
  • TensorFlow
  • Torch
  • Theano
  • mxnet
  • PaddlePaddle
  • Keras
  • CNTK

Choosing deep learning frameworks

  • Ease of programming (development and deployment)
  • Running speed
  • Truly open (open source with good governance)

8.2 TensorFlow
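A minimal sketch in the TensorFlow 1.x style of that era, minimizing a simple one-variable quadratic cost (the cost function here is just an illustration):

```python
import tensorflow as tf   # assumes TensorFlow 1.x (or tf.compat.v1 in TF 2.x)

w = tf.Variable(0.0, dtype=tf.float32)
cost = w ** 2 - 10.0 * w + 25.0                  # (w - 5)^2, minimized at w = 5
train = tf.train.GradientDescentOptimizer(0.01).minimize(cost)

init = tf.global_variables_initializer()
with tf.Session() as session:
    session.run(init)
    for _ in range(1000):
        session.run(train)
    print(session.run(w))                        # close to 5.0
```

The framework builds the computation graph from the cost expression and derives the gradient steps itself; you only specify the forward computation and the optimizer.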
