Improving Deep Neural Networks Study Notes (Part 3)

5. Hyperparameter tuning

5.1 Tuning process

Hyperparameters:

$\alpha$, $\beta$, $\beta_1, \beta_2, \epsilon$, number of layers, number of hidden units, learning rate decay, mini-batch size.

The learning rate $\alpha$ is the most important hyperparameter to tune. $\beta$, the mini-batch size, and the number of hidden units are second in importance.

Try random values: don't use a grid search. Go coarse to fine: once a region of the hyperparameter space looks promising, zoom in and sample more densely there.

5.2 Using an appropriate scale to pick hyperparameters

An appropriate scale for hyperparameters:

$\alpha \in [0.0001, 1]$: set `r = -4 * np.random.rand()`, then $\alpha = 10^r$.

In general, if $\alpha \in [10^a, 10^b]$, pick $r$ uniformly at random from $[a, b]$ and set $\alpha = 10^r$.
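
A minimal NumPy sketch of this log-scale sampling (the variable names and the range $[10^{-4}, 1]$ follow the example above):

```python
import numpy as np

# Sample the learning rate uniformly on a log scale between 10^a and 10^b.
a, b = -4, 0                          # search range: alpha in [1e-4, 1]
r = a + (b - a) * np.random.rand()    # r ~ Uniform[a, b]
alpha = 10 ** r                       # alpha is log-uniform in [1e-4, 1]
```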

Hyperparameters for exponentially weighted averages

$\beta \in [0.9, 0.999]$: don't sample uniformly from $[0.9, 0.999]$. Instead work with $1 - \beta \in [0.001, 0.1]$ and use the same log-scale method as for $\alpha$.

Why not sample on a linear scale? Because when $\beta$ is close to 1, even a tiny change has a huge impact on the algorithm: the effective averaging window is roughly $\frac{1}{1-\beta}$ samples, so moving $\beta$ from 0.999 to 0.9995 doubles the window from about 1,000 to about 2,000, while moving from 0.900 to 0.9005 barely changes it.
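
A hedged sketch of the same trick for $\beta$, assuming the range $[0.9, 0.999]$ from above:

```python
import numpy as np

# Sample 1 - beta on a log scale between 10^-3 and 10^-1, so that
# beta itself is sampled more densely near 1, where it matters most.
r = -3 + 2 * np.random.rand()    # r ~ Uniform[-3, -1]
beta = 1 - 10 ** r               # beta in [0.9, 0.999]
```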

5.3 Hyperparameters tuning in practice: Pandas vs Caviar

  • Re-test hyperparameters occasionally
  • Babysitting one model (Pandas)
  • Training many models in parallel (Caviar)

6. Batch Normalization

6.1 Normalizing activations in a network

In logistic regression, normalizing the inputs speeds up learning (a short NumPy sketch follows the list below):

  1. Compute the mean: $\mu = \frac{1}{m} \sum_{i=1}^{m} x^{(i)}$
  2. Subtract the mean from the training set: $x = x - \mu$
  3. Compute the variance: $\sigma^2 = \frac{1}{m} \sum_{i=1}^{m} (x^{(i)})^2$
  4. Normalize the training set: $X = \frac{X}{\sigma^2}$
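
A minimal NumPy sketch of these four steps (the function name is mine; $X$ has shape (n_features, m), one column per training example):

```python
import numpy as np

def normalize_inputs(X):
    """Normalize a training set X of shape (n_features, m)."""
    m = X.shape[1]
    mu = np.sum(X, axis=1, keepdims=True) / m              # per-feature mean
    X = X - mu                                              # subtract the mean
    sigma2 = np.sum(X ** 2, axis=1, keepdims=True) / m      # per-feature variance
    X = X / sigma2                                          # scale as written in the notes
    return X, mu, sigma2
```

The same $\mu$ and $\sigma^2$ computed on the training set should be reused to normalize the dev and test sets.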

Similarly, to speed up training of a neural network, we can normalize intermediate values in the layers (the $z$ values in the hidden layers). This is called Batch Normalization, or Batch Norm.

Implementing Batch Norm

  1. Given some intermediate values in the neural network, $z^{(1)}, z^{(2)}, \ldots, z^{(m)}$
  2. Compute the mean: $\mu = \frac{1}{m} \sum_{i=1}^{m} z^{(i)}$
  3. Compute the variance: $\sigma^2 = \frac{1}{m} \sum_{i=1}^{m} (z^{(i)} - \mu)^2$
  4. Normalize $z$: $z^{(i)}_{\text{norm}} = \frac{z^{(i)} - \mu}{\sqrt{\sigma^2 + \epsilon}}$
  5. Compute $\hat z$: $\hat z^{(i)} = \gamma z^{(i)}_{\text{norm}} + \beta$

Now $z$ has been normalized to zero mean and unit variance. But it may make sense for the hidden units to have a different distribution, so we use $\hat z$ instead of $z$; $\gamma$ and $\beta$ are learnable parameters of the model.
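
A minimal NumPy sketch of these steps for one layer's pre-activations (function and variable names are mine; Z has shape (n_units, m) for one mini-batch):

```python
import numpy as np

def batch_norm_forward(Z, gamma, beta, eps=1e-8):
    """Batch Norm on pre-activations Z of shape (n_units, m) for one mini-batch."""
    mu = np.mean(Z, axis=1, keepdims=True)        # per-unit mean over the mini-batch
    sigma2 = np.var(Z, axis=1, keepdims=True)     # per-unit variance over the mini-batch
    Z_norm = (Z - mu) / np.sqrt(sigma2 + eps)     # zero mean, unit variance
    Z_hat = gamma * Z_norm + beta                 # learnable scale and shift
    return Z_hat, mu, sigma2                      # mu, sigma2 are also needed at test time
```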

6.2 Fitting Batch Norm into a neural network

Add Batch Norm to a network

$X \rightarrow Z^{[1]} \rightarrow \hat Z^{[1]} \rightarrow a^{[1]} \rightarrow Z^{[2]} \rightarrow \hat Z^{[2]} \rightarrow a^{[2]} \rightarrow \ldots$

Parameters: $W^{[1]}, b^{[1]}, W^{[2]}, b^{[2]}, \ldots$ and $\gamma^{[1]}, \beta^{[1]}, \gamma^{[2]}, \beta^{[2]}, \ldots$ (these $\gamma, \beta$ are new learnable parameters; this $\beta$ has nothing to do with the $\beta$ used in momentum or in the exponentially weighted averages).

If you use Batch Norm, the mean is computed and subtracted out for every mini-batch, so the bias $b^{[l]}$ has no effect; it can be eliminated (set permanently to zero), and its role is taken over by $\beta^{[l]}$.

6.3 Why does Batch Norm work?

Covariate shift: you have learned a mapping from $x$ to $y$ that works well. If the distribution of $x$ changes, you may need to learn a new mapping for it to keep working well.

The values feeding each hidden unit change all the time during training, so the later layers suffer from this covariate shift problem; Batch Norm reduces the amount by which the distribution of these hidden unit values shifts around.

Batch Norm as regularization

  • Each mini-batch is scaled by the mean/variance computed on just that mini-batch.
  • This adds some noise to the values $z^{[l]}$ within that mini-batch. So, similar to dropout, it adds some noise to each hidden layer's activations.
  • This has a slight regularization effect.

6.4 Batch Norm at test time

To apply the network at test time, you need a separate estimate of $\mu$ and $\sigma^2$, since there may be no mini-batch to compute them from. In practice, this is done by keeping an exponentially weighted average of the $\mu$ and $\sigma^2$ values seen across mini-batches during training.
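
A hedged sketch of this, with the momentum value 0.9 chosen here purely as an assumption (function names are mine):

```python
import numpy as np

def update_running_stats(run_mu, run_sigma2, mu_batch, sigma2_batch, momentum=0.9):
    """Exponentially weighted average of mini-batch statistics, updated during training."""
    run_mu = momentum * run_mu + (1 - momentum) * mu_batch
    run_sigma2 = momentum * run_sigma2 + (1 - momentum) * sigma2_batch
    return run_mu, run_sigma2

def batch_norm_test(Z, gamma, beta, run_mu, run_sigma2, eps=1e-8):
    """At test time, normalize with the running estimates instead of batch statistics."""
    Z_norm = (Z - run_mu) / np.sqrt(run_sigma2 + eps)
    return gamma * Z_norm + beta
```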

7. Multi-class classification

7.1 Softmax regression

7.2 Training a softmax classifier

Hard max: outputs a one-hot vector (1 in the position of the largest element of $z^{[L]}$, 0 elsewhere), whereas softmax outputs a smooth probability distribution over the $C$ classes.

Loss function: cross-entropy, $\mathcal{L}(\hat y, y) = -\sum_{j=1}^{C} y_j \log \hat y_j$.

Gradient descent with softmax: for the output layer, backpropagation gives $dz^{[L]} = \hat y - y$.
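
A brief NumPy sketch putting these pieces together (function names are mine; $Z$ has shape (n_classes, m) and $Y$ is one-hot encoded):

```python
import numpy as np

def softmax(Z):
    """Softmax activation for the output layer; Z has shape (n_classes, m)."""
    T = np.exp(Z - np.max(Z, axis=0, keepdims=True))   # shift for numerical stability
    return T / np.sum(T, axis=0, keepdims=True)

def hard_max(Z):
    """Hard max: 1 for the largest entry in each column, 0 elsewhere."""
    return (Z == np.max(Z, axis=0, keepdims=True)).astype(float)

def cross_entropy_loss(Y_hat, Y):
    """Cross-entropy loss, averaged over the mini-batch."""
    m = Y.shape[1]
    return -np.sum(Y * np.log(Y_hat + 1e-12)) / m

# Backprop through softmax + cross-entropy gives dZ = Y_hat - Y for the output layer.
```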

8. Programming Frameworks

8.1 Deep Learning frameworks

  • Caffe/Caffe2
  • TensorFlow
  • Torch
  • Theano
  • mxnet
  • PaddlePaddle
  • Keras
  • CNTK

Choosing deep learning frameworks

  • Ease of programming (development and deployment)
  • Running speed
  • Truly open (open source with good governance)

8.2 TensorFlow
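
The notes for this subsection stop here. As a hedged illustration only (TensorFlow 2 style; the quadratic cost $J(w) = w^2 - 10w + 25$ is just an example), minimizing a cost with respect to a single parameter might look like this:

```python
import tensorflow as tf

w = tf.Variable(0.0, dtype=tf.float32)
optimizer = tf.keras.optimizers.SGD(learning_rate=0.1)

for _ in range(1000):
    with tf.GradientTape() as tape:
        cost = w ** 2 - 10 * w + 25      # J(w) = (w - 5)^2, minimum at w = 5
    grads = tape.gradient(cost, [w])     # the framework computes the gradient automatically
    optimizer.apply_gradients(zip(grads, [w]))

print(w.numpy())   # should be close to 5.0
```

The main appeal of a framework is that you only specify the forward computation of the cost; the backward pass and the parameter updates are handled for you.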
