Improving Deep Neural Networks Study Notes (3)

5. Hyperparameter tuning

5.1 Tuning process

Hyperparameters:

$\alpha$, $\beta$, $\beta_1, \beta_2, \epsilon$, number of layers, number of hidden units, learning rate decay, mini-batch size.

The learning rate $\alpha$ is the most important hyperparameter to tune. $\beta$, the mini-batch size, and the number of hidden units are next in importance.

Try random values: don't use a grid search. Then go coarse to fine: zoom in on the region that works well and sample more densely there.

5.2 Using an appropriate scale to pick hyperparameters

Appropriate scale for hyperparameters:

For $\alpha \in [0.0001, 1]$: sample `r = -4 * np.random.rand()`, then set $\alpha = 10^r$.

In general, if $\alpha \in [10^a, 10^b]$, pick $r$ uniformly at random from $[a, b]$ and set $\alpha = 10^r$.
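A minimal sketch of this log-scale sampling (the helper name `sample_learning_rate` is mine; the range defaults are the example values above):

```python
import numpy as np

def sample_learning_rate(low=1e-4, high=1.0):
    """Sample alpha uniformly on a log (base-10) scale between low and high."""
    a, b = np.log10(low), np.log10(high)   # e.g. -4 and 0 for [0.0001, 1]
    r = np.random.uniform(a, b)            # r ~ Uniform[a, b]
    return 10 ** r                         # alpha = 10^r

print([sample_learning_rate() for _ in range(5)])  # values spread evenly across the decades
```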

Hyperparameters for exponentially weighted averages:

For $\beta \in [0.9, 0.999]$, don't pick uniformly from $[0.9, 0.999]$. Instead, work with $1 - \beta \in [0.001, 0.1]$ and sample it on a log scale, using the same method as for $\alpha$.

Why not sample linearly? Because when $\beta$ is close to 1, even a tiny change has a huge impact on the algorithm: the effective averaging window is roughly $\frac{1}{1-\beta}$, so moving $\beta$ from 0.999 to 0.9995 doubles the window from about 1,000 to 2,000 samples, while moving from 0.9 to 0.9005 barely changes it.
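A matching sketch for $\beta$, sampling $1 - \beta$ on a log scale (again, `sample_beta` is just an illustrative helper):

```python
import numpy as np

def sample_beta(low=0.9, high=0.999):
    """Sample beta by drawing 1 - beta uniformly on a log scale."""
    # 1 - beta lies in [1 - high, 1 - low], e.g. [0.001, 0.1]
    a, b = np.log10(1 - high), np.log10(1 - low)   # e.g. -3 and -1
    r = np.random.uniform(a, b)
    return 1 - 10 ** r

print([round(sample_beta(), 4) for _ in range(5)])
```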

5.3 Hyperparameters tuning in practice: Pandas vs Caviar

  • Re-test hyperparameters occasionally
  • Babysitting one model (Pandas)
  • Training many models in parallel (Caviar)

6. Batch Normalization

6.1 Normalizing activations in a network

In logistic regression, normalizing the inputs speeds up learning (a small sketch follows the steps below):

  1. Compute the means: $\mu = \frac{1}{m} \sum_{i=1}^{m} x^{(i)}$
  2. Subtract the means from the training set: $x := x - \mu$
  3. Compute the variances: $\sigma^2 = \frac{1}{m} \sum_{i=1}^{m} (x^{(i)})^2$ (elementwise, using the mean-subtracted $x$)
  4. Normalize the training set: $X := \frac{X}{\sigma^2}$
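A small NumPy sketch of these four steps, assuming the course's column-per-example layout for `X`:

```python
import numpy as np

# Assumed layout: X has shape (n_features, m), one column per training example.
X = np.random.randn(3, 100) * 5 + 2

mu = np.mean(X, axis=1, keepdims=True)            # 1. per-feature means
X = X - mu                                        # 2. subtract the means
sigma2 = np.mean(X ** 2, axis=1, keepdims=True)   # 3. per-feature variances
X = X / sigma2                                    # 4. normalize as written in the notes
# (dividing by np.sqrt(sigma2) instead would give exactly unit variance)

print(np.mean(X, axis=1))  # each feature now has mean ~0
```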

Similarly, to speed up the training of a neural network, we can normalize the intermediate values in the layers (the $z$ values in the hidden layers). This is called Batch Normalization, or Batch Norm.

Implementing Batch Norm

  1. Given some intermediate values in the neural network, $z^{(1)}, z^{(2)}, \dots, z^{(m)}$
  2. Compute the mean: $\mu = \frac{1}{m} \sum_{i=1}^{m} z^{(i)}$
  3. Compute the variance: $\sigma^2 = \frac{1}{m} \sum_{i=1}^{m} (z^{(i)} - \mu)^2$
  4. Normalize $z$: $z^{(i)}_{\text{norm}} = \frac{z^{(i)} - \mu}{\sqrt{\sigma^2 + \epsilon}}$
  5. Compute $\hat z$: $\hat z^{(i)} = \gamma z^{(i)}_{\text{norm}} + \beta$

Now $z$ is normalized to have zero mean and unit variance. But it may make sense for the hidden units to have a different distribution, so we use $\hat z$ instead of $z$; $\gamma$ and $\beta$ are learnable parameters of the model.
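A small NumPy sketch of this per-mini-batch computation for one layer's pre-activations (the function name and the `(n_units, m)` shape convention are my assumptions, not course starter code):

```python
import numpy as np

def batch_norm_forward(Z, gamma, beta, eps=1e-8):
    """Normalize Z over the mini-batch, then scale and shift with gamma, beta.

    Z: shape (n_units, m), one column per example in the mini-batch.
    gamma, beta: shape (n_units, 1), learnable scale and shift.
    """
    mu = np.mean(Z, axis=1, keepdims=True)        # per-unit mean over the batch
    sigma2 = np.var(Z, axis=1, keepdims=True)     # per-unit variance over the batch
    Z_norm = (Z - mu) / np.sqrt(sigma2 + eps)     # zero mean, unit variance
    Z_hat = gamma * Z_norm + beta                 # learnable distribution
    return Z_hat, Z_norm, mu, sigma2

Z = np.random.randn(4, 32) * 3 + 1                # fake pre-activations
gamma = np.ones((4, 1)); beta = np.zeros((4, 1))
Z_hat, *_ = batch_norm_forward(Z, gamma, beta)
print(Z_hat.mean(axis=1), Z_hat.var(axis=1))      # ~0 and ~1 with these gamma/beta
```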

6.2 Fitting Batch Norm into a neural network

Add Batch Norm to a network

$X \rightarrow Z^{[1]} \rightarrow \hat Z^{[1]} \rightarrow a^{[1]} \rightarrow Z^{[2]} \rightarrow \hat Z^{[2]} \rightarrow a^{[2]} \rightarrow \dots$

Parameters: $W^{[1]}, b^{[1]}, W^{[2]}, b^{[2]}, \dots$ and $\gamma^{[1]}, \beta^{[1]}, \gamma^{[2]}, \beta^{[2]}, \dots$

If you use Batch Norm, the mean is computed and subtracted out within each layer, so the bias $b^{[l]}$ has no effect; we can remove it (set $b^{[l]} = 0$ permanently), since the shift is handled by $\beta^{[l]}$.

6.3 Why does Batch Norm work?

Covariate shift: you have learned a function from $x \rightarrow y$ that works well; if the distribution of $x$ changes, you need to learn a new function to keep it working well.

Hidden unit values change all the time during training, so the later layers suffer from this covariate shift problem; Batch Norm reduces the amount by which the hidden unit values shift around.

Batch Norm as regularization

  • Each mini-batch is scaled by the mean/variance computed on just that mini-batch.
  • This adds some noise to the values $z^{[l]}$ within that mini-batch. So, similar to dropout, it adds some noise to each hidden layer's activations.
  • This has a slight regularization effect.

6.4 Batch Norm at test time

In order to apply the neural network at test time, you need a separate estimate of $\mu$ and $\sigma^2$: in practice, keep an exponentially weighted average of the per-mini-batch means and variances during training and use those estimates at test time.
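A minimal sketch of keeping those exponentially weighted estimates during training and using them at test time (the momentum value 0.9 and the function names are illustrative assumptions, continuing the shapes from the earlier sketch):

```python
import numpy as np

def update_running_stats(running_mu, running_sigma2, mu, sigma2, momentum=0.9):
    """Exponentially weighted average of the per-mini-batch mean and variance."""
    running_mu = momentum * running_mu + (1 - momentum) * mu
    running_sigma2 = momentum * running_sigma2 + (1 - momentum) * sigma2
    return running_mu, running_sigma2

def batch_norm_test(Z, gamma, beta, running_mu, running_sigma2, eps=1e-8):
    """At test time, normalize with the running estimates instead of batch statistics."""
    Z_norm = (Z - running_mu) / np.sqrt(running_sigma2 + eps)
    return gamma * Z_norm + beta

# During training, after computing mu and sigma2 for each mini-batch:
#   running_mu, running_sigma2 = update_running_stats(running_mu, running_sigma2, mu, sigma2)
# At test time:
#   Z_hat = batch_norm_test(Z, gamma, beta, running_mu, running_sigma2)
```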

7. Multi-class classification

7.1 Softmax regression

7.2 Training a softmax classifier

Hard max: take $z^{[L]}$ and put a 1 in the position of its largest entry and 0 everywhere else; softmax instead outputs a soft probability distribution over the classes.

Loss function: cross-entropy, $\mathcal{L}(\hat y, y) = -\sum_{j} y_j \log \hat y_j$; the cost is the average of this loss over the training set.

Gradient descent with softmax: backpropagation starts from $dz^{[L]} = \hat y - y$.
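A small NumPy sketch of the softmax activation, the cross-entropy loss, and the resulting gradient $\hat y - y$ (function names are mine):

```python
import numpy as np

def softmax(z):
    """Softmax over the last layer's logits z, shape (n_classes,)."""
    t = np.exp(z - np.max(z))        # subtract the max for numerical stability
    return t / np.sum(t)

def cross_entropy(y_hat, y):
    """L(y_hat, y) = -sum_j y_j * log(y_hat_j), with y one-hot."""
    return -np.sum(y * np.log(y_hat + 1e-12))

z = np.array([5.0, 2.0, -1.0, 3.0])
y = np.array([1.0, 0.0, 0.0, 0.0])   # true class is the first one
y_hat = softmax(z)
print(y_hat, cross_entropy(y_hat, y))
print(y_hat - y)                     # gradient of the loss w.r.t. z
```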

8. Programming Frameworks

8.1 Deep Learning frameworks

  • Caffe/Caffe2
  • TensorFlow
  • Torch
  • Theano
  • mxnet
  • PaddlePaddle
  • Keras
  • CNTK

Choosing deep learning frameworks

  • Ease of programming (development and deployment)
  • Running speed
  • Truly open (open source with good governance)

8.2 TensorFlow
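A minimal sketch in the spirit of the course's TensorFlow demo, which minimizes the toy cost $J(w) = w^2 - 10w + 25 = (w - 5)^2$; written here against the TensorFlow 2 eager API rather than the TF 1.x sessions used in the lecture:

```python
import tensorflow as tf

w = tf.Variable(0.0, dtype=tf.float32)
optimizer = tf.keras.optimizers.SGD(learning_rate=0.1)

for _ in range(1000):
    with tf.GradientTape() as tape:
        cost = w ** 2 - 10.0 * w + 25.0   # J(w) = (w - 5)^2
    grads = tape.gradient(cost, [w])
    optimizer.apply_gradients(zip(grads, [w]))

print(w.numpy())  # converges to ~5.0, the minimizer of J
```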
