Improving Deep Neural Networks Study Notes (3)

5. Hyperparameter tuning

5.1 Tuning process

Hyperparameters:

$\alpha$, $\beta$, $\beta_1, \beta_2, \epsilon$, layers, hidden units, learning rate decay, mini-batch size.

The learning rate $\alpha$ is the most important hyperparameter to tune. $\beta$, the mini-batch size, and the number of hidden units are second in importance.

Try random values: don't use a grid. Go coarse to fine.

5.2 Using an appropriate scale to pick hyperparameters

Appropriate scale to hyperparameters:

For $\alpha \in [0.0001, 1]$: set r = -4 * np.random.rand(), then $\alpha = 10^r$.

More generally, if $\alpha \in [10^a, 10^b]$, pick $r$ uniformly at random from $[a, b]$ and set $\alpha = 10^r$.
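A minimal NumPy sketch of this log-scale sampling (the endpoints a and b are the only inputs; names are illustrative):

```python
import numpy as np

# Sample alpha log-uniformly from [10^a, 10^b]; here a = -4, b = 0,
# matching the range [0.0001, 1] above.
a, b = -4, 0
r = a + (b - a) * np.random.rand()  # r is uniform in [a, b]
alpha = 10 ** r                     # alpha = 10^r
```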

Hyperparameters for exponentially weighted average

For $\beta \in [0.9, 0.999]$, don't pick uniformly from $[0.9, 0.999]$. Instead sample $1-\beta \in [0.001, 0.1]$ using the same log-scale method as for $\alpha$.

Why don’t use linear pick? Because when β\beta is close one, even if a little change, it will have a huge impact on algorithm.

5.3 Hyperparameters tuning in practice: Pandas vs Caviar

  • Re-test hyperparameters occasionally
  • Babysitting one model (Pandas)
  • Training many models in parallel (Caviar)

6. Batch Normalization

6.1 Normalizing activations in a network

In logistic regression, normalizing the inputs speeds up learning:

  1. compute the means: $\mu = \frac{1}{m} \sum_{i=1}^m x^{(i)}$
  2. subtract the means from the training set: $x = x - \mu$
  3. compute the variances: $\sigma^2 = \frac{1}{m} \sum_{i=1}^m (x^{(i)})^2$
  4. normalize the training set: $X = \frac{X}{\sigma^2}$
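A minimal NumPy sketch of these four steps, assuming $X$ has shape (n_x, m) with one column per example (the helper name is illustrative; step 4 divides by $\sigma^2$ exactly as written above):

```python
import numpy as np

def normalize_inputs(X):
    mu = np.mean(X, axis=1, keepdims=True)            # 1. means
    X = X - mu                                        # 2. subtract the means
    sigma2 = np.mean(X ** 2, axis=1, keepdims=True)   # 3. variances
    X = X / sigma2                                    # 4. normalize
    return X, mu, sigma2   # reuse mu, sigma2 to normalize the test set
```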

Similarly, to speed up training of a neural network, we can normalize intermediate values in the layers (the $z$ values of the hidden layers). This is called Batch Normalization, or Batch Norm.

Implementing Batch Norm

  1. Given some intermediate values in the network, $z^{(1)}, z^{(2)}, \ldots, z^{(m)}$
  2. compute the mean: $\mu = \frac{1}{m} \sum_i z^{(i)}$
  3. compute the variance: $\sigma^2 = \frac{1}{m} \sum_i (z^{(i)} - \mu)^2$
  4. normalize $z$: $z_{\text{norm}}^{(i)} = \frac{z^{(i)} - \mu}{\sqrt{\sigma^2 + \epsilon}}$
  5. compute $\hat z$: $\hat z^{(i)} = \gamma z_{\text{norm}}^{(i)} + \beta$

Now $z$ has been normalized to zero mean and unit variance. But it may make sense for the hidden units to have a different distribution, so we use $\hat z^{(i)}$ instead of $z_{\text{norm}}^{(i)}$; $\gamma$ and $\beta$ are learnable parameters of the model.
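A minimal NumPy sketch of steps 2–5, assuming $Z$ stacks a layer's $m$ examples column-wise and $\gamma$, $\beta$ have shape (n, 1); the function name is illustrative:

```python
import numpy as np

def batch_norm_forward(Z, gamma, beta, eps=1e-8):
    mu = np.mean(Z, axis=1, keepdims=True)      # per-unit mean over the mini-batch
    sigma2 = np.var(Z, axis=1, keepdims=True)   # per-unit variance
    Z_norm = (Z - mu) / np.sqrt(sigma2 + eps)   # zero mean, unit variance
    Z_tilde = gamma * Z_norm + beta             # learned scale and shift
    return Z_tilde, Z_norm, mu, sigma2
```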

6.2 Fitting Batch Norm into a neural network

Add Batch Norm to a network

$X \rightarrow Z^{[1]} \rightarrow \hat Z^{[1]} \rightarrow a^{[1]} \rightarrow Z^{[2]} \rightarrow \hat Z^{[2]} \rightarrow a^{[2]} \rightarrow \ldots$

Parameters: $W^{[1]}, b^{[1]}, W^{[2]}, b^{[2]}, \ldots$ and $\gamma^{[1]}, \beta^{[1]}, \gamma^{[2]}, \beta^{[2]}, \ldots$ (these $\beta^{[l]}$ are Batch Norm parameters, not the momentum hyperparameter $\beta$).

If you use Batch Norm, the mean of $z^{[l]}$ is computed and subtracted within each mini-batch, so any constant $b^{[l]}$ added to $z^{[l]}$ is cancelled out. The bias $b^{[l]}$ is therefore useless and can be set to 0 permanently; its role is taken over by $\beta^{[l]}$.
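As a small illustration (reusing the hypothetical batch_norm_forward from the sketch in 6.1), a layer's forward step with Batch Norm has no "+ b" term:

```python
import numpy as np

# W, A_prev, gamma, beta are assumed to be defined elsewhere;
# batch_norm_forward is the sketch from section 6.1.
Z = np.dot(W, A_prev)                                 # note: no "+ b"
Z_tilde, Z_norm, mu, sigma2 = batch_norm_forward(Z, gamma, beta)
A = np.maximum(0, Z_tilde)                            # e.g. a ReLU activation
```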

6.3 Why does Batch Norm work?

Covariate shift: you have learned a function mapping $x \rightarrow y$ that works well. If the distribution of $x$ changes, you may need to relearn the function for it to keep working well.

From the perspective of a later layer, the earlier hidden unit values change all the time during training, so it suffers from the problem of covariate shift. Batch Norm reduces the amount that the distribution of these hidden unit values shifts around.

Batch Norm as regularization

  • Each mini-batch is scaled by the mean/variance computed on just that mini-batch.
  • This adds some noise to the values $z^{[l]}$ within that mini-batch. So, similar to dropout, it adds some noise to each hidden layer's activations.
  • This has a slight regularization effect.

6.4 Batch Norm at test time

To apply the neural network at test time, you need a separate estimate of $\mu$ and $\sigma^2$: in practice, keep an exponentially weighted average of the mini-batch means and variances during training, and use those estimates at test time.
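A minimal sketch of one common way to do this, keeping running averages of each layer's mini-batch statistics (the decay rate and variable names are assumptions):

```python
import numpy as np

# During training, after computing mu and sigma2 on each mini-batch
# (shape (n, 1) per layer), update the running estimates:
momentum = 0.9  # decay rate for the running averages (an assumption)
running_mu = momentum * running_mu + (1 - momentum) * mu
running_sigma2 = momentum * running_sigma2 + (1 - momentum) * sigma2

# At test time, normalize an example z with the running estimates instead:
z_norm = (z - running_mu) / np.sqrt(running_sigma2 + 1e-8)
z_tilde = gamma * z_norm + beta
```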

7. Multi-class classification

7.1 Softmax regression

7.2 Training a softmax classifier

Hard max: sets the largest element of $z$ to 1 and the rest to 0, whereas softmax outputs a probability for each class.

Loss function: $L(\hat y, y) = -\sum_j y_j \log \hat y_j$; the cost is the average loss over the training set.

Gradient descent with softmax: backpropagation starts from $dz^{[L]} = \hat y - y$.
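For reference, the softmax activation is $a^{[L]} = \frac{e^{z^{[L]}}}{\sum_{j=1}^{C} e^{z^{[L]}_j}}$. A minimal NumPy sketch of the activation and the cross-entropy cost above (function names are illustrative):

```python
import numpy as np

def softmax(Z):
    # Z has shape (C, m); subtract the column max for numerical stability.
    T = np.exp(Z - np.max(Z, axis=0, keepdims=True))
    return T / np.sum(T, axis=0, keepdims=True)

def cross_entropy_cost(Y_hat, Y):
    # J = -(1/m) * sum over examples and classes of y * log(y_hat)
    m = Y.shape[1]
    return -np.sum(Y * np.log(Y_hat + 1e-12)) / m

# Backprop through softmax + cross-entropy starts from dZ = Y_hat - Y.
```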

8. Programming Frameworks

8.1 Deep Learning frameworks

  • Caffe/Caffe2
  • TensorFlow
  • Torch
  • Theano
  • mxnet
  • PaddlePaddle
  • Keras
  • CNTK

Choosing deep learning frameworks

  • Ease of programming (development and deployment)
  • Running speed
  • Truly open (open source with good governance)

8.2 TensorFlow
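A minimal TensorFlow sketch (TF 2.x APIs; the toy cost $w^2 - 10w + 25$ is an illustrative choice) showing how a framework computes gradients and runs gradient descent for you:

```python
import tensorflow as tf

w = tf.Variable(0.0, dtype=tf.float32)
optimizer = tf.keras.optimizers.SGD(learning_rate=0.01)

for _ in range(1000):
    with tf.GradientTape() as tape:
        cost = w ** 2 - 10 * w + 25   # minimized at w = 5
    grads = tape.gradient(cost, [w])
    optimizer.apply_gradients(zip(grads, [w]))

print(w.numpy())  # approaches 5.0
```

You only write the forward computation of the cost; the framework derives the gradients and applies the update.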
