Hyperparameters:
$\alpha$, $\beta$, $\beta_1, \beta_2, \epsilon$, number of layers, number of hidden units, learning rate decay, mini-batch size.
The learning rate $\alpha$ is the most important hyperparameter to tune. $\beta$, the mini-batch size, and the number of hidden units are second in importance.
Try random values: don't use a grid search. Then go coarse to fine: zoom in on the region that worked well and sample more densely there.
Appropriate scale for hyperparameters:
For $\alpha \in [0.0001, 1]$: sample r = -4 * np.random.rand(), and set $\alpha = 10^r$.
In general, if $\alpha \in [10^a, 10^b]$, pick $r$ uniformly from $[a, b]$ and set $\alpha = 10^r$.
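A minimal numpy sketch of this log-scale sampling (variable names are illustrative):

```python
import numpy as np

# For alpha in [10^a, 10^b] = [0.0001, 1], we have a = -4 and b = 0.
a, b = -4, 0
r = a + (b - a) * np.random.rand()  # r uniform in [a, b]; here this is -4 * np.random.rand()
alpha = 10 ** r                     # equal sampling effort in each decade of [0.0001, 1]
```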
Hyperparameters for exponentially weighted average
$\beta \in [0.9, 0.999]$: don't pick uniformly from $[0.9, 0.999]$. Instead, sample $1 - \beta \in [0.001, 0.1]$ on a log scale, using the same method as for $\alpha$.
Why not sample linearly? Because when $\beta$ is close to 1, even a tiny change has a huge impact on the algorithm: the effective averaging window is roughly $\frac{1}{1-\beta}$ samples, so moving $\beta$ from 0.999 to 0.9995 doubles the window from about 1000 to 2000 samples, while moving from 0.9 to 0.9005 barely changes anything.
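The same sketch adapted to $\beta$ (the exponent range $[-3, -1]$ corresponds to $1-\beta \in [0.001, 0.1]$):

```python
import numpy as np

# Sample beta in [0.9, 0.999] by sampling 1 - beta in [0.001, 0.1] on a log scale.
r = -3 + 2 * np.random.rand()  # r uniform in [-3, -1]
beta = 1 - 10 ** r             # 10^r covers the decades [0.001, 0.1] evenly
```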
In logistic regression, normalizing the inputs speeds up learning.
Similarly, to speed up training a neural network, we can normalize the intermediate values in its layers (the $z$ values in the hidden layers). This is called Batch Normalization, or Batch Norm.
Implementing Batch Norm
Given the values $z^{(1)}, \dots, z^{(m)}$ in a layer, compute $\mu = \frac{1}{m}\sum_i z^{(i)}$ and $\sigma^2 = \frac{1}{m}\sum_i (z^{(i)} - \mu)^2$, then $z^{(i)}_{\text{norm}} = \frac{z^{(i)} - \mu}{\sqrt{\sigma^2 + \epsilon}}$. Now we have normalized $z$ to have mean zero and unit variance. But maybe it makes sense for the hidden units to have a different distribution, so we use $\hat z^{(i)} = \gamma z^{(i)}_{\text{norm}} + \beta$ instead of $z^{(i)}$, where $\gamma$ and $\beta$ are learnable parameters of your model (note that $\gamma = \sqrt{\sigma^2 + \epsilon}$ and $\beta = \mu$ would recover $\hat z^{(i)} = z^{(i)}$ exactly).
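A minimal numpy sketch of this forward computation, assuming the course's convention that $Z$ has shape (hidden units, examples); the function name is illustrative:

```python
import numpy as np

def batch_norm_forward(Z, gamma, beta, eps=1e-8):
    """Batch Norm for one layer. Z: (n_units, m); gamma, beta: (n_units, 1)."""
    mu = Z.mean(axis=1, keepdims=True)         # per-unit mean over the mini-batch
    sigma2 = Z.var(axis=1, keepdims=True)      # per-unit variance over the mini-batch
    Z_norm = (Z - mu) / np.sqrt(sigma2 + eps)  # mean zero, unit variance
    Z_hat = gamma * Z_norm + beta              # learnable scale and shift
    return Z_hat
```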
Add Batch Norm to a network
$X \rightarrow Z^{[1]} \rightarrow \hat Z^{[1]} \rightarrow a^{[1]} \rightarrow Z^{[2]} \rightarrow \hat Z^{[2]} \rightarrow a^{[2]} \rightarrow \dots$
Parameters: $W^{[1]}, b^{[1]}, W^{[2]}, b^{[2]}, \dots$ and $\gamma^{[1]}, \beta^{[1]}, \gamma^{[2]}, \beta^{[2]}, \dots$ (this $\beta^{[l]}$ has nothing to do with the $\beta$ used in momentum).
If you use Batch Norm, computing the mean of $Z^{[l]}$ and subtracting it cancels out any constant that $b^{[l]}$ adds to every example, so $b^{[l]}$ is useless: we can eliminate it (set $b^{[l]} = 0$ permanently) and let $\beta^{[l]}$ take over its role, as the quick check below illustrates.
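A quick numeric check of this cancellation (shapes and names are illustrative):

```python
import numpy as np

# Subtracting the mini-batch mean cancels any constant added to every example.
Z = np.random.randn(4, 32)                       # (hidden units, examples)
b = np.random.randn(4, 1)                        # a per-unit bias
centered = Z - Z.mean(axis=1, keepdims=True)
centered_with_b = (Z + b) - (Z + b).mean(axis=1, keepdims=True)
print(np.allclose(centered, centered_with_b))    # True: b has no effect after centering
```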
Covariate shift: you have learned a function from $x \rightarrow y$ and it works well. If the distribution of $x$ changes, you may need to retrain the function, even if the true mapping from $x$ to $y$ is unchanged.
From the perspective of a later layer, the hidden unit values it receives keep changing as the earlier layers' parameters update, so it suffers from covariate shift. Batch Norm limits how much these distributions shift around, which makes learning in the later layers more stable and faster.
Batch Norm as regularization
Each mini-batch's $\mu$ and $\sigma^2$ are estimated on just that mini-batch, which adds some noise to the $\hat z$ values within each layer. Like dropout, this has a slight regularization effect (it shrinks as the mini-batch size grows), but don't rely on Batch Norm as a regularizer; it is meant to speed up training.
Batch Norm at test time
At test time you may need to process one example at a time, so you can't compute $\mu$ and $\sigma^2$ over a mini-batch. Instead, come up with a separate estimate of $\mu$ and $\sigma^2$, typically an exponentially weighted average of the values seen across mini-batches during training.
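A sketch of one common way to do this: keep running estimates during training, then normalize with them at test time (names, shapes, and the momentum value 0.9 are illustrative assumptions):

```python
import numpy as np

momentum = 0.9
running_mu = np.zeros((4, 1))      # one running estimate per hidden unit
running_sigma2 = np.ones((4, 1))

def train_step(Z):
    # Update the exponentially weighted averages from this mini-batch's statistics.
    global running_mu, running_sigma2
    mu = Z.mean(axis=1, keepdims=True)
    sigma2 = Z.var(axis=1, keepdims=True)
    running_mu = momentum * running_mu + (1 - momentum) * mu
    running_sigma2 = momentum * running_sigma2 + (1 - momentum) * sigma2

def test_time_norm(z, gamma, beta, eps=1e-8):
    # A single test example is normalized with the running estimates, not batch statistics.
    z_norm = (z - running_mu) / np.sqrt(running_sigma2 + eps)
    return gamma * z_norm + beta
```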
Softmax regression
Softmax generalizes logistic regression to $C$ classes: $a^{[L]}_j = \frac{e^{z^{[L]}_j}}{\sum_{k=1}^{C} e^{z^{[L]}_k}}$, a vector of class probabilities that sum to 1.
Hard max: puts a 1 in the position of the largest element of $z$ and 0 everywhere else; softmax is a gentler mapping from $z$ to probabilities.
Loss function: cross-entropy, $L(\hat y, y) = -\sum_{j=1}^{C} y_j \log \hat y_j$.
Gradient descent with softmax: backprop starts from $dz^{[L]} = \hat y - y$.
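A numpy sketch of softmax, the cross-entropy loss, and the resulting gradient (names are illustrative; the max subtraction and the small constant inside the log are standard numerical-stability tweaks not discussed above):

```python
import numpy as np

def softmax(Z):
    # Subtracting the per-column max doesn't change the result mathematically,
    # but avoids overflow in exp. Z has shape (C, m).
    e = np.exp(Z - Z.max(axis=0, keepdims=True))
    return e / e.sum(axis=0, keepdims=True)

def cross_entropy(Y_hat, Y):
    # L = -sum_j y_j * log(y_hat_j), averaged over the m examples in the mini-batch.
    m = Y.shape[1]
    return -np.sum(Y * np.log(Y_hat + 1e-12)) / m

# Backprop through softmax + cross-entropy collapses to a simple form:
# dZ = Y_hat - Y   (shape (C, m)), the starting point for gradient descent.
```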
Choosing deep learning frameworks
…