Make sure that the dev and test sets come from the same distribution.
Not having a test set might be okay (only a dev set).
Setting up train, dev, and test sets will allow you to iterate more quickly. It will also allow you to measure the bias and variance of your algorithm more efficiently, so you can more efficiently select ways to improve it.
High bias: underfitting. High variance: overfitting.
Assumption: human error is about 0% (optimal/Bayes error), and the train set and dev set are drawn from the same distribution.
Train set error | Dev set error | Result
---|---|---
1% | 11% | high variance
15% | 16% | high bias
15% | 30% | high bias and high variance
0.5% | 1% | low bias and low variance
High bias –> bigger network, training longer, advanced optimization algorithms, trying a different network architecture.
High variance –> more data, regularization, a more appropriate neural network architecture.
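As a minimal Python sketch of the diagnosis rule in the table above; the `diagnose` helper and its threshold are illustrative assumptions, not from the course:

```python
def diagnose(train_err, dev_err, bayes_err=0.0, gap=0.05):
    """Rough bias/variance diagnosis, assuming train and dev sets come from the same distribution."""
    high_bias = (train_err - bayes_err) > gap      # large avoidable bias -> underfitting
    high_variance = (dev_err - train_err) > gap    # large train->dev gap -> overfitting
    return high_bias, high_variance

# Example: train 15%, dev 30% -> (True, True), i.e. high bias and high variance.
print(diagnose(0.15, 0.30))
```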
In logistic regression,
w \in R^{n_x}, b \in R
J(w, b) = \frac {1} {m} \sum _{i=1} ^m L(\hat y^{(i)}, y^{(i)}) + \frac {\lambda} {2m} ||w||_2^2
||w||_2^2 = \sum _{j=1} ^{n_x} w_j^2 = w^Tw
This is called L2 regularization.
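A minimal numpy sketch of the L2-regularized logistic regression cost; the function name, variable shapes, and the use of the cross-entropy loss for L are assumptions for illustration:

```python
import numpy as np

def l2_regularized_cost(w, b, X, Y, lambd):
    """X: (n_x, m), Y: (1, m), w: (n_x, 1). Cross-entropy loss plus the L2 penalty."""
    m = X.shape[1]
    A = 1 / (1 + np.exp(-(np.dot(w.T, X) + b)))             # sigmoid(w^T x + b)
    cross_entropy = -np.sum(Y * np.log(A) + (1 - Y) * np.log(1 - A)) / m
    l2_penalty = (lambd / (2 * m)) * np.sum(np.square(w))   # (lambda / 2m) * ||w||_2^2
    return cross_entropy + l2_penalty
```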
J(w, b) = \frac {1} {m} \sum _{i=1} ^m L(\hat y^{(i)}, y^{(i)}) + \frac {\lambda} {2m} ||w||_1
This is called L1 regularization. w will end up being sparse. \lambda is called the regularization parameter.
In a neural network, the formula is
J(w^{[1]},b^{[1]},...,w^{[L]},b^{[L]}) = \frac {1} {m} \sum _{i=1} ^m L(\hat y^{(i)}, y^{(i)}) + \frac {\lambda} {2m} \sum _{l=1}^L ||w^{[l]}||^2
||w^{[l]}||^2 = \sum_{i=1}^{n^{[l-1]}}\sum _{j=1}^{n^{[l]}} (w_{ij}^{[l]})^2, \quad w^{[l]}: (n^{[l-1]}, n^{[l]})
This matrix norm turns out to be called the Frobenius norm of the matrix, denoted with an F in the subscript. L2 regularization is also called weight decay.
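A minimal numpy sketch of adding the Frobenius norm penalty to the cost and the corresponding weight decay term to the gradients; the `parameters`/`grads` dictionary layout is an assumption for illustration:

```python
import numpy as np

def add_l2_terms(cost, grads, parameters, lambd, m, L):
    """Add (lambda / 2m) * sum_l ||W[l]||_F^2 to the cost and (lambda / m) * W[l] to each dW[l]."""
    for l in range(1, L + 1):
        W = parameters["W" + str(l)]
        cost += (lambd / (2 * m)) * np.sum(np.square(W))   # Frobenius norm penalty
        grads["dW" + str(l)] += (lambd / m) * W             # weight decay term in backprop
    return cost, grads
```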
If \lambda is set too large, the weight matrices W are pushed reasonably close to zero, which zeroes out the impact of many hidden units. In that case, the much simplified neural network becomes a much smaller network. It takes you from overfitting toward underfitting, but there is a just right case in the middle.
Dropout goes through each layer of the network and sets some probability of eliminating each node in the network. By far the most common implementation of dropout today is inverted dropout.
Inverted dropout (kp stands for keep-prob): after zeroing out nodes in layer i, divide the remaining activations by kp so their expected value is unchanged, then compute the next layer as usual:
a^{[i]} = a^{[i]} / kp
z^{[i + 1]} = w^{[i + 1]} a^{[i]} + b^{[i + 1]}
At test time, we don't use dropout or the keep-prob scaling.
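A minimal numpy sketch of inverted dropout for one layer at training time; the layer index 3, shapes, and variable names are illustrative:

```python
import numpy as np

np.random.seed(1)
a3 = np.random.randn(5, 10)                        # activations of layer 3: (n[3], m) -- illustrative shapes
w4, b4 = np.random.randn(3, 5), np.zeros((3, 1))   # parameters of layer 4

keep_prob = 0.8                                    # kp: probability of keeping a node
d3 = np.random.rand(*a3.shape) < keep_prob         # dropout mask, 1 with probability keep_prob
a3 = np.multiply(a3, d3)                           # zero out the dropped nodes
a3 = a3 / keep_prob                                # inverted dropout: keeps the expected value of a3
z4 = np.dot(w4, a3) + b4                           # next layer uses the scaled activations
# At test time: skip the mask and the division by keep_prob.
```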
Why does dropout work? Intuition: a unit can't rely on any one feature, so it has to spread out its weights.
Spreading out the weights tends to have the effect of shrinking the squared norm of the weights.
Normalizing inputs can speed up training. Normalizing inputs corresponds to two steps. The first is to subtract out or to zero out the mean. And then the second step is to normalize the variances.
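A minimal numpy sketch of these two steps; the data and shapes are illustrative:

```python
import numpy as np

np.random.seed(0)
X = np.random.randn(2, 1000) * 5 + 3                 # training inputs, shape (n_x, m)

mu = np.mean(X, axis=1, keepdims=True)               # step 1: subtract out (zero out) the mean
X = X - mu
sigma2 = np.mean(X ** 2, axis=1, keepdims=True)      # step 2: normalize the variances
X = X / np.sqrt(sigma2)
# The same mu and sigma2 should be reused to normalize the test set.
```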
If the network is very deep, it can suffer from the problem of vanishing or exploding gradients.
A partial remedy is careful weight initialization. If the activation function is ReLU, a good choice is
w^{[l]} = np.random.randn(shape) * np.sqrt(\frac {2} {n^{[l-1]}})
If the activation function is tanh, use np.sqrt(\frac {1} {n^{[l-1]}}) instead; this variant is called Xavier initialization.
Another formula is
w^{[l]} = np.random.randn(shape) * np.sqrt(\frac {2} {n^{[l-1]} + n^{[l]}})
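A minimal numpy sketch of drawing one layer's weights with these scalings; the function name and the `activation` switch are illustrative assumptions:

```python
import numpy as np

def initialize_weights(shape, n_prev, activation="relu"):
    """Draw W of the given shape, scaled by the layer's activation and fan-in n_prev."""
    if activation == "relu":
        scale = np.sqrt(2.0 / n_prev)   # sqrt(2 / n[l-1]) for ReLU
    else:
        scale = np.sqrt(1.0 / n_prev)   # sqrt(1 / n[l-1]) for tanh (Xavier)
    return np.random.randn(*shape) * scale
```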
In order to build up to gradient checking, you need to numerically approximate computations of gradients.
g(\theta) \approx \frac {f(\theta + \epsilon) - f(\theta - \epsilon)} {2 \epsilon}
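For a quick numeric check of the two-sided approximation, here is an illustrative example with f(\theta) = \theta^3 at \theta = 1 and \epsilon = 0.01:

```python
def f(theta):
    return theta ** 3

theta, eps = 1.0, 0.01
approx = (f(theta + eps) - f(theta - eps)) / (2 * eps)
print(approx)   # ~3.0001, close to the true derivative f'(1) = 3
```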
Take the matrices W and vectors b, reshape them into vectors, and concatenate them; you get a giant vector \theta. For each i:
d\theta _{approx}[i] = \frac {J(\theta_1,...,\theta_i + \epsilon,...) - J(\theta_1,...,\theta_i - \epsilon,...)} {2\epsilon} \approx d\theta_i = \frac {\partial J} {\partial \theta_i}
If
\frac {||d\theta_{approx} - d\theta ||_2} {||d\theta_{approx}||_2 + ||d\theta||_2} \approx 10^{-7}
that's great. If it is \approx 10^{-5}, you need to double check; if it is \approx 10^{-3}, there may be a bug.
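A minimal sketch of the gradient check itself, assuming `J(theta)` returns the cost for a flattened parameter vector and `dtheta` is the gradient from backprop (both names are illustrative assumptions):

```python
import numpy as np

def gradient_check(J, theta, dtheta, eps=1e-7):
    """Compare the two-sided numerical gradient with the backprop gradient dtheta."""
    dtheta_approx = np.zeros_like(theta)
    for i in range(theta.shape[0]):
        theta_plus, theta_minus = theta.copy(), theta.copy()
        theta_plus[i] += eps
        theta_minus[i] -= eps
        dtheta_approx[i] = (J(theta_plus) - J(theta_minus)) / (2 * eps)
    numerator = np.linalg.norm(dtheta_approx - dtheta)
    denominator = np.linalg.norm(dtheta_approx) + np.linalg.norm(dtheta)
    return numerator / denominator   # ~1e-7 is great, ~1e-3 suggests a bug
```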