The specific contributions of this paper are as follows:
The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax.
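As a rough single-GPU PyTorch sketch of that layer stack (my reconstruction from the paper's figure; the two-GPU split, response normalization, and dropout discussed below are omitted):

```python
import torch.nn as nn

# Eight learned layers: five conv (max-pooling after conv1, conv2, conv5)
# and three fully-connected, ending in 1000 classes.
# Input: 3 x 227 x 227 (the paper says 224; 227 makes the size arithmetic work,
# a well-known discrepancy).
alexnet_like = nn.Sequential(
    nn.Conv2d(3, 96, kernel_size=11, stride=4), nn.ReLU(),
    nn.MaxPool2d(kernel_size=3, stride=2),
    nn.Conv2d(96, 256, kernel_size=5, padding=2), nn.ReLU(),
    nn.MaxPool2d(kernel_size=3, stride=2),
    nn.Conv2d(256, 384, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(384, 384, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(384, 256, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(kernel_size=3, stride=2),
    nn.Flatten(),
    nn.Linear(256 * 6 * 6, 4096), nn.ReLU(),
    nn.Linear(4096, 4096), nn.ReLU(),
    nn.Linear(4096, 1000),  # the 1000-way softmax is applied by the loss function
)
```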
Motivation: We are not the first to consider alternatives to traditional neuron models in CNNs.
The standard way to model a neuron's output $f$ as a function of its input $x$ is with $f(x) = \tanh(x)$ or $f(x) = (1 + e^{-x})^{-1}$. In terms of training time with gradient descent, these saturating nonlinearities are much slower than the non-saturating nonlinearity $f(x) = \max(0, x)$. Deep convolutional neural networks with ReLUs train several times faster than their equivalents with tanh units.
Note: Saturation of the sigmoid $f(x)$ refers to the regime where the output is close to 0 or 1; the gradient in that region is nearly zero, so the local gradient vanishes.
A four-layer convolutional neural network with ReLUs (solid line) reaches a 25% training error rate on CIFAR-10 six times faster than an equivalent network with tanh neurons (dashed line).
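To make the saturation point concrete, here is a minimal NumPy sketch (my illustration, not from the paper) comparing the three nonlinearities and their gradients. The tanh and sigmoid gradients collapse toward zero for large $|x|$, while the ReLU gradient stays at 1 for all positive inputs:

```python
import numpy as np

x = np.linspace(-6, 6, 13)

tanh = np.tanh(x)                    # saturates at -1 and 1
sigmoid = 1.0 / (1.0 + np.exp(-x))   # saturates at 0 and 1
relu = np.maximum(0.0, x)            # non-saturating for x > 0

# Gradients: tanh and sigmoid vanish for large |x|; ReLU does not.
dtanh = 1.0 - tanh ** 2
dsigmoid = sigmoid * (1.0 - sigmoid)
drelu = (x > 0).astype(float)

for xi, g1, g2, g3 in zip(x, dtanh, dsigmoid, drelu):
    print(f"x={xi:+.1f}  d/dx: tanh={g1:.4f}  sigmoid={g2:.4f}  ReLU={g3:.0f}")
```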
A single GTX 580 GPU has only 3GB of memory, which limits the maximum size of the networks that can be trained on it. It turns out that 1.2 million training examples are enough to train networks which are too big to fit on one GPU. Therefore we spread the net across two GPUs. The parallelization scheme that we employ essentially puts half of the kernels (or neurons) on each GPU, with one additional trick: the GPUs communicate only in certain layers.
Result: This scheme reduces our top-1 and top-5 error rates by 1.7% and 1.2%, respectively, as compared with a net with half as many kernels in each convolutional layer trained on one GPU. The two-GPU net takes slightly less time to train than the one-GPU net.
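This restricted cross-GPU connectivity is what modern frameworks call grouped convolution. A minimal PyTorch sketch (my analogy, not the paper's code): with `groups=2`, each half of the output channels sees only half of the input channels, mimicking a layer where the two GPUs do not communicate; `groups=1` restores full connectivity, as in the layers where they do.

```python
import torch
import torch.nn as nn

x = torch.randn(1, 96, 27, 27)  # e.g., activations after the first pooling stage

# groups=2: output channels 0-127 see only input channels 0-47, and
# channels 128-255 see only channels 48-95 (no "cross-GPU" communication).
split_conv = nn.Conv2d(96, 256, kernel_size=5, padding=2, groups=2)

# groups=1: every output channel sees all input channels
# (the layers where the GPUs do communicate).
full_conv = nn.Conv2d(96, 256, kernel_size=5, padding=2, groups=1)

print(split_conv(x).shape, full_conv(x).shape)  # both: [1, 256, 27, 27]
# The grouped layer has roughly half the weights:
print(sum(p.numel() for p in split_conv.parameters()))
print(sum(p.numel() for p in full_conv.parameters()))
```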
Motivation: This sort of response normalization implements a form of lateral inhibition inspired by the type found in real neurons, creating competition for big activities amongst neuron outputs computed using different kernels.
We still find that the following local normalization scheme aids generalization. Denoting by $a^i_{x,y}$ the activity of a neuron computed by applying kernel $i$ at position $(x, y)$ and then applying the ReLU nonlinearity, the response-normalized activity $b^i_{x,y}$ is given by the expression
$$ b^i_{x,y} = a^i_{x,y} \Big/ \Big( k + \alpha \sum_{j=\max(0,\, i-n/2)}^{\min(N-1,\, i+n/2)} \big( a^j_{x,y} \big)^2 \Big)^{\beta} $$
where the sum runs over $n$ "adjacent" kernel maps at the same spatial position, and $N$ is the total number of kernels in the layer. The constants $k$, $n$, $\alpha$, and $\beta$ are hyper-parameters whose values are determined using a validation set; we used $k = 2$, $n = 5$, $\alpha = 0.0001$, and $\beta = 0.75$.
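A direct, unoptimized PyTorch transcription of this formula (my sketch, not the released code). Note that PyTorch's built-in `torch.nn.LocalResponseNorm` computes the same kind of normalization but divides $\alpha$ by the window size, so its arguments do not map one-to-one onto the paper's constants.

```python
import torch

def lrn(a, k=2.0, n=5, alpha=1e-4, beta=0.75):
    """Local response normalization across the channel dimension.

    a: tensor of shape (batch, C, H, W). Each channel i is normalized by the
    sum of squared activities over the n channels "adjacent" to i.
    """
    C = a.size(1)
    b = torch.empty_like(a)
    for i in range(C):
        lo, hi = max(0, i - n // 2), min(C - 1, i + n // 2)
        denom = (k + alpha * (a[:, lo:hi + 1] ** 2).sum(dim=1)) ** beta
        b[:, i] = a[:, i] / denom
    return b

x = torch.rand(1, 96, 55, 55)  # e.g., ReLU activations of the first conv layer
print(lrn(x).shape)            # torch.Size([1, 96, 55, 55])
```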
Result: Response normalization reduces our top-1 and top-5 error rates by 1.4% and 1.2%, respectively.
A pooling layer can be thought of as consisting of a grid of pooling units spaced $s$ pixels apart, each summarizing a neighborhood of size $z \times z$ centered at the location of the pooling unit. If we set $s = z$, we obtain traditional local pooling as commonly employed in CNNs. If we set $s < z$, we obtain overlapping pooling. This is what we use throughout our network, with $s = 2$ and $z = 3$.
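In modern framework terms, $z$ is the pooling kernel size and $s$ the stride. A brief sketch (my example) contrasting the two settings:

```python
import torch
import torch.nn as nn

x = torch.randn(1, 96, 55, 55)

overlapping = nn.MaxPool2d(kernel_size=3, stride=2)  # z=3, s=2: the paper's setting
traditional = nn.MaxPool2d(kernel_size=2, stride=2)  # z=s=2: non-overlapping

print(overlapping(x).shape)  # [1, 96, 27, 27]; adjacent 3x3 windows overlap by one pixel
print(traditional(x).shape)  # [1, 96, 27, 27] for this input size, but windows tile exactly
```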
Result: This scheme reduces the top-1 and top-5 error rates by 0.4% and 0.3%, respectively. We generally observe during training that models with overlapping pooling find it slightly more difficult to overfit.
The first form of data augmentation consists of generating image translations and horizontal reflections.
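A sketch of this first form with torchvision transforms (my illustration; the paper extracts random 224×224 patches from 256×256 training images and reflects them horizontally):

```python
import torchvision.transforms as T

# Random translations (via random crop windows) and horizontal reflections,
# applied to 256x256 training images.
augment = T.Compose([
    T.RandomCrop(224),         # random translation of the crop window
    T.RandomHorizontalFlip(),  # reflection with probability 0.5
    T.ToTensor(),
])
```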
The second form of data augmentation consists of altering the intensities of the RGB channels in training images. Specifically, we perform PCA on the set of RGB pixel values throughout the ImageNet training set. To each training image, we add multiples of the found principal components, with magnitudes proportional to the corresponding eigenvalues times a random variable drawn from a Gaussian with mean zero and standard deviation 0.1. Therefore to each RGB image pixel $I_{xy} = [I^R_{xy}, I^G_{xy}, I^B_{xy}]^T$ we add the following quantity:
$$ [\mathbf{p}_1, \mathbf{p}_2, \mathbf{p}_3]\,[\alpha_1 \lambda_1, \alpha_2 \lambda_2, \alpha_3 \lambda_3]^T $$
where $\mathbf{p}_i$ and $\lambda_i$ are the $i$th eigenvector and eigenvalue of the $3 \times 3$ covariance matrix of RGB pixel values, respectively, and $\alpha_i$ is the aforementioned random variable.
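A NumPy sketch of the whole procedure (my reconstruction from the description above, not the original code), here computing the eigendecomposition from a single image's pixels rather than from the full training set:

```python
import numpy as np

def pca_color_augment(image, sigma=0.1):
    """image: float array (H, W, 3) of RGB values. Returns a color-jittered copy."""
    pixels = image.reshape(-1, 3)
    cov = np.cov(pixels, rowvar=False)            # 3x3 RGB covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)        # eigvecs[:, i] pairs with eigvals[i]
    alpha = np.random.normal(0.0, sigma, size=3)  # drawn once per use of the image
    # [p1, p2, p3] [alpha1*lambda1, alpha2*lambda2, alpha3*lambda3]^T
    shift = eigvecs @ (alpha * eigvals)
    return image + shift                          # same shift added to every pixel

img = np.random.rand(256, 256, 3)
print(pca_color_augment(img).shape)  # (256, 256, 3)
```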
Result: This scheme reduces the top-1 error rate by over 1%.
Motivation: Combining the predictions of many different models is a very successful way to reduce test errors, but it appears to be too expensive for big neural networks that already take several days to train.
There is, however, a very efficient version of model combination that costs only about a factor of two during training. The recently introduced technique, called "dropout", consists of setting to zero the output of each hidden neuron with probability 0.5; at test time all neurons are used, but their outputs are multiplied by 0.5. We use dropout in the first two fully-connected layers. Without dropout, our network exhibits substantial overfitting. Dropout roughly doubles the number of iterations required to converge.
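In PyTorch terms this is `nn.Dropout(p=0.5)` on the outputs of the first two fully-connected layers. A minimal sketch, assuming the paper's 4096-unit layers; note that PyTorch implements "inverted" dropout, scaling activations by $1/(1-p)$ during training rather than multiplying outputs by 0.5 at test time, which is equivalent in expectation:

```python
import torch.nn as nn

classifier = nn.Sequential(
    nn.Linear(256 * 6 * 6, 4096), nn.ReLU(),
    nn.Dropout(p=0.5),      # zero each FC1 output with probability 0.5
    nn.Linear(4096, 4096), nn.ReLU(),
    nn.Dropout(p=0.5),      # same for FC2
    nn.Linear(4096, 1000),  # 1000-way output; softmax is applied in the loss
)
# classifier.eval() disables dropout for testing; classifier.train() re-enables it.
```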
https://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks