AlexNet Paper Summary

Introduction

Preconditions

  • ImageNet: Objects in realistic settings exhibit considerable variability, so learning to recognize them requires much larger training sets. ImageNet consists of over 15 million labeled high-resolution images in over 22,000 categories.
  • CNNs: To learn about thousands of objects from millions of images, we need a model with a large learning capacity. Convolutional neural networks (CNNs) constitute one such class of models. Their capacity can be controlled by varying their depth and breadth, and they also make strong and mostly correct assumptions about the nature of images (namely, stationarity of statistics and locality of pixel dependencies).

Contributions

The specific contributions of this paper are as follows:

  • We trained one of the largest convolutional neural networks to date on the subsets of ImageNet used in the ILSVRC-2010 and ILSVRC-2012 competitions and achieved by far the best results ever reported on these datasets.
  • We wrote a highly-optimized GPU implementation of 2D convolution and all the other operations inherent in training convolutional neural networks, which we make available publicly.
  • Our network contains a number of new and unusual features which improve its performance and reduce its training time.
  • The size of our network made overfitting a significant problem, even with 1.2 million labeled training examples, so we used several effective techniques for preventing overfitting.
  • Our final network contains five convolutional and three fully-connected layers, and this depth seems to be important: we found that removing any convolutional layer (each of which contains no more than 1% of the model’s parameters) resulted in inferior performance.

Architecture

The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax.

ReLU

Motivation: We are not the first to consider alternatives to traditional neuron models in CNNs.

The standard way to model a neuron’s output f as a function of its input x is with f(x) = tanh(x) or f(x) = (1 + e^{-x})^{-1}. In terms of training time with gradient descent, these saturating nonlinearities are much slower than the non-saturating nonlinearity f(x) = max(0, x). Deep convolutional neural networks with ReLUs train several times faster than their equivalents with tanh units.

Note: Saturation of the sigmoid f(x) refers to the regions where its output is close to 0 or 1; the gradient in those regions is nearly zero, so the local gradient vanishes and learning slows down.
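A tiny sketch (my own PyTorch illustration, not from the paper) of this saturation effect: for inputs of large magnitude the tanh and sigmoid gradients shrink toward zero, while the ReLU gradient stays at 1 for every positive input.

```python
import torch

# Gradients of the three nonlinearities at a few input values.
x = torch.tensor([-6.0, -2.0, 0.5, 2.0, 6.0], requires_grad=True)

for name, f in [("tanh", torch.tanh),
                ("sigmoid", torch.sigmoid),
                ("relu", torch.relu)]:
    (grad,) = torch.autograd.grad(f(x).sum(), x)
    print(f"{name:8s}", [round(g, 4) for g in grad.tolist()])

# tanh/sigmoid gradients shrink toward zero as |x| grows (saturation),
# while the ReLU gradient is exactly 1 for every positive input.
```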

A four-layer convolutional neural network with ReLUs (solid line) reaches a 25% training error rate on CIFAR-10 six times faster than an equivalent network with tanh neurons (dashed line).

Training on Multiple GPUs

A single GTX 580 GPU has only 3GB of memory, which limits the maximum size of the networks that can be trained on it. It turns out that 1.2 million training examples are enough to train networks which are too big to fit on one GPU. Therefore we spread the net across two GPUs. The parallelization scheme that we employ essentially puts half of the kernels (or neurons) on each GPU, with one additional trick: the GPUs communicate only in certain layers. Result: This scheme reduces our top-1 and top-5 error rates by 1.7% and 1.2%, respectively, as compared with a net with half as many kernels in each convolutional layer trained on one GPU. The two-GPU net takes slightly less time to train than the one-GPU net.
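In layers where the GPUs do not communicate, each kernel sees only the kernel maps residing on the same GPU; this restricted connectivity is what modern frameworks call a grouped convolution. The sketch below is a present-day PyTorch analogue of that connectivity pattern, not the authors' original two-GPU CUDA implementation.

```python
import torch
import torch.nn as nn

# Layer where the GPUs do NOT communicate: each half of the 256 kernels sees only
# the half of the 96 input maps residing on "its" GPU -- expressed here as groups=2.
restricted = nn.Conv2d(96, 256, kernel_size=5, padding=2, groups=2)

# Layer where the GPUs DO communicate: every kernel sees all input maps (groups=1).
full = nn.Conv2d(256, 384, kernel_size=3, padding=1)

x = torch.randn(1, 96, 27, 27)
print(full(restricted(x)).shape)   # torch.Size([1, 384, 27, 27])
```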

Local Response Normalization

Motivation: This sort of response normalization implements a form of lateral inhibition inspired by the type found in real neurons, creating competition for big activities amongst neuron outputs computed using different kernels.

We still find that the following local normalization scheme aids generalization. Denoting by a^i_{x,y} the activity of a neuron computed by applying kernel i at position (x, y) and then applying the ReLU nonlinearity, the response-normalized activity b^i_{x,y} is given by the expression

b^i_{x,y} = a^i_{x,y} / \left( k + \alpha \sum_{j=\max(0,\, i-n/2)}^{\min(N-1,\, i+n/2)} \left(a^j_{x,y}\right)^2 \right)^{\beta}

where the sum runs over n “adjacent” kernel maps at the same spatial position, and N is the total number of kernels in the layer. The constants k, n, α, and β are hyper-parameters whose values are determined using a validation set; we used k = 2, n = 5, α = 0.0001, and β = 0.75.
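A direct NumPy transcription of the expression above, with the paper’s hyper-parameter values as defaults (my own sketch for illustration; the actual implementation ran on the GPU):

```python
import numpy as np

def local_response_norm(a, k=2.0, n=5, alpha=1e-4, beta=0.75):
    """a: array of shape (N, H, W) -- ReLU activities of N kernel maps."""
    N = a.shape[0]
    b = np.empty_like(a)
    for i in range(N):
        # Sum the squared activities of the n "adjacent" kernel maps around map i.
        lo, hi = max(0, i - n // 2), min(N - 1, i + n // 2)
        denom = (k + alpha * np.sum(a[lo:hi + 1] ** 2, axis=0)) ** beta
        b[i] = a[i] / denom
    return b

a = np.maximum(0.0, np.random.randn(96, 13, 13))   # e.g. 96 kernel maps after the ReLU
print(local_response_norm(a).shape)                # (96, 13, 13)
```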

Result: Response normalization reduces our top-1 and top-5 error rates by 1.4% and 1.2%, respectively.

Overlapping Pooling

A pooling layer can be thought of as consisting of a grid of pooling units spaced s pixels apart, each summarizing a neighborhood of size z × z centered at the location of the pooling unit. If we set s = z, we obtain traditional local pooling as commonly employed in CNNs. If we set s < z, we obtain overlapping pooling. This is what we use throughout our network, with s = 2 and z = 3.
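In PyTorch-style notation the difference is only the kernel-size/stride relationship; a minimal sketch comparing the overlapping setting used here (z = 3, s = 2) with traditional non-overlapping pooling:

```python
import torch
import torch.nn as nn

x = torch.randn(1, 96, 55, 55)                        # e.g. the conv1 output maps

overlapping = nn.MaxPool2d(kernel_size=3, stride=2)   # z = 3, s = 2  (s < z)
traditional = nn.MaxPool2d(kernel_size=2, stride=2)   # z = s = 2

print(overlapping(x).shape)   # torch.Size([1, 96, 27, 27])
print(traditional(x).shape)   # torch.Size([1, 96, 27, 27])
```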

Result: This scheme reduces the top-1 and top-5 error rates by 0.4% and 0.3%, respectively. We generally observe during training that models with overlapping pooling find it slightly more difficult to overfit.

Overall Architecture
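A minimal PyTorch-style sketch of the stack summarized at the top of this section: five convolutional layers (some followed by response normalization and overlapping max-pooling) and three fully-connected layers ending in a 1000-way softmax. Kernel counts and sizes follow the published configuration, but the two-GPU split is collapsed onto one device and some details (padding, exact normalization placement) are assumptions for illustration rather than the authors' exact implementation.

```python
import torch
import torch.nn as nn

class AlexNetSketch(nn.Module):
    def __init__(self, num_classes=1000):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 96, kernel_size=11, stride=4, padding=2),   # conv1
            nn.ReLU(inplace=True),
            nn.LocalResponseNorm(size=5, alpha=1e-4, beta=0.75, k=2.0),
            nn.MaxPool2d(kernel_size=3, stride=2),                   # overlapping pooling
            nn.Conv2d(96, 256, kernel_size=5, padding=2),            # conv2
            nn.ReLU(inplace=True),
            nn.LocalResponseNorm(size=5, alpha=1e-4, beta=0.75, k=2.0),
            nn.MaxPool2d(kernel_size=3, stride=2),
            nn.Conv2d(256, 384, kernel_size=3, padding=1),           # conv3
            nn.ReLU(inplace=True),
            nn.Conv2d(384, 384, kernel_size=3, padding=1),           # conv4
            nn.ReLU(inplace=True),
            nn.Conv2d(384, 256, kernel_size=3, padding=1),           # conv5
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2),
        )
        self.classifier = nn.Sequential(
            nn.Dropout(p=0.5),
            nn.Linear(256 * 6 * 6, 4096),                            # fc6
            nn.ReLU(inplace=True),
            nn.Dropout(p=0.5),
            nn.Linear(4096, 4096),                                   # fc7
            nn.ReLU(inplace=True),
            nn.Linear(4096, num_classes),                            # fc8 -> 1000-way softmax (via cross-entropy loss)
        )

    def forward(self, x):
        x = self.features(x)          # expects a 3 x 224 x 224 input crop
        x = torch.flatten(x, 1)
        return self.classifier(x)

model = AlexNetSketch()
print(model(torch.randn(1, 3, 224, 224)).shape)   # torch.Size([1, 1000])
```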

Reduce Overfitting

Data Augmentation

The first form of data augmentation consists of generating image translations and horizontal reflections.
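The paper does this by extracting random 224 × 224 patches (and their horizontal reflections) from the 256 × 256 images. A minimal sketch using torchvision transform names (a modern convenience, not the authors' pipeline):

```python
from torchvision import transforms

# Random 224 x 224 patches and their horizontal reflections from 256 x 256 images.
train_augment = transforms.Compose([
    transforms.RandomCrop(224),
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.ToTensor(),
])
```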

The second form of data augmentation consists of altering the intensities of the RGB channels in training images. Specifically, we perform PCA on the set of RGB pixel values throughout the ImageNet training set. To each training image, we add multiples of the found principal components, with magnitudes proportional to the corresponding eigenvalues times a random variable drawn from a Gaussian with mean zero and standard deviation 0.1. Therefore to each RGB image pixel I_{xy} = [I^R_{xy}, I^G_{xy}, I^B_{xy}]^T we add the following quantity:

[p_1, p_2, p_3][\alpha_1\lambda_1, \alpha_2\lambda_2, \alpha_3\lambda_3]^T

where p_i and λ_i are the ith eigenvector and eigenvalue of the 3 × 3 covariance matrix of RGB pixel values, respectively, and α_i is the aforementioned random variable.
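A NumPy sketch of this augmentation (for brevity the PCA here is computed per image, whereas the paper computes it once over the whole ImageNet training set; the function name and shapes are my own):

```python
import numpy as np

def pca_color_augment(image, sigma=0.1, rng=np.random):
    """image: float array of shape (H, W, 3) holding RGB values."""
    pixels = image.reshape(-1, 3)
    cov = np.cov(pixels, rowvar=False)        # 3 x 3 covariance of RGB values
    eigvals, eigvecs = np.linalg.eigh(cov)    # lambda_i and p_i (as columns)
    alphas = rng.normal(0.0, sigma, size=3)   # alpha_i ~ N(0, 0.1^2), drawn once per image
    delta = eigvecs @ (alphas * eigvals)      # [p1 p2 p3][a1*l1, a2*l2, a3*l3]^T
    return image + delta                      # the same offset is added to every pixel

augmented = pca_color_augment(np.random.rand(256, 256, 3))
print(augmented.shape)   # (256, 256, 3)
```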

Result: This scheme reduces the top-1 error rate by over 1%.

Dropout

Motivation: Combining the predictions of many different models is a very successful way to reduce test errors, but it appears to be too expensive for big neural networks that already take several days to train.

There is, however, a very efficient version of model combination that only costs about a factor of two during training. The recently-introduced technique, called “dropout”, consists of setting to zero the output of each hidden neuron with probability 0.5. We use dropout in the first two fully-connected layers. Without dropout, our network exhibits substantial overfitting. Dropout roughly doubles the number of iterations required to converge.
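A minimal NumPy sketch of the scheme as described: each hidden unit’s output is zeroed with probability 0.5 during training, and all outputs are multiplied by 0.5 at test time (modern “inverted” dropout rescales during training instead; the function below is my own illustration):

```python
import numpy as np

def dropout(activations, p=0.5, train=True, rng=np.random):
    if train:
        # Zero each hidden unit's output with probability p during training.
        mask = rng.binomial(1, 1.0 - p, size=activations.shape)
        return activations * mask
    # At test time every unit is kept, but its output is multiplied by (1 - p).
    return activations * (1.0 - p)

h = np.random.rand(4096)              # e.g. the output of the first fully-connected layer
print(dropout(h, train=True)[:5])
print(dropout(h, train=False)[:5])
```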

Experimental Results

References

Krizhevsky, A., Sutskever, I., and Hinton, G. E. ImageNet Classification with Deep Convolutional Neural Networks. NIPS 2012. https://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks
