Convolutional Neural Network Backpropagation Derivation

Published by the 量化投资与机器学习 WeChat public account, 2018-01-30.

Disclaimer: It is assumed that the reader is familiar with terms such as Multilayer Perceptron, delta errors or backpropagation. If not, it is recommended to read, for example, Chapter 2 of the free online book ‘Neural Networks and Deep Learning’ by Michael Nielsen.

Convolutional Neural Networks (CNNs) are now a standard way of doing image classification – there are publicly accessible deep learning frameworks, trained models and services. It is more time-consuming to install something like Caffe than to perform state-of-the-art object classification or detection. We also have many ways of acquiring knowledge: there is a large number of deep learning courses/MOOCs, free e-books, and even direct access to the strongest deep/machine learning minds, such as Yoshua Bengio, Andrew Ng or Yann LeCun, via Quora, Facebook or G+.

Nevertheless, when I wanted to get deeper insight into CNNs, I could not find a “CNN backpropagation for dummies”. I repeatedly ran into statements like: “If you understand backpropagation in standard neural networks, there should not be a problem with understanding it in CNNs” or “All things are nearly the same, except matrix multiplications are replaced by convolutions”. And of course I saw tons of ready-made equations.

It was a little consoling to find out that I am not alone; for example, someone asking: “Hello, when computing the gradients in a CNN, the weights need to be rotated. Why?”

The answer to the above question – why the weights need to be rotated when computing the gradients – will be the result of this long post.

We start from the multilayer perceptron and count delta errors on our fingers:

We see in the picture above that \delta_{1}^{1} is proportional to the deltas from the next layer, scaled by the weights.
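For example, if the neuron in question feeds two neurons of the next layer (the exact fan-out in the picture may differ), this reads, in Nielsen's notation:

\delta_{1}^{1}=\left(w_{1,1}^{2}\delta_{1}^{2}+w_{2,1}^{2}\delta_{2}^{2}\right)\sigma'(z_{1}^{1})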

But how do we connect the concept of an MLP with a Convolutional Neural Network? Let's play with the MLP:

If you are not sure that after cutting connections and sharing weights we get a one-layer Convolutional Neural Network, I hope the picture below will convince you:

The idea behind this figure is to show that such a neural network configuration is identical to a 2D convolution operation, and the weights are just filters (also called kernels, convolution matrices, or masks).

Now we can come back to computing gradients by counting on fingers, but from now on we will focus only on the CNN. Let's begin:

No magic here: in the “blue” layer we have simply summed the gradients from the “orange” layer, scaled by the weights – the same process as in MLP backpropagation. However, in the standard approach we talk about dot products, and here we have … yup, again a convolution:

Yes, it is a slightly different convolution than in the previous (forward) case. There we performed a so-called valid convolution, while here we do a full convolution (more about the nomenclature here). What is more, we rotate our kernel by 180 degrees. But still, we are talking about a convolution!
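As a quick numerical sanity check (a sketch of my own, not part of the original post), the snippet below verifies that back-propagating the deltas of a valid convolution is the same as a full convolution of those deltas with the kernel rotated by 180 degrees (the elementwise \sigma'(z^{l}) factor is left out, since it would multiply both sides equally); the array names and sizes are arbitrary:

```python
import numpy as np
from scipy.signal import convolve2d

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))     # activations of layer l, i.e. sigma(z^l)
W = rng.standard_normal((3, 3))     # kernel w^{l+1}
Z = convolve2d(A, W, mode='valid')  # forward pass: z^{l+1} via a valid convolution

dZ = rng.standard_normal(Z.shape)   # stand-in for the upstream deltas delta^{l+1}

# Chain rule written out element by element (the double sum over x', y'):
dA_loops = np.zeros_like(A)
K = W.shape[0]
for p in range(A.shape[0]):
    for q in range(A.shape[1]):
        for i in range(Z.shape[0]):
            for j in range(Z.shape[1]):
                a, b = i + K - 1 - p, j + K - 1 - q
                if 0 <= a < K and 0 <= b < K:   # all other terms of the sum are zero
                    dA_loops[p, q] += dZ[i, j] * W[a, b]

# The same quantity as a full convolution with the 180-degree-rotated kernel:
dA_conv = convolve2d(dZ, np.rot90(W, 2), mode='full')

print(np.allclose(dA_loops, dA_conv))  # True
```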

Now, I have some good news and some bad news:

  1. You see (BTW, sorry for the pictures' aesthetics :) ) that matrix dot products are replaced by convolution operations both in the feedforward pass and in backpropagation.
  2. You know that seeing something and understanding something are not the same thing … yup, we are now going to get our hands dirty and prove the above statement. (Before moving on, I recommend reading Chapter 2 of M. Nielsen's book, already mentioned in the disclaimer; I tried to keep all quantities consistent with Michael's work.)

In a standard MLP, we can define the error of neuron j as:
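\delta_{j}^{l}=\frac{\partial C}{\partial z_{j}^{l}}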

where z_{j}^{l} is just:
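z_{j}^{l}=\sum_{k}w_{j,k}^{l}a_{k}^{l-1}+b_{j}^{l}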

and, for clarity, a_{j}^{l}=\sigma(z_{j}^{l}), where \sigma is an activation function such as the sigmoid, hyperbolic tangent or ReLU.

But here we do not have an MLP but a CNN, and matrix multiplications are replaced by convolutions, as we discussed before. So instead of z_{j} we have z_{x,y}:
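z_{x,y}^{l+1}=w^{l+1}*\sigma(z_{x,y}^{l})+b_{x,y}^{l+1}=\sum_{a}\sum_{b}w_{a,b}^{l+1}\sigma(z_{x-a,y-b}^{l})+b_{x,y}^{l+1}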

The above equation is just the convolution operation of the feedforward phase, illustrated in the earlier picture titled ‘Feedforward in CNN is identical with convolution operation’.

Now we can get to the point and answer the question: “Hello, when computing the gradients in a CNN, the weights need to be rotated. Why?”

We start from the statement:
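\delta_{x,y}^{l}=\frac{\partial C}{\partial z_{x,y}^{l}}=\sum_{x'}\sum_{y'}\frac{\partial C}{\partial z_{x',y'}^{l+1}}\frac{\partial z_{x',y'}^{l+1}}{\partial z_{x,y}^{l}}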

We know that z_{x,y}^{l} is related to z_{x,y}^{l+1}, which is shown indirectly in the earlier picture titled ‘Backpropagation also results with convolution’. The sums are thus the result of the chain rule. Let's move on:
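\delta_{x,y}^{l}=\sum_{x'}\sum_{y'}\delta_{x',y'}^{l+1}\frac{\partial\left(\sum_{a}\sum_{b}w_{a,b}^{l+1}\sigma(z_{x'-a,y'-b}^{l})+b_{x',y'}^{l+1}\right)}{\partial z_{x,y}^{l}}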

The first term is replaced by the definition of the error, while the second has grown because we have substituted the expression for z_{x',y'}^{l+1}. However, we do not have to fear this big monster – all components of the sums are equal to 0, except the ones indexed by x=x'-a and y=y'-b. So:
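\delta_{x,y}^{l}=\sum_{x'}\sum_{y'}\delta_{x',y'}^{l+1}w_{a,b}^{l+1}\sigma'(z_{x,y}^{l})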

If x=x'-a and y=y'-b, then obviously a=x'-x and b=y'-y, so we can reformulate the above equation as:
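\delta_{x,y}^{l}=\sum_{x'}\sum_{y'}\delta_{x',y'}^{l+1}w_{x'-x,y'-y}^{l+1}\sigma'(z_{x,y}^{l})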

OK, our last equation is just …
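\delta_{x,y}^{l}=\delta_{x,y}^{l+1}*ROT180\left(w_{x,y}^{l+1}\right)\sigma'(z_{x,y}^{l})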

Where is the rotation of the weights? Actually, ROT180(w^{l+1}_{x,y})=w^{l+1}_{-x,-y}.

So the answer to the question “Hello, when computing the gradients in a CNN, the weights need to be rotated. Why?” is simple: the rotation of the weights simply falls out of the derivation of the delta error in a Convolutional Neural Network.

OK, we are really close to the end. One more ingredient of the backpropagation algorithm is the gradient used to update the weights, \frac{\partial C}{\partial w^{l}_{a,b}}:
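\frac{\partial C}{\partial w_{a,b}^{l}}=\sum_{x}\sum_{y}\delta_{x,y}^{l}\sigma(z_{x-a,y-b}^{l-1})=\delta_{a,b}^{l}*\sigma\left(ROT180\left(z_{a,b}^{l-1}\right)\right)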

So, paraphrasing the backpropagation algorithm for a CNN:

  1. Input x: set the corresponding activation a^{1} for the input layer.
  2. Feedforward: for each l = 2,3,…,L compute z^{l}_{x,y}=w^{l}_{x,y}*a^{l-1}_{x,y}+b^{l}_{x,y} and a^{l}_{x,y}=\sigma(z^{l}_{x,y}) .
  3. Output error \delta^{L} : compute the vector \delta^{L}=\bigtriangledown_{a}C\odot\sigma'(z^{L}) .
  4. Backpropagate the error: for each l=L-1,L-2,…,2 compute \delta^{l}_{x,y}=\delta^{l+1}_{x,y}*ROT180(w^{l+1}_{x,y})\sigma'(z^{l}_{x,y}) .
  5. Output: the gradient of the cost function is given by \frac{\partial C}{\partial w^{l}_{a,b}}=\delta^{l}_{a,b}*\sigma(ROT180(z^{l-1}_{a,b})) .