
网络模型--Densely Connected Convolutional Networks

Author: 用户1148525
Published: 2019-05-26 11:51:33

Densely Connected Convolutional Networks, CVPR 2017 best paper. Code: https://github.com/liuzhuang13/DenseNet

This work is inspired by ResNets and Highway Networks, which bypass signal from one layer to the next via identity connections. DenseNet essentially adds many more such direct connections, and this simple change turns out to work very well.

First, look at how a 5-layer dense block is densely connected:

[Figure 1: a 5-layer dense block, where each layer takes the feature-maps of all preceding layers as input]

How many connections does the 5-layer block above have? 5 × (5 + 1) / 2 = 15.
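In general, a dense block with L layers has L(L + 1)/2 direct connections, since layer l receives l inputs (the block input plus the outputs of the l − 1 preceding layers). A one-line check:

```python
# Number of direct connections in a dense block with L layers:
# layer l receives l inputs (block input + l-1 preceding outputs),
# so the total is 1 + 2 + ... + L = L*(L+1)/2.
def dense_connections(num_layers: int) -> int:
    return num_layers * (num_layers + 1) // 2

print(dense_connections(5))  # 15, matching the 5-layer block above
```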

The overall network architecture is shown below:

[Figure 2: a DenseNet with three dense blocks; the layers between blocks are transition layers that change feature-map sizes via convolution and pooling]
DenseNets

Denote the output of the l-th layer as x_l and its non-linear transformation as H_l(·). Traditional networks compute x_l = H_l(x_{l−1}); ResNets [11] add a skip-connection that bypasses the non-linear transformation with an identity function:

x_l = H_l(x_{l−1}) + x_{l−1}

Dense connectivity

Instead, each layer receives the feature-maps of all preceding layers as input:

x_l = H_l([x_0, x_1, …, x_{l−1}])

where [x_0, x_1, …, x_{l−1}] refers to the concatenation of the feature-maps produced in layers 0, …, l−1.

Composite function: H_l is defined as a composite function of three consecutive operations: batch normalization (BN), followed by a rectified linear unit (ReLU) and a 3 × 3 convolution (Conv).
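A minimal PyTorch sketch of H_l (the class name and the example shapes are illustrative, not from the official implementation): BN → ReLU → 3×3 Conv, mapping however many concatenated input channels down to k new feature-maps.

```python
import torch
import torch.nn as nn

class CompositeFunction(nn.Module):
    """H_l: BN -> ReLU -> 3x3 Conv. Maps all concatenated input
    feature-maps down to k (the growth rate) new feature-maps."""
    def __init__(self, in_channels: int, growth_rate: int):
        super().__init__()
        self.bn = nn.BatchNorm2d(in_channels)
        self.relu = nn.ReLU(inplace=True)
        self.conv = nn.Conv2d(in_channels, growth_rate,
                              kernel_size=3, padding=1, bias=False)

    def forward(self, x):
        return self.conv(self.relu(self.bn(x)))

# A layer that already sees 64 concatenated channels emits k = 12 maps,
# with spatial size preserved by the padding:
h = CompositeFunction(in_channels=64, growth_rate=12)
y = h(torch.randn(2, 64, 32, 32))
print(y.shape)  # torch.Size([2, 12, 32, 32])
```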

Pooling layers change the size of feature-maps, which makes concatenation impossible across them; the network is therefore divided into dense blocks, with transition layers (convolution and pooling) between blocks to downsample.

Growth rate: if each function H_l produces k feature-maps as output, we refer to the hyper-parameter k as the growth rate of the network.


Bottleneck layers: although each layer only produces k output feature-maps, it typically has many more inputs. It has been noted in [36, 11] that a 1×1 convolution can be introduced as a bottleneck layer before each 3×3 convolution to reduce the number of input feature-maps; the usual choice is to reduce the input to 4k feature-maps, i.e. an extra 1×1 Conv is added to the definition of H_l.
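The bottleneck variant (DenseNet-B) can be sketched in PyTorch as follows; again a hedged sketch under the paper's description, not the reference code:

```python
import torch
import torch.nn as nn

class BottleneckLayer(nn.Module):
    """DenseNet-B layer: a 1x1 conv first reduces the (possibly large)
    concatenated input to 4*k channels, then a 3x3 conv produces the
    k new feature-maps, which are appended to the running input."""
    def __init__(self, in_channels: int, growth_rate: int):
        super().__init__()
        inter = 4 * growth_rate  # bottleneck width from the paper
        self.net = nn.Sequential(
            nn.BatchNorm2d(in_channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(in_channels, inter, kernel_size=1, bias=False),
            nn.BatchNorm2d(inter),
            nn.ReLU(inplace=True),
            nn.Conv2d(inter, growth_rate, kernel_size=3,
                      padding=1, bias=False),
        )

    def forward(self, x):
        # Dense connectivity: concatenate the k new maps onto the input.
        return torch.cat([x, self.net(x)], dim=1)

layer = BottleneckLayer(in_channels=256, growth_rate=12)
out = layer(torch.randn(2, 256, 8, 8))
print(out.shape)  # torch.Size([2, 268, 8, 8]) -- 256 + 12 channels
```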

Why are there so many inputs? If each function H_l produces k feature-maps as output, it follows that the l-th layer has k × (l − 1) + k_0 input feature-maps, where k_0 is the number of channels in the input image.
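The channel growth is easy to verify numerically (the growth rate and block-input width below are illustrative values):

```python
# Input feature-maps seen by layer l of a dense block:
# k0 channels of the block input plus k from each of the
# l-1 preceding layers, i.e. k*(l-1) + k0.
def inputs_at_layer(l: int, k: int, k0: int) -> int:
    return k * (l - 1) + k0

# e.g. with growth rate k = 12 and a 24-channel block input,
# the 10th layer already sees 12*9 + 24 = 132 feature-maps:
print(inputs_at_layer(10, 12, 24))  # 132
```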

Compression: to further improve model compactness, the number of feature-maps can be reduced at transition layers. If a dense block outputs m feature-maps, the following transition layer generates ⌊θm⌋ output feature-maps, where 0 < θ ≤ 1 is referred to as the compression factor.
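The transition-layer output count is then just a floor (the paper uses θ = 0.5 for DenseNet-C; the default below assumes that value):

```python
import math

# A transition layer with compression factor theta maps m input
# feature-maps to floor(theta * m) outputs.
def transition_outputs(m: int, theta: float = 0.5) -> int:
    assert 0 < theta <= 1
    return math.floor(theta * m)

print(transition_outputs(256))       # 128
print(transition_outputs(101, 0.5))  # 50 (floor, not round)
```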

Experiments

[Table: error rates (%) on the CIFAR and SVHN datasets]

Top-1 error comparison (single model, single-crop) between DenseNet and ResNet on ImageNet:


The parameter count remains comparatively small.

[Figure: parameter efficiency of DenseNet variants vs. ResNets]

1) Middle: DenseNet-BC requires about 1/3 of the parameters of ResNet to achieve comparable accuracy.
2) Right: training and testing curves of the 1001-layer pre-activation ResNet [12] with more than 10M parameters and of a 100-layer DenseNet with only 0.8M parameters.

In short: simply adding more shortcut connections improves accuracy while reducing the parameter count and computation.

This article is part of the Tencent Cloud self-media sharing program, shared from the author's personal blog. Originally published: 2017-07-25.

