[Paper]: Multi-scale Context Aggregation by Dilated Convolutions
Dilated convolution is already supported via the convolution layer parameters in official Caffe:
message ConvolutionParameter {
  // Factor used to dilate the kernel, (implicitly) zero-filling the resulting holes.
  // (Kernel dilation is sometimes referred to by its use in the
  // algorithme à trous from Holschneider et al. 1987.)
  repeated uint32 dilation = 18;  // The dilation; defaults to 1
}
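The comment above describes dilation as (implicitly) zero-filling the holes in the kernel. As a rough 1-D illustration (pure Python, not Caffe's actual implementation), dilating a kernel inserts d − 1 zeros between taps, giving an effective size of k + (k − 1)(d − 1):

```python
def dilate_kernel(w, d):
    """Insert d - 1 zeros between kernel taps (1-D illustration).

    Effective size: k + (k - 1) * (d - 1), where k = len(w).
    """
    out = []
    for i, v in enumerate(w):
        out.append(v)
        if i < len(w) - 1:              # no zeros after the last tap
            out.extend([0.0] * (d - 1))
    return out
```

A 3-tap kernel with dilation 2 covers 5 positions; with dilation 4 it covers 9, without adding any parameters.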
layer {
  name: "ct_conv1_1"
  type: "Convolution"
  bottom: "fc-final"
  top: "ct_conv1_1"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 1
  }
  convolution_param {
    num_output: 42
    pad: 33
    kernel_size: 3
  }
}
layer {
  name: "ct_relu1_1"
  type: "ReLU"
  bottom: "ct_conv1_1"
  top: "ct_conv1_1"
}
layer {
  name: "ct_conv1_2"
  type: "Convolution"
  bottom: "ct_conv1_1"
  top: "ct_conv1_2"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 1
  }
  convolution_param {
    num_output: 42
    pad: 0
    kernel_size: 3
  }
}
layer {
  name: "ct_relu1_2"
  type: "ReLU"
  bottom: "ct_conv1_2"
  top: "ct_conv1_2"
}
layer {
  name: "ct_conv2_1"
  type: "Convolution"
  bottom: "ct_conv1_2"
  top: "ct_conv2_1"
  convolution_param {
    num_output: 84
    kernel_size: 3
    dilation: 2
  }
}
layer {
  name: "ct_relu2_1"
  type: "ReLU"
  bottom: "ct_conv2_1"
  top: "ct_conv2_1"
}
layer {
  name: "ct_conv3_1"
  type: "Convolution"
  bottom: "ct_conv2_1"
  top: "ct_conv3_1"
  convolution_param {
    num_output: 168
    kernel_size: 3
    dilation: 4
  }
}
layer {
  name: "ct_relu3_1"
  type: "ReLU"
  bottom: "ct_conv3_1"
  top: "ct_conv3_1"
}
layer {
  name: "ct_conv4_1"
  type: "Convolution"
  bottom: "ct_conv3_1"
  top: "ct_conv4_1"
  convolution_param {
    num_output: 336
    kernel_size: 3
    dilation: 8
  }
}
layer {
  name: "ct_relu4_1"
  type: "ReLU"
  bottom: "ct_conv4_1"
  top: "ct_conv4_1"
}
layer {
  name: "ct_conv5_1"
  type: "Convolution"
  bottom: "ct_conv4_1"
  top: "ct_conv5_1"
  convolution_param {
    num_output: 672
    kernel_size: 3
    dilation: 16
  }
}
layer {
  name: "ct_relu5_1"
  type: "ReLU"
  bottom: "ct_conv5_1"
  top: "ct_conv5_1"
}
layer {
  name: "ct_fc1"
  type: "Convolution"
  bottom: "ct_conv5_1"
  top: "ct_fc1"
  convolution_param {
    num_output: 672
    kernel_size: 3
  }
}
layer {
  name: "ct_fc1_relu"
  type: "ReLU"
  bottom: "ct_fc1"
  top: "ct_fc1"
}
layer {
  name: "ct_final"
  type: "Convolution"
  bottom: "ct_fc1"
  top: "ct_final"
  convolution_param {
    num_output: 21
    kernel_size: 1
  }
}
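The channel counts in the prototxt above appear to follow the paper's "large" context module, which widens each layer by a multiplier of the number of classes C = 21 (an assumption inferred from the num_output values, not stated in the snippet itself):

```python
# Hypothetical sanity check: num_output per layer vs. multiplier * C, C = 21 classes.
C = 21
num_outputs = [42, 42, 84, 168, 336, 672, 672, 21]  # ct_conv1_1 ... ct_final
multipliers = [2, 2, 4, 8, 16, 32, 32, 1]           # assumed widening factors
widened = [m * C for m in multipliers]
```

If the assumption holds, `widened` matches `num_outputs` exactly.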
Semantic segmentation is a dense prediction problem, which differs fundamentally from image classification.
Image classification networks aggregate multi-scale context information through successive pooling and subsampling layers, reducing the image resolution to produce a single global prediction.
Dense prediction, in contrast, requires combining multi-scale contextual reasoning with full-resolution output.
Dilated convolutions resolve this conflict: they aggregate multi-scale context information without reducing resolution and without analyzing rescaled images, they support exponential growth of the receptive field, and they can be plugged into existing network architectures at any resolution.
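A minimal pure-Python sketch (not the paper's implementation) of a 1-D dilated convolution: with zero-padding of d·(k − 1)/2 on each side, the output has the same length as the input, so no resolution is lost at any dilation:

```python
def dilated_conv1d(x, w, d):
    """'Same' 1-D dilated convolution (cross-correlation form).

    Padding d * (k - 1) // 2 on each side keeps the output length
    equal to len(x) for any dilation d.
    """
    k = len(w)
    pad = d * (k - 1) // 2
    xp = [0.0] * pad + list(x) + [0.0] * pad   # zero-pad both ends
    return [sum(w[j] * xp[i + j * d] for j in range(k))
            for i in range(len(x))]
```

With w = [0, 1, 0] the layer is an identity regardless of d, illustrating that dilation changes which points are sampled, not the output resolution.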
See Figure 1.
Figure 1: Illustration of dilated convolution.
Each layer has the same number of parameters; while the total parameter count grows only linearly with depth, the receptive field grows exponentially.
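This exponential growth can be checked with a small helper (a sketch, not from the paper's code): stacking 3×3 layers with dilations 1, 2, 4, …, 2^(n−1) yields a receptive field of (2^(n+1) − 1) × (2^(n+1) − 1), while every layer contributes the same number of parameters:

```python
def receptive_field(dilations, k=3):
    """Receptive field (one side) of stacked k x k dilated convolutions."""
    rf = 1
    for d in dilations:
        rf += d * (k - 1)   # each layer extends the field by d * (k - 1)
    return rf
```

For dilations [1], [1, 2], [1, 2, 4], [1, 2, 4, 8] this gives 3, 7, 15, 31: doubling (plus one) with every layer added.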
The context module improves dense prediction by aggregating multi-scale context information. Its input is C feature maps and its output is also C feature maps, so input and output have the same form.
In the basic form of the context module, every layer has C channels. Although the feature maps are not normalized and no loss is defined inside the module, the representation is the same at every layer, so the output can be used directly to obtain dense per-class predictions; intuitively, the module can sharpen the accuracy of the feature maps.
The basic context module has 7 layers, each applying a 3×3 convolution with a different dilation factor. Each convolution is followed by a pointwise truncation max(·, 0), i.e. a ReLU. The final output is produced by a 1×1×C convolution.
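Taking the basic module's dilations as 1, 1, 2, 4, 8, 16, 1 for the seven 3×3 layers, followed by the 1×1 output layer (the schedule reported in the paper), accumulating d·(k − 1) per layer reproduces the per-layer receptive fields 3, 5, 9, 17, 33, 65, 67, 67:

```python
# (kernel_size, dilation) for the basic context module:
# seven 3x3 layers, then the 1x1 output layer.
layers = [(3, 1), (3, 1), (3, 2), (3, 4), (3, 8), (3, 16), (3, 1), (1, 1)]

rf, fields = 1, []
for k, d in layers:
    rf += d * (k - 1)   # a k x k layer with dilation d grows the field by d*(k-1)
    fields.append(rf)
```

The 1×1 layer leaves the receptive field unchanged at 67×67; it only mixes channels.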
[1] - caffe::ConvolutionLayer
[2] - Multi-scale context aggregation by dilated convolutions