Paper link: https://www.sciencedirect.com/science/article/abs/pii/S0952197623012630
Chinese translation: https://jingjing.blog.csdn.net/article/details/144916015?spm=1001.2014.3001.5502
This paper proposes a novel multidimensional collaborative attention (MCA) module that models attention along several dimensions simultaneously, significantly improving the performance of deep convolutional neural networks. The MCA module is lightweight, efficient, and easy to generalize, delivering consistent gains across different datasets and CNN architectures. The code is as follows:
import torch
from torch import nn
import math


class StdPool(nn.Module):
    """Pools each channel of a feature map down to its standard deviation."""

    def __init__(self):
        super(StdPool, self).__init__()

    def forward(self, x):
        b, c, _, _ = x.size()
        # Flatten the spatial dimensions and take the per-channel standard deviation
        std = x.view(b, c, -1).std(dim=2, keepdim=True)
        std = std.reshape(b, c, 1, 1)
        return std
class MCAGate(nn.Module):
    def __init__(self, k_size, pool_types=['avg', 'std']):
        """Constructs an MCAGate module.

        Args:
            k_size: kernel size of the 1xk convolution
            pool_types: pooling types. 'avg': average pooling, 'max': max pooling,
                'std': standard deviation pooling.
        """
        super(MCAGate, self).__init__()

        self.pools = nn.ModuleList([])
        for pool_type in pool_types:
            if pool_type == 'avg':
                self.pools.append(nn.AdaptiveAvgPool2d(1))
            elif pool_type == 'max':
                self.pools.append(nn.AdaptiveMaxPool2d(1))
            elif pool_type == 'std':
                self.pools.append(StdPool())
            else:
                raise NotImplementedError

        # 1xk convolution that mixes information along the pooled dimension
        self.conv = nn.Conv2d(1, 1, kernel_size=(1, k_size), stride=1,
                              padding=(0, (k_size - 1) // 2), bias=False)
        self.sigmoid = nn.Sigmoid()
        # Learnable weights balancing the two pooled descriptors
        self.weight = nn.Parameter(torch.rand(2))

    def forward(self, x):
        feats = [pool(x) for pool in self.pools]

        if len(feats) == 1:
            out = feats[0]
        elif len(feats) == 2:
            # Fuse the two descriptors: a plain average plus a learned weighting
            weight = torch.sigmoid(self.weight)
            out = 1 / 2 * (feats[0] + feats[1]) + weight[0] * feats[0] + weight[1] * feats[1]
        else:
            assert False, "Feature Extraction Exception!"

        # Move the attended dimension to the last axis so the 1xk conv slides over it
        out = out.permute(0, 3, 2, 1).contiguous()
        out = self.conv(out)
        out = out.permute(0, 3, 2, 1).contiguous()

        out = self.sigmoid(out)
        out = out.expand_as(x)

        return x * out
class MCALayer(nn.Module):
    def __init__(self, inp, no_spatial=False):
        """Constructs an MCA module.

        Args:
            inp: number of channels of the input feature maps
            no_spatial: whether to skip the channel branch over the spatial H-W plane
        """
        super(MCALayer, self).__init__()

        # Adaptively derive an odd kernel size for the channel branch from inp
        lambd = 1.5
        gamma = 1
        temp = round(abs((math.log2(inp) - gamma) / lambd))
        kernel = temp if temp % 2 else temp - 1
        kernel = max(kernel, 1)  # guard: keep the kernel size positive for very small inp

        self.h_cw = MCAGate(3)  # height branch: attends over the channel-width plane
        self.w_hc = MCAGate(3)  # width branch: attends over the height-channel plane
        self.no_spatial = no_spatial
        if not no_spatial:
            self.c_hw = MCAGate(kernel)  # channel branch: attends over the spatial H-W plane

    def forward(self, x):
        # Height branch: rotate to (B, H, C, W), gate, rotate back
        x_h = x.permute(0, 2, 1, 3).contiguous()
        x_h = self.h_cw(x_h)
        x_h = x_h.permute(0, 2, 1, 3).contiguous()

        # Width branch: rotate to (B, W, H, C), gate, rotate back
        x_w = x.permute(0, 3, 2, 1).contiguous()
        x_w = self.w_hc(x_w)
        x_w = x_w.permute(0, 3, 2, 1).contiguous()

        if not self.no_spatial:
            # Channel branch works on the original (B, C, H, W) layout
            x_c = self.c_hw(x)
            x_out = 1 / 3 * (x_c + x_h + x_w)
        else:
            x_out = 1 / 2 * (x_h + x_w)

        return x_out
if __name__ == '__main__':
    input_data = torch.randn(1, 32, 640, 480)
    mca = MCALayer(32)
    output = mca(input_data)
    # Print the input and output shapes
    print("Input size:", input_data.size())
    print("Output size:", output.size())
A detailed explanation of the code follows.
StdPool is a custom pooling layer that computes the standard deviation of each channel of the input feature map and returns those values as its output. It has no learnable parameters and exists purely to extract statistical information from the features, complementing the mean captured by average pooling.
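As a quick sanity check (a minimal sketch, not from the paper), StdPool reproduces the per-channel standard deviation and reduces the spatial dimensions to 1x1:

import torch

pool = StdPool()                                 # defined in the code above
x = torch.randn(2, 8, 16, 16)
out = pool(x)                                    # shape (2, 8, 1, 1)
manual = x.view(2, 8, -1).std(dim=2)             # per-channel std, shape (2, 8)
print(out.shape)                                 # torch.Size([2, 8, 1, 1])
print(torch.allclose(out.view(2, 8), manual))    # True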
MCAGate is an attention gating module. It combines multiple pooling types (average, max, and standard-deviation pooling) to extract descriptors from the input, then generates attention weights through a 1xk convolution followed by a sigmoid activation. The k_size argument sets the kernel size of the 1xk convolution, and pool_types specifies which pooling operations to use. In the forward pass, MCAGate applies every specified pooling operation to the input feature map, fuses the resulting descriptors with a learned weighting, runs the fused descriptor through the 1xk convolution and the sigmoid to obtain attention weights, and multiplies those weights back onto the input.
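Used on its own, the gate therefore returns a tensor of the same shape as its input, since the attention map is expanded and broadcast back over the feature map (a minimal usage sketch; the shapes are illustrative):

import torch

gate = MCAGate(k_size=3)          # defaults to ['avg', 'std'] pooling
x = torch.randn(1, 64, 32, 32)
y = gate(x)                       # attention weights are expanded and multiplied onto x
print(y.shape)                    # torch.Size([1, 64, 32, 32])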
MCALayer is the core layer of the MCA mechanism. It contains up to three MCAGate instances: h_cw attends along the height dimension over the channel-width plane, w_hc attends along the width dimension over the height-channel plane, and c_hw attends along the channel dimension over the spatial height-width plane. The inp argument specifies the number of channels of the input feature map; it determines the kernel size of the channel branch through the adaptive formula temp = round(|log2(inp) - gamma| / lambd), rounded down to the nearest odd number. The no_spatial argument is a boolean controlling whether the channel branch over the spatial plane is built; if it is set to True, only the h_cw and w_hc interactions are used.
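For example, with the defaults lambd = 1.5 and gamma = 1, an input with inp = 32 channels gives temp = round(|log2(32) - 1| / 1.5) = round(2.67) = 3; since 3 is odd, the channel branch uses a 1x3 kernel. For inp = 512, temp = round(8 / 1.5) = 5, so the kernel grows to 1x5: wider channel dimensions get a larger receptive field in the attention convolution.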
In its forward pass, MCALayer first permutes the input feature map so that each MCAGate sees a different dimension in the channel position, applies the gates, and permutes the results back. It then fuses the gated feature maps: if no_spatial is False, the outputs of all three branches are averaged; otherwise only the height and width branches are averaged.
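A brief usage sketch of the no_spatial flag (the shapes are illustrative assumptions):

import torch

mca_ns = MCALayer(32, no_spatial=True)   # builds only the height and width branches
y = mca_ns(torch.randn(1, 32, 40, 40))
print(y.shape)                           # torch.Size([1, 32, 40, 40])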
The test block at the bottom creates input_data with shape (1, 32, 640, 480): a batch of one sample with 32 channels, a height of 640, and a width of 480. An MCALayer instance mca is constructed with 32 input channels, input_data is passed through mca, and the result is stored in output. Because MCA only rescales the input features, the printed output shape matches the input shape.
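Since the article positions MCA as plug-and-play, a minimal sketch of dropping MCALayer into a simple convolutional block might look like this (the ConvBlock wrapper, its sizes, and the attachment point are illustrative assumptions, not from the paper):

import torch
from torch import nn

class ConvBlock(nn.Module):
    """Hypothetical conv -> BN -> ReLU block with MCA attached after the activation."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)
        self.mca = MCALayer(out_ch)   # attention on the block's output channels

    def forward(self, x):
        return self.mca(self.act(self.bn(self.conv(x))))

block = ConvBlock(3, 32)
y = block(torch.randn(1, 3, 64, 64))
print(y.shape)   # torch.Size([1, 32, 64, 64])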
Related articles:
YoloV9 improvement strategy: Block improvements | MCA: multidimensional collaborative attention in deep convolutional neural networks for image recognition | plug-and-play
Yolo11 improvement strategy: Block improvements | MCA: multidimensional collaborative attention in deep convolutional neural networks for image recognition | plug-and-play
https://jingjing.blog.csdn.net/article/details/144927053?spm=1001.2014.3001.5502
https://jingjing.blog.csdn.net/article/details/144918039?spm=1001.2014.3001.5502