
ResNet18 Model


Hello everyone, good to see you again. I'm your friend 全栈君.


Structure

Printing the model produces the following structure:
ResNet18(
  (conv1): Conv2D(3, 64, kernel_size=[3, 3], padding=1, data_format=NCHW)
  (bn1): BatchNorm2D(num_features=64, momentum=0.9, epsilon=1e-05)
  (relu): ReLU()
  (avg_pool): AdaptiveAvgPool2D(output_size=1)
  (classifier): Linear(in_features=512, out_features=1000, dtype=float32)
  (layer1): Sequential(
    (0): Block(
      (conv1): Conv2D(64, 64, kernel_size=[3, 3], padding=1, data_format=NCHW)
      (bn1): BatchNorm2D(num_features=64, momentum=0.9, epsilon=1e-05)
      (conv2): Conv2D(64, 64, kernel_size=[3, 3], padding=1, data_format=NCHW)
      (bn2): BatchNorm2D(num_features=64, momentum=0.9, epsilon=1e-05)
      (relu): ReLU()
      (downsample): Identity()
    )
    (1): Block(
      (conv1): Conv2D(64, 64, kernel_size=[3, 3], padding=1, data_format=NCHW)
      (bn1): BatchNorm2D(num_features=64, momentum=0.9, epsilon=1e-05)
      (conv2): Conv2D(64, 64, kernel_size=[3, 3], padding=1, data_format=NCHW)
      (bn2): BatchNorm2D(num_features=64, momentum=0.9, epsilon=1e-05)
      (relu): ReLU()
      (downsample): Identity()
    )
  )
  (layer2): Sequential(
    (0): Block(
      (conv1): Conv2D(64, 128, kernel_size=[3, 3], stride=[2, 2], padding=1, data_format=NCHW)
      (bn1): BatchNorm2D(num_features=128, momentum=0.9, epsilon=1e-05)
      (conv2): Conv2D(128, 128, kernel_size=[3, 3], padding=1, data_format=NCHW)
      (bn2): BatchNorm2D(num_features=128, momentum=0.9, epsilon=1e-05)
      (relu): ReLU()
      (downsample): Sequential(
        (0): Conv2D(64, 128, kernel_size=[1, 1], stride=[2, 2], data_format=NCHW)
        (1): BatchNorm2D(num_features=128, momentum=0.9, epsilon=1e-05)
      )
    )
    (1): Block(
      (conv1): Conv2D(128, 128, kernel_size=[3, 3], padding=1, data_format=NCHW)
      (bn1): BatchNorm2D(num_features=128, momentum=0.9, epsilon=1e-05)
      (conv2): Conv2D(128, 128, kernel_size=[3, 3], padding=1, data_format=NCHW)
      (bn2): BatchNorm2D(num_features=128, momentum=0.9, epsilon=1e-05)
      (relu): ReLU()
      (downsample): Identity()
    )
  )
  (layer3): Sequential(
    (0): Block(
      (conv1): Conv2D(128, 256, kernel_size=[3, 3], stride=[2, 2], padding=1, data_format=NCHW)
      (bn1): BatchNorm2D(num_features=256, momentum=0.9, epsilon=1e-05)
      (conv2): Conv2D(256, 256, kernel_size=[3, 3], padding=1, data_format=NCHW)
      (bn2): BatchNorm2D(num_features=256, momentum=0.9, epsilon=1e-05)
      (relu): ReLU()
      (downsample): Sequential(
        (0): Conv2D(128, 256, kernel_size=[1, 1], stride=[2, 2], data_format=NCHW)
        (1): BatchNorm2D(num_features=256, momentum=0.9, epsilon=1e-05)
      )
    )
    (1): Block(
      (conv1): Conv2D(256, 256, kernel_size=[3, 3], padding=1, data_format=NCHW)
      (bn1): BatchNorm2D(num_features=256, momentum=0.9, epsilon=1e-05)
      (conv2): Conv2D(256, 256, kernel_size=[3, 3], padding=1, data_format=NCHW)
      (bn2): BatchNorm2D(num_features=256, momentum=0.9, epsilon=1e-05)
      (relu): ReLU()
      (downsample): Identity()
    )
  )
  (layer4): Sequential(
    (0): Block(
      (conv1): Conv2D(256, 512, kernel_size=[3, 3], stride=[2, 2], padding=1, data_format=NCHW)
      (bn1): BatchNorm2D(num_features=512, momentum=0.9, epsilon=1e-05)
      (conv2): Conv2D(512, 512, kernel_size=[3, 3], padding=1, data_format=NCHW)
      (bn2): BatchNorm2D(num_features=512, momentum=0.9, epsilon=1e-05)
      (relu): ReLU()
      (downsample): Sequential(
        (0): Conv2D(256, 512, kernel_size=[1, 1], stride=[2, 2], data_format=NCHW)
        (1): BatchNorm2D(num_features=512, momentum=0.9, epsilon=1e-05)
      )
    )
    (1): Block(
      (conv1): Conv2D(512, 512, kernel_size=[3, 3], padding=1, data_format=NCHW)
      (bn1): BatchNorm2D(num_features=512, momentum=0.9, epsilon=1e-05)
      (conv2): Conv2D(512, 512, kernel_size=[3, 3], padding=1, data_format=NCHW)
      (bn2): BatchNorm2D(num_features=512, momentum=0.9, epsilon=1e-05)
      (relu): ReLU()
      (downsample): Identity()
    )
  )
)

Process finished with exit code 0
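In short: a 3x3 stride-1 stem (rather than the 7x7 stride-2 stem plus max pooling of the original ImageNet ResNet18, which makes this variant better suited to small inputs such as 32x32 images), four stages of two basic blocks each, and a global-average-pool plus linear head. The first block of layer2 through layer4 downsamples with stride 2 and matches channels on the shortcut with a 1x1 convolution; every other shortcut is an identity. As a minimal sketch (assuming the ResNet18 class defined in the Code section below), you can trace how the feature map shrinks stage by stage:

import paddle

model = ResNet18()                            # class defined below
x = paddle.randn([1, 3, 32, 32])
x = model.relu(model.bn1(model.conv1(x)))     # stem keeps 32x32: [1, 64, 32, 32]
for name in ["layer1", "layer2", "layer3", "layer4"]:
    x = getattr(model, name)(x)
    print(name, x.shape)
# layer1 [1, 64, 32, 32]
# layer2 [1, 128, 16, 16]
# layer3 [1, 256, 8, 8]
# layer4 [1, 512, 4, 4]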

Code

The full PaddlePaddle implementation:
import paddle
import paddle.nn as nn


class Identity(nn.Layer):
    """Pass-through layer used on the shortcut when no downsampling is needed."""

    def __init__(self):
        super().__init__()

    def forward(self, x):
        return x


class Block(nn.Layer):
    def __init__(self, in_dim, out_dim, stride):
        super().__init__()
        # main branch: two 3x3 convs; only the first one may downsample
        self.conv1 = nn.Conv2D(in_dim, out_dim, 3, stride, 1, bias_attr=False)
        self.bn1 = nn.BatchNorm2D(out_dim)
        self.conv2 = nn.Conv2D(out_dim, out_dim, 3, 1, 1, bias_attr=False)
        self.bn2 = nn.BatchNorm2D(out_dim)
        self.relu = nn.ReLU()
        # shortcut branch: a 1x1 conv + BN when the shape changes, identity otherwise
        if stride == 2 or in_dim != out_dim:
            self.downsample = nn.Sequential(
                nn.Conv2D(in_dim, out_dim, 1, stride, bias_attr=False),
                nn.BatchNorm2D(out_dim))
        else:
            self.downsample = Identity()

    def forward(self, x):
        h = x  # keep the input for the shortcut
        x = self.conv1(x)
        x = self.bn1(x)
        x = self.relu(x)
        x = self.conv2(x)
        x = self.bn2(x)
        identity = self.downsample(h)
        x = x + identity  # residual addition before the final ReLU
        x = self.relu(x)
        return x
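# Sketch (not part of the original code): the first Block of a downsampling
# stage uses stride 2, so the main branch halves the spatial size while the
# 1x1 conv on the shortcut matches the channel count, e.g.
#   blk = Block(64, 128, stride=2)
#   blk(paddle.randn([1, 64, 32, 32])).shape  # -> [1, 128, 16, 16]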


class ResNet18(nn.Layer):
    def __init__(self, in_dim=64, num_classes=1000):
        super().__init__()
        self.in_dim = in_dim  # tracks the current channel width while the stages are built
        # stem (3x3, stride 1: suited to small inputs such as 32x32 images)
        self.conv1 = nn.Conv2D(in_channels=3, out_channels=in_dim, kernel_size=3, stride=1, padding=1, bias_attr=False)
        self.bn1 = nn.BatchNorm2D(in_dim)
        self.relu = nn.ReLU()
        # head
        self.avg_pool = nn.AdaptiveAvgPool2D(1)
        self.classifier = nn.Linear(512, num_classes)
        # blocks: four stages of two Blocks each
        self.layer1 = self.makelayer(64, 2, 1)
        self.layer2 = self.makelayer(128, 2, 2)
        self.layer3 = self.makelayer(256, 2, 2)
        self.layer4 = self.makelayer(512, 2, 2)

    def makelayer(self, out_dim, n_blocks, stride):
        layer_list = []
        # note: self.in_dim belongs to the class, not to this method; only the
        # first Block of a stage downsamples, the rest keep stride 1
        layer_list.append(Block(self.in_dim, out_dim, stride))
        self.in_dim = out_dim
        for i in range(1, n_blocks):
            layer_list.append(Block(self.in_dim, out_dim, stride=1))
        return nn.Sequential(*layer_list)

    def forward(self, x):
        # stem
        x = self.conv1(x)
        x = self.bn1(x)
        x = self.relu(x)

        # blocks
        x = self.layer1(x)
        x = self.layer2(x)
        x = self.layer3(x)
        x = self.layer4(x)

        # head
        x = self.avg_pool(x)
        # print("preflatten:", x.shape)
        x = x.flatten(1)
        # print("flatten:", x.shape)
        x = self.classifier(x)
        # print("classifier:", x.shape)
        return x

def main():
    model = ResNet18()
    x = paddle.randn([2, 3, 32, 32])
    out = model(x)
    print(model)
    # print("x.shape:", x.shape)


if __name__ == "__main__":
    main()
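As a quick sanity check (a minimal sketch, not part of the original script), the head should produce one logit per class for each image in the batch:

model = ResNet18(num_classes=10)            # e.g. a CIFAR-10-sized head
out = model(paddle.randn([2, 3, 32, 32]))
print(out.shape)                            # [2, 10]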

Publisher: 全栈程序员栈长. Please credit the source when reposting: https://javaforall.cn/141235.html (original site: https://javaforall.cn)
