Some research papers mention that they use the outputs of conv3, conv4, and conv5 from a network trained on ImageNet.
If I print the names of the VGG16 layers like this:
import tensorflow as tf

h = 512  # input height/width; 512 matches the summary output below
base_model = tf.keras.applications.VGG16(input_shape=[h, h, 3], include_top=False)
base_model.summary()
I get layers with different names:
input_1 (InputLayer) [(None, 512, 512, 3)] 0
_________________________________________________________________
block1_conv1 (Conv2D) (None, 512, 512, 64) 1792
_________________________________________________________________
block1_conv2 (Conv2D) (None, 512, 512, 64) 36928
_________________________________________________________________
block1_pool (MaxPooling2D) (None, 256, 256, 64) 0
_________________________________________________________________
block2_conv1 (Conv2D) (None, 256, 256, 128) 73856
_________________________________________________________________
block2_conv2 (Conv2D) (None, 256, 256, 128) 147584
_________________________________________________________________
block2_pool (MaxPooling2D) (None, 128, 128, 128) 0
_________________________________________________________________
block3_conv1 (Conv2D) (None, 128, 128, 256) 295168
_________________________________________________________________
block3_conv2 (Conv2D) (None, 128, 128, 256) 590080
_________________________________________________________________
block3_conv3 (Conv2D) (None, 128, 128, 256) 590080
_________________________________________________________________
block3_pool (MaxPooling2D) (None, 64, 64, 256) 0
.....
So which layers do conv3, conv4, and conv5 refer to? Do they mean the 3rd, 4th, and 5th layers before each pool (since VGG16 has 5 stages)?
Posted on 2020-07-15 14:20:33
The architecture of VGG16 can be obtained with the following code:
import tensorflow as tf
from tensorflow.keras.applications import VGG16
model = VGG16(include_top=False, weights = 'imagenet')
model.summary()
The architecture of VGG16 looks like this:
Model: "vgg16"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_2 (InputLayer) [(None, None, None, 3)] 0
_________________________________________________________________
block1_conv1 (Conv2D) (None, None, None, 64) 1792
_________________________________________________________________
block1_conv2 (Conv2D) (None, None, None, 64) 36928
_________________________________________________________________
block1_pool (MaxPooling2D) (None, None, None, 64) 0
_________________________________________________________________
block2_conv1 (Conv2D) (None, None, None, 128) 73856
_________________________________________________________________
block2_conv2 (Conv2D) (None, None, None, 128) 147584
_________________________________________________________________
block2_pool (MaxPooling2D) (None, None, None, 128) 0
_________________________________________________________________
block3_conv1 (Conv2D) (None, None, None, 256) 295168
_________________________________________________________________
block3_conv2 (Conv2D) (None, None, None, 256) 590080
_________________________________________________________________
block3_conv3 (Conv2D) (None, None, None, 256) 590080
_________________________________________________________________
block3_pool (MaxPooling2D) (None, None, None, 256) 0
_________________________________________________________________
block4_conv1 (Conv2D) (None, None, None, 512) 1180160
_________________________________________________________________
block4_conv2 (Conv2D) (None, None, None, 512) 2359808
_________________________________________________________________
block4_conv3 (Conv2D) (None, None, None, 512) 2359808
_________________________________________________________________
block4_pool (MaxPooling2D) (None, None, None, 512) 0
_________________________________________________________________
block5_conv1 (Conv2D) (None, None, None, 512) 2359808
_________________________________________________________________
block5_conv2 (Conv2D) (None, None, None, 512) 2359808
_________________________________________________________________
block5_conv3 (Conv2D) (None, None, None, 512) 2359808
_________________________________________________________________
block5_pool (MaxPooling2D) (None, None, None, 512) 0
=================================================================
Total params: 14,714,688
Trainable params: 14,714,688
Non-trainable params: 0
Based on the architecture above, in the commonly used sense:

Conv3 refers to the output of the layer block3_pool (MaxPooling2D)
Conv4 refers to the output of the layer block4_pool (MaxPooling2D)
Conv5 refers to the output of the layer block5_pool (MaxPooling2D)

If you feel the explanation I have given is incorrect, please share the research papers you are referring to and I will update the answer accordingly.
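As a minimal sketch of this interpretation (the mapping of conv3/conv4/conv5 to the block pool outputs and the 512x512 input size are assumptions, not something the papers necessarily state), a small Keras feature extractor can return all three feature maps at once:

import tensorflow as tf
from tensorflow.keras.applications import VGG16

base_model = VGG16(include_top=False, weights='imagenet')

# Assumed mapping: conv3 / conv4 / conv5 -> outputs of block3_pool / block4_pool / block5_pool
layer_names = ['block3_pool', 'block4_pool', 'block5_pool']
outputs = [base_model.get_layer(name).output for name in layer_names]
feature_extractor = tf.keras.Model(inputs=base_model.input, outputs=outputs)

# Example: run a dummy 512x512 image through the extractor
images = tf.random.uniform([1, 512, 512, 3])
conv3, conv4, conv5 = feature_extractor(images)
print(conv3.shape)  # (1, 64, 64, 256)
print(conv4.shape)  # (1, 32, 32, 512)
print(conv5.shape)  # (1, 16, 16, 512)

With a 512x512 input, the three outputs are downsampled by factors of 8, 16, and 32 respectively, which is why the spatial sizes are 64, 32, and 16.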
https://stackoverflow.com/questions/62874773