# CNN-AlexNet

• NIN model: the main idea of Network-in-Network is to replace the traditional convolution operation with a fully connected multilayer perceptron, so as to obtain a more complete representation of the features. And because the feature representation has already been enriched in this way, the fully connected layers at the end of a traditional CNN are also replaced by a global average pooling layer: the authors argue that the feature maps at that point are already confident enough for classification, so the pooled values can be fed directly into softmax to compute the loss.

Figure: the NIN model
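The global-average-pooling idea can be sketched with NumPy (the batch size, spatial size, and class count below are illustrative, not from the paper):

```python
import numpy as np

# Hypothetical final feature map: batch of 2, 4x4 spatial, 10 channels
# (NIN arranges one channel per class at this stage).
feature_map = np.random.rand(2, 4, 4, 10)

# Global average pooling: collapse each channel's spatial map to a single
# scalar, giving one confidence value per class -- no fully connected layer.
logits = feature_map.mean(axis=(1, 2))          # shape (2, 10)

# Softmax over the pooled values yields the class probabilities directly.
exp = np.exp(logits - logits.max(axis=1, keepdims=True))
probs = exp / exp.sum(axis=1, keepdims=True)
```

Dropping the fully connected head this way removes most of the network's parameters while keeping one output per class.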

• Apply dimension reduction and projection wherever the computational cost would otherwise grow sharply: a 1x1 convolution is inserted before the 3x3 and 5x5 convolutions to cut the amount of computation, and it also contributes a rectified-linear activation. In the figure below, the left side shows the module before dimension reduction and the right side shows it after.
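The saving can be checked with a rough multiply count (the feature-map and channel sizes below are illustrative, not taken from the paper):

```python
# Rough multiply count for one conv layer: H * W * C_in * C_out * k * k.
def conv_mults(h, w, c_in, c_out, k):
    return h * w * c_in * c_out * k * k

H, W = 28, 28            # assumed feature-map size
C_IN, C_OUT = 192, 128   # assumed input/output channel counts
REDUCE = 96              # assumed width of the 1x1 bottleneck

# Direct 5x5 convolution.
direct = conv_mults(H, W, C_IN, C_OUT, 5)

# 1x1 reduction first, then the same 5x5 convolution on fewer channels.
reduced = conv_mults(H, W, C_IN, REDUCE, 1) + conv_mults(H, W, REDUCE, C_OUT, 5)

print(direct, reduced, direct / reduced)
```

With these sizes the bottleneck cuts the multiply count by nearly half, and the saving grows as the 1x1 width shrinks relative to the input channels.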

# Other Network Architectures

```python
import tensorflow as tf


def get_variable(name, shape):
    # Helper assumed by the original code (its implementation was not shown):
    # create or reuse a variable in the current variable scope.
    return tf.get_variable(name, shape,
                           initializer=tf.truncated_normal_initializer(stddev=0.1))


def vgg_network(x, y):
    net1_kernel_size = 32
    net3_kernel_size = 64
    net5_kernel_size_1 = 128
    net5_kernel_size_2 = 128
    net7_kernel_size_1 = 256
    net7_kernel_size_2 = 256
    net9_kernel_size_1 = 256
    net9_kernel_size_2 = 256
    net11_unit_size = 1000
    net12_unit_size = 1000
    net13_unit_size = 17

    # conv3-32 + LRN
    with tf.variable_scope('net1'):
        net = tf.nn.conv2d(x, filter=get_variable('w', [3, 3, 3, net1_kernel_size]),
                           strides=[1, 1, 1, 1], padding='SAME')
        net = tf.nn.relu(net)
        # lrn(input, depth_radius=5, bias=1, alpha=1, beta=0.5, name=None)
        # Local response normalization over the convolution outputs;
        # depth_radius, bias, alpha, beta correspond to n, k, α, β in the
        # LRN formula on the slides.
        net = tf.nn.lrn(net)
    # max pool
    with tf.variable_scope('net2'):
        net = tf.nn.max_pool(net, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')
    # conv3-64
    with tf.variable_scope('net3'):
        net = tf.nn.conv2d(net, filter=get_variable('w', [3, 3, net1_kernel_size, net3_kernel_size]),
                           strides=[1, 1, 1, 1], padding='SAME')
        net = tf.nn.relu(net)
    # max pool
    with tf.variable_scope('net4'):
        net = tf.nn.max_pool(net, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')
    # conv3-128 conv3-128
    with tf.variable_scope('net5'):
        net = tf.nn.conv2d(net, filter=get_variable('w1', [3, 3, net3_kernel_size, net5_kernel_size_1]),
                           strides=[1, 1, 1, 1], padding='SAME')
        net = tf.nn.relu(net)
        net = tf.nn.conv2d(net, filter=get_variable('w2', [3, 3, net5_kernel_size_1, net5_kernel_size_2]),
                           strides=[1, 1, 1, 1], padding='SAME')
        net = tf.nn.relu(net)
    # max pool
    with tf.variable_scope('net6'):
        net = tf.nn.max_pool(net, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')
    # conv3-256 conv3-256
    with tf.variable_scope('net7'):
        net = tf.nn.conv2d(net, filter=get_variable('w1', [3, 3, net5_kernel_size_2, net7_kernel_size_1]),
                           strides=[1, 1, 1, 1], padding='SAME')
        net = tf.nn.relu(net)
        net = tf.nn.conv2d(net, filter=get_variable('w2', [3, 3, net7_kernel_size_1, net7_kernel_size_2]),
                           strides=[1, 1, 1, 1], padding='SAME')
        net = tf.nn.relu(net)
    # max pool
    with tf.variable_scope('net8'):
        net = tf.nn.max_pool(net, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')
    # conv3-256 conv3-256
    with tf.variable_scope('net9'):
        net = tf.nn.conv2d(net, filter=get_variable('w1', [3, 3, net7_kernel_size_2, net9_kernel_size_1]),
                           strides=[1, 1, 1, 1], padding='SAME')
        net = tf.nn.relu(net)
        net = tf.nn.conv2d(net, filter=get_variable('w2', [3, 3, net9_kernel_size_1, net9_kernel_size_2]),
                           strides=[1, 1, 1, 1], padding='SAME')
        net = tf.nn.relu(net)
    # max pool
    with tf.variable_scope('net10'):
        net = tf.nn.max_pool(net, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')
    # fc
    with tf.variable_scope('net11'):
        # Flatten the 4-D feature map into a 2-D matrix.
        shape = net.get_shape()
        feature_number = int(shape[1] * shape[2] * shape[3])
        net = tf.reshape(net, shape=[-1, feature_number])
        # fully connected
        net = tf.add(tf.matmul(net, get_variable('w', [feature_number, net11_unit_size])),
                     get_variable('b', [net11_unit_size]))
    # fc
    with tf.variable_scope('net12'):
        net = tf.add(tf.matmul(net, get_variable('w', [net11_unit_size, net12_unit_size])),
                     get_variable('b', [net12_unit_size]))
    # fc
    with tf.variable_scope('net13'):
        net = tf.add(tf.matmul(net, get_variable('w', [net12_unit_size, net13_unit_size])),
                     get_variable('b', [net13_unit_size]))
    # softmax
    with tf.variable_scope('net14'):
        act = tf.nn.softmax(net)

    return act
```
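The flatten step in `net11` depends on how much the five pooling layers shrink the input. A quick sanity check of the spatial sizes, assuming a 224x224 input as in the original VGG paper:

```python
# Each of the five 2x2 / stride-2 max-pool layers halves the spatial size
# (SAME padding rounds up). Assuming a 224x224 input:
size = 224
for _ in range(5):
    size = (size + 1) // 2   # SAME padding: ceil(size / 2)

channels = 256               # output channels of the last conv block (net9)
feature_number = size * size * channels
print(size, feature_number)  # 7, 12544
```

So the first fully connected layer in `net11` would multiply a 12544-wide vector by a `[12544, 1000]` weight matrix for this input size.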
