# TensorFlow Basics: Implementing Convolution and Pooling

```python
def conv2d(input,
           filter,
           strides,
           padding,
           use_cudnn_on_gpu=None,
           data_format=None,
           name=None)
```
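The `padding` argument takes `'SAME'` (zero-pad so the output size is ⌈input / stride⌉) or `'VALID'` (no padding). As a sketch of what `conv2d` computes for a single channel and a single filter, here is a plain-NumPy cross-correlation; this is illustrative only, not TensorFlow's implementation, and the `conv2d_valid` helper name is my own:

```python
import numpy as np

def conv2d_valid(image, kernel, stride=1):
    """Single-channel cross-correlation with VALID padding,
    mirroring what tf.nn.conv2d computes for one filter."""
    ih, iw = image.shape
    kh, kw = kernel.shape
    oh = (ih - kh) // stride + 1
    ow = (iw - kw) // stride + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            # Slide the kernel over the input and take the elementwise sum
            patch = image[i * stride:i * stride + kh, j * stride:j * stride + kw]
            out[i, j] = np.sum(patch * kernel)
    return out

image = np.array([[1., -1., 0.],
                  [-1., 2., 1.],
                  [0., 2., -2.]])
kernel = np.array([[1., -1.],
                   [0., 2.]])
print(conv2d_valid(image, kernel))  # 2x2 output for a 3x3 input and 2x2 kernel
```

With stride 1 and `'VALID'` padding, a 3×3 input and 2×2 kernel give a 2×2 output.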

```python
def get_variable(name,
                 shape=None,
                 dtype=None,
                 initializer=None,
                 regularizer=None,
                 trainable=True,
                 collections=None,
                 caching_device=None,
                 partitioner=None,
                 validate_shape=True,
                 use_resource=None,
                 custom_getter=None)
```

The typical pattern is: create the filter weights and biases with `tf.get_variable`, convolve, add the bias, then apply the activation (note it is `tf.nn.relu`, not `tf.cnn.relu`):

```python
filter_weight = tf.get_variable(name, shape, initializer=initializer)
biases = tf.get_variable(name, shape, initializer=initializer)
conv = tf.nn.conv2d(input, filter_weight, strides, padding)
bias = tf.nn.bias_add(conv, biases)
actived_conv = tf.nn.relu(bias)
```

```python
def avg_pool(value,
             ksize,
             strides,
             padding,
             data_format="NHWC",
             name=None)
```
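To make the pooling semantics concrete, here is a minimal NumPy sketch of average pooling with `'SAME'` padding; the `avg_pool_same` helper is an illustrative name, not a TensorFlow API. Like `tf.nn.avg_pool`, padded positions are excluded from the average:

```python
import math
import numpy as np

def avg_pool_same(image, k=2, stride=2):
    """Single-channel average pooling with SAME padding; as in
    tf.nn.avg_pool, padded positions do not count toward the mean."""
    ih, iw = image.shape
    oh = math.ceil(ih / stride)
    ow = math.ceil(iw / stride)
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            # NumPy slicing truncates at the edges, which naturally
            # drops the padded positions from the mean
            patch = image[i * stride:i * stride + k, j * stride:j * stride + k]
            out[i, j] = patch.mean()
    return out

image = np.array([[1., -1., 0.],
                  [-1., 2., 1.],
                  [0., 2., -2.]])
print(avg_pool_same(image))
```

On the 3×3 matrix used in the example below, this reproduces the `pooled_M` result: the corner windows that fall off the edge average only the in-bounds values.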

```python
import tensorflow as tf
import numpy as np

M = np.array([
    [[1], [-1], [0]],
    [[-1], [2], [1]],
    [[0], [2], [-2]]
])
# Print the shape of the input matrix
print("Matrix shape is: ", M.shape)

# Convolution: a 2x2 filter with one input and one output channel
filter_weight = tf.get_variable('weights', [2, 2, 1, 1],
                                initializer=tf.constant_initializer([[1, -1],
                                                                     [0, 2]]))
biases = tf.get_variable('biases', [1],
                         initializer=tf.constant_initializer(1))

# Reshape the input to the NHWC layout TensorFlow expects, with batch = 1
M = np.asarray(M, dtype='float32')
M = M.reshape(1, 3, 3, 1)

x = tf.placeholder('float32', [1, None, None, 1])
conv = tf.nn.conv2d(x, filter_weight, strides=[1, 2, 2, 1], padding='SAME')
bias = tf.nn.bias_add(conv, biases)
pool = tf.nn.avg_pool(x, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')
with tf.Session() as sess:
    tf.global_variables_initializer().run()
    # Feed M into the operations defined above
    convoluted_M = sess.run(bias, feed_dict={x: M})
    pooled_M = sess.run(pool, feed_dict={x: M})

    print("convoluted_M: \n", convoluted_M)
    print("pooled_M: \n", pooled_M)
```

Output:

```
convoluted_M: 
 [[[[ 7.] [ 1.]]
  [[-1.] [-1.]]]]
pooled_M: 
 [[[[ 0.25] [ 0.5 ]]
  [[ 1. ] [-2. ]]]]
```

With `padding='SAME'` and stride 2, the output size is ⌈3 / 2⌉ = 2, so both the convolved and the pooled results are 2×2.
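The output sizes can be checked with the standard formulas: ⌈in / stride⌉ for `'SAME'` and ⌈(in − filter + 1) / stride⌉ for `'VALID'`. A small sketch (`out_size` is an illustrative helper, not a TensorFlow function):

```python
import math

def out_size(in_size, filter_size, stride, padding):
    """Output spatial size for tf.nn.conv2d / pooling ops."""
    if padding == 'SAME':
        return math.ceil(in_size / stride)
    # 'VALID': only positions where the filter fits entirely
    return math.ceil((in_size - filter_size + 1) / stride)

print(out_size(3, 2, 2, 'SAME'))   # 2, matching conv and pool above
print(out_size(3, 2, 2, 'VALID'))  # 1
```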
