# TensorFlow Basics: Implementing Convolution and Pooling

```python
def conv2d(input,
           filter,
           strides,
           padding,
           use_cudnn_on_gpu=None,
           data_format=None,
           name=None)
```
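To make the arithmetic concrete, here is a plain-NumPy sketch of what a strided, single-channel `conv2d` with `'SAME'` padding computes (the helper name `conv2d_naive` is ours, not TensorFlow's; like `tf.nn.conv2d`, it slides the unflipped filter, i.e. cross-correlation):

```python
import numpy as np

def conv2d_naive(image, kernel, stride=2):
    """Strided 2-D cross-correlation with 'SAME' zero padding on a
    single-channel image. For stride 2 and a 2x2 kernel, SAME padding
    adds at most one row/column of zeros at the bottom/right."""
    h, w = image.shape
    kh, kw = kernel.shape
    out_h = -(-h // stride)          # ceil(h / stride): SAME output size
    out_w = -(-w // stride)
    # Zero-pad on the bottom/right so every window is full
    padded = np.zeros((h + kh - 1, w + kw - 1), dtype=float)
    padded[:h, :w] = image
    out = np.zeros((out_h, out_w), dtype=float)
    for i in range(out_h):
        for j in range(out_w):
            window = padded[i*stride:i*stride+kh, j*stride:j*stride+kw]
            out[i, j] = np.sum(window * kernel)
    return out
```

Running this on the 3x3 matrix and 2x2 filter used below (then adding the bias of 1) reproduces the convolution result printed later in this post.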

```python
def get_variable(name,
                 shape=None,
                 dtype=None,
                 initializer=None,
                 regularizer=None,
                 trainable=True,
                 collections=None,
                 caching_device=None,
                 partitioner=None,
                 validate_shape=True,
                 use_resource=None,
                 custom_getter=None)
```

```python
filter_weight = tf.get_variable('weights', [2, 2, 1, 1])
biases = tf.get_variable('biases', [1])
bias = tf.nn.bias_add(conv, biases)   # conv comes from tf.nn.conv2d
actived_conv = tf.nn.relu(bias)
```

```python
def avg_pool(value,
             ksize,
             strides,
             padding,
             data_format="NHWC",
             name=None)
```
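Average pooling can be sketched in NumPy the same way (the helper name `avg_pool_naive` is ours). One detail worth noting: with `'SAME'` padding, `tf.nn.avg_pool` averages only over the cells that actually fall inside the image, so padded zeros do not dilute the mean:

```python
import numpy as np

def avg_pool_naive(image, ksize=2, stride=2):
    """Strided average pooling with 'SAME' padding on a single-channel
    image. Each output is the mean of the window cells that lie inside
    the image; out-of-bounds (padded) cells are excluded from the count."""
    h, w = image.shape
    out_h = -(-h // stride)          # ceil(h / stride): SAME output size
    out_w = -(-w // stride)
    out = np.zeros((out_h, out_w), dtype=float)
    for i in range(out_h):
        for j in range(out_w):
            window = image[i*stride:min(i*stride + ksize, h),
                           j*stride:min(j*stride + ksize, w)]
            out[i, j] = window.mean()
    return out
```

On the 3x3 input below, the bottom-right output averages a single cell (−2), which is why the pooled result contains −2 rather than −0.5.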

```python
import tensorflow as tf
import numpy as np

M = np.array([[[1], [-1], [0]],
              [[-1], [2], [1]],
              [[0], [2], [-2]]])
# Print the shape of the input matrix
print("Matrix shape is: ", M.shape)

# Convolution: a 2x2 filter with one input and one output channel
filter_weight = tf.get_variable('weights', [2, 2, 1, 1],
                                initializer=tf.constant_initializer([[1, -1],
                                                                     [0, 2]]))
biases = tf.get_variable('biases', [1],
                         initializer=tf.constant_initializer(1))

# Reshape the input into the NHWC format TensorFlow expects, with batch = 1
M = np.asarray(M, dtype='float32')
M = M.reshape(1, 3, 3, 1)

x = tf.placeholder('float32', [1, None, None, 1])
conv = tf.nn.conv2d(x, filter_weight, strides=[1, 2, 2, 1], padding='SAME')
bias = tf.nn.bias_add(conv, biases)
pool = tf.nn.avg_pool(x, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1],
                      padding='SAME')

with tf.Session() as sess:
    tf.global_variables_initializer().run()
    # Feed M into the operations defined above
    convoluted_M = sess.run(bias, feed_dict={x: M})
    pooled_M = sess.run(pool, feed_dict={x: M})
    print("convoluted_M: \n", convoluted_M)
    print("pooled_M: \n", pooled_M)
```

```
convoluted_M:
 [[[[ 7.]
   [ 1.]]
  [[-1.]
   [-1.]]]]
pooled_M:
 [[[[ 0.25]
   [ 0.5 ]]
  [[ 1.  ]
   [-2.  ]]]]
```

With `padding='SAME'` and stride 2, the output size is ⌈3/2⌉ = 2, which matches the 2x2 results above; with `'VALID'` padding it would instead be ⌈(3 − 2 + 1)/2⌉ = 1.
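The two output-size formulas can be checked with a couple of lines (a small sketch; the function names are ours):

```python
import math

def same_out_size(in_size, stride):
    # 'SAME' padding: output = ceil(in / stride), independent of filter size
    return math.ceil(in_size / stride)

def valid_out_size(in_size, filter_size, stride):
    # 'VALID' padding: output = ceil((in - filter + 1) / stride)
    return math.ceil((in_size - filter_size + 1) / stride)

print(same_out_size(3, 2))       # 2 -- matches the 2x2 results above
print(valid_out_size(3, 2, 2))   # 1
```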


https://blog.csdn.net/chaipp0607/article/details/61192003

