TF-CNN


Lowering your head is not admitting defeat, but seeing your own path clearly; raising your head is not arrogance, but seeing your own sky. — Kobe Bryant

TF-ConvNets

A convolutional neural network differs from an ordinary feed-forward network in that it adds convolution layers, pooling layers, and a flatten layer; the activation of the final layer is softmax.

import tensorflow as tf
# MNIST handwritten digit dataset
import tensorflow.examples.tutorials.mnist.input_data as input_data
import numpy as np
import matplotlib.pyplot as plt
from time import time
import os
# Suppress INFO and WARNING logs; only ERROR and FATAL are printed
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'

'''
Reusable helper functions
'''
# Display one handwritten digit image
def show(image):
    plt.imshow(image.reshape(28, 28), cmap='binary')
    plt.show()
# Plot up to 25 images with their labels and (optionally) predictions
def plot_image_label_prediction(images, labels, prediction=[], idx=0, num=10):
    fig = plt.gcf()
    fig.set_size_inches(12, 14)
    if num > 25:
        num = 25
    for i in range(num):
        ax = plt.subplot(5, 5, 1 + i)
        ax.imshow(np.reshape(images[idx], (28, 28)), cmap='binary')
        title = "label =" + str(np.argmax(labels[idx]))
        if len(prediction) > 0:
            title += ", prediction = " + str(prediction[idx])
        ax.set_title(title, fontsize=10)
        ax.set_xticks([])
        ax.set_yticks([])
        idx += 1
    plt.show()


# Generic fully connected layer (defined for reuse; the CNN below builds its layers explicitly)
def layer(out_dim, in_dim, inputs, activation=None):
    w = tf.Variable(tf.random_normal([in_dim, out_dim]))  # weights
    b = tf.Variable(tf.random_normal([1, out_dim]))       # bias
    wbx = tf.matmul(inputs, w) + b                        # affine transform
    # Apply the activation function, if any
    if activation is None:
        outputs = wbx
    else:
        outputs = activation(wbx)
    return outputs
# Weight variable, initialized from a truncated normal distribution
def weight(shape):
    return tf.Variable(tf.truncated_normal(shape, stddev=0.1), name='w')
# Bias variable, initialized to a small constant
def bias(shape):
    return tf.Variable(tf.constant(0.1, shape=shape), name='b')
# 2-D convolution with stride 1; 'SAME' padding keeps the spatial size
def conv2d(x, w):
    return tf.nn.conv2d(x, w, strides=[1, 1, 1, 1], padding='SAME')
# 2x2 max pooling with stride 2 (halves height and width)
def max_pool_2x2(x):
    return tf.nn.max_pool(x, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')



# Download (or load) the dataset
mnist = input_data.read_data_sets("data/MNIST_data/", one_hot=True)
# Print the first training label, one-hot and as a digit
print("labels[0]: ", mnist.train.labels[0])
print("labels[0]: ", np.argmax(mnist.train.labels[0]))

# Two convolution + pooling blocks
with tf.name_scope("Input_layer"):
    x = tf.placeholder("float", [None, 28*28], name='x')  # placeholder for flattened images
    x_image = tf.reshape(x, [-1, 28, 28, 1])
with tf.name_scope("C1_Conv"):
    w1 = weight([5, 5, 1, 16])
    b1 = bias([16])
    Conv1 = conv2d(x_image, w1) + b1
    C1_Conv = tf.nn.relu(Conv1)
with tf.name_scope("C1_Pool"):
    C1_Pool = max_pool_2x2(C1_Conv)   # 28x28 -> 14x14
with tf.name_scope("C2_Conv"):
    w2 = weight([5, 5, 16, 36])
    b2 = bias([36])
    Conv2 = conv2d(C1_Pool, w2) + b2
    C2_Conv = tf.nn.relu(Conv2)
with tf.name_scope("C2_Pool"):
    C2_Pool = max_pool_2x2(C2_Conv)   # 14x14 -> 7x7

# Flatten layer: 7 x 7 x 36 = 1764 features
with tf.name_scope("D_Flat"):
    D_Flat = tf.reshape(C2_Pool, [-1, 1764])
# Hidden (fully connected) layer
with tf.name_scope("D_Hidden_Layer"):
    w3 = weight([1764, 128])
    b3 = bias([128])
    D_Hidden = tf.nn.relu(tf.matmul(D_Flat, w3) + b3)
    D_Hidden_Dropout = tf.nn.dropout(D_Hidden, keep_prob=0.8)
# Output layer
with tf.name_scope("Output_layer"):
    w4 = weight([128, 10])
    b4 = bias([10])
    logits = tf.matmul(D_Hidden_Dropout, w4) + b4
    y_pre = tf.nn.softmax(logits)
# Loss and optimizer
with tf.name_scope("Optimizer"):
    y_label = tf.placeholder("float", [None, 10], name="y_label")
    # Pass the raw logits here: softmax_cross_entropy_with_logits applies softmax internally
    loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y_label))
    optimizer = tf.train.AdamOptimizer(learning_rate=0.001).minimize(loss)
# Accuracy evaluation
with tf.name_scope("evaluate_accuracy"):
    correct_predict = tf.equal(tf.argmax(y_label, 1), tf.argmax(y_pre, 1))
    accuracy = tf.reduce_mean(tf.cast(correct_predict, "float"))
# Hyperparameters
epochs = 15
batch_size = 100
total_batches = int(mnist.train.num_examples / batch_size)

# Lists to record per-epoch results
loss_list = []
epochs_list = []
accuracy_list = []


start_time = time()
# Initialize global variables
init = tf.global_variables_initializer()
with tf.Session() as sess:
    sess.run(init)
    print('-' * 24)
    for i in range(epochs):
        for j in range(total_batches):
            batch_x, batch_y = mnist.train.next_batch(batch_size)
            sess.run(optimizer, feed_dict={x: batch_x, y_label: batch_y})
        # Evaluate on the validation set after each epoch
        los, acc = sess.run([loss, accuracy],
                            feed_dict={x: mnist.validation.images, y_label: mnist.validation.labels})
        epochs_list.append(i)
        loss_list.append(los)
        accuracy_list.append(acc)
        print("Train Epoch: ", "%2d, " % (i + 1), "Loss = {:.9f}, ".format(los), "Accuracy = ", acc)
        print("-" * 24)
    duration = time() - start_time
    print("Train finished task: ", duration)
    print("-" * 24)
    # Final accuracy on the test set
    print("Accuracy: ", sess.run(accuracy, feed_dict={x: mnist.test.images, y_label: mnist.test.labels}))
    prediction_result = sess.run(tf.argmax(y_pre, 1), feed_dict={x: mnist.test.images})
    print("predict result: ", prediction_result[:10])

    plot_image_label_prediction(mnist.test.images, mnist.test.labels, prediction_result, num=25)
    # Write the graph for TensorBoard (no summaries are registered, so merge_all returns None)
    merged = tf.summary.merge_all()
    train_writer = tf.summary.FileWriter("log/tfCNN/", sess.graph)

Run results

labels[0]:  [0. 0. 0. 0. 0. 0. 0. 1. 0. 0.]
labels[0]:  7
------------------------
Train Epoch:   1,  Loss = 1.583467722,  Accuracy =  0.8794
------------------------
Train Epoch:   2,  Loss = 1.575986981,  Accuracy =  0.884
------------------------
Train Epoch:   3,  Loss = 1.484323025,  Accuracy =  0.9774
------------------------
Train Epoch:   4,  Loss = 1.478819370,  Accuracy =  0.9828
------------------------
Train Epoch:   5,  Loss = 1.477949262,  Accuracy =  0.9838
------------------------
Train Epoch:   6,  Loss = 1.478563309,  Accuracy =  0.983
------------------------
Train Epoch:   7,  Loss = 1.475089312,  Accuracy =  0.9864
------------------------
Train Epoch:   8,  Loss = 1.475567698,  Accuracy =  0.9858
------------------------
Train Epoch:   9,  Loss = 1.474923730,  Accuracy =  0.9864
------------------------
Train Epoch:  10,  Loss = 1.473058224,  Accuracy =  0.9884
------------------------
Train Epoch:  11,  Loss = 1.471417427,  Accuracy =  0.99
------------------------
Train Epoch:  12,  Loss = 1.473668575,  Accuracy =  0.988
------------------------
Train Epoch:  13,  Loss = 1.472185969,  Accuracy =  0.9886
------------------------
Train Epoch:  14,  Loss = 1.474017739,  Accuracy =  0.9866
------------------------
Train Epoch:  15,  Loss = 1.472573996,  Accuracy =  0.9886
------------------------
Train finished task:  1015.9626131057739
------------------------
Accuracy:  0.9875
predict result:  [7 2 1 0 4 1 4 9 5 9]

Key functions explained

1.tf.nn.conv2d

W = tf.truncated_normal([5, 5, 1, 32], stddev=0.1)
tf.nn.conv2d(x, W, strides=[1, 2, 2, 1], padding='SAME')

1.shape = [5,5,1,32]: the kernel is 5x5, the input has 1 channel, and there are 32 kernels, so the layer outputs 32 feature maps
2.strides=[1, 2, 2, 1]: the first and last entries must be 1; the middle two are the strides along the height and width dimensions
3.padding='SAME': the input is zero-padded so the kernel can also be applied at positions near the border; with stride s the output spatial size is ceil(input_size / s), as the shape check below shows
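
A minimal shape check of the call above (a sketch assuming TensorFlow 1.x; shapes are known at graph-construction time, so no session is needed):

import tensorflow as tf

x = tf.placeholder(tf.float32, [None, 28, 28, 1])  # batch of 28x28 grayscale images
W = tf.Variable(tf.truncated_normal([5, 5, 1, 32], stddev=0.1))
conv = tf.nn.conv2d(x, W, strides=[1, 2, 2, 1], padding='SAME')
print(conv.shape)  # (?, 14, 14, 32): stride 2 halves height and width, 32 output channels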

2.tf.nn.max_pool

tf.nn.max_pool(value, ksize, strides, padding, name=None)
1.value: the pooling input, usually a feature map of shape [batch, height, width, channels]
2.ksize: the pooling window size, typically [1, height, width, 1]
3.strides: as in convolution, the stride of the window along each dimension, typically [1, stride, stride, 1]
4.padding: as in convolution, either 'SAME' or 'VALID'
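
A quick sketch confirming that a 2x2 window with stride 2 halves the spatial size (assuming TensorFlow 1.x):

import tensorflow as tf

fmap = tf.placeholder(tf.float32, [None, 28, 28, 16])  # a feature map
pooled = tf.nn.max_pool(fmap, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')
print(pooled.shape)  # (?, 14, 14, 16): channels unchanged, height and width halved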

3.tf.nn.softmax_cross_entropy_with_logits

The loss function
tf.nn.softmax_cross_entropy_with_logits(logits, labels, name=None)
1.logits: the raw (pre-softmax) output of the network's last layer;
with a batch its shape is [batch_size, num_classes], for a single sample it is [num_classes]
2.labels: the ground-truth labels, with the same shape as logits
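
A small numeric sketch (assuming TensorFlow 1.x). The op applies softmax to logits internally, which is why the network above feeds it the raw pre-softmax output:

import tensorflow as tf

logits = tf.constant([[2.0, 1.0, 0.1]])  # raw scores for 3 classes
labels = tf.constant([[1.0, 0.0, 0.0]])  # one-hot ground truth
loss = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=labels)
with tf.Session() as sess:
    print(sess.run(loss))  # ~0.417, i.e. -log(softmax(logits)[0])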

4.tf.equal

A beginner's staple: an element-wise equality test that returns a boolean tensor.
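
For example (assuming TensorFlow 1.x):

import tensorflow as tf

a = tf.constant([1, 2, 3, 4])
b = tf.constant([1, 0, 3, 0])
with tf.Session() as sess:
    print(sess.run(tf.equal(a, b)))  # [ True False  True False]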

5.tf.argmax

tf.argmax(input, axis):
returns the index of the maximum value along the given axis.
For a 1-D tensor with axis 0 it returns a single value; for a matrix with axis 1 it returns a vector whose entries are the column indices of each row's maximum element.
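
A short illustration of both cases (assuming TensorFlow 1.x):

import tensorflow as tf

v = tf.constant([0.1, 0.7, 0.2])
m = tf.constant([[0.1, 0.7, 0.2],
                 [0.9, 0.05, 0.05]])
with tf.Session() as sess:
    print(sess.run(tf.argmax(v, 0)))  # 1: index of the vector's maximum
    print(sess.run(tf.argmax(m, 1)))  # [1 0]: per-row indices of the maxima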

6.tf.summary.merge_all

Merges every summary op registered in the graph into a single op, so they can all be evaluated and written to TensorBoard in one run call.
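
A minimal usage sketch (assuming TensorFlow 1.x; the log directory 'log/demo/' is just an example path):

import tensorflow as tf

loss = tf.constant(0.5)
tf.summary.scalar('loss', loss)  # register a scalar summary
merged = tf.summary.merge_all()  # single op that evaluates all registered summaries
with tf.Session() as sess:
    writer = tf.summary.FileWriter('log/demo/', sess.graph)
    writer.add_summary(sess.run(merged), global_step=0)
    writer.close()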

7.tf.nn.softmax

The softmax activation function, defined as
softmax = tf.exp(logits) / tf.reduce_sum(tf.exp(logits), axis)
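
Checking the definition numerically (assuming TensorFlow 1.x):

import tensorflow as tf

logits = tf.constant([[2.0, 1.0, 0.1]])
manual = tf.exp(logits) / tf.reduce_sum(tf.exp(logits), axis=1, keepdims=True)
with tf.Session() as sess:
    print(sess.run(tf.nn.softmax(logits)))  # [[0.659 0.242 0.099]]
    print(sess.run(manual))                 # identical values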

8.tf.reshape

Reshapes a tensor into the form given by the shape argument; a dimension written as -1 is inferred from the total number of elements.
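
For example (assuming TensorFlow 1.x), -1 inference is how the code above flattens C2_Pool with tf.reshape(C2_Pool, [-1, 1764]):

import tensorflow as tf

t = tf.range(12)            # shape (12,)
m = tf.reshape(t, [3, 4])   # shape (3, 4)
n = tf.reshape(t, [-1, 6])  # -1 is inferred as 2, giving shape (2, 6)
print(m.shape, n.shape)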

9.tf.train.AdamOptimizer

tf.train.AdamOptimizer() implements the Adam optimization algorithm, a first-order gradient method with adaptive per-parameter learning rates derived from estimates of the first and second moments of the gradients.
tf.train.AdamOptimizer.__init__(
	learning_rate=0.001, 
	beta1=0.9, 
	beta2=0.999, 
	epsilon=1e-08, 
	use_locking=False, 
	name='Adam'
)
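
A toy minimization sketch (assuming TensorFlow 1.x):

import tensorflow as tf

w = tf.Variable(5.0)
loss = tf.square(w - 3.0)  # minimum at w = 3
train_op = tf.train.AdamOptimizer(learning_rate=0.1).minimize(loss)
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(200):
        sess.run(train_op)
    print(sess.run(w))     # close to 3.0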

10.tf.cast

Casts a tensor to another data type (for example, bool to float).
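
This is exactly how the accuracy above is computed: the boolean output of tf.equal is cast to float so its mean is the fraction of correct predictions (assuming TensorFlow 1.x):

import tensorflow as tf

correct = tf.constant([True, False, True, True])
accuracy = tf.reduce_mean(tf.cast(correct, tf.float32))
with tf.Session() as sess:
    print(sess.run(accuracy))  # 0.75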

11.tf.nn.dropout

tf.nn.dropout() is TensorFlow's mechanism for preventing or reducing overfitting; it is typically applied after fully connected layers
tf.nn.dropout(
    x,
    keep_prob,
    noise_shape=None,
    seed=None,
    name=None
)
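
A small sketch of its behavior (assuming TensorFlow 1.x): each element is kept with probability keep_prob and scaled by 1/keep_prob, so the expected sum is unchanged:

import tensorflow as tf

x = tf.ones([1, 10])
dropped = tf.nn.dropout(x, keep_prob=0.8)
with tf.Session() as sess:
    print(sess.run(dropped))  # roughly 20% of entries zeroed, the rest scaled to 1.25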