TensorFlow in Practice: Training a Neural Network Classifier

Task: use TensorFlow to train a neural network classifier. The data points to be classified are shown below:

Spiral data points (figure)

Approach

The data points fall into three classes that are interwoven as spirals, so they are clearly not linearly separable; a nonlinear classifier is needed. A neural network is used here.

The input points are two-dimensional, so each point has only its x and y coordinates as raw features. The network designed here has two hidden layers with 50 neurons each, which is more than enough capacity to capture the structure of the data (in fact 10 neurons per layer would already suffice). The output layer is a softmax (multinomial logistic) classifier that predicts the class of each point (red, yellow, or blue) from the 50 features computed by the second hidden layer.
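As a shape-level sketch of this architecture (a minimal NumPy illustration only; the matrices W1, W2, W3 and biases b1, b2, b3 are hypothetical stand-ins for the TensorFlow variables defined later):

import numpy as np
X_demo = np.random.randn(300, 2)              # (300, 2)  raw x, y coordinates
W1, b1 = np.random.randn(2, 50), np.zeros(50)
W2, b2 = np.random.randn(50, 50), np.zeros(50)
W3, b3 = np.random.randn(50, 3), np.zeros(3)
h1 = np.maximum(0, X_demo @ W1 + b1)          # (300, 50) hidden layer 1, ReLU
h2 = np.maximum(0, h1 @ W2 + b2)              # (300, 50) hidden layer 2, ReLU
scores = h2 @ W3 + b3                         # (300, 3)  one score per class
pred = scores.argmax(axis=1)                  # predicted class for each point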

With a large training set, the network would normally be trained with stochastic (mini-batch) gradient descent; since the training set here is small (300 points), full-batch gradient descent on the whole dataset is used instead.
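For reference, a mini-batch variant would feed a random subset of the data each step through placeholders instead of baking the whole dataset into the graph as constants. A rough sketch under that assumption (batch_size, tf_x and tf_y are illustrative names, not part of the code below):

import numpy as np
import tensorflow as tf
batch_size = 32
tf_x = tf.placeholder(tf.float32, shape=(batch_size, 2))  # a batch of 2-D points
tf_y = tf.placeholder(tf.float32, shape=(batch_size, 3))  # matching one-hot labels
# The layers would then be built on tf_x instead of a tf.constant holding all of X,
# and each training step would feed a random batch, e.g.:
#   idx = np.random.choice(len(X), batch_size)
#   session.run(optimizer, feed_dict={tf_x: X[idx], tf_y: labels[idx]})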

# Import packages and initialize
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf

%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0)  # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'

# Generate the spiral-shaped, linearly inseparable data points
np.random.seed(0)
N = 100  # number of points per class
D = 2    # input dimensionality
K = 3    # number of classes
X = np.zeros((N*K, D))
num_train_examples = X.shape[0]
y = np.zeros(N*K, dtype='uint8')
for j in range(K):
    ix = range(N*j, N*(j+1))
    r = np.linspace(0.0, 1, N)  # radius
    t = np.linspace(j*4, (j+1)*4, N) + np.random.randn(N)*0.2  # theta
    X[ix] = np.c_[r*np.sin(t), r*np.cos(t)]
    y[ix] = j
fig = plt.figure()
plt.scatter(X[:, 0], X[:, 1], c=y, s=40, cmap=plt.cm.Spectral)
plt.xlim([-1, 1])
plt.ylim([-1, 1])
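A quick sanity check on the generated data (optional): each class should contribute exactly 100 points, and since the radius never exceeds 1, all coordinates lie in [-1, 1].

print(X.shape, y.shape)   # (300, 2) (300,)
print(np.bincount(y))     # [100 100 100]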

Spiral data points (figure)

Print the shapes of the input X and the one-hot labels:

num_label = 3
labels = (np.arange(num_label) == y[:, None]).astype(np.float32)
labels.shape
(300, 3)
X.shape
(300, 2)
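The one-hot encoding above relies on NumPy broadcasting: comparing the row vector np.arange(num_label) with the column vector y[:, None] produces a (300, 3) boolean matrix with exactly one True per row, which astype(np.float32) turns into 0/1 values. A tiny example with a toy label vector:

y_toy = np.array([0, 2, 1], dtype='uint8')
onehot = (np.arange(3) == y_toy[:, None]).astype(np.float32)
# onehot is:
# [[1. 0. 0.]
#  [0. 0. 1.]
#  [0. 1. 0.]]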

Building the neural network with TensorFlow

import math

N = 100          # number of points per class
D = 2            # input dimensionality
num_label = 3    # number of classes
num_data = N * num_label
hidden_size_1 = 50
hidden_size_2 = 50
beta = 0.001          # L2 regularization strength
learning_rate = 0.1   # learning rate
labels = (np.arange(num_label) == y[:, None]).astype(np.float32)

graph = tf.Graph()
with graph.as_default():
    x = tf.constant(X.astype(np.float32))
    tf_labels = tf.constant(labels)

    # Hidden layer 1
    hidden_layer_weights_1 = tf.Variable(
        tf.truncated_normal([D, hidden_size_1], stddev=math.sqrt(2.0/num_data)))
    hidden_layer_bias_1 = tf.Variable(tf.zeros([hidden_size_1]))

    # Hidden layer 2
    hidden_layer_weights_2 = tf.Variable(
        tf.truncated_normal([hidden_size_1, hidden_size_2], stddev=math.sqrt(2.0/hidden_size_1)))
    hidden_layer_bias_2 = tf.Variable(tf.zeros([hidden_size_2]))

    # Output layer
    out_weights = tf.Variable(
        tf.truncated_normal([hidden_size_2, num_label], stddev=math.sqrt(2.0/hidden_size_2)))
    out_bias = tf.Variable(tf.zeros([num_label]))

    # Forward pass: two ReLU hidden layers followed by a linear output layer
    z1 = tf.matmul(x, hidden_layer_weights_1) + hidden_layer_bias_1
    h1 = tf.nn.relu(z1)

    z2 = tf.matmul(h1, hidden_layer_weights_2) + hidden_layer_bias_2
    h2 = tf.nn.relu(z2)

    logits = tf.matmul(h2, out_weights) + out_bias

    # L2 regularization on all weight matrices
    regularization = (tf.nn.l2_loss(hidden_layer_weights_1)
                      + tf.nn.l2_loss(hidden_layer_weights_2)
                      + tf.nn.l2_loss(out_weights))
    loss = tf.reduce_mean(
        tf.nn.softmax_cross_entropy_with_logits(labels=tf_labels, logits=logits)
        + beta * regularization)

    optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss)

    train_prediction = tf.nn.softmax(logits)

    weights = [hidden_layer_weights_1, hidden_layer_bias_1,
               hidden_layer_weights_2, hidden_layer_bias_2,
               out_weights, out_bias]
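One detail worth noting: beta * regularization is a scalar, so adding it inside tf.reduce_mean gives exactly the same value as adding it after the mean. An equivalent, perhaps more explicit way to write the loss inside the with graph.as_default() block would be (the name data_loss is just illustrative):

    # Equivalent formulation: average the per-example cross-entropy first,
    # then add the scalar L2 penalty.
    data_loss = tf.reduce_mean(
        tf.nn.softmax_cross_entropy_with_logits(labels=tf_labels, logits=logits))
    loss = data_loss + beta * regularization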

The previous step only builds the skeleton of the network; it still has to be trained. Every 1000 training steps, the cross-entropy loss and the training accuracy are printed.

num_steps = 50000

def accuracy(predictions, labels):
    return (100.0 * np.sum(np.argmax(predictions, 1) == np.argmax(labels, 1))
            / predictions.shape[0])

def relu(x):
    return np.maximum(0, x)

with tf.Session(graph=graph) as session:
    tf.global_variables_initializer().run()
    print('Initialized')
    for step in range(num_steps):
        _, l, predictions = session.run([optimizer, loss, train_prediction])
        if (step % 1000 == 0):
            print('Loss at step %d: %f' % (step, l))
            print('Training accuracy: %.1f%%' % accuracy(predictions, labels))
    w1, b1, w2, b2, w3, b3 = weights

    # Visualize the classifier's decision boundary
    h = 0.02
    x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1
    y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
    xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))
    Z = np.dot(relu(np.dot(relu(np.dot(np.c_[xx.ravel(), yy.ravel()], w1.eval()) + b1.eval()),
                           w2.eval()) + b2.eval()), w3.eval()) + b3.eval()
    Z = np.argmax(Z, axis=1)
    Z = Z.reshape(xx.shape)
    fig = plt.figure()
    plt.contourf(xx, yy, Z, cmap=plt.cm.Spectral, alpha=0.8)
    plt.scatter(X[:, 0], X[:, 1], c=y, s=40, cmap=plt.cm.Spectral)
    plt.xlim(xx.min(), xx.max())
    plt.ylim(yy.min(), yy.max())
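The decision surface is drawn by re-running the learned network in NumPy over a dense grid of points: the trained weights are pulled out with .eval() and pushed through the same two ReLU layers. The single-line expression for Z above can equivalently be written step by step (this must run inside the same tf.Session block so that .eval() works; grid_points is just a named intermediate):

    # Equivalent step-by-step version of the grid prediction above
    grid_points = np.c_[xx.ravel(), yy.ravel()]             # (num_grid, 2)
    g1 = relu(np.dot(grid_points, w1.eval()) + b1.eval())   # hidden layer 1
    g2 = relu(np.dot(g1, w2.eval()) + b2.eval())            # hidden layer 2
    scores = np.dot(g2, w3.eval()) + b3.eval()              # class scores
    Z = np.argmax(scores, axis=1).reshape(xx.shape)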

Initialized

Loss at step 0: 1.132545
Training accuracy: 43.7%
Loss at step 1000: 0.257016
Training accuracy: 94.0%
Loss at step 2000: 0.165511
Training accuracy: 98.0%
Loss at step 3000: 0.149266
Training accuracy: 99.0%
Loss at step 4000: 0.142311
Training accuracy: 99.3%
Loss at step 5000: 0.137762
Training accuracy: 99.3%
Loss at step 6000: 0.134356
Training accuracy: 99.3%
Loss at step 7000: 0.131588
Training accuracy: 99.3%
Loss at step 8000: 0.129299
Training accuracy: 99.3%
Loss at step 9000: 0.127340
Training accuracy: 99.3%
Loss at step 10000: 0.125686
Training accuracy: 99.3%
Loss at step 11000: 0.124293
Training accuracy: 99.3%
Loss at step 12000: 0.123130
Training accuracy: 99.3%
Loss at step 13000: 0.122149
Training accuracy: 99.3%
Loss at step 14000: 0.121309
Training accuracy: 99.3%
Loss at step 15000: 0.120542
Training accuracy: 99.3%
Loss at step 16000: 0.119895
Training accuracy: 99.3%
Loss at step 17000: 0.119335
Training accuracy: 99.3%
Loss at step 18000: 0.118836
Training accuracy: 99.3%
Loss at step 19000: 0.118376
Training accuracy: 99.3%
Loss at step 20000: 0.117974
Training accuracy: 99.3%
Loss at step 21000: 0.117601
Training accuracy: 99.3%
Loss at step 22000: 0.117253
Training accuracy: 99.3%
Loss at step 23000: 0.116887
Training accuracy: 99.3%
Loss at step 24000: 0.116561
Training accuracy: 99.3%
Loss at step 25000: 0.116265
Training accuracy: 99.3%
Loss at step 26000: 0.115995
Training accuracy: 99.3%
Loss at step 27000: 0.115750
Training accuracy: 99.3%
Loss at step 28000: 0.115521
Training accuracy: 99.3%
Loss at step 29000: 0.115310
Training accuracy: 99.3%
Loss at step 30000: 0.115111
Training accuracy: 99.3%
Loss at step 31000: 0.114922
Training accuracy: 99.3%
Loss at step 32000: 0.114743
Training accuracy: 99.3%
Loss at step 33000: 0.114567
Training accuracy: 99.3%
Loss at step 34000: 0.114401
Training accuracy: 99.3%
Loss at step 35000: 0.114242
Training accuracy: 99.3%
Loss at step 36000: 0.114086
Training accuracy: 99.3%
Loss at step 37000: 0.113933
Training accuracy: 99.3%
Loss at step 38000: 0.113785
Training accuracy: 99.3%
Loss at step 39000: 0.113644
Training accuracy: 99.3%
Loss at step 40000: 0.113504
Training accuracy: 99.3%
Loss at step 41000: 0.113366
Training accuracy: 99.3%
Loss at step 42000: 0.113229
Training accuracy: 99.3%
Loss at step 43000: 0.113096
Training accuracy: 99.3%
Loss at step 44000: 0.112966
Training accuracy: 99.3%
Loss at step 45000: 0.112838
Training accuracy: 99.3%
Loss at step 46000: 0.112711
Training accuracy: 99.3%
Loss at step 47000: 0.112590
Training accuracy: 99.3%
Loss at step 48000: 0.112472
Training accuracy: 99.3%
Loss at step 49000: 0.112358
Training accuracy: 99.3%

Decision boundary of the trained classifier (figure: 分类器.png)

Originally published on the WeChat public account 人工智能LeadAI (atleadai)

Original publication date: 2018-02-11
