
TensorFlow in Practice: Training a Neural Network Classifier

Main text: about 5,873 characters and 3 figures; estimated reading time 15 minutes.

Task:

Use TensorFlow to train a neural network that classifies the following data points:

(Figure: spiral data points)

Approach

The data points fall into three classes whose spiral arms are interleaved, so they are clearly not linearly separable: a nonlinear classifier is needed. Here we use a neural network.

The input points are two-dimensional, so each point's only raw features are its x and y coordinates. The network designed here has two hidden layers with 50 neurons each, which is more than enough to capture the higher-level structure of the data (in practice 10 neurons per layer would already suffice). The output layer is a softmax (logistic) regression that predicts each point's class (red, yellow, or blue) from the 50 features computed by the last hidden layer.
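As an aside, on a current TensorFlow release the same 2 → 50 → 50 → 3 architecture can be written compactly with the Keras API. This is a minimal sketch assuming TensorFlow 2.x; the article itself builds the graph by hand with the 1.x API below.

import tensorflow as tf

# Sketch (assumes TF 2.x): 2 -> 50 -> 50 -> 3, ReLU hidden layers;
# softmax is folded into the loss via from_logits=True.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(50, activation='relu', input_shape=(2,)),
    tf.keras.layers.Dense(50, activation='relu'),
    tf.keras.layers.Dense(3),  # raw logits
])
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.1),
              loss=tf.keras.losses.CategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])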

With a large training set one would normally train the network with stochastic (mini-batch) gradient descent; since the training set here is small (300 points), plain batch gradient descent is used instead. A sketch of the mini-batch alternative follows.
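For reference, here is a minimal sketch of that mini-batch pattern in the same TensorFlow 1.x style as the code below. It is illustrative only: the names tf_x, tf_y, and batch_size are not part of the article's code, and it reuses the X and labels arrays generated in the next section.

import numpy as np
import tensorflow as tf

batch_size = 32
graph = tf.Graph()
with graph.as_default():
    # Placeholders replace the tf.constant inputs used in the full-batch version.
    tf_x = tf.placeholder(tf.float32, shape=(None, 2))
    tf_y = tf.placeholder(tf.float32, shape=(None, 3))
    hidden = tf.layers.dense(tf_x, 50, activation=tf.nn.relu)
    logits = tf.layers.dense(hidden, 3)
    loss = tf.reduce_mean(
        tf.nn.softmax_cross_entropy_with_logits(labels=tf_y, logits=logits))
    optimizer = tf.train.GradientDescentOptimizer(0.1).minimize(loss)

with tf.Session(graph=graph) as session:
    tf.global_variables_initializer().run()
    for step in range(1000):
        idx = np.random.choice(len(X), batch_size)  # sample a random mini-batch
        session.run(optimizer, feed_dict={tf_x: X[idx], tf_y: labels[idx]})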

# Imports and initialization

import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf

%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0)  # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'

# Generate spiral-shaped, linearly non-separable data points
np.random.seed(0)
N = 100        # number of points per class
D = 2          # input dimensionality
K = 3          # number of classes
X = np.zeros((N*K, D))
num_train_examples = X.shape[0]
y = np.zeros(N*K, dtype='uint8')
for j in range(K):
    ix = range(N*j, N*(j+1))
    r = np.linspace(0.0, 1, N)                                 # radius
    t = np.linspace(j*4, (j+1)*4, N) + np.random.randn(N)*0.2  # theta
    X[ix] = np.c_[r*np.sin(t), r*np.cos(t)]
    y[ix] = j
fig = plt.figure()
plt.scatter(X[:, 0], X[:, 1], c=y, s=40, cmap=plt.cm.Spectral)
plt.xlim([-1, 1])
plt.ylim([-1, 1])

(Figure: the generated spiral data points)

Print the shapes of the input X and the labels:

num_label = 3
labels = (np.arange(num_label) == y[:, None]).astype(np.float32)
labels.shape

(300, 3)

X.shape

(300, 2)
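The comparison above one-hot encodes y by broadcasting: the (300, 1) column y[:, None] is compared against the (3,) row np.arange(num_label), producing a (300, 3) boolean matrix with exactly one True per row. A toy illustration (hypothetical labels):

import numpy as np

y_toy = np.array([0, 2, 1])  # three toy class labels
print((np.arange(3) == y_toy[:, None]).astype(np.float32))
# [[1. 0. 0.]
#  [0. 0. 1.]
#  [0. 1. 0.]]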

Build the neural network with TensorFlow

import math

N = 100        # number of points per class
D = 2          # input dimensionality
num_label = 3  # number of classes
num_data = N * num_label
hidden_size_1 = 50
hidden_size_2 = 50
beta = 0.001         # L2 regularization strength
learning_rate = 0.1  # learning rate

labels = (np.arange(num_label) == y[:, None]).astype(np.float32)

graph = tf.Graph()
with graph.as_default():
    x = tf.constant(X.astype(np.float32))
    tf_labels = tf.constant(labels)
    # Hidden layer 1
    hidden_layer_weights_1 = tf.Variable(
        tf.truncated_normal([D, hidden_size_1], stddev=math.sqrt(2.0/num_data)))
    hidden_layer_bias_1 = tf.Variable(tf.zeros([hidden_size_1]))
    # Hidden layer 2
    hidden_layer_weights_2 = tf.Variable(
        tf.truncated_normal([hidden_size_1, hidden_size_2], stddev=math.sqrt(2.0/hidden_size_1)))
    hidden_layer_bias_2 = tf.Variable(tf.zeros([hidden_size_2]))
    # Output layer
    out_weights = tf.Variable(
        tf.truncated_normal([hidden_size_2, num_label], stddev=math.sqrt(2.0/hidden_size_2)))
    out_bias = tf.Variable(tf.zeros([num_label]))
    # Forward pass
    z1 = tf.matmul(x, hidden_layer_weights_1) + hidden_layer_bias_1
    h1 = tf.nn.relu(z1)
    z2 = tf.matmul(h1, hidden_layer_weights_2) + hidden_layer_bias_2
    h2 = tf.nn.relu(z2)
    logits = tf.matmul(h2, out_weights) + out_bias
    # L2 regularization
    regularization = (tf.nn.l2_loss(hidden_layer_weights_1)
                      + tf.nn.l2_loss(hidden_layer_weights_2)
                      + tf.nn.l2_loss(out_weights))
    loss = tf.reduce_mean(
        tf.nn.softmax_cross_entropy_with_logits(labels=tf_labels, logits=logits)
        + beta * regularization)
    optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss)
    train_prediction = tf.nn.softmax(logits)
    weights = [hidden_layer_weights_1, hidden_layer_bias_1,
               hidden_layer_weights_2, hidden_layer_bias_2,
               out_weights, out_bias]
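Aside: the stddev=math.sqrt(2.0/fan_in) passed to truncated_normal is the He initialization heuristic for ReLU networks, e.g. sqrt(2.0/50) ≈ 0.2 for the layers fed by 50 hidden units. (Strictly, the first layer divides by num_data = 300 rather than its true fan-in D = 2; the training log below shows the network converges fine regardless.)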

The previous step only built the network's skeleton; it still has to be trained. During training, the cross-entropy loss and the training accuracy are printed every 1000 steps.

num_steps = 50000

def accuracy(predictions, labels):
    return (100.0 * np.sum(np.argmax(predictions, 1) == np.argmax(labels, 1))
            / predictions.shape[0])

def relu(x):
    return np.maximum(0, x)

with tf.Session(graph=graph) as session:
    tf.global_variables_initializer().run()
    print('Initialized')
    for step in range(num_steps):
        _, l, predictions = session.run([optimizer, loss, train_prediction])
        if (step % 1000 == 0):
            print('Loss at step %d: %f' % (step, l))
            print('Training accuracy: %.1f%%' % accuracy(predictions, labels))
    w1, b1, w2, b2, w3, b3 = weights

    # Visualize the classifier's decision regions
    h = 0.02
    x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1
    y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
    xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))
    # Forward pass over the grid, done in NumPy with the trained weights
    Z = np.dot(relu(np.dot(relu(np.dot(np.c_[xx.ravel(), yy.ravel()], w1.eval()) + b1.eval()),
                           w2.eval()) + b2.eval()), w3.eval()) + b3.eval()
    Z = np.argmax(Z, axis=1)
    Z = Z.reshape(xx.shape)
    fig = plt.figure()
    plt.contourf(xx, yy, Z, cmap=plt.cm.Spectral, alpha=0.8)
    plt.scatter(X[:, 0], X[:, 1], c=y, s=40, cmap=plt.cm.Spectral)
    plt.xlim(xx.min(), xx.max())
    plt.ylim(yy.min(), yy.max())
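The grid evaluation above reproduces the trained graph's forward pass in NumPy: for each grid point g it computes relu(relu(g·w1 + b1)·w2 + b2)·w3 + b3 and takes the argmax of the raw logits. The softmax can be skipped here because it is monotonic, so it never changes which class scores highest.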

Initialized
Loss at step 0: 1.132545
Training accuracy: 43.7%
Loss at step 1000: 0.257016
Training accuracy: 94.0%
Loss at step 2000: 0.165511
Training accuracy: 98.0%
Loss at step 3000: 0.149266
Training accuracy: 99.0%
Loss at step 4000: 0.142311
Training accuracy: 99.3%
Loss at step 5000: 0.137762
Training accuracy: 99.3%
Loss at step 6000: 0.134356
Training accuracy: 99.3%
Loss at step 7000: 0.131588
Training accuracy: 99.3%
Loss at step 8000: 0.129299
Training accuracy: 99.3%
Loss at step 9000: 0.127340
Training accuracy: 99.3%
Loss at step 10000: 0.125686
Training accuracy: 99.3%
Loss at step 11000: 0.124293
Training accuracy: 99.3%
Loss at step 12000: 0.123130
Training accuracy: 99.3%
Loss at step 13000: 0.122149
Training accuracy: 99.3%
Loss at step 14000: 0.121309
Training accuracy: 99.3%
Loss at step 15000: 0.120542
Training accuracy: 99.3%
Loss at step 16000: 0.119895
Training accuracy: 99.3%
Loss at step 17000: 0.119335
Training accuracy: 99.3%
Loss at step 18000: 0.118836
Training accuracy: 99.3%
Loss at step 19000: 0.118376
Training accuracy: 99.3%
Loss at step 20000: 0.117974
Training accuracy: 99.3%
Loss at step 21000: 0.117601
Training accuracy: 99.3%
Loss at step 22000: 0.117253
Training accuracy: 99.3%
Loss at step 23000: 0.116887
Training accuracy: 99.3%
Loss at step 24000: 0.116561
Training accuracy: 99.3%
Loss at step 25000: 0.116265
Training accuracy: 99.3%
Loss at step 26000: 0.115995
Training accuracy: 99.3%
Loss at step 27000: 0.115750
Training accuracy: 99.3%
Loss at step 28000: 0.115521
Training accuracy: 99.3%
Loss at step 29000: 0.115310
Training accuracy: 99.3%
Loss at step 30000: 0.115111
Training accuracy: 99.3%
Loss at step 31000: 0.114922
Training accuracy: 99.3%
Loss at step 32000: 0.114743
Training accuracy: 99.3%
Loss at step 33000: 0.114567
Training accuracy: 99.3%
Loss at step 34000: 0.114401
Training accuracy: 99.3%
Loss at step 35000: 0.114242
Training accuracy: 99.3%
Loss at step 36000: 0.114086
Training accuracy: 99.3%
Loss at step 37000: 0.113933
Training accuracy: 99.3%
Loss at step 38000: 0.113785
Training accuracy: 99.3%
Loss at step 39000: 0.113644
Training accuracy: 99.3%
Loss at step 40000: 0.113504
Training accuracy: 99.3%
Loss at step 41000: 0.113366
Training accuracy: 99.3%
Loss at step 42000: 0.113229
Training accuracy: 99.3%
Loss at step 43000: 0.113096
Training accuracy: 99.3%
Loss at step 44000: 0.112966
Training accuracy: 99.3%
Loss at step 45000: 0.112838
Training accuracy: 99.3%
Loss at step 46000: 0.112711
Training accuracy: 99.3%
Loss at step 47000: 0.112590
Training accuracy: 99.3%
Loss at step 48000: 0.112472
Training accuracy: 99.3%
Loss at step 49000: 0.112358
Training accuracy: 99.3%

(Figure: the trained classifier's decision regions plotted over the data points)

Original article: http://kuaibao.qq.com/s/20180211A0SGVR00?refer=cp_1026
