
News Text Classification with TensorFlow + CNN

Author: 潇洒坤 · Published 2018-10-09

Notes from October 4, 2018

tensorflow is Google's deep learning framework; the name combines "tensor" (a multi-dimensional array) and "flow" (the flow of data through a computation graph). CNN is short for convolutional neural network. Text classification is a classic NLP (natural language processing) task.

0. Programming Environment

Operating system: Windows 10
tensorflow version: 1.6
tensorboard version: 1.6
Python version: 3.6
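A quick way to confirm that your interpreter and framework match these versions (a minimal check, nothing project-specific):

import sys
import tensorflow as tf

print(sys.version)     # expected to start with 3.6
print(tf.__version__)  # expected to start with 1.6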

1. Acknowledgements

This article is the result of the author's study of "Chinese Text Classification with Convolutional and Recurrent Neural Networks"; thanks to that author. GitHub link: https://github.com/gaussic/text-classification-cnn-rnn

2. Environment Setup

Training the recurrent neural network model requires a fairly powerful machine; with the CPU version of tensorflow it takes a long time. If you have an NVIDIA graphics card, installing the GPU version of tensorflow can speed up computation by roughly 50x. Installation tutorial: https://blog.csdn.net/qq_36556893/article/details/79433298 If you have no NVIDIA card but do have a Visa credit card, see my other article, "Setting Up a Deep Learning Platform on Google Cloud": https://www.jianshu.com/p/893d622d1b5a
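To confirm that the GPU build is actually being used, a minimal check (using the TensorFlow 1.x API assumed throughout this article) is:

import tensorflow as tf

# True if TensorFlow can see a CUDA-capable GPU, False otherwise.
print(tf.test.is_gpu_available())
# Name of the first GPU device, e.g. '/device:GPU:0'; empty string if none.
print(tf.test.gpu_device_name())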

3. Download and Extract the Dataset

Dataset download link: https://pan.baidu.com/s/1oLZZF4AHT5X_bzNl2aF2aQ (extraction code: 5sea). After downloading cnews.zip, extract it into a folder named cnews. The folder structure is shown below:

[Figure: folder structure after extracting cnews.zip]
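Before running anything, it is worth checking that the extracted files are where the code expects them. The snippet below is only a sketch: the code in this article uses cnews.train.txt and cnews.vocab.txt, and the linked repository's dataset also includes cnews.test.txt and cnews.val.txt, but your archive may differ.

import os

# The code below assumes a ./cnews folder next to this script,
# containing at least cnews.train.txt and cnews.vocab.txt.
for name in sorted(os.listdir('./cnews')):
    size_mb = os.path.getsize(os.path.join('./cnews', name)) / 1e6
    print('%s  %.1f MB' % (name, size_mb))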

4. Complete Code

The code file must be placed in the same directory as the cnews folder.

with open('./cnews/cnews.train.txt', encoding='utf8') as file:
    line_list = [k.strip() for k in file.readlines()]
    train_label_list = [k.split()[0] for k in line_list]
    train_content_list = [k.split(maxsplit=1)[1] for k in line_list]
with open('./cnews/cnews.vocab.txt', encoding='utf8') as file:
    vocabulary_list = [k.strip() for k in file.readlines()]
word2id_dict = dict(((b, a) for a, b in enumerate(vocabulary_list)))
content2vector = lambda content : [word2id_dict[word] for word in content if word in word2id_dict]
train_vector_list = [content2vector(content) for content in train_content_list ]
vocab_size = 5000  # vocabulary size
embedding_dim = 64  # dimension of the character embeddings
seq_length = 600  # sequence length (characters per sample)
num_classes = 10  # number of categories
num_filters = 256  # number of convolution filters
kernel_size = 5  # convolution kernel size
hidden_dim = 128  # neurons in the fully connected layer
dropout_keep_prob = 0.5  # dropout keep probability
learning_rate = 1e-3  # learning rate
batch_size = 64  # samples per training batch
import tensorflow.contrib.keras as kr
train_X = kr.preprocessing.sequence.pad_sequences(train_vector_list, 600)
from sklearn.preprocessing import LabelEncoder
labelEncoder = LabelEncoder()
train_y = labelEncoder.fit_transform(train_label_list)
train_Y = kr.utils.to_categorical(train_y, num_classes=10)
import tensorflow as tf
tf.reset_default_graph()
X_holder = tf.placeholder(tf.int32, [None, seq_length])
Y_holder = tf.placeholder(tf.float32, [None, num_classes])

embedding = tf.get_variable('embedding', [vocab_size, embedding_dim])
embedding_inputs = tf.nn.embedding_lookup(embedding, X_holder)
conv = tf.layers.conv1d(embedding_inputs, num_filters, kernel_size)
max_pooling = tf.reduce_max(conv, reduction_indices=[1])
full_connect = tf.layers.dense(max_pooling, hidden_dim)
full_connect_dropout = tf.contrib.layers.dropout(full_connect, keep_prob=1)  # keep_prob=1 disables dropout; pass keep_prob=dropout_keep_prob to enable it during training
full_connect_activate = tf.nn.relu(full_connect_dropout)
softmax_before = tf.layers.dense(full_connect_activate, num_classes)
predict_Y = tf.nn.softmax(softmax_before)
cross_entropy = tf.nn.softmax_cross_entropy_with_logits_v2(labels=Y_holder, logits=softmax_before)
loss = tf.reduce_mean(cross_entropy)
optimizer = tf.train.AdamOptimizer(learning_rate)
train = optimizer.minimize(loss)
isCorrect = tf.equal(tf.argmax(Y_holder, 1), tf.argmax(predict_Y, 1))
accuracy = tf.reduce_mean(tf.cast(isCorrect, tf.float32))

init = tf.global_variables_initializer()
session = tf.Session()
session.run(init)

import random
for i in range(3000):
    selected_index = random.sample(list(range(len(train_y))), k=64)
    batch_X = train_X[selected_index]
    batch_Y = train_Y[selected_index]
    session.run(train, {X_holder:batch_X, Y_holder:batch_Y})
    step = i + 1 
    if step % 100 == 0:
        selected_index = random.sample(list(range(len(train_y))), k=200)
        batch_X = train_X[selected_index]
        batch_Y = train_Y[selected_index]
        loss_value, accuracy_value = session.run([loss, accuracy], {X_holder:batch_X, Y_holder:batch_Y})
        print('step:%d loss:%.4f accuracy:%.4f' %(step, loss_value, accuracy_value))

The output of the code above is as follows (only the first ten lines are shown):

step:100 loss:0.5491 accuracy:0.8650
step:200 loss:0.2495 accuracy:0.9200
step:300 loss:0.1928 accuracy:0.9450
step:400 loss:0.1123 accuracy:0.9700
step:500 loss:0.1183 accuracy:0.9800
step:600 loss:0.0946 accuracy:0.9800
step:700 loss:0.1316 accuracy:0.9600
step:800 loss:0.1455 accuracy:0.9650
step:900 loss:0.1226 accuracy:0.9600
step:1000 loss:0.0686 accuracy:0.9800

5. Data Preparation

# Read the training file: each line is "label<whitespace>content".
with open('./cnews/cnews.train.txt', encoding='utf8') as file:
    line_list = [k.strip() for k in file.readlines()]
    train_label_list = [k.split()[0] for k in line_list]
    train_content_list = [k.split(maxsplit=1)[1] for k in line_list]
# Read the 5000-character vocabulary, one character per line.
with open('./cnews/cnews.vocab.txt', encoding='utf8') as file:
    vocabulary_list = [k.strip() for k in file.readlines()]
# Map each character to its row index in the vocabulary.
word2id_dict = dict((b, a) for a, b in enumerate(vocabulary_list))
# Convert a text into a list of character ids, skipping out-of-vocabulary characters.
content2vector = lambda content: [word2id_dict[word] for word in content if word in word2id_dict]
train_vector_list = [content2vector(content) for content in train_content_list]
vocab_size = 5000  # vocabulary size
embedding_dim = 64  # dimension of the character embeddings
seq_length = 600  # sequence length (characters per sample)
num_classes = 10  # number of categories
num_filters = 256  # number of convolution filters
kernel_size = 5  # convolution kernel size
hidden_dim = 128  # neurons in the fully connected layer
dropout_keep_prob = 0.5  # dropout keep probability
learning_rate = 1e-3  # learning rate
batch_size = 64  # samples per training batch
import tensorflow.contrib.keras as kr
# Pad/truncate every sample to 600 ids (= seq_length); kr's defaults pad and truncate at the front.
train_X = kr.preprocessing.sequence.pad_sequences(train_vector_list, 600)
from sklearn.preprocessing import LabelEncoder
# Encode the 10 text labels as integers 0-9, then one-hot encode them.
labelEncoder = LabelEncoder()
train_y = labelEncoder.fit_transform(train_label_list)
train_Y = kr.utils.to_categorical(train_y, num_classes=10)
import tensorflow as tf
tf.reset_default_graph()
# Placeholders for a batch of padded id sequences and their one-hot labels.
X_holder = tf.placeholder(tf.int32, [None, seq_length])
Y_holder = tf.placeholder(tf.float32, [None, num_classes])
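As a quick sanity check of the prepared arrays (a sketch; the exact sample count depends on the training file you downloaded):

# train_X: one row of 600 character ids per sample (padded/truncated at the front).
# train_Y: one one-hot row of length 10 per sample.
print(train_X.shape, train_Y.shape)
# The ten category names learned by the label encoder.
print(labelEncoder.classes_)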

6. Building the Neural Network

# Embedding layer: look up a 64-dimensional vector for each character id.
embedding = tf.get_variable('embedding', [vocab_size, embedding_dim])
embedding_inputs = tf.nn.embedding_lookup(embedding, X_holder)
# 1-D convolution over the character sequence, followed by max-pooling over time.
conv = tf.layers.conv1d(embedding_inputs, num_filters, kernel_size)
max_pooling = tf.reduce_max(conv, reduction_indices=[1])
# Fully connected layer; keep_prob=1 disables dropout here (pass keep_prob=dropout_keep_prob to enable it during training).
full_connect = tf.layers.dense(max_pooling, hidden_dim)
full_connect_dropout = tf.contrib.layers.dropout(full_connect, keep_prob=1)
full_connect_activate = tf.nn.relu(full_connect_dropout)
# Output layer: class logits, softmax probabilities, and the cross-entropy loss.
softmax_before = tf.layers.dense(full_connect_activate, num_classes)
predict_Y = tf.nn.softmax(softmax_before)
cross_entropy = tf.nn.softmax_cross_entropy_with_logits_v2(labels=Y_holder, logits=softmax_before)
loss = tf.reduce_mean(cross_entropy)
# Adam optimizer and a per-batch accuracy metric.
optimizer = tf.train.AdamOptimizer(learning_rate)
train = optimizer.minimize(loss)
isCorrect = tf.equal(tf.argmax(Y_holder, 1), tf.argmax(predict_Y, 1))
accuracy = tf.reduce_mean(tf.cast(isCorrect, tf.float32))
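For orientation, the shapes flowing through this graph follow directly from the hyperparameters above (batch dimension shown as None); a quick way to confirm them:

# Expected static shapes, given the hyperparameters above:
#   embedding_inputs: (None, 600, 64)   one 64-d vector per character
#   conv:             (None, 596, 256)  600 - 5 + 1 positions ('valid' padding), 256 filters
#   max_pooling:      (None, 256)       max over the time axis
#   full_connect:     (None, 128)       fully connected layer
#   softmax_before:   (None, 10)        one logit per class
for tensor in [embedding_inputs, conv, max_pooling, full_connect, softmax_before]:
    print(tensor.name, tensor.shape)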

7. Parameter Initialization

# Create a session and initialize all model variables.
init = tf.global_variables_initializer()
session = tf.Session()
session.run(init)

8. Model Training

import random
for i in range(3000):
    # Draw a random mini-batch of 64 samples (= batch_size) and run one training step.
    selected_index = random.sample(list(range(len(train_y))), k=64)
    batch_X = train_X[selected_index]
    batch_Y = train_Y[selected_index]
    session.run(train, {X_holder: batch_X, Y_holder: batch_Y})
    step = i + 1
    if step % 100 == 0:
        # Every 100 steps, report loss and accuracy on a random sample of 200 training examples.
        selected_index = random.sample(list(range(len(train_y))), k=200)
        batch_X = train_X[selected_index]
        batch_Y = train_Y[selected_index]
        loss_value, accuracy_value = session.run([loss, accuracy], {X_holder: batch_X, Y_holder: batch_Y})
        print('step:%d loss:%.4f accuracy:%.4f' % (step, loss_value, accuracy_value))
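The training loop above only reports accuracy on samples drawn from the training set. The referenced repository's dataset also contains a test file; below is a minimal sketch for evaluating the trained session on it (assuming ./cnews/cnews.test.txt exists and uses the same "label content" format as the training file):

with open('./cnews/cnews.test.txt', encoding='utf8') as file:
    test_line_list = [k.strip() for k in file.readlines()]
test_label_list = [k.split()[0] for k in test_line_list]
test_content_list = [k.split(maxsplit=1)[1] for k in test_line_list]
# Reuse the preprocessing pipeline and the label encoder fitted on the training data.
test_X = kr.preprocessing.sequence.pad_sequences(
    [content2vector(content) for content in test_content_list], seq_length)
test_Y = kr.utils.to_categorical(labelEncoder.transform(test_label_list), num_classes=num_classes)
# Evaluate in chunks to keep memory usage modest.
correct = 0.0
for start in range(0, len(test_X), 500):
    batch_X = test_X[start:start + 500]
    batch_Y = test_Y[start:start + 500]
    correct += session.run(accuracy, {X_holder: batch_X, Y_holder: batch_Y}) * len(batch_X)
print('test accuracy: %.4f' % (correct / len(test_X)))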