
Sentiment Classification with a CNN

Author: Michael阿明 · Published 2021-02-19

Reference: the book 《基于深度学习的自然语言处理》 (Deep Learning–based Natural Language Processing)

1. Read the Data

Data file: yelp_labelled.txt, one tab-separated sentence/label pair per line (1000 lines in total).

import numpy as np
import pandas as pd

data = pd.read_csv("yelp_labelled.txt", sep='\t', names=['sentence', 'label'])

data.head()  # preview; 1000 rows in total
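
As a quick orientation (this check is not in the original post), the label distribution can be inspected; in the UCI Sentiment Labelled Sentences release, yelp_labelled.txt is balanced:

print(data['label'].value_counts())  # expected: 500 sentences each for labels 0 and 1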
# features X and labels y
sentence = data['sentence'].values
label = data['label'].values

2. Split the Dataset

# split into training and test sets
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(sentence, label, test_size=0.3, random_state=1)

3. Text Vectorization

  • Fit a tokenizer and convert each sentence into a sequence of word ids
# text vectorization
import keras
from keras.preprocessing.text import Tokenizer

tokenizer = Tokenizer(num_words=6000)
tokenizer.fit_on_texts(X_train)  # fit the tokenizer on the training text only
X_train = tokenizer.texts_to_sequences(X_train)  # convert to [[ids...], [ids...], ...]
X_test = tokenizer.texts_to_sequences(X_test)
vocab_size = len(tokenizer.word_index) + 1  # +1 because index 0 maps to no word and is reserved for padding
  • Pad the id sequences so they all have the same length (a quick check of the result is sketched after the code below)
maxlen = 100
# pad so every sentence has the same length
from keras.preprocessing.sequence import pad_sequences
X_train = pad_sequences(X_train, maxlen=maxlen, padding='post')  # 'post' pads zeros at the end, 'pre' at the front
X_test = pad_sequences(X_test, maxlen=maxlen, padding='post')
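
To see what the preprocessing produces, a quick check like the one below (not part of the original code; the exact ids depend on the fitted tokenizer) can be run right after padding:

# Each row is now a fixed-length vector of word ids, zero-padded at the end.
print(X_train.shape)  # (700, 100): 70% of the 1000 sentences, each of length maxlen
print(X_train[0])     # a row of word ids followed by trailing zeros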

4. Build the CNN Model

from keras import layers

embeddings_dim = 150
filters = 64
kernel_size = 5
batch_size = 64

nn_model = keras.Sequential()
nn_model.add(layers.Embedding(input_dim=vocab_size, output_dim=embeddings_dim, input_length=maxlen))
nn_model.add(layers.Conv1D(filters=filters, kernel_size=kernel_size, activation='relu'))
nn_model.add(layers.GlobalMaxPool1D())
nn_model.add(layers.Dropout(0.3))
# GlobalMaxPool1D above drops the time dimension; the Lambda layer below adds a
# channel axis back so that a second Conv1D can follow
nn_model.add(layers.Lambda(lambda x: keras.backend.expand_dims(x, axis=-1)))
nn_model.add(layers.Conv1D(filters=filters, kernel_size=kernel_size, activation='relu'))
nn_model.add(layers.GlobalMaxPool1D())
nn_model.add(layers.Dropout(0.3))
nn_model.add(layers.Dense(10, activation='relu'))
nn_model.add(layers.Dense(1, activation='sigmoid'))  # sigmoid for binary classification, softmax for multi-class

Reference articles: 《Embedding层详解》; Keras: GlobalMaxPooling vs. MaxPooling
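
As a side note (not in the original post), the comment about the Lambda layer can be verified directly: GlobalMaxPool1D returns a 2-D tensor of shape (batch, channels), while Conv1D expects 3-D input, so expand_dims restores a trailing channel axis. A minimal check, assuming a TensorFlow-backed Keras:

import numpy as np
from keras import backend as K

x = K.constant(np.zeros((2, 64), dtype='float32'))  # shaped like GlobalMaxPool1D output: (batch, channels)
y = K.expand_dims(x, axis=-1)                       # add a channel axis: (batch, channels, 1)
print(K.int_shape(x), K.int_shape(y))               # (2, 64) (2, 64, 1)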

  • Compile the model
nn_model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
nn_model.summary()

from keras.utils import plot_model
plot_model(nn_model, to_file='model.jpg')  # save a diagram of the model architecture to a file
Model: "sequential_4"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
embedding_4 (Embedding)      (None, 100, 150)          251400    
_________________________________________________________________
conv1d_8 (Conv1D)            (None, 96, 64)            48064     
_________________________________________________________________
global_max_pooling1d_7 (Glob (None, 64)                0         
_________________________________________________________________
dropout_7 (Dropout)          (None, 64)                0         
_________________________________________________________________
lambda_4 (Lambda)            (None, 64, 1)             0         
_________________________________________________________________
conv1d_9 (Conv1D)            (None, 60, 64)            384       
_________________________________________________________________
global_max_pooling1d_8 (Glob (None, 64)                0         
_________________________________________________________________
dropout_8 (Dropout)          (None, 64)                0         
_________________________________________________________________
dense_6 (Dense)              (None, 10)                650       
_________________________________________________________________
dense_7 (Dense)              (None, 1)                 11        
=================================================================
Total params: 300,509
Trainable params: 300,509
Non-trainable params: 0
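
As a sanity check (not part of the original post), the parameter counts in the summary can be reproduced by hand: an Embedding layer has vocab_size × output_dim weights, a Conv1D layer has (kernel_size × input_channels + 1) × filters, and a Dense layer has (inputs + 1) × units. The vocab_size of 1676 below is inferred from the summary (251400 / 150):

# Hypothetical check of the summary's parameter counts
embedding_params = 1676 * 150                # 251400
conv1_params = (5 * 150 + 1) * 64            # 48064: kernel_size * input_channels + bias, per filter
conv2_params = (5 * 1 + 1) * 64              # 384: only 1 input channel after the Lambda expand_dims
dense_params = (64 + 1) * 10 + (10 + 1) * 1  # 650 + 11
print(embedding_params + conv1_params + conv2_params + dense_params)  # 300509, the total in the summary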

5. Train and Evaluate

history = nn_model.fit(X_train, y_train, batch_size=batch_size,
                       epochs=50, verbose=2, validation_data=(X_test, y_test))
# verbose: 0 = silent, 1 = progress bar, 2 = one line per epoch without a progress bar
loss, accuracy = nn_model.evaluate(X_train, y_train, verbose=1)
print("Training set: loss {0:.3f}, accuracy: {1:.3f}".format(loss, accuracy))
loss, accuracy = nn_model.evaluate(X_test, y_test, verbose=1)
print("Test set: loss {0:.3f}, accuracy: {1:.3f}".format(loss, accuracy))

# plot the training curves
from matplotlib import pyplot as plt
pd.DataFrame(history.history).plot(figsize=(8, 5))
plt.grid(True)
plt.gca().set_ylim(0, 1)  # set the vertical range to [0, 1]
plt.show()

Output:

Epoch 1/50
11/11 - 1s - loss: 0.6933 - accuracy: 0.5014 - val_loss: 0.6933 - val_accuracy: 0.4633
Epoch 2/50
11/11 - 0s - loss: 0.6931 - accuracy: 0.5214 - val_loss: 0.6935 - val_accuracy: 0.4633
Epoch 3/50
11/11 - 1s - loss: 0.6930 - accuracy: 0.5257 - val_loss: 0.6936 - val_accuracy: 0.4633
... (intermediate epochs omitted)
11/11 - 0s - loss: 0.0024 - accuracy: 1.0000 - val_loss: 0.7943 - val_accuracy: 0.7600
Epoch 49/50
11/11 - 1s - loss: 0.0016 - accuracy: 1.0000 - val_loss: 0.7970 - val_accuracy: 0.7600
Epoch 50/50
11/11 - 0s - loss: 0.0027 - accuracy: 1.0000 - val_loss: 0.7994 - val_accuracy: 0.7600
22/22 [==============================] - 0s 4ms/step - loss: 9.0586e-04 - accuracy: 1.0000
Training set: loss 0.001, accuracy: 1.000
10/10 [==============================] - 0s 5ms/step - loss: 0.7994 - accuracy: 0.7600
Test set: loss 0.799, accuracy: 0.760

Training set: loss 0.001, accuracy 1.000; test set: loss 0.799, accuracy 0.760. The model is clearly overfitting: training accuracy is perfect while test accuracy lags far behind.
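
One common way to limit this overfitting (not used in the original post, shown here only as a sketch) is to stop training once the validation loss stops improving, using Keras' built-in EarlyStopping callback:

from keras.callbacks import EarlyStopping

# Stop once val_loss has not improved for 5 epochs and restore the best weights seen so far.
early_stop = EarlyStopping(monitor='val_loss', patience=5, restore_best_weights=True)
history = nn_model.fit(X_train, y_train, batch_size=batch_size, epochs=50,
                       verbose=2, validation_data=(X_test, y_test),
                       callbacks=[early_stop])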

  • Quick test on new sentences
text = ["i am not very good.", "i am very good."]
x = tokenizer.texts_to_sequences(text)
x = pad_sequences(x, maxlen=maxlen, padding='post')
pred = nn_model.predict(x)
print("Predicted class for '{}':".format(text[0]), 1 if pred[0][0] >= 0.5 else 0)
print("Predicted class for '{}':".format(text[1]), 1 if pred[1][0] >= 0.5 else 0)

Output:

Predicted class for 'i am not very good.': 0
Predicted class for 'i am very good.': 1