
Chinese NLP Notes: 13. Building a Simple Chatbot with Keras

Author: 杨熹 · Published 2019-02-20

Step 1. Import the required packages:

from keras.models import Model
from keras.layers import Input, LSTM, Dense
import numpy as np
import pandas as pd  # imported in the original but not used below
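
A portability note (my addition, not from the original): this code targets the standalone keras package. In a TensorFlow 2 environment the same classes live under tf.keras, so the equivalent imports would be:

# Equivalent imports under TensorFlow 2 (assumes tensorflow >= 2 is installed).
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Input, LSTM, Dense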

Step 2. Define the model hyperparameters, the number of epochs, and the corpus path:

# Batch size
batch_size = 32
# Number of training epochs
epochs = 100
# Latent dimensionality of the encoding space
latent_dim = 256
# Number of samples to train on
num_samples = 5000
# Path to the corpus
data_path = 'D://nlp//ch13//files.txt'
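
The original never shows the corpus itself, but the parsing code in Step 3 (line.split('\t')) implies its format: one question-answer pair per line, separated by a single tab, with answers truncated to 100 characters. A made-up illustration of what files.txt might contain:

你好	您好,很高兴见到你
今天天气怎么样	挺好的,适合出门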

Step 3. Vectorize the corpus:

# Vectorize the data.
input_texts = []
target_texts = []
input_characters = set()
target_characters = set()

with open(data_path, 'r', encoding='utf-8') as f:
    lines = f.read().split('\n')
for line in lines[: min(num_samples, len(lines) - 1)]:
    input_text, target_text = line.split('\t')
    # We use "tab" as the "start sequence" character
    # for the targets, and "\n" as "end sequence" character.
    target_text = target_text[0:100]
    target_text = '\t' + target_text + '\n'
    input_texts.append(input_text)
    target_texts.append(target_text)
    for char in input_text:
        if char not in input_characters:
            input_characters.add(char)
    for char in target_text:
        if char not in target_characters:
            target_characters.add(char)

input_characters = sorted(list(input_characters))
target_characters = sorted(list(target_characters))
num_encoder_tokens = len(input_characters)
num_decoder_tokens = len(target_characters)
max_encoder_seq_length = max([len(txt) for txt in input_texts])
max_decoder_seq_length = max([len(txt) for txt in target_texts])

print('Number of samples:', len(input_texts))
print('Number of unique input tokens:', num_encoder_tokens)
print('Number of unique output tokens:', num_decoder_tokens)
print('Max sequence length for inputs:', max_encoder_seq_length)
print('Max sequence length for outputs:', max_decoder_seq_length)

input_token_index = dict(
    [(char, i) for i, char in enumerate(input_characters)])
target_token_index = dict(
    [(char, i) for i, char in enumerate(target_characters)])

encoder_input_data = np.zeros(
    (len(input_texts), max_encoder_seq_length, num_encoder_tokens), dtype='float32')
decoder_input_data = np.zeros(
    (len(input_texts), max_decoder_seq_length, num_decoder_tokens), dtype='float32')
decoder_target_data = np.zeros(
    (len(input_texts), max_decoder_seq_length, num_decoder_tokens), dtype='float32')

for i, (input_text, target_text) in enumerate(zip(input_texts, target_texts)):
    for t, char in enumerate(input_text):
        encoder_input_data[i, t, input_token_index[char]] = 1.
    for t, char in enumerate(target_text):
        # decoder_target_data is ahead of decoder_input_data by one timestep.
        decoder_input_data[i, t, target_token_index[char]] = 1.
        if t > 0:
            # decoder_target_data will be ahead by one timestep
            # and will not include the start character.
            decoder_target_data[i, t - 1, target_token_index[char]] = 1.
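
A quick sanity check (my addition, illustrative only): by construction, the target tensor is the decoder input shifted left by one timestep, which can be verified directly:

# The targets equal the decoder inputs shifted by one timestep
# (the start character '\t' is dropped from the targets).
assert np.array_equal(decoder_input_data[0, 1:], decoder_target_data[0, :-1])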

Step 4. Define, train, and save the LSTM seq2seq model:

encoder_inputs = Input(shape=(None, num_encoder_tokens))
encoder = LSTM(latent_dim, return_state=True)
encoder_outputs, state_h, state_c = encoder(encoder_inputs)
# We discard `encoder_outputs` and only keep the states.
encoder_states = [state_h, state_c]

# Set up the decoder, using `encoder_states` as its initial state.
decoder_inputs = Input(shape=(None, num_decoder_tokens))
decoder_lstm = LSTM(latent_dim, return_sequences=True, return_state=True)
decoder_outputs, _, _ = decoder_lstm(decoder_inputs,
                                     initial_state=encoder_states)
decoder_dense = Dense(num_decoder_tokens, activation='softmax')
decoder_outputs = decoder_dense(decoder_outputs)

# Define the model.
model = Model([encoder_inputs, decoder_inputs], decoder_outputs)

# Train.
model.compile(optimizer='rmsprop', loss='categorical_crossentropy')
model.fit([encoder_input_data, decoder_input_data], decoder_target_data,
          batch_size=batch_size,
          epochs=epochs,
          validation_split=0.2)

# Save the model.
model.save('s2s.h5')
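
To reuse the trained weights later without retraining, the saved file can be reloaded with Keras's standard load_model (a usage note, not in the original):

from keras.models import load_model

# Restore the trained seq2seq model from disk.
model = load_model('s2s.h5')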

Step 5. Build the inference-stage encoder and decoder models for the seq2seq sampling loop:

encoder_model = Model(encoder_inputs, encoder_states)

decoder_state_input_h = Input(shape=(latent_dim,))
decoder_state_input_c = Input(shape=(latent_dim,))
decoder_states_inputs = [decoder_state_input_h, decoder_state_input_c]
decoder_outputs, state_h, state_c = decoder_lstm(
    decoder_inputs, initial_state=decoder_states_inputs)
decoder_states = [state_h, state_c]
decoder_outputs = decoder_dense(decoder_outputs)
decoder_model = Model(
    [decoder_inputs] + decoder_states_inputs,
    [decoder_outputs] + decoder_states)
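
At inference time the encoder turns a question into its two LSTM state vectors (h and c), which then seed the decoder. An illustrative shape check (my addition, not from the original):

# encoder_model maps one one-hot sample to [state_h, state_c],
# each of shape (1, latent_dim).
states = encoder_model.predict(encoder_input_data[:1])
print([s.shape for s in states])  # expected: [(1, 256), (1, 256)]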

Step 6. Build reverse-lookup dictionaries that map token indices back to characters:

reverse_input_char_index = dict(
    (i, char) for char, i in input_token_index.items())
reverse_target_char_index = dict(
    (i, char) for char, i in target_token_index.items())
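
A one-line round trip (illustrative) confirms each mapping is the inverse of its token index:

# index -> character -> index round trip on the start character.
assert reverse_target_char_index[target_token_index['\t']] == '\t'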

Step 7. Define the decoding function: first run the encoder model on the input, then decode the result into Chinese characters one at a time:

def decode_sequence(input_seq):
    # Encode the input as state vectors.
    states_value = encoder_model.predict(input_seq)

    # Generate empty target sequence of length 1.
    target_seq = np.zeros((1, 1, num_decoder_tokens))
    # Populate the first character of target sequence with the start character.
    target_seq[0, 0, target_token_index['\t']] = 1.

    # Sampling loop for a batch of sequences
    # (to simplify, here we assume a batch of size 1).
    stop_condition = False
    decoded_sentence = ''
    while not stop_condition:
        output_tokens, h, c = decoder_model.predict(
            [target_seq] + states_value)

        # Sample a token.
        sampled_token_index = np.argmax(output_tokens[0, -1, :])
        sampled_char = reverse_target_char_index[sampled_token_index]
        decoded_sentence += sampled_char

        # Exit condition: either hit max length or find the stop character.
        if (sampled_char == '\n' or
                len(decoded_sentence) > max_decoder_seq_length):
            stop_condition = True

        # Update the target sequence (of length 1).
        target_seq = np.zeros((1, 1, num_decoder_tokens))
        target_seq[0, 0, sampled_token_index] = 1.
        # Update the states.
        states_value = [h, c]

    return decoded_sentence
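
The loop above decodes greedily: each step takes the argmax of the output distribution. A common variant, not used in the original, is temperature sampling; a minimal sketch of a helper that could replace the argmax line:

# Hypothetical alternative to np.argmax: sample from the rescaled distribution.
def sample_with_temperature(probs, temperature=0.7):
    logits = np.log(probs + 1e-9) / temperature
    exp = np.exp(logits - np.max(logits))
    p = exp / exp.sum()
    return int(np.random.choice(len(p), p=p))

Lower temperatures approach greedy decoding; higher ones give more varied (and noisier) replies.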

Step 8. Model prediction

First, define a prediction function that one-hot encodes a question and passes it to decode_sequence:

def predict_ans(question):
    # One-hot encode the question as a single sample; characters never seen
    # in the training inputs are skipped.
    inseq = np.zeros((1, max_encoder_seq_length, num_encoder_tokens), dtype='float32')
    for t, char in enumerate(question):
        if char in input_token_index:
            inseq[0, t, input_token_index[char]] = 1.
    return decode_sequence(inseq)

Then run a prediction:

print('Decoded sentence:', predict_ans("挖掘机坏了怎么办"))

Reference:

《中文自然语言处理入门实战》
