I want to train a language model with NLTK in Python, but I have run into a few problems. First, I don't know why my words turn into characters when I write something like this:
s = "Natural-language processing (NLP) is an area of computer science " \
"and artificial intelligence concerned with the interactions " \
"between computers and human (natural) languages."
s = s.lower()
paddedLine = pad_both_ends(word_tokenize(s), n=2)
train, vocab = padded_everygram_pipeline(2, paddedLine)
print(list(vocab))
lm = MLE(2)
lm.fit(train, vocab)
The printed words are obviously wrong (I don't want to work with characters!). This is part of the output:
'<s>', '<', 's', '>', '</s>', '<s>', 'n', 'a', 't', 'u', 'r', 'a', 'l', '-', 'l', 'a', 'n', 'g', 'u', 'a', 'g', 'e', '</s>', '<s>', 'p', 'r', 'o', 'c', 'e', 's', 's', 'i', 'n', 'g', '</s>', '<s>', '(', '</s>', '<s>', 'n', 'l', 'p', '</s>', '<s>', ')', '</s>'
Why does my input turn into characters? I tried doing this another way, but had no luck:
paddedLine = pad_both_ends(word_tokenize(s), n=2)
#train, vocab = padded_everygram_pipeline(2, tokens)
#train = everygrams(paddedLine, max_len=2)
train = ngrams(paddedLine, 2)
vocab = Vocabulary(paddedLine, unk_cutoff=1)
print(list(train))
lm = MLE(2)
lm.fit(train, vocab)
When I run this code, my train is completely empty! It prints "[]"!! The weird thing is that when I comment out this line:
vocab = Vocabulary(paddedLine, unk_cutoff=1)
then my training data comes out correctly, as shown below:
[('<s>', 'natural-language'), ('natural-language', 'processing'), ('processing', '('), ('(', 'nlp'), ('nlp', ')'), (')', 'is'), ('is', 'an'), ('an', 'area'), ('area', 'of'), ('of', 'computer'), ('computer', 'science'), ('science', 'and'), ('and', 'artificial'), ('artificial', 'intelligence'), ('intelligence', 'concerned'), ('concerned', 'with'), ('with', 'the'), ('the', 'interactions'), ('interactions', 'between'), ('between', 'computers'), ('computers', 'and'), ('and', 'human'), ('human', '('), ('(', 'natural'), ('natural', ')'), (')', 'languages'), ('languages', '.'), ('.', '</s>')]
What is wrong with it? By the way, I'm not an expert in Python or NLTK; this is my first experience with them. My next question is: how do I apply Kneser-Ney smoothing or add-one smoothing to the trained language model? And is the way I'm training the language model correct? My training data is simply:
"Natural-language processing (NLP) is an area of computer science " \
"and artificial intelligence concerned with the interactions " \
"between computers and human (natural) languages."谢谢。
Posted on 2019-03-03 01:30:34
The padded_everygram_pipeline function expects a list of lists of tokens (one inner list per sentence); passing it a flat token sequence makes it treat each word as a "sentence" and each character as a token, which is why you saw characters. You should fix your first snippet as shown below. Also, Python generators are lazy sequences and can only be iterated once: in your second snippet, Vocabulary(paddedLine, ...) consumed the paddedLine generator, leaving nothing for ngrams to produce, which is why train was empty.
from nltk import word_tokenize
from nltk.lm import MLE
from nltk.lm.preprocessing import pad_both_ends, padded_everygram_pipeline
s = "Natural-language processing (NLP) is an area of computer science " \
"and artificial intelligence concerned with the interactions " \
"between computers and human (natural) languages."
s = s.lower()
paddedLine = [list(pad_both_ends(word_tokenize(s), n=2))]
train, vocab = padded_everygram_pipeline(2, paddedLine)
lm = MLE(2)
lm.fit(train, vocab)
print(lm.counts)
https://stackoverflow.com/questions/54959340
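The empty train in the second snippet is generator exhaustion in action. A minimal, NLTK-free sketch of the same pitfall:

```python
# A Python generator yields its items once; a second pass produces nothing.
# This is exactly what happened when Vocabulary(paddedLine, ...) iterated
# paddedLine before ngrams/list(train) got a chance to.
def tokens():
    yield from ["<s>", "natural-language", "processing", "</s>"]

g = tokens()
first_pass = list(g)   # consumes the generator
second_pass = list(g)  # generator is exhausted, so this is []
print(first_pass)
print(second_pass)
```

Materializing the generator with `list(...)` once, and reusing that list, avoids the problem.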
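As for the smoothing question: nltk.lm also provides Laplace (add-one) and KneserNeyInterpolated models, which share MLE's fit() interface, so the same pipeline works. A sketch for bigrams; the whitespace split is a stand-in for word_tokenize so the example needs no punkt download, and discount=0.75 is just an illustrative hyperparameter choice:

```python
from nltk.lm import KneserNeyInterpolated, Laplace
from nltk.lm.preprocessing import padded_everygram_pipeline

s = ("natural-language processing (nlp) is an area of computer science "
     "and artificial intelligence concerned with the interactions "
     "between computers and human (natural) languages.")
# One inner list per sentence; swap in word_tokenize(s) in real code.
tokenized = [s.split()]

# Add-one (Laplace) smoothing.
train, vocab = padded_everygram_pipeline(2, tokenized)
lm_laplace = Laplace(2)
lm_laplace.fit(train, vocab)

# Kneser-Ney: build FRESH generators, since the previous pair is exhausted.
train, vocab = padded_everygram_pipeline(2, tokenized)
lm_kn = KneserNeyInterpolated(2, discount=0.75)
lm_kn.fit(train, vocab)

# Smoothed P(processing | natural-language) under each model.
print(lm_laplace.score("processing", ["natural-language"]))
print(lm_kn.score("processing", ["natural-language"]))
```

Note that padded_everygram_pipeline is called again before each fit(); reusing the exhausted generators from the first model would silently train the second one on nothing.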