I am trying to write a program that, given a list of sentences, returns the most likely one. I want to use GPT-2, but I am very new to it (I really don't know how to use it). My plan is to find the probability of each word given the words before it, and multiply all those probabilities together to get the overall probability of the sentence, but I don't know how to find the probability of a word given the preceding words. Here is my (pseudo)code:
sentences = # my list of sentences
max_prob = 0
best_sentence = sentences[0]
for sentence in sentences:
    prob = 1  # probability of that sentence
    words = sentence.split()
    for idx, word in enumerate(words[1:], start=1):
        prob *= probability(word, " ".join(words[:idx]))  # this is where I need help
    if prob > max_prob:
        max_prob = prob
        best_sentence = sentence
print(best_sentence)
Can anyone help me out?
Posted on 2020-12-11 05:58:03
from transformers import GPT2LMHeadModel, GPT2Tokenizer
import numpy as np

model = GPT2LMHeadModel.from_pretrained('gpt2')
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')

def score(tokens_tensor):
    # Passing the input as labels makes the model return the average
    # cross-entropy loss; exponentiating it gives the perplexity.
    loss = model(tokens_tensor, labels=tokens_tensor)[0]
    return np.exp(loss.cpu().detach().numpy())

texts = ['i would like to thank you mr chairman',
         'i would liking to thanks you mr chair in',
         'thnks chair']
for text in texts:
    tokens_tensor = tokenizer.encode(text, add_special_tokens=False, return_tensors="pt")
    print(text, score(tokens_tensor))
This code may be an example of what you are looking for. You feed the model a list of sentences, and it scores each one, but note that a lower score is better.
The output of the code above is:
i would like to thank you mr chairman 122.3066
i would liking to thanks you mr chair in 1183.7637
thnks chair 14135.129
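Since lower perplexity means the model finds the sentence more likely, picking the best sentence is just taking the minimum score. A minimal sketch of that last step, using the scores printed above as stand-in values so it runs without downloading the model (with the real model you would call the `score` function instead):

```python
# Stand-in scores copied from the GPT-2 output above; with the real
# model you would compute these via score(tokens_tensor) per sentence.
scores = {
    'i would like to thank you mr chairman': 122.3066,
    'i would liking to thanks you mr chair in': 1183.7637,
    'thnks chair': 14135.129,
}

# Lower perplexity = more likely under the model, so take the minimum.
best_sentence = min(scores, key=scores.get)
print(best_sentence)  # i would like to thank you mr chairman
```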
Posted on 2021-03-28 01:15:05
I wrote a set of functions that do exactly what you want. Recall that GPT-2 parses its input into tokens (not words): the last word in 'Joe flicked the grasshopper' is actually three tokens: 'gras', 'ho', and 'pper'. The cloze_finalword function takes this into account and computes the probabilities of all tokens (conditioned on the tokens appearing before them). You can adapt part of this function so that it returns what you are looking for. I hope you find the code useful!
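In the same spirit, the conditional probability of each token given its prefix can be read off the model's output with a log-softmax over the vocabulary, and summing log-probabilities gives the log of the product the question asks for. A minimal, dependency-free sketch with made-up logits and a tiny 4-token vocabulary (with a real model, the rows of `logits` would come from the model's output for each position):

```python
import math

def log_softmax(logits):
    # Numerically stable log-softmax over a list of raw scores.
    m = max(logits)
    log_z = m + math.log(sum(math.exp(x - m) for x in logits))
    return [x - log_z for x in logits]

# Made-up logits: row i holds the scores the model assigns to each
# vocabulary entry as the prediction for token i+1 of the sentence.
logits = [
    [2.0, 0.5, -1.0, 0.0],   # predicts the token at position 1
    [0.1, 3.0,  0.2, -0.5],  # predicts the token at position 2
]
token_ids = [3, 0, 1]  # the sentence; token_ids[0] is never predicted

# log P(sentence) = sum over positions of log P(token | prefix);
# summing logs avoids floating-point underflow for long sentences.
sentence_log_prob = sum(
    log_softmax(row)[tok] for row, tok in zip(logits, token_ids[1:])
)
print(sentence_log_prob)
```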
https://stackoverflow.com/questions/63543006