I am trying to run an NER classification task with DeBERTa, but I am stuck on a tokenizer error. Here is my code (my input sentences have to come pre-split word by word, "," included):
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained(model_checkpoint)
import transformers
assert isinstance(tokenizer, transformers.PreTrainedTokenizerFast)
tokenizer(["Hello", ",", "this", "is", "one", "sentence", "split", "into", "words", "."])我有这样的结果:
{'input_ids': [[1, 31414, 2], [1, 6, 2], [1, 9226, 2], [1, 354, 2], [1, 1264, 2], [1, 19530, 4086, 2], [1, 44154, 2], [1, 12473, 2], [1, 30938, 2], [1, 4, 2]], 'token_type_ids': [[0, 0, 0], [0, 0, 0], [0, 0, 0], [0, 0, 0], [0, 0, 0], [0, 0, 0, 0], [0, 0, 0], [0, 0, 0], [0, 0, 0], [0, 0, 0]], 'attention_mask': [[1, 1, 1], [1, 1, 1], [1, 1, 1], [1, 1, 1], [1, 1, 1], [1, 1, 1, 1], [1, 1, 1], [1, 1, 1], [1, 1, 1], [1, 1, 1]]}

Then I continue, but I hit an error:
tokenized_input = tokenizer(example["tokens"])
tokens = tokenizer.convert_ids_to_tokens(tokenized_input["input_ids"])
print(tokens)

TypeError: int() argument must be a string, a bytes-like object or a number, not 'list'

I think the cause is that I need the tokenization result in the following format (which is impossible here, because my sentence is already split into words, "," included):
tokenizer("Hello, this is one sentence!")
{'input_ids': [1, 31414, 6, 42, 16, 65, 3645, 328, 2], 'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0, 0], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1]}

So I tried the two calls below, but I am stuck and do not know how to proceed. There is very little documentation about DeBERTa online.
tokenizer(["Hello", ",", "this", "is", "one", "sentence", "split", "into", "words", "."], is_split_into_words=True)
AssertionError: You need to instantiate DebertaTokenizerFast with add_prefix_space=True to use it with pretokenized inputs.

tokenizer(["Hello", ",", "this", "is", "one", "sentence", "split", "into", "words", "."], is_split_into_words=True, add_prefix_space=True)

and the error is still the same. Many thanks!
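The two errors above have different causes. The TypeError comes from convert_ids_to_tokens expecting one flat list of ids: called without is_split_into_words=True, the tokenizer treats the word list as a batch of ten one-word sentences and returns a list of lists. The AssertionError means exactly what it says: add_prefix_space is an argument of the tokenizer constructor, so it belongs in from_pretrained rather than in the call. A minimal sketch of both fixes; the checkpoint name is an assumption, since model_checkpoint is never shown in the question:

from transformers import AutoTokenizer

# assumption: the question never names its checkpoint, so a DeBERTa
# checkpoint is used here purely for illustration
model_checkpoint = "microsoft/deberta-base"

# add_prefix_space must be set when the tokenizer is instantiated,
# exactly as the assertion message says -- not at call time
tokenizer = AutoTokenizer.from_pretrained(model_checkpoint, add_prefix_space=True)

# is_split_into_words=True makes the tokenizer treat the list as ONE
# pre-tokenized sentence, so input_ids comes back as a single flat list
encoded = tokenizer(
    ["Hello", ",", "this", "is", "one", "sentence", "split", "into", "words", "."],
    is_split_into_words=True,
)

# convert_ids_to_tokens now receives a flat list of ints and works
print(tokenizer.convert_ids_to_tokens(encoded["input_ids"]))

With a fast tokenizer, encoded.word_ids() then maps each sub-token back to its original word index, which is what NER label alignment needs.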
Posted on 2022-01-21 09:53:56
Let's give this a try:
# the ids from the single-sentence example, joined into one
# comma-separated string
input_ids = [1, 31414, 6, 42, 16, 65, 3645, 328, 2]
input_ids = ','.join(map(str, input_ids))

# the same join applied to the pre-split words, producing a single string
input_ids = ["Hello", ",", "this", "is", "one", "sentence", "split", "into", "words", "."]
input_ids = ','.join(map(str, input_ids))
input_ids

https://stackoverflow.com/questions/70799226
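Presumably the point of the join is to collapse the pre-split words back into a single string, so that the tokenizer sees one sequence and returns a flat encoding like the tokenizer("Hello, this is one sentence!") example above. A sketch of that round trip, assuming the same tokenizer object from the question:

# rebuild one string from the pre-split words (note: this inserts a ','
# between every pair of words, so the text is no longer the original)
words = ["Hello", ",", "this", "is", "one", "sentence", "split", "into", "words", "."]
sentence = ','.join(words)

# a single string yields one flat encoding, so convert_ids_to_tokens
# gets a list of ints and no longer raises the TypeError
encoded = tokenizer(sentence)
print(tokenizer.convert_ids_to_tokens(encoded["input_ids"]))

The trade-off is that the inserted commas change the text and break the one-to-one alignment between tokens and the original words, which matters when NER labels are attached per word; the add_prefix_space / is_split_into_words route sketched above keeps that alignment.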