I am trying to evaluate a set of transformer models one after another on the same dataset to check which one performs better.
The model list is:
MODELS = [
    ('xlm-mlm-enfr-1024', "XLMModel"),
    ('distilbert-base-cased', "DistilBertModel"),
    ('bert-base-uncased', "BertModel"),
    ('roberta-base', "RobertaModel"),
    ("cardiffnlp/twitter-roberta-base-sentiment", "RobertaSentTW"),
    ('xlnet-base-cased', "XLNetModel"),
    # ('ctrl', "CTRLModel"),
    ('transfo-xl-wt103', "TransfoXLModel"),
    ('bert-base-cased', "BertModelUncased"),
    ('xlm-roberta-base', "XLMRobertaModel"),
    ('openai-gpt', "OpenAIGPTModel"),
    ('gpt2', "GPT2Model"),
]
They all work fine until the 'ctrl' model, which returns this error:
Asking to pad but the tokenizer does not have a padding token. Please select a token to use as 'pad_token' '(tokenizer.pad_token = tokenizer.eos_token e.g.)' or add a new pad token via 'tokenizer.add_special_tokens({'pad_token': '[PAD]'})'.
when tokenizing the sentences of my dataset.
The tokenization code is:
SEQ_LEN = MAX_LEN  # (50)

for pretrained_weights, model_name in MODELS:
    print("***************** STARTING ", model_name, ", weights ", pretrained_weights, "********* ")
    print("loading the tokenizer")
    tokenizer = AutoTokenizer.from_pretrained(pretrained_weights)
    print("creating the pretrained model")
    transformer_model = TFAutoModel.from_pretrained(pretrained_weights)
    print("applying the tokenizer to the dataset")

    ## APPLY THE TOKENIZER ##
    def tokenize(sentence):
        tokens = tokenizer.encode_plus(sentence, max_length=MAX_LEN,
                                       truncation=True, padding='max_length',
                                       add_special_tokens=True, return_attention_mask=True,
                                       return_token_type_ids=False, return_tensors='tf')
        return tokens['input_ids'], tokens['attention_mask']

    # initialize two arrays for input tensors
    Xids = np.zeros((len(df), SEQ_LEN))
    Xmask = np.zeros((len(df), SEQ_LEN))

    for i, sentence in enumerate(df['tweet']):
        Xids[i, :], Xmask[i, :] = tokenize(sentence)
        if i % 10000 == 0:
            print(i)  # do this so we can see some progress

    arr = df['label'].values  # take label column in df as array
    labels = np.zeros((arr.size, arr.max() + 1))  # initialize empty (all zero) label array
    labels[np.arange(arr.size), arr] = 1  # add ones in indices where we have a value
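The one-hot label construction at the end of the loop can be checked in isolation; here is a minimal sketch with a made-up label array (the values are assumptions, not the real dataset):

```python
import numpy as np

# hypothetical label column with three classes 0..2
arr = np.array([0, 2, 1, 2])

# same one-hot construction as in the code above
labels = np.zeros((arr.size, arr.max() + 1))
labels[np.arange(arr.size), arr] = 1

print(labels.tolist())
# [[1.0, 0.0, 0.0], [0.0, 0.0, 1.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
```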
I tried to define the padding token as the error message suggests, but then I got this error:
could not broadcast input array from shape (3,) into shape (50,)
on the line

Xids[i, :], Xmask[i, :] = tokenize(sentence)
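This broadcast failure can be reproduced without transformers at all: if the tokenizer returns a sequence shorter than SEQ_LEN (here 3 token ids instead of 50, i.e. no padding was actually applied), NumPy refuses the row assignment. A minimal sketch:

```python
import numpy as np

SEQ_LEN = 50
Xids = np.zeros((10, SEQ_LEN))

unpadded = np.array([5, 17, 42])  # only 3 token ids, no padding applied

try:
    Xids[0, :] = unpadded  # same assignment as in the loop above
except ValueError as e:
    print(e)  # e.g. could not broadcast input array from shape (3,) into shape (50,)
```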
I also tried this solution, but neither works.
If you have read this far, thank you.
Any help is appreciated.
Posted on 2022-01-01 17:01:26
You can add a [PAD] token via the add_special_tokens API:
tokenizer = AutoTokenizer.from_pretrained(pretrained_weights)
if tokenizer.pad_token is None:
    tokenizer.add_special_tokens({'pad_token': '[PAD]'})
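Alternatively, the error message itself suggests reusing the end-of-sequence token as the pad token, which avoids adding a brand-new token (a newly added [PAD] would typically also require resizing the model's embedding matrix with model.resize_token_embeddings(len(tokenizer))). The fallback pattern, sketched here with a stand-in object rather than a real tokenizer since model downloads are out of scope:

```python
class FakeTokenizer:
    """Stand-in for a transformers tokenizer that has no pad token."""
    def __init__(self):
        self.eos_token = '<eos>'
        self.pad_token = None

tokenizer = FakeTokenizer()

# fallback suggested by the error message: reuse eos as pad
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token

print(tokenizer.pad_token)  # <eos>
```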
Posted on 2022-07-27 11:22:10
https://stackoverflow.com/questions/70544129