Initially, I fine-tuned a BERT-based model on a text-classification dataset, using the `BertForSequenceClassification` class:

```python
model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased",         # use the 12-layer BERT model
    output_hidden_states=False,  # whether the model returns all hidden states
)
```
Now I want to do Chinese text similarity with Hugging Face:

```python
tokenizer = BertTokenizer.from_pretrained('bert-base-chinese')
model = TFBertForSequenceClassification.from_pretrained('bert-base-chinese')
```

It doesn't work; the system reports: `Some weights of the model checkpoint at bert-base-chinese were not …`