I have been trying to use the pytorch-transformers pretrained models. Keeping everything at the defaults in the Colab template, I used torch.hub.load() with huggingface/pytorch-transformers and bert-base-uncased as the 'model'.
Code example:
import torch
model = torch.hub.load('huggingface/pytorch-transformers', 'model', 'bert-base-uncased') # Download model and configuration from S3 and cache.

I see this error:
Using cache found in /root/.cache/torch/hub/huggingface_pytorch-transformers_master
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-19-ad22a1a34951> in <module>()
1 import torch
----> 2 model = torch.hub.load('huggingface/pytorch-transformers', 'model', 'somethingelse') # Download model and configuration from S3 and cache.
3 model = torch.hub.load('huggingface/pytorch-transformers', 'model', './test/bert_model/') # E.g. model was saved using `save_pretrained('./test/saved_model/')`
4 model = torch.hub.load('huggingface/pytorch-transformers', 'model', 'bert-base-uncased', output_attentions=True) # Update configuration during loading
5 assert model.config.output_attentions == True
13 frames
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/utils/generic_utils.py in decorator(arg)
177 raise ValueError(
178 '%s has already been registered to %s' %
--> 179 (registered_name, _GLOBAL_CUSTOM_OBJECTS[registered_name]))
180
181 if arg in _GLOBAL_CUSTOM_NAMES:
ValueError: Custom>TFBertMainLayer has already been registered to <class 'src.transformers.modeling_tf_bert.TFBertMainLayer'>

I don't quite understand what is actually going on here.
Posted on 2020-11-23 00:35:45
If you only want to use the PyTorch side of transformers, you can fix this by either uninstalling TensorFlow from your environment, or commenting out the TensorFlow part of __init__.py in the transformers source (the block that starts with if is_tf_available():).
https://stackoverflow.com/questions/61382917
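For example, the first workaround in a Colab notebook could look roughly like this (a sketch only; the -y flag just skips pip's confirmation prompt, and the runtime usually needs to be restarted after the uninstall):

# Remove TensorFlow so transformers no longer tries to register its TF classes
# (run in a Colab cell, then restart the runtime)
!pip uninstall -y tensorflow

import torch
# With TensorFlow gone, the hub entry point only builds the PyTorch model
model = torch.hub.load('huggingface/pytorch-transformers', 'model', 'bert-base-uncased')

Of the two options, uninstalling TensorFlow is the less invasive one, since an edit to __init__.py would have to be redone after every reinstall or upgrade of the package.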