
PyTorch-Transformers 1.0 released: support for six pretrained frameworks and 27 pretrained models

By AI科技评论 | Published 2019-07-22

What's supported

PyTorch-Transformers (formerly pytorch-pretrained-bert) is an open-source library of state-of-the-art pretrained models for natural language processing.

The library currently contains PyTorch implementations, pretrained model weights, usage scripts and conversion utilities for the following models:

1. Google's BERT

Paper: "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding"

2. OpenAI's GPT

Paper: "Improving Language Understanding by Generative Pre-Training"

3. OpenAI's GPT-2

Paper: "Language Models are Unsupervised Multitask Learners"

4. Google and CMU's Transformer-XL

Paper: "Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context"

5. Google and CMU's XLNet

Paper: "XLNet: Generalized Autoregressive Pretraining for Language Understanding"

6. Facebook's XLM

Paper: "Cross-lingual Language Model Pretraining"

These implementations have been tested on several datasets (see the example scripts) and match the performance of the original implementations: for example, an F1 score of about 93 on SQuAD for BERT with whole-word masking, an F1 score of about 88 on RocStories for OpenAI GPT, a perplexity of about 18.3 on WikiText-103 for Transformer-XL, and a Pearson correlation of 0.916 on STS-B for XLNet.
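For a quick sense of how the library is used, the snippet below loads a pretrained BERT model and extracts hidden states for a sentence. This is a minimal sketch built on the library's from_pretrained interface; it is illustrative rather than an official recipe, and the exact outputs may vary by version:

import torch
from pytorch_transformers import BertModel, BertTokenizer

# Download (and cache) the pretrained tokenizer and weights by shortcut name.
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertModel.from_pretrained('bert-base-uncased')
model.eval()

# Tokenize a sentence and map the tokens to vocabulary ids.
tokens = tokenizer.tokenize("PyTorch-Transformers bundles several pretrained models.")
input_ids = torch.tensor([tokenizer.convert_tokens_to_ids(tokens)])

# Models return tuples; the first element of BertModel's output is the
# sequence of hidden states from the last layer.
with torch.no_grad():
    last_hidden_states = model(input_ids)[0]
print(last_hidden_states.shape)  # (1, sequence_length, 768)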

27 pretrained models

The project provides 27 pretrained models. The full list, with a short description of each model, is given below.

| Architecture | Shortcut name | Details of the model |
|---|---|---|
| BERT | bert-base-uncased | 12-layer, 768-hidden, 12-heads, 110M parameters. Trained on lower-cased English text. |
| | bert-large-uncased | 24-layer, 1024-hidden, 16-heads, 340M parameters. Trained on lower-cased English text. |
| | bert-base-cased | 12-layer, 768-hidden, 12-heads, 110M parameters. Trained on cased English text. |
| | bert-large-cased | 24-layer, 1024-hidden, 16-heads, 340M parameters. Trained on cased English text. |
| | bert-base-multilingual-uncased | (Original, not recommended) 12-layer, 768-hidden, 12-heads, 110M parameters. Trained on lower-cased text in the top 102 languages with the largest Wikipedias (see details). |
| | bert-base-multilingual-cased | (New, recommended) 12-layer, 768-hidden, 12-heads, 110M parameters. Trained on cased text in the top 104 languages with the largest Wikipedias (see details). |
| | bert-base-chinese | 12-layer, 768-hidden, 12-heads, 110M parameters. Trained on cased Chinese Simplified and Traditional text. |
| | bert-base-german-cased | 12-layer, 768-hidden, 12-heads, 110M parameters. Trained on cased German text by Deepset.ai (see details on the deepset.ai website). |
| | bert-large-uncased-whole-word-masking | 24-layer, 1024-hidden, 16-heads, 340M parameters. Trained on lower-cased English text using Whole-Word-Masking (see details). |
| | bert-large-cased-whole-word-masking | 24-layer, 1024-hidden, 16-heads, 340M parameters. Trained on cased English text using Whole-Word-Masking (see details). |
| | bert-large-uncased-whole-word-masking-finetuned-squad | 24-layer, 1024-hidden, 16-heads, 340M parameters. The bert-large-uncased-whole-word-masking model fine-tuned on SQuAD (see details of fine-tuning in the example section). |
| | bert-large-cased-whole-word-masking-finetuned-squad | 24-layer, 1024-hidden, 16-heads, 340M parameters. The bert-large-cased-whole-word-masking model fine-tuned on SQuAD (see details of fine-tuning in the example section). |
| | bert-base-cased-finetuned-mrpc | 12-layer, 768-hidden, 12-heads, 110M parameters. The bert-base-cased model fine-tuned on MRPC (see details of fine-tuning in the example section). |
| GPT | openai-gpt | 12-layer, 768-hidden, 12-heads, 110M parameters. OpenAI GPT English model. |
| GPT-2 | gpt2 | 12-layer, 768-hidden, 12-heads, 117M parameters. OpenAI GPT-2 English model. |
| | gpt2-medium | 24-layer, 1024-hidden, 16-heads, 345M parameters. OpenAI's medium-sized GPT-2 English model. |
| Transformer-XL | transfo-xl-wt103 | 18-layer, 1024-hidden, 16-heads, 257M parameters. English model trained on wikitext-103. |
| XLNet | xlnet-base-cased | 12-layer, 768-hidden, 12-heads, 110M parameters. XLNet English model. |
| | xlnet-large-cased | 24-layer, 1024-hidden, 16-heads, 340M parameters. XLNet large English model. |
| XLM | xlm-mlm-en-2048 | 12-layer, 1024-hidden, 8-heads. XLM English model. |
| | xlm-mlm-ende-1024 | 12-layer, 1024-hidden, 8-heads. XLM English-German multi-language model. |
| | xlm-mlm-enfr-1024 | 12-layer, 1024-hidden, 8-heads. XLM English-French multi-language model. |
| | xlm-mlm-enro-1024 | 12-layer, 1024-hidden, 8-heads. XLM English-Romanian multi-language model. |
| | xlm-mlm-xnli15-1024 | 12-layer, 1024-hidden, 8-heads. XLM model pretrained with MLM on the 15 XNLI languages. |
| | xlm-mlm-tlm-xnli15-1024 | 12-layer, 1024-hidden, 8-heads. XLM model pretrained with MLM + TLM on the 15 XNLI languages. |
| | xlm-clm-enfr-1024 | 12-layer, 1024-hidden, 8-heads. XLM English model trained with CLM (Causal Language Modeling). |
| | xlm-clm-ende-1024 | 12-layer, 1024-hidden, 8-heads. XLM English-German multi-language model trained with CLM (Causal Language Modeling). |
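Every shortcut name in the table can be passed to the matching model and tokenizer classes via from_pretrained. As a hedged sketch (the class names follow the library's per-architecture naming scheme; the prompt text is arbitrary), loading the gpt2 checkpoint for language modeling looks roughly like this:

import torch
from pytorch_transformers import GPT2Tokenizer, GPT2LMHeadModel

tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
model = GPT2LMHeadModel.from_pretrained('gpt2')
model.eval()

# Encode a prompt and compute its language-modeling loss.
input_ids = torch.tensor([tokenizer.encode("PyTorch-Transformers provides 27 pretrained models.")])
with torch.no_grad():
    # When labels are supplied, the loss is the first element of the returned tuple.
    loss, logits = model(input_ids, labels=input_ids)[:2]
print(loss.item())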

Examples

BERT-base and BERT-large have 110M and 340M parameters respectively, and it is difficult to fine-tune them on a single GPU with the recommended batch size (32 in most cases) and still get good performance.

To help fine-tune these models, the fine-tuning scripts run_bert_classifier.py and run_bert_squad.py can activate several techniques: gradient accumulation, multi-GPU training, distributed training and 16-bit training. Note that to use distributed training and 16-bit training you need to install NVIDIA's apex extension.
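Gradient accumulation deserves a one-line explanation: the loss of each mini-batch is scaled by 1/N, gradients are summed over N mini-batches, and the optimizer steps once per window, which emulates an N-times larger batch on a single GPU. The generic PyTorch sketch below (with a toy model and dataset standing in for BERT and a GLUE task, so it is self-contained rather than the exact code of the example scripts) shows the pattern:

import torch
from torch.utils.data import DataLoader, TensorDataset

# Toy stand-ins so the sketch runs end to end.
model = torch.nn.Linear(16, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
dataset = TensorDataset(torch.randn(64, 16), torch.randint(0, 2, (64,)))
dataloader = DataLoader(dataset, batch_size=8)
loss_fn = torch.nn.CrossEntropyLoss()

accumulation_steps = 4  # emulates an effective batch size of 8 * 4 = 32

model.train()
optimizer.zero_grad()
for step, (features, labels) in enumerate(dataloader):
    loss = loss_fn(model(features), labels)
    (loss / accumulation_steps).backward()   # gradients are summed across mini-batches
    if (step + 1) % accumulation_steps == 0:
        optimizer.step()                     # one parameter update per accumulation window
        optimizer.zero_grad()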

The authors show several fine-tuning examples in the documentation, based on (and extending) the original BERT implementation (https://github.com/google-research/bert/); a minimal sketch of the first one follows the list:

  • a sequence-level classifier on nine different GLUE tasks;
  • a token-level classifier on the SQuAD question-answering dataset;
  • a sequence-level multiple-choice classifier on the SWAG classification corpus;
  • a BERT language model fine-tuned on another target corpus.
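To make the sequence-level classification example concrete, here is a minimal single-step sketch of sentence-pair classification in the spirit of MRPC. The class names come from the library, but the optimizer, hyperparameters and the num_labels keyword are illustrative assumptions and differ from what run_bert_classifier.py actually configures:

import torch
from pytorch_transformers import BertForSequenceClassification, BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-cased')
# num_labels is assumed to be forwarded to the model configuration.
model = BertForSequenceClassification.from_pretrained('bert-base-cased', num_labels=2)
optimizer = torch.optim.Adam(model.parameters(), lr=2e-5)

sentence_a = "He ate the apple."
sentence_b = "The apple was eaten by him."
label = 1  # 1 = paraphrase, 0 = not a paraphrase

# Build the [CLS] A [SEP] B [SEP] input and the matching segment ids.
tokens_a = tokenizer.tokenize(sentence_a)
tokens_b = tokenizer.tokenize(sentence_b)
tokens = ['[CLS]'] + tokens_a + ['[SEP]'] + tokens_b + ['[SEP]']
segment_ids = [0] * (len(tokens_a) + 2) + [1] * (len(tokens_b) + 1)

input_ids = torch.tensor([tokenizer.convert_tokens_to_ids(tokens)])
token_type_ids = torch.tensor([segment_ids])
labels = torch.tensor([label])

# One training step: the model returns (loss, logits, ...) when labels are given.
model.train()
loss, logits = model(input_ids, token_type_ids=token_type_ids, labels=labels)[:2]
loss.backward()
optimizer.step()
optimizer.zero_grad()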

We show only the GLUE results here.

These results were obtained with the uncased BERT base model on the development set of the GLUE benchmark. All experiments were run on a P100 GPU with a batch size of 32. Although fairly preliminary, the numbers look reasonable.

Installation

The library has been tested with Python 2.7 and 3.5+ (the examples are tested only with Python 3.5+) and with PyTorch 0.4.1 to 1.1.0.

Install with pip:

pip install pytorch-transformers

Run the tests with:

python -m pytest -sv ./pytorch_transformers/tests/
python -m pytest -sv ./examples/

Links:

Source code:

https://github.com/huggingface/pytorch-transformers

Documentation:

https://huggingface.co/pytorch-transformers/index.html
