I am new to torchtext and have been learning the basics with the Multi30k dataset. While doing so, I wanted to try other datasets, such as IWSLT2017. I read the documentation, which shows how to load the data.
This is how I load the Multi30k dataset:
import spacy
# NOTE: on torchtext >= 0.9 the Field API lives under torchtext.legacy;
# on older versions use `from torchtext import data, datasets` instead
from torchtext.legacy import data, datasets

# tokenize_de / tokenize_en are assumed to be spaCy-based tokenizers;
# they are not defined in the original question
spacy_de = spacy.load("de_core_news_sm")
spacy_en = spacy.load("en_core_web_sm")

def tokenize_de(text):
    return [tok.text for tok in spacy_de.tokenizer(text)]

def tokenize_en(text):
    return [tok.text for tok in spacy_en.tokenizer(text)]

# creating the fields
SRC = data.Field(
    tokenize=tokenize_de,
    lower=True,
    init_token="<sos>",
    eos_token="<eos>"
)
TRG = data.Field(
    tokenize=tokenize_en,
    lower=True,
    init_token="<sos>",
    eos_token="<eos>"
)

### Splitting the sets
train_data, valid_data, test_data = datasets.Multi30k.splits(
    exts=('.de', '.en'),
    fields=(SRC, TRG)
)
When I run the following command:
print(vars(train_data.examples[0]))
I get:
{'src': ['zwei', 'junge', 'weiße', 'männer', 'sind', 'im', 'freien', 'in', 'der', 'nähe', 'vieler', 'büsche', '.'], 'trg': ['two', 'young', ',', 'white', 'males', 'are', 'outside', 'near', 'many', 'bushes', '.']}
My question is: how can I load IWSLT2017 so that calling print(vars(train_data.examples[0])) gives a similar result?
Here is what I have tried:
from torchtext.datasets import IWSLT2017
train_iter, valid_iter, test_iter = IWSLT2017(
    root='.data', split=('train', 'valid', 'test'), language_pair=('it', 'en')
)
src_sentence, tgt_sentence = next(train_iter)
It returns a tuple like this:
('Sono impressionato da questa conferenza, e voglio ringraziare tutti voi per i tanti, lusinghieri commenti, anche perché... Ne ho bisogno!!!\n',
'I have been blown away by this conference, and I want to thank all of you for the many nice comments\n')
My question is: how do I get from this step to a result like the following:
{'src': ['zwei', 'junge', 'weiße', 'männer', 'sind', 'im', 'freien', 'in', 'der', 'nähe', 'vieler', 'büsche', '.'], 'trg': ['two', 'young', ',', 'white', 'males', 'are', 'outside', 'near', 'many', 'bushes', '.']}
Any help would be greatly appreciated.
Posted on 2021-07-29 16:24:01
For this you can use, for example, spaCy's processing pipeline (nlp.pipe). An example is shown below:
import spacy
from torchtext.datasets import IWSLT2017

train_iter, valid_iter, test_iter = IWSLT2017(
    root='.data', split=('train', 'valid', 'test'), language_pair=('it', 'en')
)
src_sentence, tgt_sentence = next(train_iter)
print(src_sentence, tgt_sentence)

# tokenize the Italian source sentence with spaCy's pipeline
nlp = spacy.load("it_core_news_sm")
for doc in nlp.pipe([src_sentence]):
    print([tok.text for tok in doc])

# tokenize the English target sentence
nlp = spacy.load("en_core_web_sm")
for doc in nlp.pipe([tgt_sentence]):
    print([tok.text for tok in doc])
Output for the first example sentence:
Grazie mille, Chris. E’ veramente un grande onore venire su questo palco due volte. Vi sono estremamente grato.
Thank you so much, Chris. And it's truly a great honor to have the opportunity to come to this stage twice; I'm extremely grateful.
Output for the tokenized sentences:
['Grazie', 'mille', ',', 'Chris', '.', 'E', '’', 'veramente', 'un', 'grande', 'onore', 'venire', 'su', 'questo', 'palco', 'due', 'volte', '.', 'Vi', 'sono', 'estremamente', 'grato', '.', '\n']
['Thank', 'you', 'so', 'much', ',', 'Chris', '.', 'And', 'it', "'s", 'truly', 'a', 'great', 'honor', 'to', 'have', 'the', 'opportunity', 'to', 'come', 'to', 'this', 'stage', 'twice', ';', 'I', "'m", 'extremely', 'grateful', '.', '\n']
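To get all the way from the raw (src, tgt) tuple to the Multi30k-style dict shown in the question, you can wrap this tokenization in a small helper. Here is a minimal sketch (not part of the original answer): the to_example helper is hypothetical, and the lowercasing and .strip() are assumptions made to mirror the lower=True fields and the trailing newlines above.
import spacy
from torchtext.datasets import IWSLT2017

nlp_it = spacy.load("it_core_news_sm")
nlp_en = spacy.load("en_core_web_sm")

# hypothetical helper: turn one raw (src, tgt) string pair into a
# Multi30k-style dict of lowercased token lists
def to_example(src_sentence, tgt_sentence):
    return {
        'src': [tok.text.lower() for tok in nlp_it.tokenizer(src_sentence.strip())],
        'trg': [tok.text.lower() for tok in nlp_en.tokenizer(tgt_sentence.strip())],
    }

train_iter, valid_iter, test_iter = IWSLT2017(
    root='.data', split=('train', 'valid', 'test'), language_pair=('it', 'en')
)
print(to_example(*next(train_iter)))
# e.g. {'src': ['grazie', 'mille', ',', 'chris', '.', ...], 'trg': ['thank', 'you', ...]}
If you prefer to stay inside torchtext, get_tokenizer('spacy', language='it_core_news_sm') from torchtext.data.utils returns an equivalent tokenizer callable.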
https://stackoverflow.com/questions/68398231