I want to create spaCy nlp objects from 250,000 strings stored in a Pandas DataFrame column. Is there a way to optimize the apply call below, i.e., to vectorize the calls to the spaCy nlp object?
import pandas as pd
import spacy
nlp = spacy.load("en_core_web_sm")
df = pd.DataFrame({"id": [1, 2, 3], "text": ["this is a text", "another easy one", "oh you come on"]})
df["nlp"] = df.apply(lambda x: nlp(x.text), axis=1)发布于 2020-08-08 12:21:19
Based on my test on a corpus of 29,071 strings, using nlp.pipe is roughly four times faster than apply:
import pandas as pd
import spacy
from time import time
from nltk.corpus import webtext  # requires nltk.download('webtext') on first use
nlp = spacy.load("en_core_web_sm")
texts = webtext.raw().split('\n')
df = pd.DataFrame({"text":texts})
# apply method: one nlp() call per row
start = time()
df["nlp"] = df.apply(lambda x: nlp(x.text), axis=1)
end = time()
print(end - start)
# batch method: stream all texts through nlp.pipe in batches
start = time()
df["nlp"] = [doc for doc in nlp.pipe(df["text"].tolist())]
end = time()
print(end - start)
#print(Counter([tok.dep_ for tok in doc if tok.pos_=='VERB']))

Output (times in seconds):
apply method: 209.74427151679993
batch method: 51.40181493759155
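nlp.pipe gets its speedup from batching texts through the pipeline instead of invoking nlp once per row. If you need more throughput on a 250k-row corpus, pipe also takes batch_size and n_process arguments, and spacy.load can disable pipeline components you don't use. A minimal sketch, assuming you only need tagging and parsing (the values 256 and 2 are illustrative, not tuned):

import pandas as pd
import spacy

# Disabling unused components (here the named-entity recognizer) cuts
# the work done per document.
nlp = spacy.load("en_core_web_sm", disable=["ner"])

df = pd.DataFrame({"text": ["this is a text", "another easy one", "oh you come on"]})

# nlp.pipe accepts any iterable of strings; batch_size controls how many
# texts are processed per batch, and n_process spreads batches across
# worker processes. Benchmark both on your own corpus and hardware.
df["nlp"] = list(nlp.pipe(df["text"], batch_size=256, n_process=2))

Even with n_process > 1, nlp.pipe yields Docs in input order, so the resulting list lines up with the DataFrame rows.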
Source: https://stackoverflow.com/questions/63057742