I am looking for a way to use the Stanford word tokenizer in NLTK, because when I compare the results of the Stanford and NLTK word tokenizers, they differ. I assume there is a way to use the Stanford tokenizer, just as we can use the Stanford POS tagger and NER in NLTK.
Is it possible to use the Stanford tokenizer without running a server?
Thanks
Posted on 2017-12-04 05:44:48
Outside of NLTK, you can use the Python interface recently released by Stanford NLP.
Installation
cd ~
wget http://nlp.stanford.edu/software/stanford-corenlp-full-2016-10-31.zip
unzip stanford-corenlp-full-2016-10-31.zip && cd stanford-corenlp-full-2016-10-31
pip3 install -U https://github.com/stanfordnlp/python-stanford-corenlp/archive/master.zip

Set up the environment
# On Mac
export CORENLP_HOME=/Users/<username>/stanford-corenlp-full-2016-10-31/
# On linux
export CORENLP_HOME=/home/<username>/stanford-corenlp-full-2016-10-31/
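If it is more convenient, the CORENLP_HOME variable can presumably also be set from inside Python before the client is created. A minimal sketch, assuming the path below matches wherever CoreNLP was unzipped (it is only an example):

import os

# Tell the corenlp client where the unzipped CoreNLP directory lives (example path).
os.environ['CORENLP_HOME'] = os.path.expanduser('~/stanford-corenlp-full-2016-10-31/')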
In Python:
>>> import corenlp
>>> text = "Chris wrote a simple sentence that he parsed with Stanford CoreNLP."
>>> with corenlp.client.CoreNLPClient(annotators="tokenize ssplit".split()) as client:
...     ann = client.annotate(text)
...
[pool-1-thread-4] INFO CoreNLP - [/0:0:0:0:0:0:0:1:55475] API call w/annotators tokenize,ssplit
Chris wrote a simple sentence that he parsed with Stanford CoreNLP.
>>> sentence = ann.sentence[0]
>>>
>>> [token.word for token in sentence.token]
['Chris', 'wrote', 'a', 'simple', 'sentence', 'that', 'he', 'parsed', 'with', 'Stanford', 'CoreNLP', '.']
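If the input contains more than one sentence, the same annotation object can be walked sentence by sentence. A small follow-on sketch, reusing the `ann` object and the same fields shown above:

# Collect the tokens of every sentence in the annotation, not just the first one.
all_tokens = [token.word for sentence in ann.sentence for token in sentence.token]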
Posted on 2017-12-04 05:33:33

Note: this solution only applies under the following conditions. First, Java 8 must be properly installed, and Stanford CoreNLP must work from the command line; the Stanford interface in NLTK v3.2.5 is then used as shown below.
Note: you must start the CoreNLP server in a terminal before using the new CoreNLP API in NLTK.
In the terminal:
wget http://nlp.stanford.edu/software/stanford-corenlp-full-2016-10-31.zip
unzip stanford-corenlp-full-2016-10-31.zip && cd stanford-corenlp-full-2016-10-31
java -mx4g -cp "*" edu.stanford.nlp.pipeline.StanfordCoreNLPServer \
-preload tokenize,ssplit,pos,lemma,parse,depparse \
-status_port 9000 -port 9000 -timeout 15000

In Python:
>>> from nltk.parse.corenlp import CoreNLPParser
>>> st = CoreNLPParser()
>>> tokenized_sent = list(st.tokenize('What is the airspeed of an unladen swallow ?'))
>>> tokenized_sent
['What', 'is', 'the', 'airspeed', 'of', 'an', 'unladen', 'swallow', '?']
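If the server runs somewhere other than the default localhost:9000, the URL can be passed to CoreNLPParser explicitly. A minimal sketch, assuming a server is already listening on the given address (port 9001 here is just an example):

from nltk.parse.corenlp import CoreNLPParser

# Point the tokenizer at an explicitly specified CoreNLP server (example port).
st = CoreNLPParser(url='http://localhost:9001')
tokens = list(st.tokenize('What is the airspeed of an unladen swallow ?'))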
https://stackoverflow.com/questions/47624742