The NLTK documentation is pretty bad for this integration. The steps I followed were:
I put the Stanford files in /home/me/stanford. Then, in the IPython console:
In [11]: import nltk
In [12]: nltk.__version__
Out[12]: '3.1'
In [13]: from nltk.tag import StanfordNERTagger
Then:
st = StanfordNERTagger('/home/me/stanford/stanford-postagger-full-2015-04-20.zip', '/home/me/stanford/stanford-spanish-corenlp-2015-01-08-models.jar')
But when I try to run it:
st.tag('Adolfo se la pasa corriendo'.split())
Error: no se ha encontrado o cargado la clase principal edu.stanford.nlp.ie.crf.CRFClassifier
---------------------------------------------------------------------------
OSError Traceback (most recent call last)
<ipython-input-14-0c1a96b480a6> in <module>()
----> 1 st.tag('Adolfo se la pasa corriendo'.split())
/home/nanounanue/.pyenv/versions/3.4.3/lib/python3.4/site-packages/nltk/tag/stanford.py in tag(self, tokens)
64 def tag(self, tokens):
65 # This function should return list of tuple rather than list of list
---> 66 return sum(self.tag_sents([tokens]), [])
67
68 def tag_sents(self, sentences):
/home/nanounanue/.pyenv/versions/3.4.3/lib/python3.4/site-packages/nltk/tag/stanford.py in tag_sents(self, sentences)
87 # Run the tagger and get the output
88 stanpos_output, _stderr = java(cmd, classpath=self._stanford_jar,
---> 89 stdout=PIPE, stderr=PIPE)
90 stanpos_output = stanpos_output.decode(encoding)
91
/home/nanounanue/.pyenv/versions/3.4.3/lib/python3.4/site-packages/nltk/__init__.py in java(cmd, classpath, stdin, stdout, stderr, blocking)
132 if p.returncode != 0:
133 print(_decode_stdoutdata(stderr))
--> 134 raise OSError('Java command failed : ' + str(cmd))
135
136 return (stdout, stderr)
OSError: Java command failed : ['/usr/bin/java', '-mx1000m', '-cp', '/home/nanounanue/Descargas/stanford-spanish-corenlp-2015-01-08-models.jar', 'edu.stanford.nlp.ie.crf.CRFClassifier', '-loadClassifier', '/home/nanounanue/Descargas/stanford-postagger-full-2015-04-20.zip', '-textFile', '/tmp/tmp6y169div', '-outputFormat', 'slashTags', '-tokenizerFactory', 'edu.stanford.nlp.process.WhitespaceTokenizer', '-tokenizerOptions', '"tokenizeNLs=false"', '-encoding', 'utf8']
The same thing happens with StanfordPOSTagger.
Note: I need this to be the Spanish version. Note: I am running this on Python 3.4.3.
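For context, the Spanish message "no se ha encontrado o cargado la clase principal" means Java could not find or load edu.stanford.nlp.ie.crf.CRFClassifier on the classpath; the models jar alone does not ship the classifier code. A quick way to check which classes a jar actually contains is to look inside it, since a jar is just a zip archive (a hypothetical stdlib-only helper, not part of NLTK):

```python
import zipfile

def jar_contains_class(jar_path, class_name):
    """Return True if the jar (a zip archive) ships the given Java class."""
    entry = class_name.replace('.', '/') + '.class'
    with zipfile.ZipFile(jar_path) as jar:
        return entry in jar.namelist()

# Example call (path is illustrative):
# jar_contains_class('/home/me/stanford/stanford-spanish-corenlp-2015-01-08-models.jar',
#                    'edu.stanford.nlp.ie.crf.CRFClassifier')
```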
Posted on 2015-12-02 10:16:07
Try:
# StanfordPOSTagger
from nltk.tag.stanford import StanfordPOSTagger
stanford_dir = '/home/me/stanford/stanford-postagger-full-2015-04-20/'
modelfile = stanford_dir + 'models/english-bidirectional-distsim.tagger'
jarfile = stanford_dir + 'stanford-postagger.jar'
st = StanfordPOSTagger(model_filename=modelfile, path_to_jar=jarfile)
# NERTagger
stanford_dir = '/home/me/stanford/stanford-ner-2015-04-20/'
jarfile = stanford_dir + 'stanford-ner.jar'
modelfile = stanford_dir + 'classifiers/english.all.3class.distsim.crf.ser.gz'
st = StanfordNERTagger(model_filename=modelfile, path_to_jar=jarfile)
For details on using NLTK with the Stanford tools, see: https://github.com/nltk/nltk/wiki/Installing-Third-Party-Software#stanford-tagger-ner-tokenizer-and-parser
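The paths above are illustrative, and a wrong path is a frequent cause of the opaque "Java command failed" OSError, so it can help to verify both files exist before constructing a tagger. A small sketch (the helper name is hypothetical):

```python
import os

def check_stanford_paths(model_filename, path_to_jar):
    """Fail early with a clear message instead of an opaque Java error."""
    for label, path in [('model', model_filename), ('jar', path_to_jar)]:
        if not os.path.isfile(path):
            raise FileNotFoundError('Stanford %s file not found: %s' % (label, path))
    return True

# check_stanford_paths(modelfile, jarfile)  # call before StanfordNERTagger(...)
```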
Note: NLTK interfaces with the individual Stanford tools. If you are using Stanford CoreNLP, it is better to follow @dimazest's instructions at http://www.eecs.qmul.ac.uk/~dm303/stanford-dependency-parser-nltk-and-anaconda.html
EDIT
As for Spanish NER tagging, I strongly suggest that you use Stanford CoreNLP (http://nlp.stanford.edu/software/corenlp.shtml) rather than the Stanford NER package (http://nlp.stanford.edu/software/CRF-NER.shtml), and that you follow @dimazest's solution for reading the JSON output.
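If you do go the CoreNLP route, the server returns JSON, and pulling (word, NER tag) pairs out of it is a short flatten. The sample dictionary below mirrors the general shape of CoreNLP's JSON output ("sentences" containing "tokens" with "word" and "ner" keys), but treat that shape as an assumption and check it against a real response from your server:

```python
# Sample document shaped like CoreNLP's JSON output (an assumption; verify
# against an actual server response before relying on these key names).
response = {
    "sentences": [
        {"tokens": [
            {"word": "Adolfo", "ner": "PERSON"},
            {"word": "corre", "ner": "O"},
        ]}
    ]
}

def ner_pairs(doc):
    """Flatten a CoreNLP-style document into (word, ner) tuples."""
    return [(tok["word"], tok["ner"])
            for sent in doc["sentences"]
            for tok in sent["tokens"]]

print(ner_pairs(response))  # [('Adolfo', 'PERSON'), ('corre', 'O')]
```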
Alternatively, if you must use NLTK, you can try following the cli instructions (disclaimer: that repo is not officially affiliated with NLTK). On the unix command line, do:
cd $HOME
wget http://nlp.stanford.edu/software/stanford-spanish-corenlp-2015-01-08-models.jar
unzip stanford-spanish-corenlp-2015-01-08-models.jar -d stanford-spanish
cp stanford-spanish/edu/stanford/nlp/models/ner/* /home/me/stanford/stanford-ner-2015-04-20/ner/classifiers/
Then in python:
# NERTagger
stanford_dir = '/home/me/stanford/stanford-ner-2015-04-20/'
jarfile = stanford_dir + 'stanford-ner.jar'
modelfile = stanford_dir + 'classifiers/spanish.ancora.distsim.s512.crf.ser.gz'
st = StanfordNERTagger(model_filename=modelfile, path_to_jar=jarfile)
Posted on 2015-12-02 09:52:12
The error lies in the arguments you are passing to the StanfordNERTagger constructor.
The first argument should be the model file, i.e. the classifier you are using. You can find that file inside the Stanford zip file. For example:
st = StanfordNERTagger('/home/me/stanford/stanford-postagger-full-2015-04-20/classifier/tagger.ser.gz', '/home/me/stanford/stanford-spanish-corenlp-2015-01-08-models.jar')
Posted on 2020-09-22 17:08:27
POS tagging
To use StanfordPOSTagger for Spanish from Python, you have to install a StanfordPOSTagger distribution that includes the Spanish models.
In this example I downloaded the tagger into the /content folder:
cd /content
wget https://nlp.stanford.edu/software/stanford-tagger-4.1.0.zip
unzip stanford-tagger-4.1.0.zip
After unzipping, I have the folder stanford-postagger-full-2020-08-06 in /content, so I can use the tagger as follows:
from nltk.tag.stanford import StanfordPOSTagger
stanford_dir = '/content/stanford-postagger-full-2020-08-06'
modelfile = f'{stanford_dir}/models/spanish-ud.tagger'
jarfile = f'{stanford_dir}/stanford-postagger.jar'
st = StanfordPOSTagger(model_filename=modelfile, path_to_jar=jarfile)
To check that everything works, we can run:
>st.tag(["Juan","Medina","es","un","ingeniero"])
>[('Juan', 'PROPN'),
('Medina', 'PROPN'),
('es', 'AUX'),
('un', 'DET'),
('ingeniero', 'NOUN')]
NER tagging
In this case, the NER core and the Spanish models need to be downloaded separately.
cd /content
#download NER core
wget https://nlp.stanford.edu/software/stanford-ner-4.0.0.zip
unzip stanford-ner-4.0.0.zip
#download spanish models
wget http://nlp.stanford.edu/software/stanford-spanish-corenlp-2018-02-27-models.jar
unzip stanford-spanish-corenlp-2018-02-27-models.jar -d stanford-spanish
#copy only the necessary files
cp stanford-spanish/edu/stanford/nlp/models/ner/* stanford-ner-4.0.0/classifiers/
rm -rf stanford-spanish stanford-ner-4.0.0.zip stanford-spanish-corenlp-2018-02-27-models.jar
To use it from python:
from nltk.tag.stanford import StanfordNERTagger
stanford_dir = '/content/stanford-ner-4.0.0/'
jarfile = f'{stanford_dir}/stanford-ner.jar'
modelfile = f'{stanford_dir}/classifiers/spanish.ancora.distsim.s512.crf.ser.gz'
st = StanfordNERTagger(model_filename=modelfile, path_to_jar=jarfile)
To check that everything works, we can run:
>st.tag(["Juan","Medina","es","un","ingeniero"])
>[('Juan', 'PERS'),
('Medina', 'PERS'),
('es', 'O'),
('un', 'O'),
('ingeniero', 'O')]
https://stackoverflow.com/questions/34037094
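The NER tagger labels tokens one by one, so a multi-word name like "Juan Medina" comes back as two separate tuples. If you need whole entities, contiguous tokens sharing the same non-'O' tag can be merged; a small sketch (a hypothetical helper, not part of NLTK):

```python
from itertools import groupby

def merge_entities(tagged):
    """Merge runs of identically tagged tokens, dropping the 'O' filler."""
    entities = []
    for tag, group in groupby(tagged, key=lambda pair: pair[1]):
        if tag != 'O':
            entities.append((' '.join(word for word, _ in group), tag))
    return entities

tagged = [('Juan', 'PERS'), ('Medina', 'PERS'), ('es', 'O'),
          ('un', 'O'), ('ingeniero', 'O')]
print(merge_entities(tagged))  # [('Juan Medina', 'PERS')]
```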