When trying to load the punkt tokenizer...
import nltk.data
tokenizer = nltk.data.load('nltk:tokenizers/punkt/english.pickle')
...a LookupError was raised:
> LookupError:
> *********************************************************************
> Resource 'tokenizers/punkt/english.pickle' not found. Please use the NLTK Downloader to obtain the resource: nltk.download(). Searched in:
> - 'C:\\Users\\Martinos/nltk_data'
> - 'C:\\nltk_data'
> - 'D:\\nltk_data'
> - 'E:\\nltk_data'
> - 'E:\\Python26\\nltk_data'
> - 'E:\\Python26\\lib\\nltk_data'
> - 'C:\\Users\\Martinos\\AppData\\Roaming\\nltk_data'
> **********************************************************************
Posted on 2012-06-01 23:12:26
I had this same problem. Go into the Python shell and type:
>>> import nltk
>>> nltk.download()
An installation window will then appear. Go to the 'Models' tab and select 'punkt' from under the 'Identifier' column. Then click Download and it will install the necessary files. Then it should work!
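On a headless machine where the downloader window cannot open, the same resource can also be fetched non-interactively from the command line through NLTK's downloader module:
python -m nltk.downloader punkt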
Posted on 2014-12-30 21:50:10
The main reason you see this error is that nltk could not find the punkt package. Due to the size of the nltk suite, not all of the available packages are downloaded by default when it is installed.
You can download the punkt package like this:
import nltk
nltk.download('punkt')
from nltk import word_tokenize, sent_tokenize
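Once punkt is in place, a quick sanity check shows both tokenizers working (the sample string here is made up for illustration):
text = "Hello world. This is NLTK."  # hypothetical sample text
print(sent_tokenize(text))  # ['Hello world.', 'This is NLTK.']
print(word_tokenize(text))  # ['Hello', 'world', '.', 'This', 'is', 'NLTK', '.']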
This is also what the error message suggests in more recent versions:
LookupError:
**********************************************************************
Resource punkt not found.
Please use the NLTK Downloader to obtain the resource:
>>> import nltk
>>> nltk.download('punkt')
Searched in:
- '/root/nltk_data'
- '/usr/share/nltk_data'
- '/usr/local/share/nltk_data'
- '/usr/lib/nltk_data'
- '/usr/local/lib/nltk_data'
- '/usr/nltk_data'
- '/usr/lib/nltk_data'
- ''
**********************************************************************
If you do not pass any argument to the download function, it downloads all packages, i.e. chunkers, grammars, misc, sentiment, taggers, corpora, help, models, stemmers, and tokenizers.
nltk.download()
The function above saves the packages to a specific directory. You can find that directory location in the comments here: https://github.com/nltk/nltk/blob/67ad86524d42a3a86b1f5983868fd2990b59f1ba/nltk/downloader.py#L1051
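If you need the data somewhere other than the default location, download also accepts a download_dir argument, and the list nltk.data.path controls where NLTK searches for resources (the path below is illustrative):
import nltk
nltk.download('punkt', download_dir='/opt/nltk_data')  # illustrative target directory
nltk.data.path.append('/opt/nltk_data')  # make sure NLTK searches it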
Posted on 2015-07-17 16:00:46
Here is what just worked for me:
# Do this in a separate python interpreter session, since you only have to do it once
import nltk
nltk.download('punkt')
# Do this in your ipython notebook or analysis script
from nltk.tokenize import word_tokenize
sentences = [
    "Mr. Green killed Colonel Mustard in the study with the candlestick. Mr. Green is not a very nice fellow.",
    "Professor Plum has a green plant in his study.",
    "Miss Scarlett watered Professor Plum's green plant while he was away from his office last week."
]
sentences_tokenized = []
for s in sentences:
    sentences_tokenized.append(word_tokenize(s))
sentences_tokenized is a list of lists of tokens:
[['Mr.', 'Green', 'killed', 'Colonel', 'Mustard', 'in', 'the', 'study', 'with', 'the', 'candlestick', '.', 'Mr.', 'Green', 'is', 'not', 'a', 'very', 'nice', 'fellow', '.'],
['Professor', 'Plum', 'has', 'a', 'green', 'plant', 'in', 'his', 'study', '.'],
['Miss', 'Scarlett', 'watered', 'Professor', 'Plum', "'s", 'green', 'plant', 'while', 'he', 'was', 'away', 'from', 'his', 'office', 'last', 'week', '.']]
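Note that word_tokenize treats each string as a whole, so the first entry, which contains two sentences, stays in a single token list. To split into sentences first, use sent_tokenize, which is the function that actually depends on punkt (a minimal sketch):
from nltk.tokenize import sent_tokenize
print(sent_tokenize(sentences[0]))  # expected: two sentences, split after 'candlestick.'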
These sentences are taken from the example IPython notebook accompanying the book "Mining the Social Web, 2nd Edition".
https://stackoverflow.com/questions/4867197