I am working with UTF-8 text. I want to tokenize it and then convert the result into a list. However, I get the following error.
import nltk, jieba, re, os
with open('file.txt') as f:
    tokenized_text = jieba.cut(f, cut_all=True)
type(tokenized_text)
generator
word_list = list(tokenized_text)
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-5-16b25477c71d> in <module>()
----> 1 list(new)
~/anaconda3/lib/python3.6/site-packages/jieba/__init__.py in cut(self, sentence, cut_all, HMM)
280 - HMM: Whether to use the Hidden Markov Model.
281 '''
--> 282 sentence = strdecode(sentence)
283
284 if cut_all:
~/anaconda3/lib/python3.6/site-packages/jieba/_compat.py in strdecode(sentence)
35 if not isinstance(sentence, text_type):
36 try:
---> 37 sentence = sentence.decode('utf-8')
38 except UnicodeDecodeError:
39 sentence = sentence.decode('gbk', 'ignore')
AttributeError: '_io.TextIOWrapper' object has no attribute 'decode'
I know the problem lies somewhere in the jieba package. I also tried changing the code to
with open('file.txt') as f:
    new = jieba.cut(f, cut_all=False)
but got the same result.
Posted on 2018-06-03 09:08:43
jieba.cut
accepts a string, not a file object. This is explained in the README. The traceback confirms it: strdecode tries to call .decode() on whatever it receives, and a file handle (_io.TextIOWrapper) has no such method. Read the file's contents into a string first, then pass that string to jieba.cut.
https://stackoverflow.com/questions/50662459