I'm trying to use Python's TfidfVectorizer to transform a corpus of text. However, when I try to fit_transform it, I get ValueError: empty vocabulary; perhaps the documents only contain stop words.
In [69]: TfidfVectorizer().fit_transform(smallcorp)
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-69-ac16344f3129> in <module>()
----> 1 TfidfVectorizer().fit_transform(smallcorp)
/Users/maxsong/anaconda/lib/python2.7/site-packages/sklearn/feature_extraction/text.pyc in fit_transform(self, raw_documents, y)
1217 vectors : array, [n_samples, n_features]
1218 """
-> 1219 X = super(TfidfVectorizer, self).fit_transform(raw_documents)
1220 self._tfidf.fit(X)
1221 # X is already a transformed view of raw_documents so
/Users/maxsong/anaconda/lib/python2.7/site-packages/sklearn/feature_extraction/text.pyc in fit_transform(self, raw_documents, y)
778 max_features = self.max_features
779
--> 780 vocabulary, X = self._count_vocab(raw_documents, self.fixed_vocabulary)
781 X = X.tocsc()
782
/Users/maxsong/anaconda/lib/python2.7/site-packages/sklearn/feature_extraction/text.pyc in _count_vocab(self, raw_documents, fixed_vocab)
725 vocabulary = dict(vocabulary)
726 if not vocabulary:
--> 727 raise ValueError("empty vocabulary; perhaps the documents only"
728 " contain stop words")
729
ValueError: empty vocabulary; perhaps the documents only contain stop words
I read the SO question here: Problems using a custom vocabulary for TfidfVectorizer scikit-learn, and tried ogrisel's suggestion of using build_analyzer() to check the results of the text analysis step, which seems to work as expected. Snippet below:
In [68]: TfidfVectorizer().build_analyzer()(smallcorp)
Out[68]:
[u'due',
u'to',
u'lack',
u'of',
u'personal',
u'biggest',
u'education',
u'and',
u'husband',
u'to',
Is there something else I'm doing wrong? The corpus I'm feeding it is just one giant long string punctuated by newlines.
Thanks!
Posted on 2014-01-05 13:06:10
I guess it's because you only have one string. Try splitting it into a list of strings, for example:
In [51]: smallcorp
Out[51]: 'Ah! Now I have done Philosophy,\nI have finished Law and Medicine,\nAnd sadly even Theology:\nTaken fierce pains, from end to end.\nNow here I am, a fool for sure!\nNo wiser than I was before:'
In [52]: tf = TfidfVectorizer()
In [53]: tf.fit_transform(smallcorp.split('\n'))
Out[53]:
<6x28 sparse matrix of type '<type 'numpy.float64'>'
with 31 stored elements in Compressed Sparse Row format>
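A minimal sketch of why the bare string trips this error, and of an alternative in case you actually want the whole text treated as a single document (the smallcorp literal is just copied from the snippet above):

from sklearn.feature_extraction.text import TfidfVectorizer

smallcorp = ('Ah! Now I have done Philosophy,\nI have finished Law and Medicine,\n'
             'And sadly even Theology:\nTaken fierce pains, from end to end.\n'
             'Now here I am, a fool for sure!\nNo wiser than I was before:')

# fit_transform expects an iterable of documents.  Iterating over a plain
# str yields its individual characters, and the default token_pattern only
# keeps tokens of two or more word characters, so every one-character
# "document" tokenizes to nothing -- hence the empty vocabulary.
analyzer = TfidfVectorizer().build_analyzer()
print(analyzer('A'))              # [] -- a single character yields no tokens

# To treat the whole corpus as one document instead of six lines,
# wrap the string in a one-element list rather than splitting it:
tf = TfidfVectorizer()
X = tf.fit_transform([smallcorp])
print(X.shape)                    # (1, number_of_distinct_terms)

(Newer scikit-learn versions reject a bare string up front with a clearer message instead of falling through to the empty-vocabulary error.)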
https://stackoverflow.com/questions/20928769