I want to count the number of occurrences of every bigram (pair of adjacent words) in a file using Python. I am dealing with very large files here, so I am looking for an efficient way to do it. I tried using the count method with the regex "\w+\s\w+" on the file contents, but it did not prove to be efficient.
For example, say I want to count the number of bigrams from a file a.txt, which has the following content:
"the quick person did not realize his speed and the quick person bumped "
For the above file, the bigram set and their counts would be:
(the, quick) = 2
(quick, person) = 2
(person, did) = 1
(did, not) = 1
(not, realize) = 1
(realize, his) = 1
(his, speed) = 1
(speed, and) = 1
(and, the) = 1
(person, bumped) = 1
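For reference, re.findall returns non-overlapping matches, so a pattern like \w+\s\w+ can only capture every other word pair, which is one reason that approach falls short here:

import re

text = "the quick person did not realize his speed and the quick person bumped"

# Non-overlapping matches: each word is consumed by at most one pair,
# so half of the adjacent pairs never show up.
print(re.findall(r"\w+\s\w+", text))
# ['the quick', 'person did', 'not realize', 'his speed', 'and the', 'quick person']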
I have come across an example of a Counter object in Python, which is used to count unigrams (single words). It also uses a regex approach.
The example goes like this:
>>> # Find the ten most common words in Hamlet
>>> import re
>>> from collections import Counter
>>> words = re.findall(r'\w+', open('a.txt').read())
>>> print(Counter(words).most_common())
The output of the above code is:
[('the', 2), ('quick', 2), ('person', 2), ('did', 1), ('not', 1),
('realize', 1), ('his', 1), ('speed', 1), ('bumped', 1)]
I was wondering if it is possible to use the Counter object to get counts of bigrams. Any approach other than a Counter object or regex will also be appreciated.
Posted on 2012-09-19 12:54:54
Some itertools magic:
>>> import re
>>> from collections import Counter
>>> from itertools import islice
>>> words = re.findall(r"\w+",
...     "the quick person did not realize his speed and the quick person bumped")
>>> print(Counter(zip(words, islice(words, 1, None))))  # use izip instead of zip on Python 2
Output:
Counter({('the', 'quick'): 2, ('quick', 'person'): 2, ('person', 'did'): 1,
('did', 'not'): 1, ('not', 'realize'): 1, ('and', 'the'): 1,
('speed', 'and'): 1, ('person', 'bumped'): 1, ('his', 'speed'): 1,
('realize', 'his'): 1})
Bonus
Get the frequency of any n-gram:
from itertools import tee, islice

def ngrams(lst, n):
    tlst = lst
    while True:
        a, b = tee(tlst)
        l = tuple(islice(a, n))
        if len(l) == n:
            # Yield the current window of n items, then advance by one word.
            yield l
            next(b)
            tlst = b
        else:
            break
>>> Counter(ngrams(words, 3))
Output:
Counter({('the', 'quick', 'person'): 2, ('and', 'the', 'quick'): 1,
('realize', 'his', 'speed'): 1, ('his', 'speed', 'and'): 1,
('person', 'did', 'not'): 1, ('quick', 'person', 'did'): 1,
('quick', 'person', 'bumped'): 1, ('did', 'not', 'realize'): 1,
('speed', 'and', 'the'): 1, ('not', 'realize', 'his'): 1})
This works with lazy iterables and generators too, so you can write a generator that reads a file line by line, yields words, and passes them to ngrams to be consumed lazily, without reading the whole file into memory.
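A minimal sketch of that idea, assuming the file is the a.txt from the question (the helper name is illustrative):

import re
from collections import Counter

def words_from_file(path):
    """Yield words one at a time, reading the file line by line."""
    with open(path) as f:
        for line in f:
            # Matching one line at a time keeps memory use proportional to the line length.
            for word in re.findall(r"\w+", line):
                yield word

# ngrams() above consumes the generator lazily, so the whole file is never
# held in memory at once.
print(Counter(ngrams(words_from_file("a.txt"), 2)))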
Posted on 2012-09-19 12:59:24
How about zip()?
import re
from collections import Counter
words = re.findall(r'\w+', open('a.txt').read())
print(Counter(zip(words,words[1:])))
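Put together for the sample file from the question, with a with block so the file handle gets closed (a minimal sketch of the same idea):

import re
from collections import Counter

with open('a.txt') as f:
    words = re.findall(r'\w+', f.read())

# Pair every word with its successor; zip stops when the shorter sequence ends,
# so the final lone word produces no pair.
print(Counter(zip(words, words[1:])).most_common())

Note that words[1:] copies the word list; islice(words, 1, None) from the answer above avoids that copy, although both approaches still hold the full word list in memory.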
Posted on 2019-01-16 10:18:41
You can simply use Counter for any n_gram as follows:
from collections import Counter
from nltk.util import ngrams
text = "the quick person did not realize his speed and the quick person bumped "
n_gram = 2
Counter(ngrams(text.split(), n_gram))
>>>
Counter({('and', 'the'): 1,
('did', 'not'): 1,
('his', 'speed'): 1,
('not', 'realize'): 1,
('person', 'bumped'): 1,
('person', 'did'): 1,
('quick', 'person'): 2,
('realize', 'his'): 1,
('speed', 'and'): 1,
('the', 'quick'): 2})
For 3-grams, just change n_gram to 3:
n_gram = 3
Counter(ngrams(text.split(), n_gram))
>>>
Counter({('and', 'the', 'quick'): 1,
('did', 'not', 'realize'): 1,
('his', 'speed', 'and'): 1,
('not', 'realize', 'his'): 1,
('person', 'did', 'not'): 1,
('quick', 'person', 'bumped'): 1,
('quick', 'person', 'did'): 1,
('realize', 'his', 'speed'): 1,
('speed', 'and', 'the'): 1,
('the', 'quick', 'person'): 2})
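The same call applied to the file from the question rather than an inline string (a minimal sketch):

from collections import Counter
from nltk.util import ngrams

# Read the sample file and split on whitespace, as in the snippet above.
with open('a.txt') as f:
    tokens = f.read().split()

print(Counter(ngrams(tokens, 2)).most_common())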
https://stackoverflow.com/questions/12488722