I'm trying to scrape the list of fixed Chromium bugs. It works for the first and second pages, but for some reason it stops at the third page. I have DEPTH_LIMIT = 1 set in settings.py. Could this be related to some Chrome policy that limits how much data can be scraped? Thanks in advance!
from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.selector import HtmlXPathSelector
# CraiglistSampleItem is the Item subclass defined in the project's items.py

class MySpider(CrawlSpider):
    name = "craig"
    start_urls = ["http://code.google.com/p/chromium/issues/list?can=1&q=status%3Afixed&sort=&groupby=&colspec=ID+Pri+M+Iteration+ReleaseBlock+Cr+Status+Owner+Summary+OS+Modified+Type+Priority+Milestone+Attachments+Stars+Opened+Closed+BlockedOn+Blocking+Blocked+MergedInto+Reporter+Cc+Project+Os+Mstone+Releaseblock+Build+Size+Restrict+Security_severity+Security_impact+Area+Stability+Not+Crash+Internals+Movedfrom+Okr+Review+Taskforce+Valgrind+Channel+3rd"]

    rules = (
        # follow the "Next" pagination link
        Rule(SgmlLinkExtractor(restrict_xpaths=('//a[starts-with(., "Next")]/@href'))),
        # follow result-list pages and parse them
        Rule(SgmlLinkExtractor(allow=("status%3Afixed",), deny=("detail?",)),
             callback="parse_items", follow=True),
    )

    def parse_items(self, response):
        hxs = HtmlXPathSelector(response)
        table = hxs.select("//table[@id='resultstable']")
        items = []
        for count in range(1, 100):
            # select the count-th result row of the table
            row = table.select("tr[" + str(count) + "][@class='ifOpened cursor_off']")
            item = CraiglistSampleItem()
            item["summary"] = row.select("td[@class='vt col_8'][2]/a/text()").extract()
            item["summary"] = str(item["summary"][0].encode("ascii", "ignore")).strip()
            item["id"] = row.select("td[@class='vt id col_0']/a/text()").extract()
            item["id"] = str(item["id"][0].encode("ascii", "ignore")).strip()
            print item["summary"]
            items.append(item)
        return items
Posted on 2013-09-09 00:55:44
That is exactly what DEPTH_LIMIT = 1 does: the third page is at depth 2, so it is not crawled. Set DEPTH_LIMIT = 0 and the spider works.
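For reference, a minimal settings.py sketch of the change (assuming Scrapy's default DepthMiddleware is enabled; the start_urls page is depth 0, so the third page reached via "Next" links is depth 2):

# settings.py
# DEPTH_LIMIT = 1   # only requests up to depth 1 are scheduled, so page 3 (depth 2) is dropped
DEPTH_LIMIT = 0     # 0 means no depth limit (Scrapy's default), so every "Next" page is followed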
https://stackoverflow.com/questions/18686225