
Spider middleware in Portia is not being called

Stack Overflow user
Asked on 2015-03-24 18:40:27 · 1 answer · 376 views · 0 followers · 0 votes

I have adapted the code from Using Middleware to ignore duplicates in Scrapy.

from scrapy.exceptions import DropItem
from scrapy import log
import os.path

class IgnoreDuplicates():

    def __init__(self):
        self._cu_file = open("crawled_urls.txt", "a+")
        self._crawled_urls = set([line.strip() for line in self._cu_file.readlines()])

    def process_request(self, request, spider):
        if request.url in self._crawled_urls:
            # Drop the request for a URL we have already seen
            raise DropItem("Duplicate product scrape caught by IgnoreDuplicates at <%s>" % request.url)
        else:
            self._crawled_urls.add(request.url)
            self._cu_file.write(request.url + '\n')
            log.msg("IgnoreDuplicates recorded this url " + request.url, level=log.DEBUG)
            return None

I also added the middleware module to settings.py:

SPIDER_MANAGER_CLASS = 'slybot.spidermanager.SlybotSpiderManager'
EXTENSIONS = {'slybot.closespider.SlybotCloseSpider': 1}
ITEM_PIPELINES = {'slybot.dupefilter.DupeFilterPipeline': 1}
SPIDER_MIDDLEWARES = {'slybot.middleware.IgnoreDuplicates': 500, 'slybot.spiderlets.SpiderletsMiddleware': 999}  # as close as possible to spider output
PLUGINS = ['slybot.plugins.scrapely_annotations.Annotations']
SLYDUPEFILTER_ENABLED = True
PROJECT_DIR = 'slybot-project'
FEED_EXPORTERS = {
    'csv': 'slybot.exporter.SlybotCSVItemExporter',
}
CSV_EXPORT_FIELDS = None

try:
    from local_slybot_settings import *
except ImportError:
    pass

The process_request function is never called. I tried changing the value for the middleware key in settings.py so that it would run both before and after the SpiderletsMiddleware, but the exception and log messages never show up in the output.

How can I make sure the middleware gets called?


1 Answer

Stack Overflow user

Accepted answer

Posted on 2015-03-24 20:59:47

The callback methods are different for spider middlewares. I used the code in this snippet as a reference: http://snipplr.com/view/67018/middleware-to-avoid-revisiting-already-visited-items/
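For context, downloader middlewares and spider middlewares expose different hook methods, which is why a process_request method on a class registered under SPIDER_MIDDLEWARES never fires. A minimal sketch of the two interfaces (the class names are illustrative, not part of the question's project):

# Downloader middleware -- this is the interface that defines process_request():
class ExampleDownloaderMiddleware(object):
    def process_request(self, request, spider):
        # Returning None tells Scrapy to keep processing the request normally.
        return None

# Spider middleware -- Scrapy calls these hooks instead of process_request():
class ExampleSpiderMiddleware(object):
    def process_spider_input(self, response, spider):
        # Called for each response before it is passed to the spider.
        return None

    def process_spider_output(self, response, result, spider):
        # Called with the requests/items the spider yields for a response.
        for x in result:
            yield x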

Below is a working version of the middleware code I posted in the question.

from scrapy.http import Request
from scrapy import log
import os.path

class IgnoreVisitedItems(object):
    def __init__(self):
        # Load the URLs that have already been crawled
        self._cu_file = open("crawled_urls.txt", "a+")
        self._crawled_urls = set([line.strip() for line in self._cu_file.readlines()])

    def process_spider_output(self, response, result, spider):
        ret = []
        for x in result:
            # Find the URL in the result or response
            url = None
            if isinstance(x, Request):
                url = x.url
            else:
                url = response.request.url

            # Check if the URL has been crawled, and add
            # it to the list of crawled URLs.
            if url in self._crawled_urls:
                log.msg("Ignoring already visited: %s" % url,
                        level=log.INFO, spider=spider)
            else:
                log.msg("Adding %s to list of visited urls" % url,
                        level=log.INFO, spider=spider)
                self._cu_file.write(url + '\n')
                self._crawled_urls.add(url)
                ret.append(x)
        return ret
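Note that the class still needs to be registered under SPIDER_MIDDLEWARES. A minimal sketch, assuming the class is placed in the same slybot.middleware module referenced by the question's settings (adjust the dotted path to wherever the file actually lives):

SPIDER_MIDDLEWARES = {
    'slybot.middleware.IgnoreVisitedItems': 500,
    'slybot.spiderlets.SpiderletsMiddleware': 999,
}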
Votes: 0
Original content from Stack Overflow:
https://stackoverflow.com/questions/29240421
