
Passing arguments to allowed_domains in Scrapy

Stack Overflow user
Asked on 2017-04-11 02:10:28
1 answer · 4.1K views · 0 followers · 4 votes

I am creating a scraper that takes a user's input and crawls all the links on a site. However, I need to limit the crawling and link extraction to links within that domain, with no external domains. From the crawler's side I have it where I need it to be; my problem is that I can't seem to pass the option entered on the command line through to allowed_domains. Below is the first script that runs:

# First Script
import os

def userInput():
    user_input = raw_input("Please enter URL. Please do not include http://: ")
    os.system("scrapy runspider -a user_input='http://" + user_input + "' crawler_prod.py")

userInput()
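
For reference, Scrapy hands each -a name=value command-line option to the spider's constructor as a keyword argument (the base Spider class also sets it as an instance attribute). A minimal sketch of capturing such an option in __init__, with illustrative class and attribute names that are not part of the original code:

# Minimal sketch: capture a -a command-line option in the spider constructor.
# The spider and option names here are illustrative.
import scrapy

class SketchSpider(scrapy.Spider):
    name = "sketch"

    def __init__(self, user_input=None, *args, **kwargs):
        super(SketchSpider, self).__init__(*args, **kwargs)
        # The value of -a user_input=... arrives here as a plain string.
        self.start_urls = [user_input] if user_input else []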

The script that os.system launches is the crawler, which crawls the given domain. Here is the crawler code:

#Crawler
from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.selector import HtmlXPathSelector
from scrapy.item import Item
from scrapy.spider import BaseSpider
from scrapy import Request
from scrapy.http import Request

class InputSpider(CrawlSpider):
        name = "Input"
        #allowed_domains = ["example.com"]

        def allowed_domains(self):
            self.allowed_domains = user_input

        def start_requests(self):
            yield Request(url=self.user_input)

        rules = [
        Rule(SgmlLinkExtractor(allow=()), follow=True, callback='parse_item')
        ]

        def parse_item(self, response):
            x = HtmlXPathSelector(response)
            filename = "output.txt"
            open(filename, 'ab').write(response.url + "\n")

However, my attempts to feed in what the terminal command sends crash the crawler, and what I have now crashes it as well. I have also tried simply writing allowed_domains = [user_input], which reports that user_input is not defined. I have been playing with Scrapy's Request class to make this work, with no luck. Is there a better way to restrict crawling so it never leaves the given domain?

Edit:

Here is my new code:

from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.selector import HtmlXPathSelector
from scrapy.item import Item
from scrapy.spiders import BaseSpider
from scrapy import Request
from scrapy.http import Request
from scrapy.utils.httpobj import urlparse
#from run_first import *

class InputSpider(CrawlSpider):
        name = "Input"
        #allowed_domains = ["example.com"]

        #def allowed_domains(self):
            #self.allowed_domains = user_input

        #def start_requests(self):
            #yield Request(url=self.user_input)

        def __init__(self, *args, **kwargs):
            inputs = kwargs.get('urls', '').split(',') or []
            self.allowed_domains = [urlparse(d).netloc for d in inputs]
            # self.start_urls = [urlparse(c).netloc for c in inputs] # For start_urls

        rules = [
        Rule(SgmlLinkExtractor(allow=()), follow=True, callback='parse_item')
        ]

        def parse_item(self, response):
            x = HtmlXPathSelector(response)
            filename = "output.txt"
            open(filename, 'ab').write(response.url + "\n")

Here is the output log from the new code:

2017-04-18 18:18:01 [scrapy] INFO: Scrapy 1.0.3 started (bot: scrapybot)
2017-04-18 18:18:01 [scrapy] INFO: Optional features available: ssl, http11, boto
2017-04-18 18:18:01 [scrapy] INFO: Overridden settings: {'LOG_FILE': 'output.log'}
2017-04-18 18:18:43 [scrapy] INFO: Scrapy 1.0.3 started (bot: scrapybot)
2017-04-18 18:18:43 [scrapy] INFO: Optional features available: ssl, http11, boto
2017-04-18 18:18:43 [scrapy] INFO: Overridden settings: {'LOG_FILE': 'output.log'}
2017-04-18 18:18:43 [py.warnings] WARNING: /home/****-you/Python_Projects/Network-Multitool/crawler/crawler_prod.py:1: ScrapyDeprecationWarning: Module `scrapy.contrib.spiders` is deprecated, use `scrapy.spiders` instead
  from scrapy.contrib.spiders import CrawlSpider, Rule

2017-04-18 18:18:43 [py.warnings] WARNING: /home/****-you/Python_Projects/Network-Multitool/crawler/crawler_prod.py:2: ScrapyDeprecationWarning: Module `scrapy.contrib.linkextractors` is deprecated, use `scrapy.linkextractors` instead
  from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor

2017-04-18 18:18:43 [py.warnings] WARNING: /home/****-you/Python_Projects/Network-Multitool/crawler/crawler_prod.py:2: ScrapyDeprecationWarning: Module `scrapy.contrib.linkextractors.sgml` is deprecated, use `scrapy.linkextractors.sgml` instead
  from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor

2017-04-18 18:18:43 [py.warnings] WARNING: /home/****-you/Python_Projects/Network-Multitool/crawler/crawler_prod.py:27: ScrapyDeprecationWarning: SgmlLinkExtractor is deprecated and will be removed in future releases. Please use scrapy.linkextractors.LinkExtractor
  Rule(SgmlLinkExtractor(allow=()), follow=True, callback='parse_item')

2017-04-18 18:18:43 [scrapy] INFO: Enabled extensions: CloseSpider, TelnetConsole, LogStats, CoreStats, SpiderState
2017-04-18 18:18:43 [boto] DEBUG: Retrieving credentials from metadata server.
2017-04-18 18:18:44 [boto] ERROR: Caught exception reading instance data
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/boto/utils.py", line 210, in retry_url
    r = opener.open(req, timeout=timeout)
  File "/usr/lib/python2.7/urllib2.py", line 429, in open
    response = self._open(req, data)
  File "/usr/lib/python2.7/urllib2.py", line 447, in _open
    '_open', req)
  File "/usr/lib/python2.7/urllib2.py", line 407, in _call_chain
    result = func(*args)
  File "/usr/lib/python2.7/urllib2.py", line 1228, in http_open
    return self.do_open(httplib.HTTPConnection, req)
  File "/usr/lib/python2.7/urllib2.py", line 1198, in do_open
    raise URLError(err)
URLError: <urlopen error timed out>
2017-04-18 18:18:44 [boto] ERROR: Unable to read instance data, giving up
2017-04-18 18:18:44 [scrapy] INFO: Enabled downloader middlewares: HttpAuthMiddleware, DownloadTimeoutMiddleware, UserAgentMiddleware, RetryMiddleware, DefaultHeadersMiddleware, MetaRefreshMiddleware, HttpCompressionMiddleware, RedirectMiddleware, CookiesMiddleware, ChunkedTransferMiddleware, DownloaderStats
2017-04-18 18:18:44 [scrapy] INFO: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware
2017-04-18 18:18:44 [scrapy] INFO: Enabled item pipelines: 
2017-04-18 18:18:44 [scrapy] INFO: Spider opened
2017-04-18 18:18:44 [scrapy] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2017-04-18 18:18:44 [scrapy] DEBUG: Telnet console listening on 127.0.0.1:6023
2017-04-18 18:18:44 [scrapy] ERROR: Error while obtaining start requests
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/scrapy/core/engine.py", line 110, in _next_request
    request = next(slot.start_requests)
  File "/usr/lib/python2.7/dist-packages/scrapy/spiders/__init__.py", line 70, in start_requests
    yield self.make_requests_from_url(url)
  File "/usr/lib/python2.7/dist-packages/scrapy/spiders/__init__.py", line 73, in make_requests_from_url
    return Request(url, dont_filter=True)
  File "/usr/lib/python2.7/dist-packages/scrapy/http/request/__init__.py", line 24, in __init__
    self._set_url(url)
  File "/usr/lib/python2.7/dist-packages/scrapy/http/request/__init__.py", line 59, in _set_url
    raise ValueError('Missing scheme in request url: %s' % self._url)
ValueError: Missing scheme in request url: 
2017-04-18 18:18:44 [scrapy] INFO: Closing spider (finished)
2017-04-18 18:18:44 [scrapy] INFO: Dumping Scrapy stats:
{'finish_reason': 'finished',
 'finish_time': datetime.datetime(2017, 4, 18, 22, 18, 44, 794155),
 'log_count/DEBUG': 2,
 'log_count/ERROR': 3,
 'log_count/INFO': 7,
 'start_time': datetime.datetime(2017, 4, 18, 22, 18, 44, 790331)}
2017-04-18 18:18:44 [scrapy] INFO: Spider closed (finished)

Edit:

I was able to find the answer to my problem by looking over the answer below and re-reading the documentation. Here is what I added to the crawler script to make it work:

def __init__(self, url=None, *args, **kwargs):
    super(InputSpider, self).__init__(*args, **kwargs)
    self.allowed_domains = [url]
    self.start_urls = ["http://" + url]
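
With that constructor in place, the launcher script's os.system call just needs to pass the option under the new name; for example (the domain is a placeholder):

scrapy runspider -a url=example.com crawler_prod.py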

1 Answer

Stack Overflow user

Accepted answer

Posted on 2017-04-11 06:10:49

You are missing a few things here.

  1. First, the requests generated from start_urls are not filtered.
  2. Second, you cannot override allowed_domains once the run has started.

To resolve these issues, you need to write your own offsite middleware, or at least modify the existing one with the changes you need.

What happens is that as soon as the spider is opened, OffsiteMiddleware converts the allowed_domains value into a regular expression string, and after that the parameter is never used again.
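
Roughly, what the stock middleware does at spider-open time looks like the following simplified sketch (illustrative, not the exact Scrapy source):

# Simplified, illustrative sketch of how the stock OffsiteMiddleware treats
# allowed_domains (not the exact Scrapy source).
import re

class OffsiteSketch(object):

    def spider_opened(self, spider):
        # The regex is compiled once, here; changing spider.allowed_domains
        # afterwards has no effect on the filtering.
        self.host_regex = self.get_host_regex(spider)

    def get_host_regex(self, spider):
        domains = getattr(spider, 'allowed_domains', None) or []
        if not domains:
            return re.compile('')  # nothing configured: allow everything
        return re.compile(r'^(.*\.)?(%s)$' % '|'.join(re.escape(d) for d in domains))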

Add something like this to your middlewares.py:

from scrapy.spidermiddlewares.offsite import OffsiteMiddleware
from scrapy.utils.httpobj import urlparse_cached
class MyOffsiteMiddleware(OffsiteMiddleware):

    def should_follow(self, request, spider):
        """Return bool whether to follow a request"""
        # hostname can be None for wrong urls (like javascript links)
        host = urlparse_cached(request).hostname or ''
        if host in spider.allowed_domains:
            return True
        return False
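
Note that this version checks the hostname for an exact match against allowed_domains, so subdomains (for example www.foo.com when only foo.com is listed) would not be followed, unlike the default regex-based check, which also accepts subdomains.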

Activate it in settings.py:

SPIDER_MIDDLEWARES = {
    # enable our middleware
    'myspider.middlewares.MyOffsiteMiddleware': 500,
    # disable old middleware
    'scrapy.spidermiddlewares.offsite.OffsiteMiddleware': None, 

}

Now your spider should respect whatever is in allowed_domains, even if you modify it mid-run.

Edit: for your case:

from scrapy.utils.httpobj import urlparse
class MySpider(Spider):
    def __init__(self, *args, **kwargs):
        input = kwargs.get('urls', '').split(',') or []
        self.allowed_domains = [urlparse(d).netloc for d in input]

Now you can run:

scrapy crawl myspider -a "urls=foo.com,bar.com"
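
If you launch the spider with runspider, as in the original launcher script, the equivalent invocation should look something like:

scrapy runspider -a "urls=foo.com,bar.com" crawler_prod.py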
6 votes
Original content provided by Stack Overflow.
Original link: https://stackoverflow.com/questions/43335638
