This error shows up when I run the crawl process more than once. I am using Scrapy 2.6; here is my code:
from scrapy.crawler import CrawlerProcess
from football.spiders.laliga import LaligaSpider
from scrapy.utils.project import get_project_settings
process = CrawlerProcess(settings=get_project_settings())
for i in range(1, 29):
    process.crawl(LaligaSpider, **{'week': i})
process.start()

Posted on 2022-04-22 07:13:56
For me this works; I put it right before creating the CrawlerProcess:
import sys
if "twisted.internet.reactor" in sys.modules:
del sys.modules["twisted.internet.reactor"]发布于 2022-03-28 07:05:45
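As a minimal sketch, here is how that workaround could slot into the script from the question (same LaligaSpider and week loop, nothing else changed):

import sys
from scrapy.crawler import CrawlerProcess
from scrapy.utils.project import get_project_settings
from football.spiders.laliga import LaligaSpider

# Drop any reactor left installed by a previous run so that
# CrawlerProcess can install a fresh one.
if "twisted.internet.reactor" in sys.modules:
    del sys.modules["twisted.internet.reactor"]

process = CrawlerProcess(settings=get_project_settings())
for i in range(1, 29):
    process.crawl(LaligaSpider, **{'week': i})
process.start()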
This solution avoids using CrawlerProcess, as described in the documentation: https://docs.scrapy.org/en/latest/topics/practices.html#run-scrapy-from-a-script
There is another Scrapy utility that provides more control over the crawling process: scrapy.crawler.CrawlerRunner. This class is a thin wrapper that encapsulates some simple helpers for running multiple crawlers, but it will not start or interfere with an existing reactor in any way. If your application is already using Twisted and you want to run Scrapy in the same reactor, it is recommended that you use CrawlerRunner instead of CrawlerProcess.
from twisted.internet import reactor
from scrapy.crawler import CrawlerRunner
from scrapy.utils.project import get_project_settings
from scrapy.utils.log import configure_logging
from football.spiders.laliga import LaligaSpider
# Enable logging for CrawlerRunner
configure_logging()
runner = CrawlerRunner(settings=get_project_settings())
for i in range(1, 29):
    runner.crawl(LaligaSpider, **{'week': i})
deferred = runner.join()
deferred.addBoth(lambda _: reactor.stop())
reactor.run()  # the script will block here until all crawling jobs are finished

Posted on 2022-03-24 17:52:41
I have run into this problem as well. The documentation at https://docs.scrapy.org/en/latest/topics/practices.html seems to be incorrect in saying that CrawlerProcess can run multiple crawlers built from spiders, because each new crawler attempts to install a new reactor instance when you give it a spider. I was able to get my code working by using CrawlerRunner instead, which is also covered in detail on the same page.
import scrapy
from twisted.internet import reactor
from scrapy.crawler import CrawlerRunner
from scrapy.utils.log import configure_logging
from scrapy.utils.project import get_project_settings
class MySpider1(scrapy.Spider):
    # Your first spider definition
    ...

class MySpider2(scrapy.Spider):
    # Your second spider definition
    ...
configure_logging()
settings = get_project_settings() # settings not required if running
runner = CrawlerRunner(settings) # from script, defaults provided
runner.crawl(MySpider1) # your loop would go here
runner.crawl(MySpider2)
d = runner.join()
d.addBoth(lambda _: reactor.stop())
reactor.run()  # the script will block here until all crawling jobs are finished

https://stackoverflow.com/questions/71548957