QUEUE
Scrapy's default request queue lives in the crawling process's memory, so it cannot easily be inspected from outside; scrapy-redis (https://github.com/rmax/scrapy-redis) moves the queue and the request fingerprints into Redis, which also makes distributed crawling possible.
Installation
pip install scrapy-redis
SETTINGS configuration
SCHEDULER
Replace the scheduler with the Redis-backed one
SCHEDULER = 'scrapy_redis.scheduler.Scheduler'
SCHEDULER_QUEUE_CLASS
Replace the scheduler queue (requests are kept in a Redis priority queue)
SCHEDULER_QUEUE_CLASS = 'scrapy_redis.queue.PriorityQueue'
DUPEFILTER_CLASS
Replace the dupe filter so request fingerprints are stored in Redis
DUPEFILTER_CLASS = "scrapy_redis.dupefilter.RFPDupeFilter"
SCHEDULER_PERSIST
Persist the queue: the request queue (and the fingerprint set) in Redis is not flushed when the spider closes
SCHEDULER_PERSIST = True
Redis configuration
# Redis settings
REDIS_HOST = 'localhost'
REDIS_PORT = 6379
# Extra Redis connection parameters (here: use database 5)
REDIS_PARAMS = {"db": 5}
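Putting the options above together, settings.py ends up with something like the following (values copied from the snippets above; db 5 is just this example's choice):

# settings.py -- minimal scrapy-redis configuration assembled from the options above
SCHEDULER = 'scrapy_redis.scheduler.Scheduler'
SCHEDULER_QUEUE_CLASS = 'scrapy_redis.queue.PriorityQueue'
DUPEFILTER_CLASS = 'scrapy_redis.dupefilter.RFPDupeFilter'
SCHEDULER_PERSIST = True
REDIS_HOST = 'localhost'
REDIS_PORT = 6379
REDIS_PARAMS = {"db": 5}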
Spider configuration
Change the spider's parent class to scrapy_redis.spiders.RedisSpider
from scrapy_redis.spiders import RedisSpider
class JdSearch(RedisSpider):
Add the redis_key setting (the spider pops its start tasks from this Redis list):
    name = "jd_search"
    # expands to "jd_search:start_urls", the same list the producer below pushes to
    redis_key = f"{name}:start_urls"
Split the producer out of the Scrapy project (it only pushes task JSON into the Redis list, so it can run anywhere that can reach Redis)
import json
import time

import redis

# Same Redis instance and database as REDIS_HOST/REDIS_PORT/REDIS_PARAMS in settings.py
redis_con = redis.Redis(host='localhost', port=6379, db=5)


def search_producer():
    """Push one search task per keyword/page into the spider's start list."""
    for keyword in ["鼠标", "键盘", "显卡", "耳机"]:
        for page_num in range(1, 11):
            url = f"https://search.jd.com/Search?keyword={keyword}&page={page_num}"
            meta = {
                "sta_date": time.strftime("%Y-%m-%d"),
                "keyword": keyword,
                "page_num": page_num
            }
            task = json.dumps({
                "url": url,
                "body": '',
                "method": "GET",
                "meta": meta
            })
            # The key must match the spider's redis_key ("jd_search:start_urls")
            redis_con.lpush("jd_search:start_urls", task)


if __name__ == "__main__":
    search_producer()
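After running the producer, one way to check that the tasks actually landed in Redis (key name and database 5 as configured above) is with redis-cli:

redis-cli -n 5 LLEN jd_search:start_urls
redis-cli -n 5 LRANGE jd_search:start_urls 0 0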
Override make_request_from_data (RedisSpider calls this for every task popped from the Redis list, instead of the usual start_requests)
    def make_request_from_data(self, data):
        # data is the raw bytes value popped from the Redis list
        task = json.loads(data.decode("utf-8"))
        return scrapy.http.FormRequest(url=task['url'],
                                       formdata=json.loads(task['body']) if task['body'] else None,
                                       method=task['method'],
                                       meta=task['meta'],
                                       dont_filter=False,
                                       callback=self.parse_search,
                                       errback=self.process_error)
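For reference, a minimal sketch of how these pieces fit together in the spider module; the parse_search and process_error bodies are placeholders, since their implementations are not shown here:

import json

import scrapy
from scrapy_redis.spiders import RedisSpider


class JdSearch(RedisSpider):
    name = "jd_search"
    redis_key = f"{name}:start_urls"

    def make_request_from_data(self, data):
        # Called by RedisSpider for every task popped from jd_search:start_urls
        task = json.loads(data.decode("utf-8"))
        return scrapy.http.FormRequest(url=task['url'],
                                       formdata=json.loads(task['body']) if task['body'] else None,
                                       method=task['method'],
                                       meta=task['meta'],
                                       dont_filter=False,
                                       callback=self.parse_search,
                                       errback=self.process_error)

    def parse_search(self, response):
        # Placeholder: extract items from the search result page here
        ...

    def process_error(self, failure):
        # Placeholder: handle/log failed requests here
        self.logger.error(failure)

Each worker is started the usual way (scrapy crawl jd_search); because the queue and the dupe filter live in Redis, any number of workers can consume the same task list in parallel.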