Notes:
1. Create a CrawlSpider from the template with `scrapy genspider -t crawl <spider_name> <allowed_domain>`, or write one by hand.
2. A CrawlSpider must not define a data-extraction method named `parse`; CrawlSpider uses that name internally to implement base URL extraction.
3. A Rule object takes several parameters. The first is the LinkExtractor object holding the URL pattern; the most commonly used others are callback (the name, as a string, of the method that parses matching responses) and follow (whether links extracted from the response should themselves be followed).
4. When a Rule has no callback and follow is True, URLs matching the rule keep being requested and crawled.
5. When several Rules match the same URL, only the first matching Rule in rules is applied (a minimal sketch follows this list).
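A minimal sketch of point 5, rule precedence; the site and URL patterns here are made up for illustration:

```python
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule


class OrderDemoSpider(CrawlSpider):
    name = 'order_demo'
    start_urls = ['http://example.com/']

    rules = (
        # A URL such as /web/site0/tab5240/info100.htm matches both patterns
        # below, but only this first Rule is applied to it.
        Rule(LinkExtractor(allow=r'info\d+\.htm'), callback='parse_item'),
        Rule(LinkExtractor(allow=r'/web/site0/.*\.htm'), follow=True),
    )

    def parse_item(self, response):
        self.logger.info('matched by the first rule: %s', response.url)
```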
1. Create the project
```
scrapy startproject zjh
```
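For reference, `scrapy startproject zjh` generates Scrapy's standard project layout:

```
zjh/
├── scrapy.cfg          # deployment configuration
└── zjh/
    ├── __init__.py
    ├── items.py        # item definitions
    ├── middlewares.py  # spider/downloader middlewares
    ├── pipelines.py    # item pipelines
    ├── settings.py     # project settings (edited in step 3)
    └── spiders/        # spiders live here (circ.py is created in step 2)
        └── __init__.py
```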
2. Create the spider
```
scrapy genspider -t crawl circ bxjg.circ.gov.cn
```

Unlike the plain `scrapy genspider` command, this adds the `-t crawl` flag, which selects the CrawlSpider template.
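The generated circ.py starts from roughly the following skeleton (the exact contents vary slightly between Scrapy versions); step 4 fills it in:

```python
# -*- coding: utf-8 -*-
import scrapy
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule


class CircSpider(CrawlSpider):
    name = 'circ'
    allowed_domains = ['bxjg.circ.gov.cn']
    start_urls = ['http://bxjg.circ.gov.cn/']

    rules = (
        Rule(LinkExtractor(allow=r'Items/'), callback='parse_item', follow=True),
    )

    def parse_item(self, response):
        item = {}
        return item
```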
3. In settings.py, set the log level and USER_AGENT
```python
# -*- coding: utf-8 -*-

BOT_NAME = 'zjh'

SPIDER_MODULES = ['zjh.spiders']
NEWSPIDER_MODULE = 'zjh.spiders'

LOG_LEVEL = "WARNING"

USER_AGENT = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/74.0.3729.131 Safari/537.36'

ROBOTSTXT_OBEY = True

# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS = 32
```
4. Extract the data in circ.py
```python
# -*- coding: utf-8 -*-
import re

import scrapy
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule


class CircSpider(CrawlSpider):
    name = 'circ'
    allowed_domains = ['bxjg.circ.gov.cn']
    start_urls = ['http://bxjg.circ.gov.cn/web/site0/tab5240/module14430/page1.htm']

    # Define the URL extraction rules
    rules = (
        # One Rule per rule; LinkExtractor is the link extractor that pulls URLs.
        # allow: the URL pattern to extract; matched URLs may be incomplete, but
        #        CrawlSpider completes them before sending the request.
        # callback: the response of each extracted URL is handed to this method.
        # follow: whether the response of the current URL is run through the
        #         rules again to extract further URLs.

        # Detail pages: parsed by parse_item, no follow needed.
        Rule(LinkExtractor(allow=r'/web/site0/tab5240/info\d+\.htm'),
             callback='parse_item'),
        # Next-page links: no callback needed, but follow=True keeps paginating.
        Rule(LinkExtractor(allow=r'/web/site0/tab5240/module14430/page\d+\.htm'),
             follow=True),
    )

    # parse() has a special role in CrawlSpider, so it must not be redefined.
    def parse_item(self, response):
        item = {}
        body = response.body.decode()
        # NOTE: the title pattern in the original post was mangled by HTML
        # escaping; matching the <title> tag here is an assumption.
        item["title"] = re.findall(r"<title>(.*?)</title>", body)[0]
        item["publish_date"] = re.findall(r"发布时间:(20\d{2}-\d{2}-\d{2})", body)[0]
        print(item)
        # Requests can also be built manually with scrapy.Request():
        # yield scrapy.Request(
        #     url,
        #     callback=self.parse_detail,
        #     meta={"item": item}
        # )

    def parse_detail(self, response):
        pass
```
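The commented-out scrapy.Request at the end of parse_item hints at passing a partially filled item to a second callback via meta. A minimal sketch of that pattern, shown in a plain scrapy.Spider (where defining parse is allowed); next_url and the "content" field are illustrative assumptions:

```python
import scrapy


class DetailDemoSpider(scrapy.Spider):
    name = 'detail_demo'
    start_urls = ['http://example.com/list.htm']

    def parse(self, response):
        item = {"title": "..."}  # fields already extracted on the list page
        next_url = response.urljoin("detail.htm")  # hypothetical detail-page URL
        yield scrapy.Request(
            next_url,
            callback=self.parse_detail,
            meta={"item": item},  # carry the item to the next callback
        )

    def parse_detail(self, response):
        item = response.meta["item"]  # recover the item passed via meta
        item["content"] = response.xpath("//body//text()").extract()
        yield item
```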
5. Further reading