scrapy_splash is a component for scrapy.
Splash documentation: https://splash.readthedocs.io/en/stable/
scrapy-splash can emulate a browser to load JavaScript and return the data produced after the JS has run.
Splash's Dockerfile: https://github.com/scrapinghub/splash/blob/master/Dockerfile
Splash's dependency environment is fairly complex, so the simplest approach is to use the official Splash docker image directly.
If you do not want to use the docker image, refer to the Splash documentation to install the required dependencies.
Installation reference: https://blog.csdn.net/sanpic/article/details/81984683
With docker correctly installed, pull the Splash image:
sudo docker pull scrapinghub/splash
Run the Splash docker service, then verify the installation by visiting port 8050 in a browser:
sudo docker run -p 8050:8050 scrapinghub/splash        # run in the foreground
sudo docker run -d -p 8050:8050 scrapinghub/splash     # or run detached in the background
Visit http://127.0.0.1:8050; if the Splash welcome page loads, the installation succeeded.
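Besides the browser check, you can call the render.html endpoint directly from Python (a minimal sketch, assuming the requests library is installed and Splash is listening on 127.0.0.1:8050; the target URL is just an example):

import requests

# render.html returns the page HTML after Splash has executed its JavaScript;
# 'url' and 'wait' are standard parameters of this endpoint
resp = requests.get('http://127.0.0.1:8050/render.html',
                    params={'url': 'https://www.baidu.com', 'wait': 2})
print(resp.status_code)   # 200 means the Splash service is up and rendering
print(resp.text[:200])    # beginning of the rendered HTML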
If pulling the image is slow or times out, you can configure a docker registry mirror (using ubuntu 18.04 as an example):
sudo vi /etc/docker/daemon.json
{
    "registry-mirrors": ["https://registry.docker-cn.com"]
}
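After saving daemon.json, restart the docker service (or reboot) so the mirror setting takes effect, then pull the image again; on a systemd-based system such as ubuntu 18.04 this is typically:
sudo systemctl restart docker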
To shut down the Splash service, first stop the container and then remove it:
sudo docker ps -a                 # find the CONTAINER_ID of the splash container
sudo docker stop CONTAINER_ID
sudo docker rm CONTAINER_ID
Install the scrapy-splash python package:
pip install scrapy-splash
Using baidu as an example, create the project and the two spiders:
scrapy startproject test_splash
cd test_splash
scrapy genspider no_splash baidu.com
scrapy genspider with_splash baidu.com
Add the Splash configuration to settings.py and change the robots setting:
# URL of the rendering service
SPLASH_URL = 'http://127.0.0.1:8050'
# downloader middlewares
DOWNLOADER_MIDDLEWARES = {
    'scrapy_splash.SplashCookiesMiddleware': 723,
    'scrapy_splash.SplashMiddleware': 725,
    'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware': 810,
}
# deduplication filter that is aware of Splash arguments
DUPEFILTER_CLASS = 'scrapy_splash.SplashAwareDupeFilter'
# Splash-aware HTTP cache storage
HTTPCACHE_STORAGE = 'scrapy_splash.SplashAwareFSCacheStorage'
# do not obey robots.txt rules
ROBOTSTXT_OBEY = False
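The scrapy-splash README also recommends enabling its spider middleware, which deduplicates splash arguments so they are not stored in the dupefilter and cache; adding it to the same settings.py is optional for this example:

# spider middleware recommended by the scrapy-splash README
SPIDER_MIDDLEWARES = {
    'scrapy_splash.SplashDeduplicateArgsMiddleware': 100,
}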
Complete spiders/no_splash.py:
import scrapy

class NoSplashSpider(scrapy.Spider):
    name = 'no_splash'
    allowed_domains = ['baidu.com']
    start_urls = ['https://www.baidu.com/s?wd=13161933309']

    def parse(self, response):
        with open('no_splash.html', 'w') as f:
            f.write(response.body.decode())
Complete spiders/with_splash.py:
import scrapy
from scrapy_splash import SplashRequest  # request class provided by the scrapy_splash package

class WithSplashSpider(scrapy.Spider):
    name = 'with_splash'
    allowed_domains = ['baidu.com']
    start_urls = ['https://www.baidu.com/s?wd=13161933309']

    def start_requests(self):
        yield SplashRequest(self.start_urls[0],
                            callback=self.parse_splash,
                            args={'wait': 10},        # time to wait for rendering, in seconds
                            endpoint='render.html')   # Splash endpoint that returns the rendered HTML

    def parse_splash(self, response):
        with open('with_splash.html', 'w') as f:
            f.write(response.body.decode())
Run the two spiders separately:
scrapy crawl no_splash
scrapy crawl with_splash
Compare the two output files: no_splash.html (without splash) only contains the page source before the JavaScript has run, while with_splash.html (with splash) contains the fully rendered result.
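The links below also cover Splash's scripting features (screenshots, get_cookies, etc.). As a rough sketch of how they can be used from scrapy_splash (the spider name, Lua script and output file names are illustrative, following the pattern in the scrapy-splash README), a Lua script can be sent to the execute endpoint:

import base64

import scrapy
from scrapy_splash import SplashRequest

# Lua script run by Splash's execute endpoint: load the page, wait,
# then return the rendered HTML, a PNG screenshot and the cookies
LUA_SCRIPT = """
function main(splash, args)
    splash:go(args.url)
    splash:wait(2)
    return {
        html = splash:html(),
        png = splash:png(),
        cookies = splash:get_cookies(),
    }
end
"""

class ExecuteSplashSpider(scrapy.Spider):
    name = 'execute_splash'
    allowed_domains = ['baidu.com']
    start_urls = ['https://www.baidu.com/s?wd=13161933309']

    def start_requests(self):
        yield SplashRequest(self.start_urls[0],
                            callback=self.parse_execute,
                            endpoint='execute',              # run a Lua script instead of render.html
                            args={'lua_source': LUA_SCRIPT})

    def parse_execute(self, response):
        # response.data holds the JSON table returned by the Lua script
        with open('execute_splash.html', 'w') as f:
            f.write(response.data['html'])
        with open('execute_splash.png', 'wb') as f:
            f.write(base64.b64decode(response.data['png']))  # png is base64-encoded in the JSON
        self.logger.info('cookies: %s', response.data['cookies'])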
About splash: https://www.cnblogs.com/zhangxinqi/p/9279014.html
About scrapy_splash (screenshots, get_cookies, etc.): https://www.e-learn.cn/content/qita/800748