
Scrapy Framework: Generic Spiders with CSVFeedSpider

Author: hankleo · Published 2020-09-17 · Column: Hank’s Blog

Step 01: Create the project

scrapy startproject csvfeedspider
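
This creates the project skeleton. Assuming a recent Scrapy release, the layout looks roughly like this (minor differences between versions are possible):

csvfeedspider/
    scrapy.cfg
    csvfeedspider/
        __init__.py
        items.py
        middlewares.py
        pipelines.py
        settings.py
        spiders/
            __init__.py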

Step 02: Generate a spider from the csvfeed template

scrapy genspider -t csvfeed csvdata gzdata.gov.cn
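
genspider writes a stub spider into spiders/csvdata.py. The exact template text depends on your Scrapy version, but it is roughly the sketch below, which steps 03 and 04 flesh out:

# -*- coding: utf-8 -*-
from scrapy.spiders import CSVFeedSpider


class CsvdataSpider(CSVFeedSpider):
    name = 'csvdata'
    allowed_domains = ['gzdata.gov.cn']
    start_urls = ['http://gzdata.gov.cn/feed.csv']
    # headers = ['id', 'name', 'description']
    # delimiter = '\t'

    def parse_row(self, response, row):
        i = {}
        # populate i from row here
        return i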

Step 03: Define the item fields in items.py

# -*- coding: utf-8 -*-

# Define here the models for your scraped items
#
# See documentation in:
# https://doc.scrapy.org/en/latest/topics/items.html

import scrapy


class CsvspiderItem(scrapy.Item):
    # define the fields for your item here like:
    # Expert name (姓名)
    name = scrapy.Field()
    # Research field (研究领域)
    SearchField = scrapy.Field()
    # Service category (服务分类)
    Service = scrapy.Field()
    # Specialty (专业特长)
    Specialty = scrapy.Field()
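
A scrapy.Item instance behaves like a dict keyed by the declared fields, which is exactly how parse_row fills it in step 04. A quick illustration with hypothetical values:

item = CsvspiderItem()
item['name'] = '张三'
print(item['name'])    # 张三
print(dict(item))      # {'name': '张三'}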

Step 04: Write the spider file csvdata.py

# -*- coding: utf-8 -*-
from scrapy.spiders import CSVFeedSpider
from csvfeedspider.items import CsvspiderItem


class CsvparseSpider(CSVFeedSpider):
    name = 'csvdata'
    allowed_domains = ['gzdata.gov.cn']
    start_urls = ['http://gzopen.oss-cn-guizhou-a.aliyuncs.com/科技特派员.csv']
    # Column names for the CSV rows, in file order
    headers = ['name', 'SearchField', 'Service', 'Specialty']
    delimiter = ','
    # Fields are wrapped in double quotes, the CSV default
    quotechar = '"'

    def adapt_response(self, response):
        # The feed is GB18030-encoded; decode it before row parsing.
        # Scrapy's csviter accepts a plain string as well as a response.
        return response.body.decode('gb18030')

    def parse_row(self, response, row):
        # Called once per CSV row; row is a dict keyed by headers.
        i = CsvspiderItem()
        i['name'] = row['name']
        i['SearchField'] = row['SearchField']
        i['Service'] = row['Service']
        i['Specialty'] = row['Specialty']
        yield i
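
CSVFeedSpider downloads each start URL, passes the response through adapt_response, splits the text into rows using the configured delimiter and quotechar, and calls parse_row once per row. To persist the yielded items you can add an item pipeline; the class below is a minimal sketch (the class name and output file are my own, not part of the original project) that writes each item as one JSON line:

# pipelines.py
import json


class JsonLinesPipeline:
    def open_spider(self, spider):
        # Opened once when the crawl starts
        self.file = open('experts.jl', 'w', encoding='utf-8')

    def process_item(self, item, spider):
        # Called for every item yielded by parse_row
        self.file.write(json.dumps(dict(item), ensure_ascii=False) + '\n')
        return item

    def close_spider(self, spider):
        self.file.close()

Enable it in settings.py with ITEM_PIPELINES = {'csvfeedspider.pipelines.JsonLinesPipeline': 300}.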

Step 05: Run the spider

scrapy crawl csvdata
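
Alternatively, skip the custom pipeline and let Scrapy's built-in feed exports write the output directly:

scrapy crawl csvdata -o experts.csv

If the exported file shows mangled Chinese, setting FEED_EXPORT_ENCODING = 'utf-8' (or 'gb18030') in settings.py usually fixes it.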