
Scraper is fetching relative links

Stack Overflow user
Asked on 2021-06-29 12:46:38
1 answer · 55 views · 0 followers · 1 vote

I have created a spider with Scrapy that crawls a website and scrapes its links. **Technologies used:** Python, Scrapy. The problem is that the spider scrapes relative URLs, which the scraper cannot then crawl. I want the spider to return only absolute URLs. Help!!

Code language: python
import scrapy
import os
class MySpider(scrapy.Spider):
    name = 'feed_exporter_test'
    # this is equivalent to what you would set in settings.py file
    custom_settings = {
        'FEED_FORMAT': 'csv',
        'FEED_URI': 'file1.csv'
    }
    filePath = 'file1.csv'
    if os.path.exists(filePath):
        os.remove(filePath)
    else:
        print("Can not delete the file as it doesn't exists")
    start_urls = ['https://www.jamoona.com/']

    def parse(self, response):
        titles = response.xpath("//a/@href").extract()
        for  title in titles:
            yield {'title': title}
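
For context on why the items come back with relative paths: the XPath //a/@href returns each href attribute exactly as it is written in the page's HTML, so relative hrefs stay relative. Below is a minimal illustration using parsel, the selector library Scrapy is built on; the HTML fragment and the paths in it are made up for the example.

Code language: python

from parsel import Selector  # the selector library Scrapy uses internally

# Hypothetical page fragment: the first href is written relatively in the markup,
# so //a/@href returns it relatively too; the second one is already absolute.
html = '<a href="/pages/contact">Contact</a><a href="https://example.com/">External</a>'
selector = Selector(text=html)
print(selector.xpath("//a/@href").getall())
# ['/pages/contact', 'https://example.com/']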


1 Answer

Stack Overflow user

Answered on 2021-06-29 15:25:37

Here is the answer. The fix is to pass each extracted href through response.urljoin(), which resolves relative links against the page URL and yields absolute ones.

Code language: python
import scrapy
import os

class MySpider(scrapy.Spider):
    name = 'feed_exporter_test'
    # this is equivalent to what you would set in settings.py file
    custom_settings = {
        'FEED_FORMAT': 'csv',
        'FEED_URI': 'file1.csv'
    }
    filePath = 'file1.csv'
    if os.path.exists(filePath):
        os.remove(filePath)
    else:
        print("Can not delete the file as it doesn't exists")
    start_urls = ['https://www.jamoona.com/']

    def parse(self, response):
        urls = response.xpath("//a/@href").extract()
        for url in urls:
            # urljoin() resolves each (possibly relative) href against the page URL,
            # so every yielded item contains an absolute URL
            abs_url = response.urljoin(url)
            yield {'title': abs_url}
1 vote
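
As a usage note, response.urljoin() resolves each (possibly relative) href against the page's base URL (normally response.url), much like urllib.parse.urljoin does. A minimal sketch with made-up paths, not taken from the question's site:

Code language: python

from urllib.parse import urljoin

# response.urljoin(href) behaves like urljoin(response.url, href):
# relative hrefs are resolved against the page URL, absolute ones pass through.
base = 'https://www.jamoona.com/'                  # the spider's start URL
print(urljoin(base, '/collections/tea'))           # https://www.jamoona.com/collections/tea
print(urljoin(base, 'about-us'))                   # https://www.jamoona.com/about-us
print(urljoin(base, 'https://example.com/page'))   # https://example.com/page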
Original page content provided by Stack Overflow. Translation supported by Tencent Cloud's IT-domain translation engine.
Original link:

https://stackoverflow.com/questions/68178958
