Scrapy web crawler and data extractor

Stack Overflow user
Asked on 2014-11-27 15:24:57
1 answer · 173 views · 0 followers · 0 votes

I'm trying to build a web crawler with Scrapy, reusing a template I've used before, but I can't seem to get it to parse the URLs. I can see it go to YouTube and then on to the watch pages, but from there it doesn't extract the title or anything else, because parsing always fails.

from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.selector import HtmlXPathSelector
from scrapy.http import Request
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy import log
from krakenkrawler.items import KrakenItem

class AttractionSpider(CrawlSpider):
    name = "thekraken"
    allowed_domains = ["youtube.com"]
    start_urls = [
        "http://www.youtube.com/?gl=GB&hl=en-GB"
    ]
    rules = ()

    def __init__(self, name=None, **kwargs):
        super(AttractionSpider, self).__init__(name, **kwargs)
        self.items_buffer = {}
        self.base_url = "http://www.youtube.com"
        from scrapy.conf import settings
        settings.overrides['DOWNLOAD_TIMEOUT'] = 360

    def parse(self, response):
        print "Start scrapping Attractions...."
        try:
            hxs = HtmlXPathSelector(response)
            links = hxs.select("//h3[@class='yt-lockup-title']//a/@href")

            if not links:
                return
                log.msg("No Data to scrap")

            for link in links:
                v_url = ''.join(link.extract())

                if not v_url:
                    continue
                else:
                    _url = self.base_url + v_url
                    yield Request(url=_url, callback=self.parse_details)
        except Exception as e:
            log.msg("Parsing failed for URL {%s}" % format(response.request.url))
            raise

    def parse_details(self, response):
        print "Start scrapping Detailed Info...."
        try:
            hxs = HtmlXPathSelector(response)
            l_venue = KrakenItem()

            v_name = hxs.select("//*[@id='eow-title'].text").extract()
            if not v_name:
                v_name = hxs.select("//*[@id='eow-title'].text").extract()

            l_venue["name"] = v_name[0].strip()

            base = hxs.select("//*[@id='content']/div[7]")
            if base.extract()[0].strip() == "<div style=\"clear:both\"></div>":
                base = hxs.select("//*[@id='content']/div[8]")
            elif base.extract()[0].strip() == "<div style=\"padding-top:10px;margin-top:10px;border-top:1px dotted #DDD;\">\n  You must be logged in to add a tip\n  </div>":
                base = hxs.select("//*[@id='content']/div[6]")

            x_datas = base.select("div[1]/b").extract()
            v_datas = base.select("div[1]/text()").extract()
            i_d = 0
            if x_datas:
                for x_data in x_datas:
                    print "data is:" + x_data.strip()
                    if x_data.strip() == "<b>Address:</b>":
                        l_venue["address"] = v_datas[i_d].strip()
                    if x_data.strip() == "<b>Contact:</b>":
                        l_venue["contact"] = v_datas[i_d].strip()
                    if x_data.strip() == "<b>Operating Hours:</b>":
                        l_venue["hours"] = v_datas[i_d].strip()
                    if x_data.strip() == "<b>Website:</b>":
                        l_venue["website"] = (base.select("//*[@id='watch-actions-share-panel']/div/div[2]/div[2]/span[1]/input/text()").extract())[0].strip()

                    i_d += 1

            v_photo = base.select("img/@src").extract()
            if v_photo:
                l_venue["photo"] = v_photo[0].strip()

            v_desc = base.select("div[3]/text()").extract()
            if v_desc:
                desc = ""
                for dsc in v_desc:
                    desc += dsc
                l_venue["desc"] = desc.strip()

            v_video = hxs.select("//*[@id='content']/iframe/@src").extract()
            if v_video:
                l_venue["video"] = v_video[0].strip()

            yield l_venue
        except Exception as e:
            log.msg("Parsing failed for URL {%s}" % format(response.request.url))
            raise

Thanks a ton in advance.
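One detail worth flagging alongside the accepted answer below: "//*[@id='eow-title'].text" in parse_details is not valid XPath. The trailing ".text" is Python attribute syntax, not an XPath step, so the selector raises an XPath syntax error, which lands in the except block and would match the "always fails to parse" symptom. A minimal corrected sketch of just that lookup, reusing the hxs and l_venue names from the question's code:

# Invalid: ".text" is not an XPath step; the select() call raises a syntax error.
v_name = hxs.select("//*[@id='eow-title'].text").extract()

# Valid: text() is the XPath node test for an element's text content.
v_name = hxs.select("//*[@id='eow-title']/text()").extract()
if v_name:
    l_venue["name"] = v_name[0].strip()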


1 Answer

Stack Overflow user

Accepted answer

Posted on 2014-11-27 16:02:10

The problem is that the structure you are looking for, "//h3[@class='yt-lockup-title']//a/@href", does not exist on all pages.
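A quick way to confirm this for any individual page is Scrapy's interactive shell; in the Scrapy 0.x releases current at the time, the shell pre-binds the response in an HtmlXPathSelector named hxs. The transcript below is illustrative, not taken from either post:

scrapy shell "http://www.youtube.com/watch?v=GBdCbciGLK0"
>>> hxs.select("//h3[@class='yt-lockup-title']//a/@href").extract()
[]    # empty on page variants where the yt-lockup-title class is absent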

I modified your code to verify which pages get opened and what data gets extracted:

class AttractionSpider(CrawlSpider):
    name = "thekraken"
    bot_name = 'kraken'
    allowed_domains = ["youtube.com"]
    start_urls = ["http://www.youtube.com/?gl=GB&hl=en-GB"]
    rules = (
        Rule(SgmlLinkExtractor(allow=('')), callback='parse_items', follow=True),
    )

    def parse_items(self, response):
        print "Start scrapping Attractions...."
        print response.url
        try:
            hxs = HtmlXPathSelector(response)
            links = hxs.select("//h3[@class='yt-lockup-title']//a/@href")
            for link in links:
                v_url = ''.join(link.extract())
                print v_url

            if not links:
                log.msg("No Data to scrap")

        except:
            pass

The result is this:

Start scrapping Attractions....
http://www.youtube.com/watch?v=GBdCbciGLK0
Start scrapping Attractions....
http://www.youtube.com/watch?v=BxUjDpnSHyc&list=TL4PEfm95Wz3k
Start scrapping Attractions....
http://www.youtube.com/watch?v=T-CZW4YjAig
Start scrapping Attractions....
https://www.youtube.com/user/ComedyShortsGamer
/watch?v=TdICODRvAhc&list=UUrqsNpKuDQZreGaxBL_a5Jg
/watch?v=CDGzm5edrlw&list=UUrqsNpKuDQZreGaxBL_a5Jg
/watch?v=F2oR5KS54JM&list=UUrqsNpKuDQZreGaxBL_a5Jg
/watch?v=LHRzOIvqmQI&list=UUrqsNpKuDQZreGaxBL_a5Jg
/watch?v=F4iqiM6h-2U&list=UUrqsNpKuDQZreGaxBL_a5Jg
/watch?v=ug3UPIvWlvU&list=UUrqsNpKuDQZreGaxBL_a5Jg
/watch?v=msiZs6lIZ9w&list=UUrqsNpKuDQZreGaxBL_a5Jg
/watch?v=Jh6A3DoOLBg&list=UUrqsNpKuDQZreGaxBL_a5Jg

The inner pages that return nothing simply have no "yt-lockup-title" class. In short, you have to improve your spider.
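One way to take that advice, sketched under the same Scrapy 0.x API the posts use (the /watch rule pattern and the parse_watch callback name are illustrative assumptions, not from either post): send only video pages to the item callback, follow everything else for discovery, and skip page variants where the title element is missing instead of raising:

from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.selector import HtmlXPathSelector
from krakenkrawler.items import KrakenItem

class AttractionSpider(CrawlSpider):
    name = "thekraken"
    allowed_domains = ["youtube.com"]
    start_urls = ["http://www.youtube.com/?gl=GB&hl=en-GB"]
    rules = (
        # Video pages: parse for items and keep following their links.
        Rule(SgmlLinkExtractor(allow=(r'/watch\?v=',)),
             callback='parse_watch', follow=True),
        # Everything else on the domain: follow for link discovery only.
        Rule(SgmlLinkExtractor(allow=('',)), follow=True),
    )

    def parse_watch(self, response):
        hxs = HtmlXPathSelector(response)
        # text() is the valid XPath form; ".text" raises a syntax error.
        title = hxs.select("//*[@id='eow-title']/text()").extract()
        if not title:
            return  # layout variant without the element: skip, don't crash
        item = KrakenItem()
        item["name"] = title[0].strip()
        yield item

Because the link extractor resolves hrefs against the page URL, the relative "/watch?v=..." links from the channel page in the output above come back absolute, which also removes the need for the manual base_url concatenation in the original parse().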

Votes: 0

Original page content provided by Stack Overflow.
Original link:

https://stackoverflow.com/questions/27174047
