
Having trouble scraping the needed data from a webpage using Scrapy

Stack Overflow user
Asked on 2019-06-05 04:03:17
2 answers · 55 views · 0 followers · 0 votes

I am scraping the following webpage, http://www.starcitygames.com/catalog/category/Duel%20Decks%20Venser%20vs%20Koth, and I need to get the card name, price, stock, and condition. I have three of the four working, but the condition is giving me trouble: no matter what I try, it either returns NULL or something else that is incorrect.

Partial HTML

<td class="deckdbbody search_results_7">
<a href="http://www.starcitygames.com/content/cardconditions">NM/M</a>
</td>
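
Given this fragment, the condition text lives in an <a> inside the td classed deckdbbody search_results_7, so any selector has to be scoped to that cell. A quick standalone check with parsel, the selector library Scrapy uses (the wrapping table row here is an assumption added so the fragment parses cleanly):

from parsel import Selector

# The fragment from the question, wrapped in a table so lxml keeps the structure.
html = """
<table><tr>
  <td class="deckdbbody search_results_7">
    <a href="http://www.starcitygames.com/content/cardconditions">NM/M</a>
  </td>
</tr></table>
"""

sel = Selector(text=html)
# Scoping to the search_results_7 cell selects only the condition link.
print(sel.css("td[class^=deckdbbody].search_results_7 a::text").get())  # -> NM/M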

SplashSpider.py

import csv
from scrapy.spiders import Spider
from scrapy_splash import SplashRequest
from ..items import GameItem

# process the csv file so the url + ip address + useragent pairs are the same as defined in the file
# returns a list of dictionaries, example:
# [ {'url': 'http://www.starcitygames.com/catalog/category/Rivals%20of%20Ixalan',
#    'ip': 'http://204.152.114.244:8050',
#    'ua': "Mozilla/5.0 (BlackBerry; U; BlackBerry 9320; en-GB) AppleWebKit/534.11"},
#    ...
# ]
def process_csv(csv_file):
    data = []
    reader = csv.reader(csv_file)
    next(reader)
    for fields in reader:
        if fields[0] != "":
            url = fields[0]
        else:
            continue # skip the whole row if the url column is empty
        if fields[1] != "":
            ip = "http://" + fields[1] + ":8050" # adding http and port because this is the needed scheme
        if fields[2] != "":
            useragent = fields[2]
        data.append({"url": url, "ip": ip, "ua": useragent})
    return data
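
# A hedged example of the expected CSV input (hypothetical file, values reused
# from the comment above): a header row followed by url, ip, ua columns,
#     url,ip,ua
#     http://www.starcitygames.com/catalog/category/Rivals%20of%20Ixalan,204.152.114.244,Mozilla/5.0 (BlackBerry; U; BlackBerry 9320; en-GB) AppleWebKit/534.11
# which process_csv turns into
#     [{'url': 'http://www.starcitygames.com/catalog/category/Rivals%20of%20Ixalan',
#       'ip': 'http://204.152.114.244:8050',
#       'ua': 'Mozilla/5.0 (BlackBerry; U; BlackBerry 9320; en-GB) AppleWebKit/534.11'}]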


class MySpider(Spider):
    name = 'splash_spider'  # Name of Spider

    # notice that we don't need to define start_urls
    # just make sure to get all the urls you want to scrape inside start_requests function

    # getting all the url + ip address + useragent pairs then request them
    def start_requests(self):

        # get the file path of the csv file that contains the pairs from the settings.py
        with open(self.settings["PROXY_CSV_FILE"], mode="r") as csv_file:
            # requests is a list of dictionaries like this -> {url: str, ua: str, ip: str}
            requests = process_csv(csv_file)

        for req in requests:
            # no custom middlewares needed: pass the user agent via the
            # headers param and point each request at its own Splash
            # endpoint via splash_url
            yield SplashRequest(url=req["url"], callback=self.parse, args={"wait": 3},
                                headers={"User-Agent": req["ua"]},
                                splash_url=req["ip"])
    # Scraping
    def parse(self, response):
        item = GameItem()
        for game in response.css("tr[class^=deckdbbody]"):
            # Card Name
            item["card_name"] = game.css("a.card_popup::text").extract_first()
            item["condition"] = game.css("a::text").extract_first() #Problem is here

            item["stock"] = game.css("td[class^=deckdbbody].search_results_8::text").extract_first()
            item["price"] = game.css("td[class^=deckdbbody].search_results_9::text").extract_first()

            yield item
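
A likely reason the condition comes back wrong: game.css("a::text") is not scoped to any cell, so it grabs the first link in the row, which is the card-name popup link matched by a.card_popup. A minimal sketch of a scoped alternative, mirroring the stock/price pattern above (untested against the live page, so treat it as an assumption):

            # Condition: scope to the search_results_7 cell, then take the link text
            item["condition"] = game.css("td[class^=deckdbbody].search_results_7 a::text").extract_first()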
Original question on Stack Overflow: https://stackoverflow.com/questions/56450600