
How do I scrape this data? [Python, BeautifulSoup4]

Stack Overflow user
Asked on 2018-06-09 04:08:51
1 answer · 237 views · 0 followers · 0 votes

I have already found a way to scrape other websites, but this site requires a special "browser" (a User-Agent header) before it will serve the html variable. After I add one, the program no longer crashes, but it still doesn't work.

The variables I want: Rank, Name, Code, Score (https://imgur.com/a/FIWDFk1)

Here is the code I wrote, but it doesn't work on this site: it runs, yet nothing is read or saved.

from urllib.request import urlopen as uReq
from urllib.request import Request
from bs4 import BeautifulSoup as soup

myUrl = "https://mee6.xyz/levels/159962941502783488"

req = Request(
    myUrl, 
    data=None, 
    headers={
        'User-Agent':'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/51.0.2704.103 Safari/537.36'
    }
)

uClient = uReq(req)
pageHtml = uClient.read()
uClient.close()

page_soup = soup(pageHtml, "html.parser")

containers = page_soup.findAll("div",{"class":"Player"})
print(containers)
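The likely symptom here is that `containers` comes back empty: the mee6 page is rendered client-side by JavaScript, so the static HTML that urllib receives contains no `div.Player` elements at all. A minimal, self-contained sketch of the same effect (the HTML snippet below is made up to stand in for what a single-page app serves before its scripts run, not taken from the live site):

```python
from bs4 import BeautifulSoup

# Stand-in for the HTML a client-rendered page serves before
# JavaScript runs: the leaderboard markup simply is not there yet.
static_html = """
<html><body>
  <div id="root"></div>
  <script src="/bundle.js"></script>
</body></html>
"""

page_soup = BeautifulSoup(static_html, "html.parser")
containers = page_soup.find_all("div", {"class": "Player"})
print(containers)  # [] -- nothing for BeautifulSoup to scrape
```

An empty result like this, with no crash, is the usual sign that the data arrives later via a separate request, which is exactly what the accepted answer exploits.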

The code I used comes from a YouTube tutorial; when I change the url it does not work with the mee6 leaderboard, because the site rejects the browser: it crashes for the mee6 url.

from urllib.request import urlopen as uReq
from bs4 import BeautifulSoup as soup
import csv

my_url = "https://www.newegg.ca/Product/ProductList.aspx?Submit=ENE&N=100007708%20601210955%20601203901%20601294835%20601295933%20601194948&IsNodeId=1&bop=And&Order=BESTSELLING&PageSize=96"

uClient = uReq(my_url)
page_html = uClient.read()
uClient.close()

page_soup = soup(page_html, "html.parser")
containers = page_soup.findAll("div",{"class":"item-container"})
filename = "GPU Prices.csv"
header = ['Price', 'Product Brand', 'Product Name', 'Shipping Cost']

with open(filename, 'w', newline='') as f_output:
    csv_output = csv.writer(f_output)
    csv_output.writerow(header)

    for container in containers:
        price_container = container.findAll("li", {"class":"price-current"})
        price = price_container[0].text.replace('\xa0', ' ').strip(' –\r\n|')

        brand = container.div.div.a.img["title"]

        title_container = container.findAll("a", {"class":"item-title"})
        product_name = title_container[0].text

        shipping_container = container.findAll("li", {"class":"price-ship"})
        shipping = shipping_container[0].text.strip()

        csv_output.writerow([price, brand, product_name, shipping])
1 Answer

Stack Overflow user

Accepted answer

Posted on 2018-06-09 06:17:29

Try the approach below to get the data from that page. The webpage loads its content dynamically, so if you stick with the original url, requests won't get you the response you're after. Use the dev tools to collect the json link, as I did here. Give it a shot:

import requests

URL = 'https://mee6.xyz/api/plugins/levels/leaderboard/159962941502783488'

res = requests.get(URL)
for item in res.json()['players']:
    name = item['username']
    discriminator = item['discriminator']
    xp = item['xp']
    print(name,discriminator,xp)

The output looks like this:

Sil 5262 891462
Birdie♫ 6017 745639
Delta 5728 641571
Mr. Squishy 0001 308349
Majick 6918 251024
Samuel (xCykrix) 1101 226470
WolfGang1710 6782 222741

To write the results to a csv file, you can do the following:

import requests
import csv

Headers = ['Name','Discriminator','Xp']
res = requests.get('https://mee6.xyz/api/plugins/levels/leaderboard/159962941502783488')

with open('leaderboard.csv','w', newline='', encoding = "utf-8") as infile:
    writer = csv.writer(infile)
    writer.writerow(Headers)
    for item in res.json()['players']:
        name = item['username']
        discriminator = item['discriminator']
        xp = item['xp']
        print(name,discriminator,xp)
        writer.writerow([name,discriminator,xp])
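The json-to-csv step above can be exercised without hitting the network. The sketch below feeds a hand-made sample (shaped like the `players` payload shown in the answer; the values are copied from the sample output, not fetched live) through the same csv-writing logic, using an in-memory buffer instead of a file:

```python
import csv
import io

# Hand-made sample mimicking the API's JSON structure.
sample = {
    "players": [
        {"username": "Sil", "discriminator": "5262", "xp": 891462},
        {"username": "Delta", "discriminator": "5728", "xp": 641571},
    ]
}

buffer = io.StringIO()  # in-memory stand-in for leaderboard.csv
writer = csv.writer(buffer)
writer.writerow(["Name", "Discriminator", "Xp"])
for item in sample["players"]:
    writer.writerow([item["username"], item["discriminator"], item["xp"]])

print(buffer.getvalue())
```

Swapping `io.StringIO()` for `open('leaderboard.csv', 'w', newline='', encoding='utf-8')` gives you the answer's file-writing version; the `newline=''` argument matters, since the csv module does its own line-ending handling.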
Votes: 1
Original content provided by Stack Overflow; translated by Tencent Cloud's translation engine.
Original link: https://stackoverflow.com/questions/50767758