
How do I filter out files in Beautiful Soup?

Stack Overflow user
Asked on 2022-04-03 19:31:11
2 answers · 42 views · 0 followers · Score: -1
import os
import requests
from urllib.parse import urljoin
from bs4 import BeautifulSoup

url = "https://papers.gceguide.com/A%20Levels/Physics%20(9702)/2015/"

folder_location = r'C:\Users'  # a raw string literal cannot end with a backslash
if not os.path.exists(folder_location):
    os.mkdir(folder_location)

response = requests.get(url)
soup = BeautifulSoup(response.text, "html.parser")

# Grab every link whose href ends in .pdf and save it locally
for link in soup.select("a[href$='.pdf']"):
    filename = os.path.join(folder_location, link['href'].split('/')[-1])
    with open(filename, 'wb') as f:
        f.write(requests.get(urljoin(url, link['href'])).content)

How do I filter out the unwanted links so that it only downloads the PDF files whose names contain 'qp_2'?


2 Answers

Stack Overflow user

Answered on 2022-04-03 19:46:24

To download only the PDFs whose filenames contain qp_2, you can use the following example:

import requests
from bs4 import BeautifulSoup


url = "https://papers.gceguide.com/A%20Levels/Physics%20(9702)/2015/"
soup = BeautifulSoup(requests.get(url).content, "html.parser")

# On this page the file links carry class "name"; [href*="qp_2"]
# keeps only those whose href contains "qp_2"
for n in soup.select('a.name[href*="qp_2"]'):
    print("Downloading", n.text)
    with open(n.text, "wb") as f_out:
        r = requests.get(url + n.text)
        f_out.write(r.content)

This prints the filenames as it downloads them:

Downloading 9702_s15_qp_21.pdf
Downloading 9702_s15_qp_22.pdf
Downloading 9702_s15_qp_23.pdf
Downloading 9702_w15_qp_21.pdf
Downloading 9702_w15_qp_22.pdf
Downloading 9702_w15_qp_23.pdf
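
For completeness, a minimal sketch that combines this qp_2 filter with the folder-saving logic from the question (folder_location is the question's placeholder path; urljoin makes the download URL robust to relative hrefs):

import os
import requests
from urllib.parse import urljoin
from bs4 import BeautifulSoup

url = "https://papers.gceguide.com/A%20Levels/Physics%20(9702)/2015/"
folder_location = r'C:\Users'  # placeholder path from the question
os.makedirs(folder_location, exist_ok=True)

soup = BeautifulSoup(requests.get(url).content, "html.parser")

# Both conditions in one selector: contains 'qp_2' and ends in '.pdf'
for link in soup.select("a[href*='qp_2'][href$='.pdf']"):
    filename = os.path.join(folder_location, link['href'].split('/')[-1])
    with open(filename, 'wb') as f:
        f.write(requests.get(urljoin(url, link['href'])).content)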
Score: 1

Stack Overflow user

Answered on 2022-04-03 19:53:33

Select the links more specifically by checking for both qp_2 and .pdf in the CSS selector:

soup.select("a[href*='qp_2'][href$='.pdf']")

An alternative is to do the second check while iterating:

for a in soup.select("a[href*='qp_2']"):
    if a['href'].endswith('.pdf'):  # the '.pdf' check moves into Python
        with open(a['href'], "wb") as f_out:
            r = requests.get(url + a['href'])
            f_out.write(r.content)

Example

import requests
from bs4 import BeautifulSoup


url = "https://papers.gceguide.com/A%20Levels/Physics%20(9702)/2015/"
soup = BeautifulSoup(requests.get(url).content, "html.parser")

# Both conditions expressed directly in the CSS selector
for a in soup.select("a[href*='qp_2'][href$='.pdf']"):
    with open(a['href'], "wb") as f_out:
        r = requests.get(url + a['href'])
        f_out.write(r.content)
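
For reference, the same filter can also be written without a CSS selector, using find_all with a compiled regex matched against href (a sketch, fetching the same page as above):

import re

import requests
from bs4 import BeautifulSoup

url = "https://papers.gceguide.com/A%20Levels/Physics%20(9702)/2015/"
soup = BeautifulSoup(requests.get(url).content, "html.parser")

# href= accepts a compiled regex: keep hrefs that contain "qp_2"
# and end in ".pdf"
for a in soup.find_all("a", href=re.compile(r"qp_2.*\.pdf$")):
    print(a["href"])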
Score: 1
The original content of this page is provided by Stack Overflow. Translation supported by Tencent Cloud Xiaowei's IT-domain engine.
Original link:

https://stackoverflow.com/questions/71729216