
Python web-crawler guessed parser warning

Stack Overflow user
Asked on 2020-08-23 21:41:41
1 answer · 597 views · 0 followers · 0 votes

I'm trying to build a web crawler in Python (3.8). I thought I had it finished, but I'm getting the warning below. Can anyone help? Thanks in advance.

Python code:

import requests
from bs4 import BeautifulSoup


def aliexpress_spider (max_pages):
    page = 1
    while page <= max_pages:
        url = "https://www.aliexpress.com/af/ps4.html?trafficChannel=af&d=y&CatId=0&SearchText=ps4&ltype=affiliate&SortType=default&page=" + str(page)
        sourcecode = requests.get(url)
        plaintext = sourcecode.text
        soup = BeautifulSoup(plaintext)  # no parser named here; this is the line the warning points at
        for link in soup.findAll('a' , {'class' : 'item-title'}):
            href = "https://www.aliexpress.com" + link.get("href")
            title  = link.string
            print(href)
            print(title)
        page += 1


aliexpress_spider(1)

Error message:

  GuessedAtParserWarning: No parser was explicitly specified, so I'm using the best available HTML parser for this system ("html.parser"). This usually isn't a problem, but if you run this code on another system, or in a different virtual environment, it may use a different parser and behave differently.

The code that caused this warning is on line 11 of the file C:/Users/moham/PycharmProjects/moh/test.py. To get rid of this warning, pass the additional argument 'features="html.parser"' to the BeautifulSoup constructor.

  soup = BeautifulSoup(plaintext)
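
As the warning itself says, the fix is a one-argument change: name the parser explicitly when constructing the soup. A minimal before/after sketch using the question's own variable name:

soup = BeautifulSoup(plaintext)                          # parser guessed, triggers the warning
soup = BeautifulSoup(plaintext, features="html.parser")  # parser named explicitly, no warning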

1 Answer

Stack Overflow user

Posted on 2021-02-27 00:06:12

import requests
from bs4 import BeautifulSoup


def aliexpress_spider(max_pages):
    page = 1
    while page <= max_pages:
        url = "https://www.aliexpress.com/af/ps4.html?trafficChannel=af&d=y&CatId=0&SearchText=ps4&ltype=affiliate&SortType=default&page=" + str(page)
        sourcecode = requests.get(url)

        # Naming the parser explicitly silences GuessedAtParserWarning
        soup = BeautifulSoup(sourcecode.text, "html.parser")
        for link in soup.findAll('a', {'class': 'item-title'}):
            href = "https://www.aliexpress.com" + link.get("href")
            title = link.string
            print(href)
            print(title)
        print(soup.title)
        page += 1


aliexpress_spider(1)
Votes: -1
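
A side note on parser choice: "html.parser" is the safe default because it ships with the Python standard library, so the script behaves the same on every machine. If the third-party lxml package is available, it can be named instead and is generally faster. A sketch, assuming lxml has been installed separately:

soup = BeautifulSoup(sourcecode.text, "lxml")  # requires: pip install lxml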
Original page content provided by Stack Overflow.
Original link:

https://stackoverflow.com/questions/63547631
