
Essentials of BeautifulSoup4, a Python Web-Scraping Library

BeautifulSoup is an excellent Python library for extracting the data you care about from HTML or XML documents, and it lets you choose among several parsers. Since BeautifulSoup 3 is no longer maintained, new projects should use BeautifulSoup 4 (the latest version at the time of writing was 4.5.0). Install it with pip install beautifulsoup4, then import it with from bs4 import BeautifulSoup. The session below gives a quick tour of its main features; for complete documentation see https://www.crummy.com/software/BeautifulSoup/bs4/doc/.

>>> from bs4 import BeautifulSoup
>>> BeautifulSoup('hello world!', 'lxml')  # missing tags are added and completed automatically
<html><body><p>hello world!</p></body></html>
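How aggressively a fragment gets repaired depends on the parser you pass in. A minimal sketch of the difference (the lxml call is commented out since lxml may not be installed):

```python
from bs4 import BeautifulSoup

fragment = '<b>hello world!'  # note the unclosed <b> tag

# The built-in parser closes the dangling tag but adds no <html>/<body> skeleton:
print(BeautifulSoup(fragment, 'html.parser'))  # <b>hello world!</b>

# lxml, if installed, additionally wraps the fragment in a full document,
# which is why the example above shows <html><body><p>...</p></body></html>:
# print(BeautifulSoup(fragment, 'lxml'))
```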

>>> html_doc = """
<html><head><title>The Dormouse's story</title></head>
<body>
<p class="title"><b>The Dormouse's story</b></p>
<p class="story">Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1">Elsie</a>,
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
and they lived at the bottom of a well.</p>
<p class="story">...</p>
"""

>>> soup = BeautifulSoup(html_doc, 'html.parser')  # lxml or another parser also works
>>> print(soup.prettify())  # pretty-print the parse tree

<html>
 <head>
  <title>
   The Dormouse's story
  </title>
 </head>
 <body>
  <p class="title">
   <b>
    The Dormouse's story
   </b>
  </p>
  <p class="story">
   Once upon a time there were three little sisters; and their names were
   <a class="sister" href="http://example.com/elsie" id="link1">
    Elsie
   </a>
   ,
   <a class="sister" href="http://example.com/lacie" id="link2">
    Lacie
   </a>
   and
   <a class="sister" href="http://example.com/tillie" id="link3">
    Tillie
   </a>
   ;
   and they lived at the bottom of a well.
  </p>
  <p class="story">
   ...
  </p>
 </body>
</html>

>>> soup.title  # access a specific tag
<title>The Dormouse's story</title>
>>> soup.title.name  # the tag's name
'title'
>>> soup.title.text  # the tag's text
"The Dormouse's story"
>>> soup.title.string
"The Dormouse's story"
>>> soup.title.parent  # the enclosing tag
<head><title>The Dormouse's story</title></head>
>>> soup.head
<head><title>The Dormouse's story</title></head>
>>> soup.b
<b>The Dormouse's story</b>
>>> soup.body.b
<b>The Dormouse's story</b>
>>> soup.name  # the BeautifulSoup object itself behaves like a tag
'[document]'

>>> soup.body
<body>
<p class="title"><b>The Dormouse's story</b></p>
<p class="story">Once upon a time there were three little sisters; and their names were
<a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>,
<a class="sister" href="http://example.com/lacie" id="link2">Lacie</a> and
<a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>;
and they lived at the bottom of a well.</p>
<p class="story">...</p>
</body>

>>> soup.p
<p class="title"><b>The Dormouse's story</b></p>
>>> soup.p['class']  # a tag attribute
['title']
>>> soup.p.get('class')  # another way to read an attribute
['title']
>>> soup.p.text
"The Dormouse's story"
>>> soup.p.contents
[<b>The Dormouse's story</b>]
>>> soup.a
<a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>
>>> soup.a.attrs  # all attributes of a tag
{'class': ['sister'], 'href': 'http://example.com/elsie', 'id': 'link1'}
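The difference between subscripting and get() matters once an attribute may be absent: subscripting raises KeyError, while get() returns None or a supplied default. A small self-contained sketch (markup invented for illustration):

```python
from bs4 import BeautifulSoup

soup = BeautifulSoup('<p class="title">hi</p>', 'html.parser')

print(soup.p.get('id'))           # None: missing attribute, no exception
print(soup.p.get('id', 'no-id'))  # 'no-id': an explicit default
try:
    soup.p['id']                  # subscripting a missing attribute raises
except KeyError as e:
    print('KeyError:', e)
```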

>>> soup.find_all('a')  # find all <a> tags
[<a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>, <a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>, <a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>]
>>> soup.find_all(['a', 'b'])  # find <a> and <b> tags in one pass
[<b>The Dormouse's story</b>, <a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>, <a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>, <a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>]
>>> import re
>>> soup.find_all(href=re.compile("elsie"))  # find tags whose href matches a pattern
[<a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>]
>>> soup.find(id='link3')
<a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>
>>> soup.find_all('a', id='link3')
[<a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>]
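find_all accepts further keyword filters: because class is a reserved word in Python, the parameter is spelled class_, and limit= caps the number of matches. A minimal sketch with made-up markup and ids:

```python
from bs4 import BeautifulSoup

doc = ('<p class="story">'
       '<a class="sister" id="link1">Elsie</a>'
       '<a class="sister" id="link2">Lacie</a>'
       '</p>')
soup = BeautifulSoup(doc, 'html.parser')

# class is a Python keyword, so the keyword argument is class_:
print([a['id'] for a in soup.find_all('a', class_='sister')])  # ['link1', 'link2']

# limit= stops the search after the given number of matches:
print(len(soup.find_all('a', limit=1)))  # 1
```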

>>> for link in soup.find_all('a'):
	print(link.text, ':', link.get('href'))

Elsie : http://example.com/elsie
Lacie : http://example.com/lacie
Tillie : http://example.com/tillie

>>> print(soup.get_text())  # all text in the document
The Dormouse's story
The Dormouse's story
Once upon a time there were three little sisters; and their names were
Elsie,
Lacie and
Tillie;
and they lived at the bottom of a well.
...
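get_text() also takes a separator string and a strip flag, which helps when tags are interleaved with stray whitespace. A minimal sketch with invented markup:

```python
from bs4 import BeautifulSoup

soup = BeautifulSoup('<p>Once upon <a>Elsie</a> and <a>Lacie</a></p>', 'html.parser')

# separator= joins the individual text pieces; strip=True trims each piece first:
print(soup.get_text('|', strip=True))  # Once upon|Elsie|and|Lacie
```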

>>> soup.a['id'] = 'test_link1'  # change the value of a tag attribute
>>> soup.a
<a class="sister" href="http://example.com/elsie" id="test_link1">Elsie</a>
>>> soup.a.string.replace_with('test_Elsie')  # replace a tag's text
'Elsie'
>>> soup.a.string
'test_Elsie'
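Beyond editing attributes and strings in place, the tree itself can be extended: new_tag() builds a detached tag and append() attaches it. A minimal sketch (markup and URL are placeholders, not from the article):

```python
from bs4 import BeautifulSoup

soup = BeautifulSoup('<body><p>old text</p></body>', 'html.parser')

new = soup.new_tag('a', href='http://example.com')  # create a detached <a> tag
new.string = 'a link'                               # give it some text
soup.p.append(new)                                  # attach it inside the <p>
print(soup.p)  # <p>old text<a href="http://example.com">a link</a></p>
```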

>>> print(soup.prettify())
<html>
 <head>
  <title>
   The Dormouse's story
  </title>
 </head>
 <body>
  <p class="title">
   <b>
    The Dormouse's story
   </b>
  </p>
  <p class="story">
   Once upon a time there were three little sisters; and their names were
   <a class="sister" href="http://example.com/elsie" id="test_link1">
    test_Elsie
   </a>
   ,
   <a class="sister" href="http://example.com/lacie" id="link2">
    Lacie
   </a>
   and
   <a class="sister" href="http://example.com/tillie" id="link3">
    Tillie
   </a>
   ;
   and they lived at the bottom of a well.
  </p>
  <p class="story">
   ...
  </p>
 </body>
</html>

>>> for child in soup.body.children:  # iterate over direct children
	print(child)

<p class="title"><b>The Dormouse's story</b></p>
<p class="story">Once upon a time there were three little sisters; and their names were
<a class="sister" href="http://example.com/elsie" id="test_link1">test_Elsie</a>,
<a class="sister" href="http://example.com/lacie" id="link2">Lacie</a> and
<a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>;
and they lived at the bottom of a well.</p>
<p class="story">...</p>

>>> for string in soup.strings:  # iterate over all text nodes (output omitted)
	print(string)

>>> test_doc = '<html><head></head><body><p></p><p></p></body></html>'
>>> s = BeautifulSoup(test_doc, 'lxml')
>>> for child in s.html.children:  # direct children only
	print(child)

<head></head>
<body><p></p><p></p></body>
>>> for child in s.html.descendants:  # all descendants, at any depth
	print(child)

<head></head>
<body><p></p><p></p></body>
<p></p>
<p></p>
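Besides children and descendants, the tree can be navigated sideways and upward with next_sibling, previous_sibling, and parent. A small sketch (ids invented for the example):

```python
from bs4 import BeautifulSoup

s = BeautifulSoup('<body><p id="a">one</p><p id="b">two</p></body>', 'html.parser')

first = s.find(id='a')
print(first.next_sibling)  # the adjacent <p id="b"> tag
print(first.parent.name)   # body
```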

This article was first shared via the WeChat official account Python小屋 (Python_xiaowu); author: Dong Fuguo (董付国). Originally published: 2016-12-29.
