BeautifulSoup provides many methods for searching a document; the official documentation focuses on two of them: find() and find_all().
html_doc = """
<html><head><title>The Dormouse's story</title></head>
<body>
<p class="title"><b>The Dormouse's story</b></p>
<p class="story">Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1">Elsie</a>,
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
and they lived at the bottom of a well.</p>
<p class="story">...</p>
"""
from bs4 import BeautifulSoup
soup = BeautifulSoup(html_doc, 'html.parser')
Passing a string, Beautiful Soup matches content exactly against that string. Finding the b tags with print(soup.find_all('b')) outputs:
[<b>The Dormouse's story</b>]
Passing a regular expression, Beautiful Soup matches content against it. Finding the tags whose names start with b:
import re
for tag in soup.find_all(re.compile("^b")):
    print(tag.name)
This prints:
body
b
Passing a list, Beautiful Soup returns content that matches any element of the list. Finding the a tags and b tags with print(soup.find_all(["a", "b"])) outputs:
[<b>The Dormouse's story</b>,
<a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>,
<a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>,
<a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>]
Passing True matches every tag in the document, though none of the text strings:
for tag in soup.find_all(True):
    print(tag.name)
This prints:
html
head
title
body
p
b
p
a
a
a
p
A filter can also be a function that takes a tag element as its only argument: it returns True if the current element matches and should be kept, and False otherwise.
find_all() searches all tag children of the current tag and checks each of them against the filter. For example, print(soup.find_all("title")) outputs:
[<title>The Dormouse's story</title>]
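A function filter like the one just described might look like this; the helper name has_class_but_no_id is illustrative, not part of the original article:

```python
from bs4 import BeautifulSoup

html_doc = """
<html><head><title>The Dormouse's story</title></head>
<body>
<p class="title"><b>The Dormouse's story</b></p>
<p class="story">Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1">Elsie</a>,
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
and they lived at the bottom of a well.</p>
<p class="story">...</p>
"""
soup = BeautifulSoup(html_doc, 'html.parser')

def has_class_but_no_id(tag):
    # True for tags that define a class attribute but no id attribute
    return tag.has_attr('class') and not tag.has_attr('id')

# Only the three <p> tags qualify; the <a> tags all carry an id
print([t.name for t in soup.find_all(has_class_but_no_id)])  # ['p', 'p', 'p']
```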
find_all( name , attrs , recursive , string , **kwargs )
The name parameter finds all tags with the given name: print(soup.find_all("title")) outputs [<title>The Dormouse's story</title>].
Keyword arguments search on a tag's attributes: print(soup.find_all(id='link2')) outputs [<a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>].
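Keyword arguments accept the same kinds of filters as name, including regular expressions and True. A sketch against the same sample document:

```python
import re
from bs4 import BeautifulSoup

html_doc = """
<html><head><title>The Dormouse's story</title></head>
<body>
<p class="title"><b>The Dormouse's story</b></p>
<p class="story">Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1">Elsie</a>,
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
and they lived at the bottom of a well.</p>
<p class="story">...</p>
"""
soup = BeautifulSoup(html_doc, 'html.parser')

# A regular expression as a keyword filter: links whose href contains "lacie"
print(soup.find_all(href=re.compile("lacie")))
# [<a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>]

# True as a keyword filter: every tag that has an id attribute at all
print([t['id'] for t in soup.find_all(id=True)])  # ['link1', 'link2', 'link3']
```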
For searching by CSS class, the class_ parameter finds tags with the given CSS class name: print(soup.find_all("a", class_="sister")) outputs:
[<a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>, <a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>, <a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>]
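class_ is not limited to exact strings; like other filters it also accepts regular expressions and functions. A sketch (the helper has_six_characters is illustrative):

```python
import re
from bs4 import BeautifulSoup

html_doc = """
<html><head><title>The Dormouse's story</title></head>
<body>
<p class="title"><b>The Dormouse's story</b></p>
<p class="story">Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1">Elsie</a>,
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
and they lived at the bottom of a well.</p>
<p class="story">...</p>
"""
soup = BeautifulSoup(html_doc, 'html.parser')

# Regular expression: any tag whose CSS class contains "itl" (here, class="title")
print([t.name for t in soup.find_all(class_=re.compile("itl"))])  # ['p']

# Function: the filter receives each class value, not the whole tag
def has_six_characters(css_class):
    return css_class is not None and len(css_class) == 6

print(len(soup.find_all(class_=has_six_characters)))  # 3 ("sister" links)
```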
The string parameter searches for strings in the document content, and it accepts the same kinds of values as the name parameter: print(soup.find_all(string="Elsie")) outputs ['Elsie'].
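string can also take a regular expression, and it combines with name to filter tags by their text. A sketch:

```python
import re
from bs4 import BeautifulSoup

html_doc = """
<html><head><title>The Dormouse's story</title></head>
<body>
<p class="title"><b>The Dormouse's story</b></p>
<p class="story">Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1">Elsie</a>,
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
and they lived at the bottom of a well.</p>
<p class="story">...</p>
"""
soup = BeautifulSoup(html_doc, 'html.parser')

# Every string containing "Dormouse": the <title> text and the <b> text
print(soup.find_all(string=re.compile("Dormouse")))

# string combined with name: <a> tags whose text is exactly "Elsie"
print(soup.find_all("a", string="Elsie"))
# [<a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>]
```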
The limit parameter caps the number of results, avoiding the slowdown of collecting a very large result set: soup.find_all("a", limit=2) returns:
,输出为:[<a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>,
<a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>]
To search only a tag's direct children, pass the parameter recursive=False. Consider this fragment of the document:
<html>
<head>
<title>
The Dormouse's story
</title>
</head>
...
Without the recursive parameter, print(soup.html.find_all("title")) outputs:
[<title>The Dormouse's story</title>]
With recursive=False, print(soup.html.find_all("title", recursive=False)) outputs:
[]
find( name , attrs , recursive , string , **kwargs )
Even when find_all() matches only one element, it returns a list containing that single element, whereas find() returns the result directly. When nothing matches, find_all() returns an empty list, while find() returns None: print(soup.find("nosuchtag")) outputs None.
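In other words, find(...) behaves roughly like find_all(..., limit=1) followed by unwrapping the single-element list. A sketch:

```python
from bs4 import BeautifulSoup

html_doc = """
<html><head><title>The Dormouse's story</title></head>
<body>
<p class="title"><b>The Dormouse's story</b></p>
<p class="story">Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1">Elsie</a>,
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
and they lived at the bottom of a well.</p>
<p class="story">...</p>
"""
soup = BeautifulSoup(html_doc, 'html.parser')

print(soup.find_all('title', limit=1))  # [<title>The Dormouse's story</title>]
print(soup.find('title'))               # <title>The Dormouse's story</title>

# Failed searches differ: an empty list versus None
print(soup.find_all('nosuchtag'))  # []
print(soup.find('nosuchtag'))      # None
```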
find_parents( name , attrs , recursive , string , **kwargs )
find_parent( name , attrs , recursive , string , **kwargs )
find_parents() and find_parent() search the current node's ancestors, whereas find_all() and find() search among the current node's children, grandchildren, and so on:
a_string = soup.find(string="Lacie")
print(a_string)
print(a_string.find_parents("a"))
Lacie
[<a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>]
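The singular find_parent() returns the nearest matching ancestor directly rather than a list. A sketch:

```python
from bs4 import BeautifulSoup

html_doc = """
<html><head><title>The Dormouse's story</title></head>
<body>
<p class="title"><b>The Dormouse's story</b></p>
<p class="story">Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1">Elsie</a>,
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
and they lived at the bottom of a well.</p>
<p class="story">...</p>
"""
soup = BeautifulSoup(html_doc, 'html.parser')

a_string = soup.find(string="Lacie")
# Nearest enclosing <a>, then the <p> above it
print(a_string.find_parent("a")['id'])     # link2
print(a_string.find_parent("p")['class'])  # ['story']
```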
find_next_siblings( name , attrs , recursive , string , **kwargs )
find_next_sibling( name , attrs , recursive , string , **kwargs )
These methods use the .next_siblings attribute to iterate over the sibling tags parsed after the current tag. find_next_siblings() returns all following siblings that match the filter; find_next_sibling() returns only the first matching tag among them:
first_link = soup.a
print(first_link)
print(first_link.find_next_siblings("a"))
first_story_paragraph = soup.find("p", "story")
print(first_story_paragraph.find_next_sibling("p"))
<a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>
[<a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>, <a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>]
<p class="story">...</p>
find_previous_siblings( name , attrs , recursive , string , **kwargs )
find_previous_sibling( name , attrs , recursive , string , **kwargs )
These methods use the .previous_siblings attribute to iterate over the sibling tags parsed before the current tag. find_previous_siblings() returns all preceding siblings that match; find_previous_sibling() returns the first preceding sibling that matches.
find_all_next( name , attrs , recursive , string , **kwargs )
find_next( name , attrs , recursive , string , **kwargs )
These methods use the .next_elements attribute to iterate over the tags and strings that appear after the current tag in the document. find_all_next() returns all matches; find_next() returns only the first match.
find_all_previous( name , attrs , recursive , string , **kwargs )
find_previous( name , attrs , recursive , string , **kwargs )
These methods use the .previous_elements attribute to iterate over the tags and strings that appear before the current node. find_all_previous() returns all matches; find_previous() returns only the first match.

The complete example script:
# -*- coding: utf-8 -*-
# Author: NoamaNelson
# Date: 2023/2/17
# File: bs04.py
# Purpose: BeautifulSoup examples
# Contact: VX (NoamaNelson)
# Blog: https://blog.csdn.net/NoamaNelson
from bs4 import BeautifulSoup
html_doc = """
<html><head><title>The Dormouse's story</title></head>
<body>
<p class="title"><b>The Dormouse's story</b></p>
<p class="story">Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1">Elsie</a>,
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
and they lived at the bottom of a well.</p>
<p class="story">...</p>
"""
soup = BeautifulSoup(html_doc, 'html.parser')
# ====== Filters ======
# String filter
print(soup.find_all('b'))
# Regular expression filter
import re
for tag in soup.find_all(re.compile("^b")):
    print(tag.name)
# List filter
print(soup.find_all(["a", "b"]))
# True filter
for tag in soup.find_all(True):
    print(tag.name)
# ====== find_all() ======
print(soup.find_all("title"))
print(soup.find_all(id='link2'))
print(soup.find_all("a", class_="sister"))
print(soup.find_all(string="Elsie"))
print(soup.find_all("a", limit=2))
print(soup.html.find_all("title", recursive=False))
# ====== find() ======
print(soup.find("nosuchtag"))
a_string = soup.find(string="Lacie")
print(a_string)
print(a_string.find_parents("a"))
first_link = soup.a
print(first_link)
print(first_link.find_next_siblings("a"))
first_story_paragraph = soup.find("p", "story")
print(first_story_paragraph.find_next_sibling("p"))
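The script above does not exercise the previous-sibling or next/previous-element methods; here is a small supplementary sketch against the same document:

```python
from bs4 import BeautifulSoup

html_doc = """
<html><head><title>The Dormouse's story</title></head>
<body>
<p class="title"><b>The Dormouse's story</b></p>
<p class="story">Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1">Elsie</a>,
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
and they lived at the bottom of a well.</p>
<p class="story">...</p>
"""
soup = BeautifulSoup(html_doc, 'html.parser')

last_link = soup.find("a", id="link3")

# Matching siblings parsed before link3, nearest first
print([t['id'] for t in last_link.find_previous_siblings("a")])  # ['link2', 'link1']

# First matching tag after link3 in document order: the final paragraph
print(last_link.find_next("p"))  # <p class="story">...</p>

# First matching tag before link3: its own enclosing paragraph
print(last_link.find_previous("p")['class'])  # ['story']
```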
Originality statement: this article was published on the Tencent Cloud Developer Community with the author's authorization; reproduction without permission is prohibited.
For infringement concerns, contact cloudcommunity@tencent.com for removal.