I am new to scraping and to Python, and I have written some code to scrape a web page.
This is the link. I am using the code given below, but the response does not contain all of the HTML; the data in the middle of the page is not retrieved. I have tried both lxml and html.parser, but it makes no difference.
from bs4 import BeautifulSoup
import requests

url = 'http://www.hl.co.uk/funds/fund-discounts,-prices--and--factsheets/search-results/a'
response = requests.get(url)
soup = BeautifulSoup(response.content, 'lxml')
print(soup)
I don't know the reason; perhaps I am missing some key point or something.
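One way to narrow this down (a diagnostic sketch, not part of the original post) is to check whether the expected content is present in the raw response at all, before BeautifulSoup parses it. The check below uses 'Aberdeen' only because that name appears in the expected fund list; substitute any text you know should be on the page:

import requests

url = 'http://www.hl.co.uk/funds/fund-discounts,-prices--and--factsheets/search-results/a'
response = requests.get(url)

# If the expected text is missing here, the data is most likely loaded by
# JavaScript and requests alone will not see it; if it is present, the
# problem is in how the parsed tree is being searched, not in the download.
print(response.status_code)
print(len(response.text))
print('Aberdeen' in response.text)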
Posted on 2018-07-17 03:39:05
from bs4 import BeautifulSoup
import requests

url = 'http://www.hl.co.uk/funds/fund-discounts,-prices--and--factsheets/search-results/a'
response = requests.get(url)
soup = BeautifulSoup(response.content, 'html.parser')

# The fund names sit in <a> tags inside the "list-unstyled list-indent" <ul>;
# each link carries the full fund name in its title attribute.
for fund in soup.select("ul[class='list-unstyled list-indent'] > li > a"):
    print(fund.attrs['title'])
The output will be:
Aberdeen Asia Pacific and Japan Equity (Class I) Accumulation
Aberdeen Asia Pacific and Japan Equity Accumulation Inclusive
Aberdeen Asia Pacific Equity (Class I) Accumulation
Aberdeen Asia Pacific Equity (Class I) Income
.
.
.
AXA WF Framlington Robotech (Class F) Accumulation
AXA WF Framlington Robotech (Class F) Income
AXA WF Framlington UK (Class L) Accumulation
AXA WF Global Strategic Bonds (Class I H) Accumulation
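If you also need the link for each fund rather than just its name, the same selection can be extended to read the href attribute as well. This is a minimal sketch building on the answer above, assuming each <a> tag also carries an href; the funds list and the urljoin call are additions for illustration, not part of the original answer:

from urllib.parse import urljoin

from bs4 import BeautifulSoup
import requests

url = 'http://www.hl.co.uk/funds/fund-discounts,-prices--and--factsheets/search-results/a'
response = requests.get(url)
soup = BeautifulSoup(response.content, 'html.parser')

funds = []
for fund in soup.select("ul[class='list-unstyled list-indent'] > li > a"):
    # Collect the displayed name and an absolute link to the fund page
    # (href is assumed to be present on each link; .get avoids a KeyError).
    funds.append({
        'name': fund.attrs['title'],
        'link': urljoin(url, fund.attrs.get('href', '')),
    })

print(len(funds))
print(funds[:3])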
https://stackoverflow.com/questions/51364142