How to extract all dynamic table data from a website using Selenium?
Stack Overflow user
Asked on 2019-08-14 23:13:07
1 answer · 818 views · 0 followers · 0 votes

I'm new to web scraping. I'm trying to extract the table data from Forbes' Top Multinational Performers list. I managed to extract some data, but I only get the first 10 entries of the list. The table has ads interleaved in the middle. How can I get all the data?

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
import pandas as pd


driver = webdriver.Chrome(r'C:/Users/Shirly.Ang3/Desktop/BUSINESS STAT/GGR/chromedriver_win32/chromedriver.exe')


url = "https://www.forbes.com/top-multinational-performers/list/"


driver.get(url)

wait_row = WebDriverWait(driver, 30)
rows = wait_row.until(EC.presence_of_all_elements_located((By.XPATH,
                                        './/*[@id="the_list"]/tbody[@id="list-table-body"]')))


data = []

for row in rows:
    for i in row.find_elements_by_class_name("data"):
        try:
            if i.is_displayed(): 

                row_dict = {}

                row_dict['Rank'] = i.find_element_by_xpath('.//td[2]').text
                row_dict['Link'] = i.find_element_by_xpath('.//td[3]/a[@href]').get_attribute("href")
                row_dict['Company'] = i.find_element_by_xpath('.//td[3]').text
                row_dict['Industry'] = i.find_element_by_xpath('.//td[4]').text
                row_dict['Country'] = i.find_element_by_xpath('.//td[5]').text

                data.append(row_dict)

        except: 
            continue        

driver.close()             

df = pd.DataFrame(data)

df.to_csv("Forbes_TEST.csv", sep=",", index=False)
1 Answer

Stack Overflow user

Posted on 2019-08-17 08:15:26

To get all 250 records, you just need to add code that scrolls to the bottom of the page. So add the following:

driver.execute_script("window.scrollTo(0, document.body.scrollHeight);")
time.sleep(5)

before this line:

data = []

and add import time to your imports.
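A single scroll is enough for this page, but a loop that keeps scrolling until the page height stops growing is the more general pattern for lazy-loaded lists. A sketch only; the function name, pause, and round limit are illustrative, not from the answer:

```python
import time

def scroll_until_stable(driver, pause=1.0, max_rounds=30):
    """Scroll to the bottom repeatedly until the document height stops growing.

    Gives up after max_rounds in case the page keeps loading content forever.
    """
    last_height = driver.execute_script("return document.body.scrollHeight")
    for _ in range(max_rounds):
        driver.execute_script("window.scrollTo(0, document.body.scrollHeight);")
        time.sleep(pause)  # give lazy-loaded rows a moment to render
        new_height = driver.execute_script("return document.body.scrollHeight")
        if new_height == last_height:
            break  # no new content appeared; assume we reached the end
        last_height = new_height
```

Call it in place of the single execute_script/sleep pair, e.g. scroll_until_stable(driver) right after driver.get(url) finishes loading the table.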

That said, your code is really slow. Even with wait_row set to 3, it took 1m5.933s to run on my machine. The following code took 0m12.978s:

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
import time
from bs4 import BeautifulSoup
import csv

driver = webdriver.Chrome(r'C:/Users/Shirly.Ang3/Desktop/BUSINESS STAT/GGR/chromedriver_win32/chromedriver.exe')
url = "https://www.forbes.com/top-multinational-performers/list/"
driver.get(url)
wait_row = WebDriverWait(driver, 3)
rows = wait_row.until(EC.presence_of_all_elements_located((By.XPATH, './/*[@id="the_list"]/tbody[@id="list-table-body"]')))
driver.execute_script("window.scrollTo(0, document.body.scrollHeight);")
time.sleep(5)
ranks = []
links = []
companies = []
industries = []
countries = []
soup = BeautifulSoup(driver.page_source, "lxml")
table = soup.find("table", {"id": "the_list"})
for tr in table.find_all("tr", {"class": "data"}):
    tds = tr.find_all("td")
    ranks.append(tds[1].text)
    links.append(tds[2].find('a')['href'])
    companies.append(tds[2].text)
    industries.append(tds[3].text)
    countries.append(tds[4].text)
data = zip(ranks, links, companies, industries, countries)
with open('Forbes_TEST_02.csv', 'w', newline='') as csvfile:  # newline='' avoids blank rows on Windows
    csv_out = csv.writer(csvfile)
    csv_out.writerow(['Rank', 'Link', 'Company','Industry', 'Country'])
    csv_out.writerows(data)
driver.close()
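The fixed time.sleep(5) after the scroll either wastes time or, on a slow connection, may not be enough. An explicit wait on the row count is more robust. A sketch; the CSS selector and the 250-row target are assumptions based on the answer, not tested against the live page:

```python
def at_least_n_rows(selector, n):
    """Build a WebDriverWait predicate: truthy once >= n elements match selector."""
    def predicate(driver):
        # "css selector" is the literal locator-strategy string Selenium accepts
        return len(driver.find_elements("css selector", selector)) >= n
    return predicate

# usage with the answer's driver (hypothetical 30-second timeout):
# WebDriverWait(driver, 30).until(at_least_n_rows("#list-table-body tr.data", 250))
```

WebDriverWait polls the predicate every half second and returns as soon as it is truthy, so the script resumes the moment the table is fully loaded instead of always sleeping the full interval.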
Votes: 0
Original page content provided by Stack Overflow; translation supported by Tencent Cloud's translation engine.
Original link:

https://stackoverflow.com/questions/57497533
