
Scraping Weibo Data with Python and Selenium: a Code Example

By 砸漏 · Published 2020-11-02

The goal is to scrape a given user's Weibo data: every post from every time period, downloaded in full.

The approach:

create the driver -> get the page -> find and extract the info -> save to csv -> go to the next page -> get the page (the loop starts here) -> ... -> stop when there is no "next page" link.

The loop is a plain while True rather than a self-calling (recursive) function.
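In outline, the flow looks like this (a skeletal sketch only; every function used here is defined in the full listing below):

Code language: Python

# Sketch of the control flow described above
driver = start_chrome()               # create the driver
while True:
    get_web(weibo_url)                # get the page and scroll to the bottom
    info_list = get_data()            # find and extract the post info
    save_csv(info_list, csv_name)     # save to csv
    url = next_page_url()             # look for a "next page" link
    if url is None:
        break                         # no "next page": finished
    weibo_url = url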

嘟大海's Weibo: https://weibo.com/u/1623915527

办公室小野's Weibo: https://weibo.com/bgsxy

The full code is as follows.

Code language: Python
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
import csv
import os
import time

# Only two settings to change: edit the profile URL and the target csv name here to scrape a different account
weibo_url = 'https://weibo.com/bgsxy?profile_ftype=1&is_all=1#_0'
csv_name = 'bgsxy_allweibo.csv'
def start_chrome():
    print('Creating the browser')
    driver = webdriver.Chrome(executable_path='C:/Users/lori/Desktop/python52project/chromedriver_win32/chromedriver.exe')
    driver.start_client()
    return driver
def get_web(url):   # open the page and scroll down to the bottom
    print('Opening the target page')
    driver.get(url)
    time.sleep(7)
    scroll_down()
    time.sleep(5)
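# Note: the fixed sleeps above can be fragile on slow connections; an explicit
# wait is usually more robust. The helper below is an addition for illustration
# (it is not part of the original script and is not called anywhere below);
# it waits on the same feed-card selector that get_data() uses.
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

def wait_for_cards(timeout=15):
    # block until at least one feed card is present, or raise TimeoutException
    WebDriverWait(driver, timeout).until(
        EC.presence_of_element_located((By.CSS_SELECTOR, 'div.WB_cardwrap.WB_feed_type')))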
def scroll_down():  # scroll the page all the way to the bottom
    html_page = driver.find_element_by_tag_name('html')
    for i in range(7):
        print(i)
        html_page.send_keys(Keys.END)
        time.sleep(1)
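# Alternative: the same scroll can be done by executing JavaScript directly.
# This variant is an addition for illustration (not called below); it mirrors
# scroll_down()'s "scroll 7 times, pausing 1 s" behavior.
def scroll_down_js():
    for i in range(7):
        driver.execute_script('window.scrollTo(0, document.body.scrollHeight);')
        time.sleep(1)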
def get_data():
    print('Locating and extracting the data')
    card_sel = 'div.WB_cardwrap.WB_feed_type'
    time_sel = 'a.S_txt2[node-type="feed_list_item_date"]'
    source_sel = 'a.S_txt2[suda-uatrack="key=profile_feed&value=pubfrom_guest"]'
    content_sel = 'div.WB_text.W_f14'
    interact_sel = 'span.line.S_line1 span em:nth-child(2)'
    cards = driver.find_elements_by_css_selector(card_sel)
    info_list = []
    for card in cards:
        # a card can contain two time elements; the first one is the one we want
        # (named pub_time rather than time so the time module is not shadowed)
        pub_time = card.find_elements_by_css_selector(time_sel)[0].text
        if card.find_elements_by_css_selector(source_sel):
            source = card.find_elements_by_css_selector(source_sel)[0].text
        else:
            source = ''
        content = card.find_elements_by_css_selector(content_sel)[0].text
        link = card.find_elements_by_css_selector(time_sel)[0].get_attribute('href')
        trans = card.find_elements_by_css_selector(interact_sel)[1].text
        comment = card.find_elements_by_css_selector(interact_sel)[2].text
        like = card.find_elements_by_css_selector(interact_sel)[3].text
        info_list.append([pub_time, source, content, link, trans, comment, like])
    return info_list
def save_csv(info_list, csv_name):
    csv_path = './' + csv_name
    print('Writing the csv file')
    if os.path.exists(csv_path):
        # newline='' avoids blank rows; utf-8-sig beats plain utf-8 for Chinese text (Excel reads it correctly)
        with open(csv_path, 'a', newline='', encoding='utf-8-sig') as f:
            writer = csv.writer(f)
            writer.writerows(info_list)
    else:
        with open(csv_path, 'w+', newline='', encoding='utf-8-sig') as f:
            writer = csv.writer(f)
            # header row: publish time, source, content, link, reposts, comments, likes
            writer.writerow(['发布时间','来源','内容','链接','转发数','评论数','点赞数'])
            writer.writerows(info_list)
    time.sleep(5)
def next_page_url():
    next_page_sel = 'a.page.next'
    next_page_ele = driver.find_elements_by_css_selector(next_page_sel)
    if next_page_ele:
        return next_page_ele[0].get_attribute('href')
    else:
        return None
driver = start_chrome()
input('Log in to weibo.com in the Chrome window, then press Enter')   # pause the program for a manual login

while True:
    get_web(weibo_url)
    info_list = get_data()
    save_csv(info_list, csv_name)
    if next_page_url():
        weibo_url = next_page_url()
    else:
        print('Scraping finished')
        break
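One caveat for anyone running this today: the listing uses the Selenium 3 API. Selenium 4 removed executable_path and the find_element(s)_by_* helpers, so on a current installation the equivalent calls look like the sketch below (an illustration only; the chromedriver path and CSS selector are the same assumptions used above).

Code language: Python

# Selenium 4 equivalents of the calls used in the listing (sketch, not the original script)
from selenium import webdriver
from selenium.webdriver.chrome.service import Service
from selenium.webdriver.common.by import By

def start_chrome_v4():
    # executable_path was removed; the driver path now goes into a Service object
    service = Service('C:/Users/lori/Desktop/python52project/chromedriver_win32/chromedriver.exe')
    return webdriver.Chrome(service=service)

def find_cards_v4(driver):
    # find_elements_by_css_selector(sel) becomes find_elements(By.CSS_SELECTOR, sel)
    return driver.find_elements(By.CSS_SELECTOR, 'div.WB_cardwrap.WB_feed_type')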

That is all for this article; I hope it helps with your learning.

Originally published 2020-09-11 on the author's personal site/blog.
