
Reader Submission: Scraping the Comments on bilibili's New Year Gala Performance《千里之外》with Selenium

青南
Published 2019-03-06 10:56:53
From the column: 未闻Code

The code formatting got mangled in the WeChat post; if you want to read the code, please click "Read the original" there.

bilibili's 2019 New Year Gala piece《千里之外》was really good, so I wanted to scrape its comments using the crawling techniques from Chapter 7 of《Python爬虫开发 从入门到实战》. Opening the page and looking at the source and the Network panel, I found the comments are loaded asynchronously via Ajax: the API can't simply be opened directly, you have to forge the request headers, which is a bit of a hassle. (Forging the headers is actually very easy; what's tedious is extracting the results from the JSON string, which is nowhere near as convenient as pulling them straight out of the page with XPath. See the ajax_get_comment method. The CrawlerUtility there comes from https://github.com/kingname/CrawlerUtility; thanks to 青南's little tool, parsing the headers is much easier.)
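For reference, the core of ajax_get_comment (it also appears in the full listing at the end) looks like this. self.headers is just ChromeHeaders2Dict(self.HEADERS), i.e. the headers copied from Chrome's devtools parsed into a dict, and the response is JSONP, so the jQuery callback wrapper has to be cut away before json.loads can parse it:

def ajax_get_comment(self):
    # the API returns JSONP: jQuery...({"code":0,...}), so slice out the bare JSON object
    resp = requests.get(self.COMMENT_LIST_API, headers=self.headers).content.decode()
    json_str = resp[resp.find('{"'):-1]  # drop the callback name and the trailing ')'
    data = json.loads(json_str).get('data')
    print(data)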

So I decided to scrape the comments with Selenium instead. For the first page, all it takes is opening the page and locating the right elements, but before grabbing an element you have to wait for it to load. I wrapped the wait-and-find logic into a helper function to make it easier to use; the parameter parent can be either the driver or a page element, and find_method is the name of the finder method to call, such as find_element_by_xpath or find_elements_by_xpath:

def wait_until(self, parent, xpath, find_method):
    driver = parent or self.driver

    def find(driver):
        element = attrgetter(find_method)(driver)(xpath)
        if element:
            return element
        else:
            return False

    try:
        element = WebDriverWait(driver, self.TIME_OUT).until(find)
        return element
    except TimeoutException as _:
        raise TimeoutException('Too slow')

It can then be used like this:

total_page = self.wait_until(None, "//div[@class='header-page paging-box']/span[@class='result']",
                                         self.FIND_ELEMENT_BY_XPATH).text
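
A quick note on the attrgetter trick inside wait_until: attrgetter(find_method)(driver) just looks up the method named by find_method on the driver (or on a parent element), so the same helper works with both the singular and plural finders. Roughly:

from operator import attrgetter

# attrgetter('find_element_by_xpath')(driver) returns the bound method
# driver.find_element_by_xpath, so calling it with an XPath is the same as
# calling driver.find_element_by_xpath(xpath) directly
find = attrgetter('find_element_by_xpath')(driver)
element = find("//div[@class='page-jump']/input")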

Clicking "next page" doesn't reload the page, so the data must be fetched and rendered asynchronously via Ajax. That means I need to locate the "next page" button, click it, and only then scrape the next page. A WebDriverWait ... until can wait for the button to finish loading and become clickable before clicking it:

def _goto_next_page(self):
    driver = self.driver
    next_page_path = '//div[@class=\'header-page paging-box\']/a[@class="next"]'
    WebDriverWait(driver, self.TIME_OUT).until(EC.element_to_be_clickable((
        By.XPATH,
        next_page_path))
    )
    next_page = driver.find_element_by_xpath(next_page_path)
    next_page.click()

The loop that keeps scraping until the last page can be written like this:

while True:
    current_page = self.get_single_page_comments()
    if current_page == self.total_page:
        break
    self._goto_next_page()

While scraping I often got the error "element is not attached to the page document", even with the waits in place. I later found that adding one line to scroll to the bottom of the page reduces the errors, although it doesn't eliminate them completely:

self.driver.execute_script("window.scrollTo(0, document.body.scrollHeight)")

If the error still shows up, it seems to be caused by turning pages too quickly. I don't understand why it still fails despite the wait, but I found a workaround: re-enter the same page and scrape it again. The method for jumping to a given page is:

def _goto_page(self, page):
    driver = self.driver
    path = "//div[@class='page-jump']/input"
    self.driver.execute_script("window.scrollTo(0, document.body.scrollHeight)")
    WebDriverWait(driver, self.TIME_OUT).until(EC.presence_of_element_located((
        By.XPATH,
        path))
    )
    elem = driver.find_element_by_xpath(path)
    elem.clear()
    elem.send_keys(str(page))
    elem.send_keys(Keys.RETURN)

Then, each time after re-entering the page, sleep for one second so that all the page elements have time to load; the retry loop looks like this:

times = 0
while times < self.RETRY_TIMES:
    try:
        self._receive_current_page(current_page)
        break
    except:
        print(f'重新进入第{current_page}页')
        self._goto_page(current_page)
        from time import sleep
        sleep(1)
    times += 1
else:
    print(f'page{current_page}未爬全')

With these tricks combined, I finally managed to scrape all 111 pages of comments. The complete code is below. You'll notice that several of the functions use the retry decorator (from tenacity), which improves the success rate by retrying. The scraped data lives in a dict, self.comments: its keys are the page numbers and its values are lists holding that page's comments. If a page needs to be scraped again, remember to pop that page from the dict first.

import json
import os
import re
import time
from collections import defaultdict
from operator import attrgetter

import requests
from CrawlerUtility import ChromeHeaders2Dict
from selenium import webdriver
from selenium.common.exceptions import TimeoutException
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait
from tenacity import retry, stop_after_attempt, stop_after_delay


class BilibiliSpider:
    HOME_URL = 'https://www.bilibili.com/blackboard/bnj2019.html?spm_id_from=333.334.b_72616e6b696e675f646f756761.4&aid=36570507&p='
    COMMENT_LIST_API = 'https://api.bilibili.com/x/v2/reply?callback=jQuery172037695699199400234_1549378739473&jsonp=jsonp&pn=1&type=1&oid=36570507&sort=0&_=1549378775254'
    HEADERS = """
Accept: */*
Accept-Encoding: gzip, deflate, br
Accept-Language: zh-CN,zh;q=0.9,en;q=0.8,zh-TW;q=0.7
Connection: keep-alive
Cookie: _uuid=3C34470A-8E07-EB73-821A-8C9296CE917C39306infoc; buvid3=38E4616F-6B80-4E47-830B-8BAD5C4EB5BB6688infoc; stardustvideo=1; fts=1539694104; rpdid=lmsiqospkdosklxlpopw; im_notify_type_1483693=0; CURRENT_FNVAL=16; UM_distinctid=166ab725293e7-044e2ac2d71b1e-1f396652-13c680-166ab7252943f5; sid=7wuz88lx; LIVE_BUVID=0579974c1f7c3c2575fe88d4443faa29; LIVE_BUVID__ckMd5=c5e8ececd91bd208; DedeUserID=1483693; DedeUserID__ckMd5=0bdf06a909d01153; SESSDATA=a2280675%2C1550158353%2C2abfd611; bili_jct=a9180d3b91a2e87f2ad305187aa98ff2; finger=e4810d01; _ga=GA1.2.540612000.1548599946; _gid=GA1.2.1043200605.1548599946; gr_user_id=f0122614-4e80-490c-93b7-2856cbd7a8ac; grwng_uid=890e5ba7-1aa0-490b-a21c-aa9d0cefab49; CURRENT_QUALITY=32; bp_t_offset_1483693=216899400389846136; _dfcaptcha=4ca4802a95feba42a36d95ad00f25022
Host: api.bilibili.com
Referer: https://www.bilibili.com/blackboard/bnj2019.html?spm_id_from=333.334.b_72616e6b696e675f646f756761.4&p=&aid=36570536
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/71.0.3578.98 Safari/537.36
    """
    FIND_ELEMENTS_BY_XPATH = 'find_elements_by_xpath'
    FIND_ELEMENT_BY_XPATH = 'find_element_by_xpath'
    TIME_OUT = 50
    RETRY_TIMES = 10
    driver = None
    total_page = None

    def __init__(self):
        self.start = time.time()
        self.comments = defaultdict(list)

    def __enter__(self):
        chrome_options = webdriver.ChromeOptions()
        chrome_options.add_argument("--headless")
        self.driver = webdriver.Chrome(chrome_options=chrome_options, executable_path='./chromedriver')
        # self.driver = webdriver.PhantomJS('./phantomjs')
        return self

    def __exit__(self, type, value, traceback):
        print()
        print(f'elapse time: {time.time() - self.start}')
        print('total pages', len(self.comments))
        print('total comments', sum(len(page) for page in self.comments.values()))
        if self.driver:
            self.driver.quit()

    @property
    def headers(self):
        return ChromeHeaders2Dict(self.HEADERS)

    def ajax_get_comment(self):
        resp = requests.get(self.COMMENT_LIST_API, headers=self.headers).content.decode()
        json_str = resp[resp.find('{"'):-1]
        data = json.loads(json_str).get('data')
        print(data)

    def wait_until(self, parent, xpath, find_method):
        driver = parent or self.driver

        def find(driver):
            element = attrgetter(find_method)(driver)(xpath)
            if element:
                return element
            else:
                return False

        try:
            element = WebDriverWait(driver, self.TIME_OUT).until(find)
            return element
        except TimeoutException as _:
            raise TimeoutException('Too slow')

    def driver_get_comments(self):
        driver = self.driver
        driver.get(self.HOME_URL)
        while True:
            current_page = self.get_single_page_comments()
            if current_page == self.total_page:
                break
            self._goto_next_page()
        self.save_to_file()

    def save_to_file(self, filename='comments.md'):
        if os.path.exists(filename):
            os.remove(filename)
        with open(filename, 'a') as f:
            for page, comment_list in self.comments.items():
                f.write(f'- {page}\n')
                for comment in comment_list:
                    f.write(f'    - {comment}\n')

    @retry(reraise=True, stop=stop_after_attempt(3))
    def _goto_next_page(self):
        driver = self.driver
        next_page_path = '//div[@class=\'header-page paging-box\']/a[@class="next"]'
        WebDriverWait(driver, self.TIME_OUT).until(EC.element_to_be_clickable((
            By.XPATH,
            next_page_path))
        )
        next_page = driver.find_element_by_xpath(next_page_path)
        next_page.click()

    @retry(reraise=True, stop=stop_after_attempt(3))
    def _goto_page(self, page):
        driver = self.driver
        path = "//div[@class='page-jump']/input"
        self.driver.execute_script("window.scrollTo(0, document.body.scrollHeight)")
        WebDriverWait(driver, self.TIME_OUT).until(EC.presence_of_element_located((
            By.XPATH,
            path))
        )
        elem = driver.find_element_by_xpath(path)
        elem.clear()
        elem.send_keys(str(page))
        elem.send_keys(Keys.RETURN)

    def _get_total_page(self):
        if not self.total_page:
            total_page = self.wait_until(None, "//div[@class='header-page paging-box']/span[@class='result']",
                                         self.FIND_ELEMENT_BY_XPATH).text
            total_page = re.search('共(\d+)页', total_page).group(1)
            total_page = int(total_page)
            self.total_page = total_page
            print(f'共{total_page}页')

    def _get_current_page(self):
        while True:  # this line of code absolutely has to succeed eventually...
            try:
                current_page = self.wait_until(None, "//div[@class='header-page paging-box']/span[@class=\"current\"]",
                                               self.FIND_ELEMENT_BY_XPATH).text
                current_page = int(current_page)
                break
            except:
                print('这里是无限循环')
                continue
        return current_page

    def get_single_page_comments(self):
        self.driver.execute_script("window.scrollTo(0, document.body.scrollHeight)")
        current_page = self._get_current_page()
        self._get_total_page()
        times = 0
        while times < self.RETRY_TIMES:
            try:
                self._receive_current_page(current_page)
                break
            except:
                print(f'重新进入第{current_page}页')
                content_list = self.comments.pop(f'page{current_page}', None)
                self._goto_page(current_page)
                from time import sleep
                sleep(1)
            times += 1
        else:
            self.comments[f'page{current_page}'] = content_list  # put back whatever the last attempt managed to scrape
            print(f'page{current_page}未爬全')
        return current_page

    def _receive_current_page(self, current_page):
        print(f'page {current_page}')
        elements = self.wait_until(None, '//div[starts-with(@class, "list-item reply-wrap")]',
                                   self.FIND_ELEMENTS_BY_XPATH)
        for ele in elements:
            try:
                comment = self._get_one_comment(ele, current_page)
                self.comments[f'page{current_page}'].append(comment)
            except:
                raise

    @retry(reraise=True, stop=stop_after_attempt(3) | stop_after_delay(3))
    def _get_one_comment(self, ele, current_page):
        user = self.wait_until(ele, 'div/div[@class="user"]/a', self.FIND_ELEMENT_BY_XPATH).text
        comment_time = self.wait_until(ele, "div/div[@class='info']/span[@class='time']",
                                       self.FIND_ELEMENT_BY_XPATH).text
        content = self.wait_until(ele, 'div/p[@class="text"]', self.FIND_ELEMENT_BY_XPATH).text
        comment = f'{user}于{comment_time}说: {content}'
        return comment


if __name__ == '__main__':
    with BilibiliSpider() as spider:
        spider.driver_get_comments()
    # spider = BilibiliSpider()
    # spider.ajax_get_comment()

The final results are saved to a file; the output looks roughly like this:
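Based on save_to_file and _get_one_comment, each entry in comments.md has the following shape (placeholder values, not real comments):

- page1
    - <user>于<time>说: <comment text>
    - <user>于<time>说: <comment text>
- page2
    - <user>于<time>说: <comment text>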

References
[1] https://jeffknupp.com/blog/2016/03/07/improve-your-python-the-with-statement-and-context-managers/
[2] https://blog.csdn.net/u013250416/article/details/61425207
[3] https://cuiqingcai.com/2599.html
[4] 《Python爬虫开发 从入门到实战》, 谢乾坤
