
Let's Browse Books on Douban!

Author: 我被狗咬了 · Published 2019-09-23 · From the column Python乱炖

Douban has always had one of the more complete book catalogs around. Recently a few readers wanted to look up IT-related books on Douban, so off we go. Douban, here I come!

First, let's look at the site we'll be scraping:

https://www.douban.com

Now let's look at the computer-related books:

And then at the books related to deep learning:

OK, enough talk, let's get started!

Getting ready: here's what we need to import (pip install anything you're missing!):

Code language: python
import time
import urllib.parse
import urllib.request

import numpy as np
from bs4 import BeautifulSoup
from openpyxl import Workbook

We use urllib here instead of requests because with the requests package the IP gets blocked much more easily.

First, one important piece of preparation: collect several headers. Where do headers come from?

Open the browser's developer tools, select the Network tab, and pick any request in the list:

Four steps, one at a time:

We need multiple user-agents to stay ahead of the anti-scraping checks, so we put them all into a header pool:

Code language: python
hds = [{'User-Agent': 'Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.1.6) Gecko/20091201 Firefox/3.5.6'},
       {'User-Agent': 'Mozilla/5.0 (Windows NT 6.2) AppleWebKit/535.11 (KHTML, like Gecko) Chrome/17.0.963.12 Safari/535.11'},
       {'User-Agent': 'Mozilla/5.0 (compatible; MSIE 10.0; Windows NT 6.2; Trident/6.0)'}]
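
A quick note on how this pool gets used further down: book_info picks a header round-robin by page number, while get_num picks one at random. A minimal sketch of both patterns:

Code language: python
import numpy as np

# round-robin: page 0 -> header 0, page 1 -> header 1, page 2 -> header 2, page 3 -> header 0, ...
for page_num in range(4):
    print(hds[page_num % len(hds)]['User-Agent'][:30])

# random pick, as used later when fetching the ratings count
print(hds[np.random.randint(0, len(hds))]['User-Agent'][:30])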

Now let's start fetching the book information.

One note here: between pages we sleep for a random amount of time to keep the anti-scraping measures at bay.

Let's look at the url first:

https://www.douban.com/tag/%E6%9C%BA%E5%99%A8%E5%AD%A6%E4%B9%A0/book?start=0

The fixed parts of the url are https://www.douban.com/tag/ and /book?start=; in between goes the percent-encoded tag, and start advances by 15 per page (one page of results).
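
You can verify the encoding yourself: urllib.parse.quote turns the tag 机器学习 into exactly the percent-encoded segment in the url above.

Code language: python
import urllib.parse

print(urllib.parse.quote('机器学习'))
# %E6%9C%BA%E5%99%A8%E5%AD%A6%E4%B9%A0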

So let's piece the url together:

Code language: python
url = 'https://www.douban.com/tag/' \
      + urllib.parse.quote(book_tag) \
      + '/book?start=' + str(page_num * 15)
print(url)

Then we go and fetch the page:

Code language: python
# random sleep to throttle requests and dodge anti-scraping
time.sleep(np.random.rand() * 3)
req = urllib.request.Request(url, headers=hds[page_num % len(hds)])
source_code = urllib.request.urlopen(req).read()
plain_text = source_code.decode('utf-8')  # decode assuming UTF-8; str() would mangle Chinese text

Once we have the page, we use bs4 to match the content we need:

Code language: python
soup = BeautifulSoup(plain_text, features="lxml")
list_soup = soup.find('div', {'class': 'mod book-list'})

try_times += 1
if list_soup is None and try_times < 200:
    continue
elif list_soup is None or len(list_soup) <= 1:
    break
# walk the result set and pull out the details of each book
for book_info in list_soup.findAll('dd'):
    title = book_info.find('a', {'class': 'title'}).string.strip()
    desc = book_info.find('div', {'class': 'desc'}).string.strip()
    desc_list = desc.split('/')
    book_url = book_info.find('a', {'class': 'title'}).get('href')

    try:
        author_info = '作者/译者: ' + '/'.join(desc_list[0:-3])
        pub_info = '出版信息: ' + '/'.join(desc_list[-3:])
        rating = book_info.find('span', {'class': 'rating_nums'}).string.strip()
        people_num = get_num(book_url)
        people_num = people_num.strip('人评价')
    except Exception:
        author_info = '作者/译者: 暂无'
        pub_info = '出版信息: 暂无'
        rating = '0.0'
        people_num = '0'
        print('detail info has some error!')

    book_list.append([title, rating, people_num, author_info, pub_info])
    try_times = 0
page_num += 1
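
To make the desc handling concrete, here's a small sketch with a made-up desc string (the real one comes from Douban's listing markup): everything before the last three '/'-separated fields counts as author/translator, and the last three as publication info.

Code language: python
# hypothetical desc value, shaped like Douban's "author / translator / publisher / date / price"
desc = '[美] Ian Goodfellow / 赵申剑 / 人民邮电出版社 / 2017-8 / 168.00元'
desc_list = desc.split('/')
print('作者/译者: ' + '/'.join(desc_list[0:-3]))  # the author/translator part
print('出版信息: ' + '/'.join(desc_list[-3:]))    # publisher / date / price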

We also need the number of people who rated each book; that's what get_num (called in the loop above) is for. If you don't want this field, just comment out people_num:

Code language: python
def get_num(url):
    try:
        req = urllib.request.Request(url,
                headers=hds[np.random.randint(0, len(hds))])
        source_code = urllib.request.urlopen(req).read()
        plain_text = source_code.decode('utf-8')  # decode assuming UTF-8
    except Exception:
        print('http error!')
        return '0'  # bail out instead of parsing a page that was never fetched
    soup = BeautifulSoup(plain_text, features="lxml")
    people_num = soup.find('div',
                           {'class': 'rating_sum'}).findAll(
                            'span')[1].string.strip()
    return people_num
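
The selector above assumes the ratings block on a book page looks roughly like the mocked-up fragment below (not Douban's exact markup), with the count sitting in the second span:

Code language: python
from bs4 import BeautifulSoup

html = '<div class="rating_sum"><span>(</span><span>12345人评价</span></div>'
soup = BeautifulSoup(html, features="lxml")
num = soup.find('div', {'class': 'rating_sum'}).findAll('span')[1].string.strip()
print(num)                  # 12345人评价
print(num.strip('人评价'))  # 12345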

Fetching all the books for a given list of tags:

Code language: python
book_lists = []
book_tag_lists = ['计算机',
                  '机器学习',
                  'linux',
                  'android',
                  '数据库',
                  '互联网']
for book_tag in book_tag_lists:
    book_list = book_info(book_tag)
    book_list = sorted(book_list, key=lambda x: float(x[1]), reverse=True)  # compare ratings numerically, not as strings
    book_lists.append(book_list)
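
Note the float(x[1]) in the sort key: ratings are stored as strings, and comparing them as strings would put '10.0' below '9.0'. A two-line check:

Code language: python
ratings = ['9.4', '10.0', '8.7']
print(sorted(ratings, reverse=True))             # ['9.4', '8.7', '10.0'] -- string order, wrong
print(sorted(ratings, key=float, reverse=True))  # ['10.0', '9.4', '8.7'] -- numeric order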

Last step: write the collected book info to Excel:

Code language: python
wb = Workbook(write_only=True)  # write-only mode (optimized_write in old openpyxl) keeps memory use low
ws = []
for i in range(len(book_tag_lists)):
    ws.append(wb.create_sheet(title=book_tag_lists[i]))  # tags are already str in Python 3, no decode needed
for i in range(len(book_tag_lists)):
    ws[i].append(['序号', '书名', '评分', '评价人数', '作者', '出版社'])
    count = 1
    for bl in book_lists[i]:
        ws[i].append([count, bl[0], float(bl[1]), int(bl[2]), bl[3], bl[4]])
        count += 1
save_path = 'book_list'
for i in range(len(book_tag_lists)):
    save_path += ('-' + book_tag_lists[i])
save_path += '.xlsx'
wb.save(save_path)
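
As a quick sanity check (assuming the save path built above), you can read the workbook back and count the rows in each sheet:

Code language: python
from openpyxl import load_workbook

wb = load_workbook('book_list-计算机-机器学习-linux-android-数据库-互联网.xlsx')
for name in wb.sheetnames:
    print(name, wb[name].max_row - 1, 'books')  # minus the header row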

And with that we're done. Let's check the result:

Opening the generated .xlsx file:

OK, fetched perfectly.

Below is the complete code; you can also grab it from the link at the end.

Code language: python
import time
import urllib.parse
import urllib.request

import numpy as np
from bs4 import BeautifulSoup
from openpyxl import Workbook

# a pool of User-Agents to rotate through, to dodge anti-scraping checks
hds = [{'User-Agent': 'Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.1.6) Gecko/20091201 Firefox/3.5.6'},
       {'User-Agent': 'Mozilla/5.0 (Windows NT 6.2) AppleWebKit/535.11 (KHTML, like Gecko) Chrome/17.0.963.12 Safari/535.11'},
       {'User-Agent': 'Mozilla/5.0 (compatible; MSIE 10.0; Windows NT 6.2; Trident/6.0)'}]


def book_info(book_tag):
    page_num = 0
    book_list = []
    try_times = 0
    while True:
        # build the url for this tag and page
        url = 'https://www.douban.com/tag/' \
              + urllib.parse.quote(book_tag) \
              + '/book?start=' + str(page_num * 15)
        print(url)
        # random sleep to throttle requests and dodge anti-scraping
        time.sleep(np.random.rand() * 3)
        req = urllib.request.Request(url, headers=hds[page_num % len(hds)])
        source_code = urllib.request.urlopen(req).read()
        plain_text = source_code.decode('utf-8')  # decode assuming UTF-8; str() would mangle Chinese text

        # the requests alternative (the IP gets blocked more easily this way):
        # source_code = requests.get(url)
        # plain_text = source_code.text
        # build the bs4 object
        soup = BeautifulSoup(plain_text, features="lxml")
        list_soup = soup.find('div', {'class': 'mod book-list'})

        try_times += 1
        if list_soup is None and try_times < 200:
            continue
        elif list_soup is None or len(list_soup) <= 1:
            break
        # walk the result set and pull out the details of each book
        for book_info in list_soup.findAll('dd'):
            title = book_info.find('a', {'class': 'title'}).string.strip()
            desc = book_info.find('div', {'class': 'desc'}).string.strip()
            desc_list = desc.split('/')
            book_url = book_info.find('a', {'class': 'title'}).get('href')

            try:
                author_info = '作者/译者: ' + '/'.join(desc_list[0:-3])
                pub_info = '出版信息: ' + '/'.join(desc_list[-3:])
                rating = book_info.find('span', {'class': 'rating_nums'}).string.strip()
                people_num = get_num(book_url)
                people_num = people_num.strip('人评价')
            except Exception:
                author_info = '作者/译者: 暂无'
                pub_info = '出版信息: 暂无'
                rating = '0.0'
                people_num = '0'
                print('detail info has some error!')

            book_list.append([title, rating, people_num, author_info, pub_info])
            try_times = 0
        page_num += 1
        print('Downloading Information From Page %d' % page_num)
    return book_list


def get_num(url):
    try:
        req = urllib.request.Request(url, headers=hds[np.random.randint(0, len(hds))])
        source_code = urllib.request.urlopen(req).read()
        plain_text = source_code.decode('utf-8')  # decode assuming UTF-8; str() would mangle Chinese text
    except Exception:
        print('http error!')
        return '0'  # bail out instead of parsing a page that was never fetched
    soup = BeautifulSoup(plain_text, features="lxml")
    people_num = soup.find('div',
                           {'class': 'rating_sum'}).findAll(
                            'span')[1].string.strip()
    return people_num


def get_books(book_tag_lists):
    book_lists = []
    for book_tag in book_tag_lists:
        book_list = book_info(book_tag)
        book_list = sorted(book_list, key=lambda x: float(x[1]), reverse=True)  # compare ratings numerically, not as strings
        book_lists.append(book_list)
    return book_lists


def print_book_lists_excel(book_lists, book_tag_lists):
    wb = Workbook(write_only=True)  # write-only mode (optimized_write in old openpyxl) keeps memory use low
    ws = []
    for i in range(len(book_tag_lists)):
        ws.append(wb.create_sheet(title=book_tag_lists[i]))  # tags are already str in Python 3, no decode needed
    for i in range(len(book_tag_lists)):
        ws[i].append(['序号', '书名', '评分', '评价人数', '作者', '出版社'])
        count = 1
        for bl in book_lists[i]:
            ws[i].append([count, bl[0], float(bl[1]), int(bl[2]), bl[3], bl[4]])
            count += 1
    save_path = 'book_list'
    for i in range(len(book_tag_lists)):
        save_path += ('-' + book_tag_lists[i])
    save_path += '.xlsx'
    wb.save(save_path)


if __name__ == '__main__':
    book_tag_lists = ['计算机','机器学习','linux','android','数据库','互联网']
    book_lists = get_books(book_tag_lists)
    print_book_lists_excel(book_lists, book_tag_lists)

Code address:

https://www.bytelang.com/o/s/c/7QXO_UAlsLU=

Originally published 2018-12-28 on the WeChat public account Python乱炖.