
妹子图 (Mzitu) Crawler

Author: obaby
Published: 2023-02-22 10:54:04 (originally posted on the author's blog, 2021-05-05)
Column: obaby@mars

Code language: python
# -*- coding:utf-8 -*-

import requests
import os
import re
import time
import threading
from lxml import etree
from bs4 import BeautifulSoup
from multiprocessing import Pool, cpu_count

# Browser-like request headers. The Referer matters: mzitu.com blocks image
# requests that lack one (hotlink protection).
HEADERS = {
    'X-Requested-With': 'XMLHttpRequest',
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 '
                  '(KHTML, like Gecko) Chrome/56.0.2924.87 Safari/537.36',
    'Referer': 'http://www.mzitu.com'
}

# Root folder for downloads (Windows-style path; adjust for your system).
DIR_PATH = "H:/mzitu"

def get_url():
    """Collect the URL of every gallery listed on the site's index pages."""
    index_url = "https://www.mzitu.com"
    # The fourth pagination link on the front page carries the last page number.
    bs = etree.HTML(requests.get(url=index_url, headers=HEADERS).text).xpath(
        '/html/body/div[2]/div[1]/div[2]/nav/div/a[4]/text()')[0]
    print(bs)  # total number of index pages
    page_urls = ['http://www.mzitu.com/page/{cnt}'.format(cnt=cnt) for cnt in range(1, int(bs) + 1)]
    img_urls = []
    for i in page_urls:
        print('Fetching gallery links from ' + i)
        try:
            # Gallery links live in the <ul id="pins"> list on each index page.
            bs = BeautifulSoup(requests.get(url=i, headers=HEADERS, timeout=10).text, 'lxml').find('ul', id='pins')
            res = re.findall(r'href="(.*?)" target="_blank"><img', str(bs))
            img_url = [url.replace('"', "") for url in res]
            img_urls.extend(img_url)
        except Exception as e:
            print(e)
    return set(img_urls)
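# Note: get_url() returns gallery links shaped like http://www.mzitu.com/<id>;
# urls_crawler() below appends /1, /2, ... to such a link to walk the
# gallery's numbered pages.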

# Lock meant to guard folder creation; note the `with lock:` in urls_crawler
# stays commented out, so it is currently unused.
lock = threading.Lock()

def urls_crawler(url):
    """Download every image of one gallery into its own folder."""
    r = requests.get(url, headers=HEADERS, timeout=10).text
    # The gallery title (the cover image's alt text) doubles as the folder name.
    img_name = etree.HTML(r).xpath('//div[@class="main-image"]/p/a/img/@alt')[0]
    print(img_name)
    # with lock:
    if mark_dir(img_name):
        # The fifth pagination link holds the gallery's page count.
        max_count = etree.HTML(r).xpath('//div[@class="pagenavi"]/a[5]/span/text()')[0]
        page_url = [url + '/{cnt}'.format(cnt=cnt) for cnt in range(1, int(max_count) + 1)]
        img_urls = []
        for j in page_url:
            time.sleep(0.3)  # small delay between page requests
            r = requests.get(j, headers=HEADERS, timeout=10).text
            img_url = etree.HTML(r).xpath('//div[@class="main-image"]/p/a/img/@src')[0]
            img_urls.append(img_url)
        for cnt, img in enumerate(img_urls):
            save_pic(cnt, img)


def save_pic(cnt, url):
    """Save one image into the current gallery folder (set by mark_dir)."""
    try:
        img = requests.get(url, headers=HEADERS, timeout=10).content
        img_name = '{}.jpg'.format(cnt)
        # 'wb' rather than 'ab' so a rerun overwrites instead of appending
        # to (and corrupting) an existing file.
        with open(img_name, 'wb') as f:
            f.write(img)
    except Exception as e:
        print(e)


def mark_dir(flot_name):
    """Create the gallery folder and chdir into it; return False when it
    already exists so that gallery is skipped. os.chdir is per-process, so
    each pool worker only changes its own working directory."""
    PATH = os.path.join(DIR_PATH, flot_name)
    if not os.path.exists(PATH):
        os.makedirs(PATH)
        os.chdir(PATH)
        return True
    print("Folder already exists! {}".format(flot_name))
    return False


def delete_empty_dir(save_dir):
    """Recursively remove empty folders left over from failed downloads."""
    if os.path.exists(save_dir):
        if os.path.isdir(save_dir):
            for i in os.listdir(save_dir):
                path = os.path.join(save_dir, i)
                if os.path.isdir(path):
                    delete_empty_dir(path)
        if not os.listdir(save_dir):
            try:
                os.rmdir(save_dir)
            except OSError:
                pass


if __name__ == '__main__':
    start_time = time.time()
    urls = get_url()
    # One worker process per CPU core; each process downloads whole galleries.
    pool = Pool(processes=cpu_count())
    try:
        delete_empty_dir(DIR_PATH)
        pool.map(urls_crawler, urls)
    except Exception:
        # On failure: wait, sweep out empty gallery folders, and retry once.
        time.sleep(30)
        delete_empty_dir(DIR_PATH)
        try:
            pool.map(urls_crawler, urls)
        except Exception:
            pass
    stop_time = time.time()
    print(int(stop_time) - int(start_time))  # total elapsed seconds
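For reference, the script depends on requests, lxml, and beautifulsoup4. Before letting the full pool loose, a single-gallery dry run is a reasonable sanity check; the gallery URL below is a made-up placeholder, so swap in any link returned by get_url() and run this in place of the __main__ block above.

Code language: python

# Hypothetical dry run: download one gallery synchronously.
# 'http://www.mzitu.com/12345' is a placeholder id, not a real gallery;
# substitute a link returned by get_url().
if __name__ == '__main__':
    urls_crawler('http://www.mzitu.com/12345')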

☆ Article Copyright Notice ☆

* Site name: obaby@mars

* Site URL: https://h4ck.org.cn/

* Post title: 《妹子图爬虫》

* Post link: https://h4ck.org.cn/2021/05/%e5%a6%b9%e5%ad%90%e5%9b%be%e7%88%ac%e8%99%ab/

* When reposting, please credit the source and include the original title and link. This work is licensed under CC BY-NC-SA 2.5 CN (Attribution-NonCommercial-ShareAlike 2.5 China Mainland).


