Python and SEO: keyword-query scraper source code for three SEO site-query tools!

By 二爷 · Published 2022-12-01

Keyword research and mining for a website, covering the three most common SEO site-query tools: Aizhan (爱站), Chinaz (站长之家), and 5118. Aizhan and Chinaz return at most 50 pages of results, while 5118 goes up to 100 pages. To pull a site's complete keyword-ranking data you have to pay for a membership; even the free queries require a registered account, otherwise you have no query permission at all!

5118

You must fill in the target site address and the Cookie request header yourself; querying requires a logged-in account!
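
For reference, a hypothetical fill-in for the two placeholders in the script below (both values are made up; copy your own Cookie string from the browser's developer tools, Network tab, after logging in):

# Hypothetical values for illustration only, not real credentials.
Cookie = "5118_session=xxxxxxxx"  # your logged-in cookie string from the browser
site = "www.example.com"          # the domain whose keywords you want to pull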

# 5118 website keyword scraper
# -*- coding: utf-8 -*-
import requests
from lxml import etree
import time
import logging

logging.basicConfig(filename='s5118.log', level=logging.DEBUG,
                    format='%(asctime)s - %(levelname)s - %(message)s')

# Paste your own logged-in session cookie here.
Cookie = ""

# Fetch one page of keywords for a site
def get_keywords(site, page):
    url = "https://www.5118.com/seo/baidupc"
    headers = {
        "Cookie": Cookie,
        "User-Agent": "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/86.0.4240.198 Safari/537.36",
    }
    data = {
        "isPager": "true",
        "viewtype": 2,
        "days": 90,
        "url": site,
        "orderField": "Rank",
        "orderDirection": "sc",
        "pageIndex": page,
        "catalogName": "",
        "referKeyword": "",
    }
    response = requests.post(url=url, data=data, headers=headers, timeout=10)
    print(response.status_code)
    html = response.content.decode('utf-8')
    tree = etree.HTML(html)
    keywords = tree.xpath('//td[@class="list-col justify-content "]/a[@class="w100 all_array"]/text()')
    print(keywords)
    save_txt(keywords, site)  # also append each page to the txt file as we go
    return keywords


# Append keywords to a CSV file, one keyword per line
def save_csv(keywords, site):
    filename = site.replace("www.", '').replace(".com", '').replace(".cn", '').replace('https://', '').replace('http://', '')
    with open(f'5118_{filename}.csv', 'a+', encoding='utf-8-sig') as f:
        for keyword in keywords:
            f.write(f'{keyword}\n')
    print("Keyword list saved!")


# Append keywords to a txt file, one keyword per line
def save_txt(keywords, site):
    filename = site.replace("www.", '').replace(".com", '').replace(".cn", '').replace('https://', '').replace('http://', '')
    with open(f'5118_{filename}.txt', 'a+', encoding='utf-8') as f:
        for keyword in keywords:
            f.write(f'{keyword}\n')
    print("Keyword list saved!")


def main(site):
    logging.info(f"Start scraping keyword data for site {site}..")
    num = 100  # 5118 exposes up to 100 pages
    keys = []
    for page in range(1, num + 1):
        print(f"Scraping page {page}..")
        logging.info(f"Scraping page {page}..")
        try:
            keywords = get_keywords(site, page)
            keys.extend(keywords)
            time.sleep(8)
        except Exception as e:
            print(f"Failed to scrape page {page} -- error: {e}")
            logging.error(f"Failed to scrape page {page} -- error: {e}")
            time.sleep(10)

    keys = set(keys)  # deduplicate
    save_csv(keys, site)


if __name__ == '__main__':
    site = ""  # fill in the target site address
    main(site)
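
One note on save_csv: it writes a single keyword per line, so the .csv is effectively plain text. If you later want multi-column rows, say keyword plus rank, the standard csv module handles delimiters and quoting for you. A minimal sketch, assuming rows of (keyword, rank) tuples (the function name and row shape are illustrative, not part of the script above):

import csv

# Sketch: write (keyword, rank) tuples as proper comma-separated rows.
def save_rows_csv(rows, filename):
    with open(filename, 'a+', newline='', encoding='utf-8-sig') as f:
        writer = csv.writer(f)
        writer.writerows(rows)  # one tuple per row, quoting handled by csv

# Illustrative call with made-up data.
save_rows_csv([("python tutorial", 3), ("seo tools", 12)], "example.csv")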

爱站 (Aizhan)

You must fill in the target site address and the Cookie request header yourself; querying requires a logged-in account!

# Aizhan website keyword scraper
# -*- coding: utf-8 -*-
import requests
from lxml import etree
import time
import logging

logging.basicConfig(filename='aizhan.log', level=logging.DEBUG,
                    format='%(asctime)s - %(levelname)s - %(message)s')

# Paste your own logged-in session cookie here.
Cookie = ""

# Fetch one page of keywords for a site
def get_keywords(site, page):
    url = f"https://baidurank.aizhan.com/baidu/{site}/-1/0/{page}/position/1/"
    headers = {
        "Cookie": Cookie,
        "User-Agent": "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/86.0.4240.198 Safari/537.36",
    }
    response = requests.get(url=url, headers=headers, timeout=10)
    print(response.status_code)
    html = response.content.decode('utf-8')
    tree = etree.HTML(html)
    keywords = tree.xpath('//td[@class="title"]/a[@class="gray"]/@title')
    print(keywords)
    save_txt(keywords, site)  # also append each page to the txt file as we go
    return keywords


# Append keywords to a CSV file, one keyword per line
def save_csv(keywords, site):
    filename = site.replace("www.", '').replace(".com", '').replace(".cn", '').replace('https://', '').replace('http://', '')
    with open(f'aizhan_{filename}.csv', 'a+', encoding='utf-8-sig') as f:
        for keyword in keywords:
            f.write(f'{keyword}\n')
    print("Keyword list saved!")


# Append keywords to a txt file, one keyword per line
def save_txt(keywords, site):
    filename = site.replace("www.", '').replace(".com", '').replace(".cn", '').replace('https://', '').replace('http://', '')
    with open(f'aizhan_{filename}.txt', 'a+', encoding='utf-8') as f:
        for keyword in keywords:
            f.write(f'{keyword}\n')
    print("Keyword list saved!")


def main(site):
    logging.info(f"Start scraping keyword data for site {site}..")
    num = 50  # Aizhan exposes up to 50 pages
    keys = []
    for page in range(1, num + 1):
        print(f"Scraping page {page}..")
        logging.info(f"Scraping page {page}..")
        try:
            keywords = get_keywords(site, page)
            keys.extend(keywords)
            time.sleep(8)
        except Exception as e:
            print(f"Failed to scrape page {page} -- error: {e}")
            logging.error(f"Failed to scrape page {page} -- error: {e}")
            time.sleep(10)

    keys = set(keys)  # deduplicate
    save_csv(keys, site)


if __name__ == '__main__':
    site = ""  # fill in the target site address
    main(site)
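
The main loop above logs a failed page and moves straight on to the next one. If you would rather retry a page a few times first, a small wrapper is enough. A minimal sketch, assuming it is pasted into the script above so that get_keywords, logging, and time are already in scope:

# Retry sketch (not part of the original script): attempt a page up to
# `retries` times before giving up on it.
def get_keywords_with_retry(site, page, retries=3, delay=10):
    for attempt in range(1, retries + 1):
        try:
            return get_keywords(site, page)
        except Exception as e:
            logging.error(f"Page {page}, attempt {attempt} failed: {e}")
            time.sleep(delay)
    return []  # give up on this page after all retries

main could then call get_keywords_with_retry(site, page) in place of the bare get_keywords call and drop its own try/except.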
    

站长 (Chinaz)

You must fill in the target site address and the Cookie request header yourself; querying requires a logged-in account!

# Chinaz (站长之家) website keyword scraper
# -*- coding: utf-8 -*-
import requests
from lxml import etree
import time
import logging

logging.basicConfig(filename='chinaz.log', level=logging.DEBUG,
                    format='%(asctime)s - %(levelname)s - %(message)s')

# Paste your own logged-in session cookie here.
Cookie = ""


# Fetch one page of keywords for a site
def get_keywords(site, page):
    headers = {
        "Cookie": Cookie,
        "User-Agent": "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/86.0.4240.198 Safari/537.36",
    }
    url = f"https://rank.chinaz.com/{site}-0---0-{page}"
    response = requests.get(url=url, headers=headers, timeout=8)
    print(response.status_code)
    html = response.content.decode('utf-8')
    tree = etree.HTML(html)
    keywords = tree.xpath('//ul[@class="_chinaz-rank-new5b"]/li[@class="w230 "]/a/text()')
    print(keywords)
    save_txt(keywords, site)  # also append each page to the txt file as we go
    return keywords


# Append keywords to a CSV file, one keyword per line
def save_csv(keywords, site):
    filename = site.replace("www.", '').replace(".com", '').replace(".cn", '').replace('https://', '').replace('http://', '')
    with open(f'chinaz_{filename}.csv', 'a+', encoding='utf-8-sig') as f:
        for keyword in keywords:
            f.write(f'{keyword}\n')
    print("Keyword list saved!")


# Append keywords to a txt file, one keyword per line
def save_txt(keywords, site):
    filename = site.replace("www.", '').replace(".com", '').replace(".cn", '').replace('https://', '').replace('http://', '')
    with open(f'chinaz_{filename}.txt', 'a+', encoding='utf-8') as f:
        for keyword in keywords:
            f.write(f'{keyword}\n')
    print("Keyword list saved!")

def main(site):
    logging.info(f"Start scraping keyword data for site {site}..")
    num = 50  # Chinaz exposes up to 50 pages
    keys = []
    for page in range(1, num + 1):
        print(f"Scraping page {page}..")
        logging.info(f"Scraping page {page}..")
        try:
            keywords = get_keywords(site, page)
            keys.extend(keywords)
            time.sleep(8)
        except Exception as e:
            print(f"Failed to scrape page {page} -- error: {e}")
            logging.error(f"Failed to scrape page {page} -- error: {e}")
            time.sleep(10)

    keys = set(keys)  # deduplicate
    save_csv(keys, site)


if __name__ == '__main__':
    site = ""  # fill in the target site address
    main(site)
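
Once all three scripts have run against the same site, their txt outputs can be combined into one deduplicated master list. A minimal sketch, assuming the three output files sit in the current directory (the filenames below are hypothetical; adjust them to your own site):

# Merge sketch: combine the three tools' txt outputs and deduplicate.
def merge_keyword_files(filenames, out='all_keywords.txt'):
    keywords = set()
    for name in filenames:
        with open(name, encoding='utf-8') as f:
            keywords.update(line.strip() for line in f if line.strip())
    with open(out, 'w', encoding='utf-8') as f:
        f.write('\n'.join(sorted(keywords)))
    print(f"Wrote {len(keywords)} unique keywords to {out}")

# Hypothetical filenames, matching the naming pattern of the scripts above.
merge_keyword_files(['5118_example.txt', 'aizhan_example.txt', 'chinaz_example.txt'])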
    

·················END·················
