
Crawling: Defeating a Site's Obfuscated JS Redirect

By 十四君 · Published 2019-11-28 (originally posted 2019-04-27) · Column: Urlteam

To deter crawlers, some sites put a JS-redirect layer in front of their pages: a plain request only returns an obfuscated script that is very hard to read. I tried several approaches; here is the solution I settled on.

Example site: http://huakai.waa.cn/. This is a card-selling platform, a gray/black-market venue for selling "wool-pulling" (freebie-farming) resources.

Requesting the page directly returns:

Code language: javascript
window.onload=setTimeout("bx(105)", 200); function bx(WA) {var qo, mo="", no="",
 oo = [0xba,0x0a,0x7f,0xd1,0x22,0xab,0x04,0x64,0xc9,0x19,0x4c,0x61,0xc3,0x2a,0x90,0x64,
   0xce,0x2b,0x8b,0x93,0x37,0x55,0x8b,0xd7,0x37,0x07,0x69,0xd0,0x21,0x85,0x83,0xd4,
   0x39,0x9e,0x00,0x6f,0xcf,0xd7,0x35,0x49,0x5d,0xbc,0xcc,0x21,0x81,0xc0,0x13,0x2f,
   0x83,0xe3,0x38,0x95,0xf6,0x56,0xba,0xa8,0xbc,0x1a,0x2e,0x32,0x2f,0x3d,0x9c,0xa8,
   0xb7,0x35,0x92,0xf1,0x1a,0x2e,0x3f,0x91,0xf3,0x08,0x30,0xda,0xe9,0xfc,0x0d,0x1b,
   0x56,0x7e,0x89,0xe8,0xfb,0x7b,0xdf,0xf7,0x04,0x64,0x66,0xc3,0xd7,0xe3,0xff,0x4c,
   0x58,0x6c,0x77,0x87,0xfa,0x09,0x66,0x8e,0x92,0xe2,0xf2,0x03,0x20,0x22,0xfb,0x09,
   0x1a,0x28,0x37,0x44,0x51,0x6b,0x8e,0xee,0xbf,0x0b,0x5e,0xba,0x0c,0xaf,0x10,0x52,
   0x6a,0x9c,0xb0,0x05,0x54,0x7b,0x9e,0x8f,0xa0,0xae,0xc6,0x0b,0x2f,0x72,0xc3,0xeb,
   0xff,0xf9,0x06,0x29,0x3d,0x50,0x99,0xa2,0xb2,0xce,0xd7,0x2c,0x3f,0x4c,0x6f,0xad,
   0x43,0x8b,0xba,0xc4,0xe7,0x29,0x88,0xee,0x47,0xab,0x71,0xdd,0x33,0x4b,0x70,0xe4,
   0x33,0x97,0xfb,0x11,0x4b,0xad,0x03,0x1d,0x40,0xd6,0x2a,0x8e,0xdd,0x39,0xfc,0x05,
   0x2b,0x49,0x53,0x04,0x27,0x75,0xd1,0x37,0x90,0xef,0x46,0x94,0xb9,0x21,0x90,0xe6,
   0x49,0x99,0x93,0xfb,0x5c,0xb1,0x01,0xb2,0xd7,0x3f,0x95,0xf7,0x72,0xd6,0x26,0x82,
   0xe8,0x04,0x69,0x71,0xe0,0x37,0x18,0x7a,0xca,0x23,0x83,0x1b,0x80,0xcf,0xe4,0x15,
   0xb1,0xe2,0xc6,0x3b];qo = 
   "qo=242; do{oo[qo]=(-oo[qo])&0xff; oo[qo]=(((oo[qo]>>4)|((oo[qo]<<4)&0xff))-126)&0xff;} while(--qo>=2);";
    eval(qo);qo = 241; do { oo[qo] = (oo[qo] - oo[qo - 1]) & 0xff; } 
    while (-- qo >= 3 );qo = 1; for (;;) { if (qo > 241) break; oo[qo] = ((((((oo[qo] + 81) & 0xff) + 117) 
    & 0xff) << 4) & 0xff) | (((((oo[qo] + 81) & 0xff) + 117) & 0xff) >> 4); qo++;}po = ""; for (qo = 1; qo 
    < oo.length - 1; qo++) if (qo % 5) po += String.fromCharCode(oo[qo] ^ WA);eval("qo=eval;qo(po);");} 

At first glance, this is completely opaque.

The first thing to try is replaying the full request, headers and all:

Code language: bash
curl 'http://huakai.waa.cn/' \
  -H 'Proxy-Connection: keep-alive' \
  -H 'Pragma: no-cache' \
  -H 'Cache-Control: no-cache' \
  -H 'Upgrade-Insecure-Requests: 1' \
  -H 'User-Agent: Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/73.0.3683.103 Safari/537.36' \
  -H 'Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3' \
  -H 'Referer: http://huakai.waa.cn/' \
  -H 'Accept-Encoding: gzip, deflate' \
  -H 'Accept-Language: zh-CN,zh;q=0.9' \
  -H 'Cookie: Hm_lvt_076005c31490ab6d3c03b39335e4bdb8=1554694360,1554694554; yd_cookie=439b11e8-078b-4a1f1637e66aa901ed6ee4e8396bb5cdf82f; PHPSESSID=qb6of6f0fueq7g8gvi326f3fk7; youkeusername=a16798cb80a2b7de74b035b58464a8e0; Hm_lpvt_076005c31490ab6d3c03b39335e4bdb8=1556269667; _ydclearance=f1d5aec9aefbda1f117d94fd-1cc1-4057-8d0a-9ef19991857f-1556362746' \
  --compressed

This is just the command Chrome generates via "Copy as cURL", i.e. an exact replay of the original request, and it does fetch the real page content.

So the gate must be in the request headers. Removing them one at a time shows that only dropping the Cookie header triggers the JS response.
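One quick way to run that elimination is a loop that re-sends the request minus one header at a time; whichever removal brings the JS back is the gatekeeper. A minimal sketch with requests (header values are abridged placeholders from the curl above; substitute fresh ones before running):

Code language: python

import requests

URL = 'http://huakai.waa.cn/'
# Headers copied from the browser session; values abridged placeholders.
HEADERS = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; Win64; x64) ...',
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
    'Referer': 'http://huakai.waa.cn/',
    'Accept-Language': 'zh-CN,zh;q=0.9',
    'Cookie': 'PHPSESSID=...; _ydclearance=...',
}

for name in HEADERS:
    trimmed = {k: v for k, v in HEADERS.items() if k != name}
    text = requests.get(URL, headers=trimmed, timeout=10).text
    # The JS challenge always begins with this onload/setTimeout stub.
    blocked = 'window.onload=setTimeout' in text
    print(f'{name} removed -> {"JS challenge" if blocked else "real page"}')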

Next, test the cookies one by one. It turns out _ydclearance is the field that matters; with it alone, the following command is enough:

Code language: bash
curl 'http://huakai.waa.cn/' -H 'Cookie:  _ydclearance=95b847720b0792bf33cfd2ba-b5f2-403a-be3b-b3522a041fd6-1556272401;'
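The same check from Python, if curl is inconvenient (the cookie value is a truncated placeholder; paste a current one):

Code language: python

import requests

# Only _ydclearance is needed; every other header can be dropped.
cookies = {'_ydclearance': '95b847720b0792bf33cfd2ba-...'}  # placeholder
r = requests.get('http://huakai.waa.cn/', cookies=cookies, timeout=10)
print('JS challenge' if 'function bx(' in r.text else 'real page')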

However, re-checking the next day, the cookie had expired and the server was returning the JS again.

Look at the JS more carefully: it sets a short delay (200 ms) and then calls bx(105); once that finishes, the page refreshes. In other words, we just need to carry out this JS computation ourselves.
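What bx() actually does is four byte-level passes over the oo array, followed by an eval of the resulting string. A line-for-line Python port of the sample above makes the pipeline explicit; note that the loop bounds (242/241) and the constants (126, 81, 117, and the argument 105) are regenerated on every page load, so this decodes only this particular sample:

Code language: python

def decode(oo, wa=105):
    """Python port of bx(105) from the sample above (constants vary per load)."""
    oo = list(oo)
    # Pass 1: negate each byte, swap its nibbles, subtract 126 (indices 2..242).
    for qo in range(242, 1, -1):
        oo[qo] = (-oo[qo]) & 0xff
        oo[qo] = (((oo[qo] >> 4) | ((oo[qo] << 4) & 0xff)) - 126) & 0xff
    # Pass 2: undo the delta encoding (indices 3..241).
    for qo in range(241, 2, -1):
        oo[qo] = (oo[qo] - oo[qo - 1]) & 0xff
    # Pass 3: add 81 then 117 (mod 256) and swap nibbles (indices 1..241).
    for qo in range(1, 242):
        t = (((oo[qo] + 81) & 0xff) + 117) & 0xff
        oo[qo] = ((t << 4) & 0xff) | (t >> 4)
    # Pass 4: XOR with the bx() argument, skipping every 5th byte. The page
    # then evals this string, which sets the cookie and reloads.
    return ''.join(chr(oo[qo] ^ wa) for qo in range(1, len(oo) - 1) if qo % 5)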

One option is to hard-solve the JS with js2py (pip3 install js2py), evaluating it roughly like this:

Code language: python
js2py.eval_js(a)
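Concretely, js2py.eval_js translates a snippet and returns its value; the first lines below mirror the library's README example, and the closing comment sketches one untested way to adapt it to this page:

Code language: python

import js2py

# Basic js2py usage, as in the library's README: translate a JS function
# and call it from Python.
add = js2py.eval_js('function add(a, b) { return a + b }')
print(add(1, 2))  # 3

# Untested sketch for the page above: strip the window.onload line, replace
# the final eval("qo=eval;qo(po);") with "return po;", then eval_js the bx
# function and call bx(105) to get the decoded script back as a string.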

But I went with a small trick instead, since the cookie stays valid for about a day.

I simply let PhantomJS load the page with a 5-second delay so it finishes the JS computation on its own, then reuse the freshly generated cookie for plain direct requests (a concrete reuse sketch follows the res.js listing below).

The PhantomJS script is below. Save it as res.js, and the following command fetches the rendered page source:

Code language: bash
phantomjs res.js http://huakai.waa.cn/

res.js, adapted from https://github.com/luyishisi/Anti-Anti-Spider/blob/master/9.phantomjs/get_page_Source_Code/request.js:

Code language: javascript
var page = require('webpage').create(),
    system = require('system'),
    address;
address = system.args[1];
//init and settings
page.settings.resourceTimeout = 30000 ;
page.settings.XSSAuditingEnabled = true ;
//page.viewportSize = { width: 1000, height: 1000  };
page.settings.userAgent = 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/37.0.2062.120 Safari/537.36';
page.customHeaders = { 
    "Connection" : "keep-alive",
    "Cache-Control" : "max-age=0",
    "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8",
    "Accept-Language": "zh-CN,zh;q=0.8,en;q=0.6",
};
page.open(address, function() {
  console.log(address);
  console.log('begin');
});
// runs once the page has finished loading
page.onLoadFinished = function(status) {
  // wait 5 seconds so the challenge JS can run, then dump the rendered page
  setTimeout(function() {
    console.log('Status: ' + status);
    console.log(page.content);
    phantom.exit();
  }, 5 * 1000);
};
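To close the loop, here is how the generated cookie can be harvested and reused outside the browser. This is my own sketch, not from the original post: PhantomJS's --cookies-file flag persists its cookie jar to disk (in a Qt-serialized format, hence the loose regex scan), and the extracted _ydclearance then rides along on plain requests:

Code language: python

import re
import subprocess

import requests

# Run res.js once, persisting PhantomJS's cookie jar to cookies.txt
# (--cookies-file is a standard PhantomJS option).
subprocess.run(['phantomjs', '--cookies-file=cookies.txt',
                'res.js', 'http://huakai.waa.cn/'], check=True)

# The jar is Qt-serialized, so scan it loosely for the cookie value.
raw = open('cookies.txt', 'rb').read().decode('latin-1')
m = re.search(r'_ydclearance=([0-9a-f-]+)', raw)
if m:
    r = requests.get('http://huakai.waa.cn/',
                     cookies={'_ydclearance': m.group(1)}, timeout=10)
    print(r.text[:300])  # should now be the real page, not the JS stub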

My blog is being synced to the Tencent Cloud+ Community; you are welcome to join as well: https://cloud.tencent.com/developer/support-plan?invite_code=u3xrcath7lgz

Original article. When republishing, please credit: reposted from URl-team.

Permalink: "Crawling: Defeating a Site's Obfuscated JS Redirect"

Related posts:

  1. Auto-login with Selenium to idle for a Stack Overflow gold badge
  2. Timeout control for highly robust Python crawlers
  3. Data collection technical guide, part 1: tech-stack overview (with full diagram and talk slides)
  4. Taobao product-info scraper, part 2: open source with customizable collection keywords
  5. Two ways to solve fetching and submitting CAPTCHA images during simulated crawler logins
  6. How to avoid Selenium detection and log in to Taobao
