Over the past couple of days I optimized the IP proxy pool I built earlier; here is the overall approach.
The pool-building process breaks down into these steps:
1. Refresh the IPs already stored in MongoDB, e.g.:
collection.delete_one({'ip': ip})
collection.update_one({'ip': ip}, {'$set': {'speed': speed}})
collection.delete_many({'speed': {'$gt': 10}})
2. Crawl a large batch of IPs and validate them one by one
3. Import the valid IPs into MongoDB
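As a minimal pure-Python sketch (no database needed) of what the delete_many filter above selects, {'speed': {'$gt': 10}} translates to this predicate (the function name and sample data are mine, not from the original code):

```python
def is_too_slow(record, threshold=10):
    # Mirrors the MongoDB filter {'speed': {'$gt': threshold}}:
    # True for records that delete_many would remove.
    return record.get('speed', float('inf')) > threshold

pool = [
    {'ip': 'http://1.2.3.4:3128', 'speed': 2.5},
    {'ip': 'http://5.6.7.8:8080', 'speed': 12.0},
]
kept = [r for r in pool if not is_too_slow(r)]
print(kept)  # only the fast record survives
```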
For the crawling I chose xici proxy (西刺代理). The site lists IPs for free, but they are extremely unstable: an IP that works now may be dead a few minutes later. From xici we grab the IP address, the port, and the type.
The target fields sit in the tr rows under a table tag; once you know where they live, scraping them is straightforward.
The overall loop: crawl one page to get a list of IPs, validate each one, and store the valid ones in the database.
import requests
from bs4 import BeautifulSoup

def get_ips_from_xici(n):
    # Crawl the first n listing pages and return 'scheme://ip:port' strings.
    target_headers = {
        'Upgrade-Insecure-Requests': '1',
        'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.36',
        'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8',
        'Referer': 'http://www.xicidaili.com/nn/',
        'Accept-Encoding': 'gzip, deflate, sdch',
        'Accept-Language': 'zh-CN,zh;q=0.8'
    }
    start_url = 'http://www.xicidaili.com/wt/'    # http proxies
    # start_url = 'http://www.xicidaili.com/wn/'  # https proxies
    ip = []
    for p in range(1, n + 1):
        url = start_url + str(p)
        html = requests.get(url, headers=target_headers).text
        soup = BeautifulSoup(html, 'html.parser')
        rows = soup.find('table').find_all('tr')
        for row in rows[1:]:                      # rows[0] is the header row
            ips = row.find_all('td')
            inf = {}
            inf['类型'] = str(ips[5].string).lower()
            inf['IP'] = ips[1].string
            inf['端口'] = ips[2].string
            inf['地点'] = ips[3].string
            w = inf['类型'] + '://' + inf['IP'] + ':' + inf['端口']
            ip.append(w)
    return ip
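The function above packs scheme, host, and port into a single string such as 'http://112.115.57.20:3128', and later code recovers the port with ip.split(':')[2]. A sturdier way to pull the parts back out, shown here as a standard-library sketch (not part of the original code), is urlsplit:

```python
from urllib.parse import urlsplit

def parse_proxy(proxy):
    # Split a 'scheme://host:port' proxy string into its parts.
    parts = urlsplit(proxy)
    return parts.scheme, parts.hostname, parts.port

print(parse_proxy('http://112.115.57.20:3128'))  # ('http', '112.115.57.20', 3128)
```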
The code below validates the IPs obtained above (I test them against Qunar):
import time

def validate_ip(ips, test_url, success_ip):
    # `collection` is the MongoDB collection opened in __main__ below.
    headers = {
        'Host': 'piao.qunar.com',
        'Referer': 'http://piao.qunar.com/ticket/list.htm?keyword=%E6%88%90%E9%83%BD&region=null&from=mpl_search_suggest',
        'Upgrade-Insecure-Requests': '1',
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/67.0.3396.99 Safari/537.36'
    }
    for ip in list(ips):   # iterate over a copy so failures can be removed safely
        try:
            start = time.time()
            proxies = {'http': ip}
            r = requests.get(test_url, headers=headers, timeout=3, proxies=proxies)
            if r.status_code == 200:
                soup = BeautifulSoup(r.text, 'lxml')
                y = soup.find('div', class_='search_result')
                if y is not None:
                    speed = round(time.time() - start, 2)
                    success_ip.append(ip)
                    inf = {'ip': ip, '端口': ip.split(':')[2], 'speed': speed}
                    collection.insert_one(inf)
                    print('success ip=%s, speed=%s' % (ip, speed))
                else:
                    print('fail ip=%s' % ip)
            time.sleep(1.5)
        except Exception as e:
            ips.remove(ip)   # the original code referenced an undefined ip_lists here
            print('fail ip=%s %s' % (ip, e))
    return success_ip
The validation here is slightly different. I found that checking r.status_code == 200 alone is not enough: an IP can pass that check yet still fail to return the content we actually want to scrape, so I added the few lines below, which go a long way toward guaranteeing the IP really works.
soup = BeautifulSoup(r.text, 'lxml')
y = soup.find('div', class_='search_result')
if y is not None:
IPs that pass the first check but for which soup.find('div', class_='search_result') comes back empty print 'fail ip=%s' % ip; IPs that fail the first check outright print 'fail ip=%s %s' % (ip, e). The results look like this.
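That two-layer rule can be isolated into a helper that is easy to test offline. The sketch below uses only the standard library's html.parser in place of BeautifulSoup; the class and function names are mine, not from the original code:

```python
from html.parser import HTMLParser

class _DivFinder(HTMLParser):
    """Detects a <div> whose class attribute contains 'search_result'."""
    def __init__(self):
        super().__init__()
        self.found = False
    def handle_starttag(self, tag, attrs):
        if tag == 'div':
            classes = dict(attrs).get('class') or ''
            if 'search_result' in classes.split():
                self.found = True

def response_looks_valid(status_code, html_text):
    # Layer 1: the HTTP status must be 200.
    if status_code != 200:
        return False
    # Layer 2: the page must actually contain the expected content div.
    finder = _DivFinder()
    finder.feed(html_text)
    return finder.found

print(response_looks_valid(200, '<div class="search_result">ok</div>'))  # True
print(response_looks_valid(200, '<html>blocked</html>'))                 # False
```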
After running for a whole afternoon, the program had collected just 21 valid IPs, many of them duplicates.
Before using these IPs, validate them once more:
if __name__ == '__main__':
    from pymongo import MongoClient
    client = MongoClient('localhost', 27017)
    collection = client.IPPOOL.ippool
    test_url = 'http://piao.qunar.com/ticket/list.htm?keyword=%E5%8C%97%E4%BA%AC&region=%E5%8C%97%E4%BA%AC&from=mpshouye_hotcity'
    headers = {
        'Host': 'piao.qunar.com',
        'Referer': 'http://piao.qunar.com/ticket/list.htm?keyword=%E6%88%90%E9%83%BD&region=null&from=mpl_search_suggest',
        'Upgrade-Insecure-Requests': '1',
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/67.0.3396.99 Safari/537.36'
    }
    for i in collection.find({}, {'_id': 0, 'ip': 1, 'speed': 1}):  # projection: 1 keeps a field, 0 drops it
        print(i['ip'])
        proxies = {'http': i['ip']}
        start = time.time()
        try:
            r = requests.get(test_url, headers=headers, timeout=3, proxies=proxies)
            if r.status_code == 200:
                soup = BeautifulSoup(r.text, 'lxml')
                if soup.find('div', class_='search_result') is not None:
                    speed = round(time.time() - start, 2)
                    collection.update_one({'ip': i['ip']}, {'$set': {'speed': speed}})
        except Exception:
            collection.delete_one({'ip': i['ip']})
After this run, only 11 of the original 21 IPs were left, still with many duplicates.
The deduplication code:
from pymongo import MongoClient

client = MongoClient('localhost', 27017)
collection = client.IPPOOL.ippool
for i in collection.distinct('ip'):
    print(i)
    w = collection.find_one({'ip': i})   # keep one copy of this ip
    collection.delete_many({'ip': i})    # remove every copy
    collection.insert_one(w)             # put the kept copy back
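The loop above keeps an arbitrary copy of each duplicated ip. A variant that keeps the fastest copy can be done in memory after fetching the documents with find() (a sketch; the function name and sample data are mine):

```python
def dedupe_fastest(records):
    # Keep, for each ip, the record with the smallest speed.
    best = {}
    for rec in records:
        ip = rec['ip']
        if ip not in best or rec['speed'] < best[ip]['speed']:
            best[ip] = rec
    return list(best.values())

docs = [
    {'ip': 'http://58.53.128.83:3128', 'speed': 2.1},
    {'ip': 'http://58.53.128.83:3128', 'speed': 0.9},
    {'ip': 'http://219.246.90.204:3128', 'speed': 1.4},
]
print(dedupe_fastest(docs))
```

Alternatively, a unique index (collection.create_index('ip', unique=True)) stops duplicates from being inserted in the first place.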
After deduplication only 5 remained! Xici's free proxies really are unstable, and the quality is poor.
Next comes actually using the IPs. There are two approaches: wait until the whole pool has been built and load it into the crawler, or draw IPs from the pool on the fly while crawling.
# Method 1: load the whole pool once the run is finished
ip = []
for i in collection.find({}, {'_id': 0, 'ip': 1}):
    ip.append(i)
print(ip)
[{'ip': 'http://112.115.57.20:3128'}, {'ip': 'http://58.53.128.83:3128'}, {'ip': 'http://219.246.90.204:3128'}, {'ip': 'http://183.166.129.53:8080'}, {'ip': 'http://218.14.115.211:3128'}]
# Method 2: draw one IP at random while crawling
for i in collection.aggregate([{'$sample': {'size': 1}}]):
    print(i['ip'])
http://58.53.128.83:3128
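$sample does the random pick on the server. Once the full list from the first method is in hand, the same effect client-side, together with the proxies mapping that requests expects, is a one-liner (a sketch using the addresses shown above):

```python
import random

pool = ['http://112.115.57.20:3128', 'http://58.53.128.83:3128',
        'http://219.246.90.204:3128']

proxy = random.choice(pool)   # a fresh random pick per request
proxies = {'http': proxy}     # the shape requests.get(..., proxies=...) expects
print(proxies)
```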