Scraping Beijing Short-Term Rental Listings with Python

This article uses the third-party libraries Requests and BeautifulSoup to scrape short-term rental listings for the Beijing area from Xiaozhu (小猪短租).

The complete code is as follows:

from bs4 import BeautifulSoup
import requests
import time

headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/62.0.3202.94 Safari/537.36'
}

def judgment_sex(class_name):
    # The CSS class of the host's icon distinguishes gender on the page:
    # 'member_icol' marks a female host.
    if class_name == ['member_icol']:
        return '女'   # female
    else:
        return '男'   # male

def get_links(url):
    # Fetch one search-results page and follow every detail-page link on it.
    wb_data = requests.get(url, headers=headers)
    soup = BeautifulSoup(wb_data.text, 'lxml')
    links = soup.select('#page_list > ul > li > a')
    for link in links:
        href = link.get('href')
        get_info(href)

def get_info(url):
    # Scrape title, address, price, host photo, host name and host gender
    # from a single listing's detail page.
    wb_data = requests.get(url, headers=headers)
    soup = BeautifulSoup(wb_data.text, 'lxml')
    titles = soup.select('div.pho_info > h4')
    addresses = soup.select('span.pr5')
    prices = soup.select('#pricePart > div.day_l > span')
    imgs = soup.select('#floatRightBox > div.js_box.clearfix > div.member_pic > a > img')
    names = soup.select('#floatRightBox > div.js_box.clearfix > div.w_240 > h6 > a')
    sexs = soup.select('#floatRightBox > div.js_box.clearfix > div.w_240 > h6 > span')
    with open('xiaozhu_data.txt', 'a+', encoding='utf-8') as f:
        for title, address, price, img, name, sex in zip(titles, addresses, prices, imgs, names, sexs):
            data = {
                'title': title.get_text().strip(),
                'address': address.get_text().strip(),
                'price': price.get_text(),
                'img': img.get('src'),
                'name': name.get_text(),
                'sex': judgment_sex(sex.get('class'))
            }
            print(data, file=f)

if __name__ == '__main__':
    urls = ['http://bj.xiaozhu.com/search-duanzufang-p{}-0/'.format(number) for number in range(1, 50)]
    for single_url in urls:
        get_links(single_url)
        time.sleep(2)   # pause between pages to avoid hammering the server
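The CSS selectors above depend on Xiaozhu's live page markup, which can change at any time. As a sketch under that assumption, the same extraction logic can be exercised offline against a hand-written HTML snippet (the snippet is hypothetical and only mimics the fragments the selectors target):

```python
from bs4 import BeautifulSoup

# Minimal HTML imitating a listing detail page (illustrative only;
# the real site's markup may differ).
sample_html = """
<div class="pho_info"><h4><em>Cozy room near Gulou</em></h4></div>
<span class="pr5"> Dongcheng, Beijing </span>
<div id="pricePart"><div class="day_l"><span>328</span></div></div>
<div id="floatRightBox">
  <div class="js_box clearfix">
    <div class="member_pic"><a><img src="http://example.com/host.jpg"/></a></div>
    <div class="w_240"><h6><a>Host Li</a><span class="member_icol"></span></h6></div>
  </div>
</div>
"""

# The stdlib parser is enough for this demo; the article itself uses 'lxml'.
soup = BeautifulSoup(sample_html, "html.parser")

title = soup.select("div.pho_info > h4")[0].get_text().strip()
address = soup.select("span.pr5")[0].get_text().strip()
price = soup.select("#pricePart > div.day_l > span")[0].get_text()
sex_class = soup.select(
    "#floatRightBox > div.js_box.clearfix > div.w_240 > h6 > span")[0].get("class")

print(title, address, price, sex_class)
```

Testing the selectors against a fixed snippet like this makes it easy to tell whether a scraping failure comes from your parsing logic or from a change in the site's markup.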

Questions and discussion are welcome!

Original link: http://kuaibao.qq.com/s/20180127F0LYXP00?refer=cp_1026
