
How to make a Scrapy crawler work properly for multi-level page crawling

Stack Overflow user
Asked on 2018-06-04 15:19:48
1 answer · 67 views · 0 followers · 0 votes

I am learning web crawling, and I want to do the following:

  1. Log in to a specific web page (done)
  2. Go to a page that contains the links I need
  3. For each link on that page, crawl its content (see the minimal callback sketch right after this list).
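
In Scrapy terms, each of these levels is a callback that yields Requests for the next level. A minimal illustration of that callback chain, with purely made-up names, URL, and selectors rather than anything from the actual spider further down:

import scrapy

class MultiLevelSpider(scrapy.Spider):
    # illustrative example only; name, URL and selectors are invented
    name = 'multi_level_example'
    start_urls = ['http://example.com/list']

    def parse(self, response):
        # level 1: the page that lists the links we need
        for href in response.css('a.item-link::attr(href)').extract():
            # hand each link off to the next-level callback
            yield scrapy.Request(response.urljoin(href), callback=self.parse_detail)

    def parse_detail(self, response):
        # level 2: scrape each linked page into an item
        yield {'url': response.url, 'title': response.css('h1::text').extract_first()}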

The problem is that I have tested my code against a single link and it works, but when I try it as a multi-level job it fails in a way I cannot understand: it only crawls a certain part of each link. I would like to know whether there is a logic error in my code; please help. The code is below.

import scrapy
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.wait import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
import time

class BaiduSpider(scrapy.Spider):
    name = 'baidu'
    allowed_domains = ['baidu.com']
    start_urls = ['http://tieba.baidu.com']
    main_url = 'http://tieba.baidu.com/f?kw=%E5%B4%94%E6%B0%B8%E5%85%83&ie=utf-8'
    username = ""
    password = ""

    def __init__(self, username=username, password=password):
        #options = webdriver.ChromeOptions()
        #options.add_argument('headless')
        #options.add_argument('window-size=1200x600')
        self.driver = webdriver.Chrome()  # chrome_options=options
        self.username = username
        self.password = password

    # checked
    def logIn(self):
        elem = self.driver.find_element_by_css_selector('#com_userbar > ul > li.u_login > div > a')
        elem.click()
        wait = WebDriverWait(self.driver, 10).until(EC.presence_of_element_located((By.CSS_SELECTOR, '#TANGRAM__PSP_10__footerULoginBtn')))
        elem = self.driver.find_element_by_css_selector('#TANGRAM__PSP_10__footerULoginBtn')
        elem.click()
        elem = self.driver.find_element_by_css_selector('#TANGRAM__PSP_10__userName')
        elem.send_keys(self.username)
        elem = self.driver.find_element_by_css_selector('#TANGRAM__PSP_10__password')
        elem.send_keys(self.password)
        self.driver.find_element_by_css_selector('#TANGRAM__PSP_10__submit').click()

    # basic checked
    def parse(self, response):
        self.driver.get(response.url)
        self.logIn()
        # wait for manual input of the verification code
        time.sleep(20)
        self.driver.get('http://tieba.baidu.com/f?kw=%E5%B4%94%E6%B0%B8%E5%85%83&ie=utf-8')
        # try the first page first
        for url in self.driver.find_elements_by_css_selector('a.j_th_tit'):
            #new_url = response.urljoin(url)
            new_url = url.get_attribute("href")
            yield scrapy.Request(url=new_url, callback=self.parse_sub)

    # checked
    def pageScroll(self, url):
        self.log('I am scrolling ' + url)
        self.driver.get(url)
        SCROLL_PAUSE_TIME = 0.5
        SCROLL_LENGTH = 1200
        page_height = int(self.driver.execute_script("return document.body.scrollHeight"))
        scrollPosition = 0
        # scroll down in steps so lazily loaded replies get rendered
        while scrollPosition < page_height:
            scrollPosition = scrollPosition + SCROLL_LENGTH
            self.driver.execute_script("window.scrollTo(0, " + str(scrollPosition) + ");")
            time.sleep(SCROLL_PAUSE_TIME)
        time.sleep(1.2)

    def parse_sub(self, response):
        self.log('I visited ' + response.url)
        self.pageScroll(response.url)

        for sel in self.driver.find_elements_by_css_selector('div.l_post.j_l_post.l_post_bright'):
            name = sel.find_element_by_css_selector('.d_name').text
            try:
                content = sel.find_element_by_css_selector('.j_d_post_content').text
            except:
                content = ''
            replys = []
            for i in sel.find_elements_by_xpath('.//div[@class="lzl_cnt"]'):
                user1 = i.find_element_by_xpath('.//a[@username]')
                user1 = self.driver.execute_script("return arguments[0].firstChild.textContent", user1)
                try:
                    user2 = i.find_element_by_xpath('.//span[@class="lzl_content_main"]/a[@username]')
                    user2 = self.driver.execute_script("return arguments[0].firstChild.textContent", user2)
                except:
                    user2 = name
                span = i.find_element_by_xpath('.//span[@class="lzl_content_main"]')
                reply = self.driver.execute_script('return arguments[0].lastChild.textContent;', span)

                # note: tuple(user1, user2, reply) raises a TypeError; build the tuple directly
                replys.append((user1, user2, reply))
            yield {"topic": response.css(".core_title_txt::text").extract(), "name": name, "content": content, "replys": replys}

        # follow to next page

        #next_sel = self.driver.find_element_by_css_selector('#thread_theme_7 a:nth-child(3)')
        #next_url_name = next_sel.text

        #if next_sel and next_url_name == '下一页':
        #    next_url = next_sel.get_attribute('href')

        #    yield scrapy.Request(url=next_url, callback=self.parse_sub)
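
For the commented-out "follow to next page" block at the end, a possible completion is sketched below. It keeps the #thread_theme_7 selector and the '下一页' text check from the comments, but reads them from the Scrapy response instead of the Selenium driver; follow_next_page is a hypothetical helper name, not part of the original code.

    # hypothetical helper finishing the commented-out pagination logic above
    def follow_next_page(self, response):
        # read the "next page" anchor from the Scrapy response, not the driver
        next_href = response.css('#thread_theme_7 a:nth-child(3)::attr(href)').extract_first()
        next_text = response.css('#thread_theme_7 a:nth-child(3)::text').extract_first()
        if next_href and next_text == '下一页':
            yield scrapy.Request(url=response.urljoin(next_href), callback=self.parse_sub)

It could then be driven from the end of parse_sub with yield from self.follow_next_page(response).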

1 Answer

Stack Overflow user

Answered on 2018-06-05 05:39:36

It looks like you are using a hard-coded container for the links rather than a generic one, so you are only getting back the links matched by

for url in self.driver.find_elements_by_css_selector('a.j_th_tit')

This anchor class (j_th_tit) appears to be a dynamically generated class name, and it may not be the same for all anchor (a) tags.

You can try

 for url in self.driver.find_elements_by_css_selector('a')

to get all the links on the page.
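
Grabbing every anchor will also pick up navigation and advertisement links, so one way to apply this is to filter the hrefs before yielding requests. Below is a rough sketch of parse() rewritten that way; the '/p/' substring test assumes Tieba thread URLs contain /p/<id> and should be verified against the actual page.

    # possible variant of parse(): take every anchor, then keep only thread links
    def parse(self, response):
        self.driver.get(response.url)
        self.logIn()
        time.sleep(20)  # wait for manual verification-code input
        self.driver.get(self.main_url)
        for a in self.driver.find_elements_by_css_selector('a'):
            href = a.get_attribute('href')
            # assumption: thread URLs look like tieba.baidu.com/p/<id>
            if href and '/p/' in href:
                yield scrapy.Request(url=href, callback=self.parse_sub)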

Votes: 0
The original content is from Stack Overflow.
Original link:

https://stackoverflow.com/questions/50675217
