I need to scrape all the reviews for a product on Amazon:
I am using Scrapy for this. However, the code below does not seem to collect all of the reviews, because they are spread across several pages: a person would first click "See all reviews" and then click through the next pages. I would like to know how to do that with Scrapy or with another Python tool. The product has 5,893 reviews and I cannot collect them manually.
At the moment my code looks like this:
import scrapy
from scrapy.crawler import CrawlerProcess

class My_Spider(scrapy.Spider):
    name = 'spid'
    start_urls = ['https://www.amazon.com/Cascade-ActionPacs-Dishwasher-Detergent-Packaging/dp/B01NGTV4J5/ref=pd_rhf_cr_s_trq_bnd_0_6/130-6831149-4603948?_encoding=UTF8&pd_rd_i=B01NGTV4J5&pd_rd_r=b6f87690-19d7-4dba-85c0-b8f54076705a&pd_rd_w=AgonG&pd_rd_wg=GG9yY&pf_rd_p=4e0a494a-50c5-45f5-846a-abfb3d21ab34&pf_rd_r=QAD0984X543RFMNNPNF2&psc=1&refRID=QAD0984X543RFMNNPNF2']

    def parse(self, response):
        for row in response.css('div.review'):
            item = {}
            item['author'] = row.css('span.a-profile-name::text').extract_first()
            rating = row.css('i.review-rating > span::text').extract_first().strip().split(' ')[0]
            item['rating'] = int(float(rating.strip().replace(',', '.')))
            item['title'] = row.css('span.review-title > span::text').extract_first()
            yield item

and I execute the crawler with:
process = CrawlerProcess({
})
process.crawl(My_Spider)
process.start()

Could you tell me whether it is possible to go to the next pages and scrape all of the reviews? This should be the page where the reviews are stored.
Posted on 2020-08-03 22:31:46
Using the URL https://www.amazon.com/Cascade-ActionPacs-Dishwasher-Detergent-Packaging/product-reviews/B01NGTV4J5/ref=cm_cr_arp_d_paging_btm_next_2?ie=UTF8&reviewerType=all_reviews&pageNumber=<PUT PAGE NUMBER HERE>, you can do the following:
import scrapy
from scrapy.crawler import CrawlerProcess

class My_Spider(scrapy.Spider):
    name = 'spid'
    start_urls = ['https://www.amazon.com/Cascade-ActionPacs-Dishwasher-Detergent-Packaging/product-reviews/B01NGTV4J5/ref=cm_cr_arp_d_paging_btm_next_2?ie=UTF8&reviewerType=all_reviews&pageNumber=1']

    def parse(self, response):
        # Scrape every review block on the current page.
        for row in response.css('div.review'):
            item = {}
            item['author'] = row.css('span.a-profile-name::text').get()
            rating = row.css('i.review-rating > span::text').get().strip().split(' ')[0]
            item['rating'] = int(float(rating.strip().replace(',', '.')))
            item['title'] = row.css('span.review-title > span::text').get()
            yield item
        # The "Next page" link is absent on the last page, so the crawl
        # stops by itself; response.follow() resolves the relative href.
        next_page = response.css('ul.a-pagination > li.a-last > a::attr(href)').get()
        if next_page:
            yield response.follow(next_page, callback=self.parse)
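Since the pageNumber parameter is exposed directly in that URL, you could also enumerate the review pages up front instead of following the "next" link. A minimal sketch, assuming Amazon's usual ten reviews per page, so 5,893 reviews span roughly 590 pages (the page size is an assumption, not something stated in the answer):

# Hypothetical alternative: build all page URLs from the pageNumber parameter.
BASE_URL = ('https://www.amazon.com/Cascade-ActionPacs-Dishwasher-Detergent-Packaging/'
            'product-reviews/B01NGTV4J5/ref=cm_cr_arp_d_paging_btm_next_2'
            '?ie=UTF8&reviewerType=all_reviews&pageNumber={}')
start_urls = [BASE_URL.format(page) for page in range(1, 591)]  # pages 1..590

To run the spider and persist what it yields, the CrawlerProcess from the question works as-is, but two settings are worth adding. This is a sketch, not part of the original answer: the user-agent string and the reviews.json output path are illustrative choices, and Amazon commonly rejects requests that use Scrapy's default user agent, so overriding it usually matters in practice.

from scrapy.crawler import CrawlerProcess

process = CrawlerProcess(settings={
    # Present a browser-like user agent (illustrative value); Amazon
    # tends to block the default Scrapy identifier.
    'USER_AGENT': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64)',
    # Write every yielded item to a JSON file.
    'FEEDS': {'reviews.json': {'format': 'json'}},
    # Throttle requests to stay polite across ~590 pages.
    'DOWNLOAD_DELAY': 1,
})
process.crawl(My_Spider)
process.start()  # blocks until the crawl finishes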
Source: https://stackoverflow.com/questions/63231265