I want to make my actual crawler multithreaded.
When I set up multithreading, several instances of the function get started.
Example:
If my function prints range(5) and I have 2 threads, I get 1,1,2,2,3,3,4,4,5,5.
How can I get the result 1,2,3,4,5 across the threads?
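For illustration, a minimal sketch of that behaviour (print_pages is just a stand-in for the crawler): two threads run the same unchanged function, so every value is printed twice.

import threading

def print_pages():
    # Both threads execute this identical loop, so each
    # value appears twice, in interleaved order.
    for page in range(1, 6):
        print(page)

threads = [threading.Thread(target=print_pages) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()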
My actual code is a crawler, shown below:
import requests
from bs4 import BeautifulSoup

def trade_spider(max_pages):
    page = 1
    while page <= max_pages:
        url = "http://stackoverflow.com/questions?page=" + str(page)
        source_code = requests.get(url)
        plain_text = source_code.text
        soup = BeautifulSoup(plain_text, "html.parser")
        for link in soup.findAll('a', {'class': 'question-hyperlink'}):
            href = link.get('href')
            title = link.string
            print(title)
            get_single_item_data("http://stackoverflow.com/" + href)
        page += 1

def get_single_item_data(item_url):
    source_code = requests.get(item_url)
    plain_text = source_code.text
    soup = BeautifulSoup(plain_text, "html.parser")
    res = soup.find('span', {'class': 'vote-count-post '})
    print("UpVote : " + res.string)
trade_spider(1)

How can I call trade_spider() in multiple threads without duplicating links?
Posted on 2016-08-16 16:37:53
Pass the page number as a parameter to the trade_spider function.
Call the function in each process with a different page number, so that each process gets a unique page.
For example:
import multiprocessing

import requests
from bs4 import BeautifulSoup

def trade_spider(page):
    url = "http://stackoverflow.com/questions?page=%s" % (page,)
    source_code = requests.get(url)
    plain_text = source_code.text
    soup = BeautifulSoup(plain_text, "html.parser")
    for link in soup.findAll('a', {'class': 'question-hyperlink'}):
        href = link.get('href')
        title = link.string
        print(title)
        # get_single_item_data is the function defined in the question.
        get_single_item_data("http://stackoverflow.com/" + href)

# Pool of 10 processes
max_pages = 100
num_pages = range(1, max_pages)  # pages 1..99; use max_pages + 1 to include page 100
pool = multiprocessing.Pool(10)
# Run and wait for completion.
# pool.map returns the results of the trade_spider calls,
# but the function returns nothing, so the result is ignored.
pool.map(trade_spider, num_pages)
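Since the crawling is I/O-bound, the same pattern can also run on threads rather than processes; a minimal variant using multiprocessing.dummy, whose Pool exposes the same API but is backed by threads (assuming trade_spider and max_pages as defined above):

from multiprocessing.dummy import Pool as ThreadPool

# Same map-over-pages pattern, but the 10 workers are threads,
# which is usually sufficient for network-bound scraping.
pool = ThreadPool(10)
pool.map(trade_spider, range(1, max_pages))
pool.close()
pool.join()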
Posted on 2016-08-16 17:38:03
Try this:
from multiprocessing import Process, Value
import time

max_pages = 100
shared_page = Value('i', 1)
arg_list = (max_pages, shared_page)
process_list = list()

for x in range(2):
    spider_process = Process(target=trade_spider, args=arg_list)
    spider_process.daemon = True
    spider_process.start()
    process_list.append(spider_process)

for spider_process in process_list:
    while spider_process.is_alive():
        time.sleep(1.0)
    spider_process.join()

Change trade_spider's parameter list to def trade_spider(max_pages, page) and remove the page = 1 line. This creates two processes that work through the list of pages via the shared page value.
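The body of the modified trade_spider is left to the reader; one possible sketch, in which each process atomically claims the next page through the shared counter (the get_lock() usage and loop structure are one way to do it, not part of the original answer):

import requests
from bs4 import BeautifulSoup

def trade_spider(max_pages, page):
    while True:
        # Atomically claim the next page number so that no two
        # processes ever crawl the same page.
        with page.get_lock():
            my_page = page.value
            page.value += 1
        if my_page > max_pages:
            break
        url = "http://stackoverflow.com/questions?page=%s" % (my_page,)
        soup = BeautifulSoup(requests.get(url).text, "html.parser")
        for link in soup.findAll('a', {'class': 'question-hyperlink'}):
            print(link.string)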
https://stackoverflow.com/questions/38979880