I currently have a function that takes a URL string, reads it to find the x information, and stores it as a JSON file:
import re
import requests
from bs4 import BeautifulSoup

def log_scrape(url):
    HEADERS = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/42.0.2311.135 Safari/537.36 Edge/12.246'}
    response = requests.get(url=url, headers=HEADERS)
    soup = BeautifulSoup(response.content, 'html.parser')
    data = soup.find_all('script')[8]
    dataString = data.text.rstrip()
    logData = re.findall(r'{.*}', dataString)

    try:
        urlLines = url.split('/')
        if len(urlLines) < 5:
            bossName = urlLines[3]
        elif len(urlLines) == 5:
            bossName = urlLines[4]
    except Exception as e:
        return 'Error' + str(e)

    tag = bossName.split('_')
    bossTag = tag[1]

    try:
        # Wing_1
        if bossTag == 'vg':
            pathName = r'ETL\EXTRACT_00\Web Scraping\Boss_data\Wing_1\Valley_Guardian'
        elif bossTag == 'gors':
            pathName = r'ETL\EXTRACT_00\Web Scraping\Boss_data\Wing_1\Gorseval_The_Multifarious'
        elif bossTag == 'sab':
            pathName = r'ETL\EXTRACT_00\Web Scraping\Boss_data\Wing_1\Sabetha'
        # Wing_2
        elif bossTag == 'sloth':
            pathName = r'ETL\EXTRACT_00\Web Scraping\Boss_data\Wing_2\Slothasor'
        elif bossTag == 'matt':
            pathName = r'ETL\EXTRACT_00\Web Scraping\Boss_data\Wing_2\Mathias'
        # Wing_3
        elif bossTag == 'kc':
            pathName = r'ETL\EXTRACT_00\Web Scraping\Boss_data\Wing_3\Keep_Construct'
        elif bossTag == 'xera':
            pathName = r'ETL\EXTRACT_00\Web Scraping\Boss_data\Wing_3\Xera'
        # Wing_4
        elif bossTag == 'cairn':
            pathName = r'ETL\EXTRACT_00\Web Scraping\Boss_data\Wing_4\Cairn_The_Indomitable'
        elif bossTag == 'mo':
            pathName = r'ETL\EXTRACT_00\Web Scraping\Boss_data\Wing_4\Mursaat_Overseer'
        elif bossTag == 'sam':
            pathName = r'ETL\EXTRACT_00\Web Scraping\Boss_data\Wing_4\Samarog'
        elif bossTag == 'dei':
            pathName = r'ETL\EXTRACT_00\Web Scraping\Boss_data\Wing_4\Deimos'
        # Wing_5
        elif bossTag == 'sh':
            pathName = r'ETL\EXTRACT_00\Web Scraping\Boss_data\Wing_5\Soulless_Horror_Deesmina'
        elif bossTag == 'dhuum':
            pathName = r'ETL\EXTRACT_00\Web Scraping\Boss_data\Wing_5\Dhuum'
        # Wing_6
        elif bossTag == 'ca':
            pathName = r'ETL\EXTRACT_00\Web Scraping\Boss_data\Wing_6\Conjured_Amalgamate'
        elif bossTag == 'twinlargos':
            pathName = r'ETL\EXTRACT_00\Web Scraping\Boss_data\Wing_6\Twin_Largos'
        elif bossTag == 'qadim':
            pathName = r'ETL\EXTRACT_00\Web Scraping\Boss_data\Wing_6\Qadim'
        # Wing_7
        elif bossTag == 'adina':
            pathName = r'ETL\EXTRACT_00\Web Scraping\Boss_data\Wing_7\Cardinal_Adina'
        elif bossTag == 'sabir':
            pathName = r'ETL\EXTRACT_00\Web Scraping\Boss_data\Wing_7\Cardinal_Sabir'
        elif bossTag == 'prlqadim' or bossTag == 'qpeer':
            pathName = r'ETL\EXTRACT_00\Web Scraping\Boss_data\Wing_7\Qadim_The_Peerless'
    except:
        pathName = r'ETL\EXTRACT_00\Web Scraping\Boss_data'

    with open(f'{pathName}\\{bossName}.json', 'w') as f:
        for line in logData:
            jsonFile = f.write(line)
    return jsonFile
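As an aside, the try/except chain above only maps bossTag to an output folder, so the same selection could be sketched as a dictionary lookup with a default (folder names copied from the chain above; the remaining wings follow the same pattern):

# Sketch: the same bossTag -> folder mapping as a dictionary with a default.
BOSS_PATHS = {
    # Wing_1
    'vg':   r'ETL\EXTRACT_00\Web Scraping\Boss_data\Wing_1\Valley_Guardian',
    'gors': r'ETL\EXTRACT_00\Web Scraping\Boss_data\Wing_1\Gorseval_The_Multifarious',
    'sab':  r'ETL\EXTRACT_00\Web Scraping\Boss_data\Wing_1\Sabetha',
    # ... entries for Wing_2 through Wing_7 follow the same pattern ...
}

# Fall back to the base folder when the tag is unknown.
pathName = BOSS_PATHS.get(bossTag, r'ETL\EXTRACT_00\Web Scraping\Boss_data')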
However, this makes the process very slow, so I want to try using a txt file, loop over it, and run the function on each line. The txt file looks like this:
https://gw2wingman.nevermindcreations.de/logContent/20220829-151336_matt_kill
https://gw2wingman.nevermindcreations.de/logContent/20220831-214520_sabir_kill
https://gw2wingman.nevermindcreations.de/logContent/20220831-190128_sabir_kill
I tried using a for loop:
with open('gw2_urls.txt', 'r') as urls:
    for url in urls:
        print(log_scrape(url))
But it always returns a 'list index out of range' error at the line data = soup.find_all('script')[8]; however, when I run the URLs one by one, the error never appears.
If you know why this happens, and how I could speed this process up, that would be very helpful.
Posted on 2022-09-11 08:52:19
If I understand correctly, you want the links that are in data? You only ever get one element from soup.find_all('script')[8], and only if it exists; find_all('script') returns the list of all elements with the script tag. An example with <a> tags and the href attribute:
for link in soup.find_all('a'):
    print(link.get('href'))
# http://example.com/elsie
# http://example.com/lacie
# http://example.com/tillie
This can be changed to:
log_data = [a.get('href') for a in soup.find_all('a')]
and then written to a file like this:
with open('gw2_urls.txt', 'w') as urls:
    # write each collected link on its own line
    for link in log_data:
        urls.write(link + "\n")
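As the answer points out, soup.find_all('script')[8] only works when the page actually contains at least nine script tags; a small defensive sketch (the None fallback is just an assumption about how missing data might be handled):

scripts = soup.find_all('script')
if len(scripts) > 8:   # the page must contain at least nine <script> tags
    data = scripts[8]
else:
    data = None        # hypothetical fallback; handle missing data however suits the pipeline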
Posted on 2022-09-11 08:49:31
The proper way to read the lines of a text file in Python is:
with open('gw2_urls.txt', 'r') as f:
    urls = f.readlines()

for url in urls:
    print(log_scrape(url))
For more details on readlines(), see https://www.w3schools.com/python/ref_file_readlines.asp
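One thing to keep in mind: each line read this way still ends with a newline character, and passing a URL to requests.get with that trailing '\n' attached can make the server return a page that does not contain the expected ninth script tag, which would explain the index error seen only in the loop. A small sketch (same gw2_urls.txt as above) that strips the whitespace before calling log_scrape:

with open('gw2_urls.txt', 'r') as f:
    # strip the trailing '\n' (and skip blank lines) before using each URL
    urls = [line.strip() for line in f if line.strip()]

for url in urls:
    print(log_scrape(url))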
https://stackoverflow.com/questions/73677874