I am pulling data from a REST API. The problem is that the data set is large, so the response is paginated. I handle this by first reading how many pages of data there are and then issuing a request for each page. The only issue is that there are roughly 1.5K pages in total, so actually fetching everything and appending it to the CSV takes a very long time. Is there a faster way to do this?
This is the endpoint I am targeting: https://developer.keeptruckin.com/reference#get-logs
import requests
import csv

url = 'https://api.keeptruckin.com/v1/logs?start_date=2019-03-09'
header = {'x-api-key': 'API KEY HERE'}

# First request is only used to read the page count from the pagination block
r = requests.get(url, headers=header)
result = r.json()
num_pages = result['pagination']['total']
print(num_pages)

csvheader = ['First Name', 'Last Name', 'Date', 'Time', 'Type', 'Location']
drivers = {"barmx1045", "aposx001", "mcqkl002", "coudx014", "ruscx013", "loumx001",
           "robkr002", "masgx009", "coxed001", "mcamx009", "linmx024", "woldj002", "fosbl004"}

for page in range(2, num_pages + 1):
    r = requests.get(url, headers=header, params={'page_no': page})
    result = r.json()
    with open('myfile.csv', 'a+', newline='') as csvfile:
        writer = csv.writer(csvfile, quoting=csv.QUOTE_ALL)
        ##writer.writerow(csvheader)
        for log in result['logs']:
            username = log['log']['driver']['username']
            first_name = log['log']['driver']['first_name']
            last_name = log['log']['driver']['last_name']
            for event in log['log']['events']:
                start_time = event['event']['start_time']
                date, time = start_time.split('T')
                event_type = event['event']['type']
                location = event['event']['location'] or "N/A"
                if username in drivers:
                    writer.writerow((first_name, last_name, date, time, event_type, location))
Posted on 2019-05-30 09:05:11
First option: most paginated APIs let you change the page size. See https://developer.keeptruckin.com/reference#pagination and try setting the per_page field to 100 instead of the default of 25 records per request.
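A minimal sketch of that first option, assuming per_page is accepted as a query parameter alongside start_date and page_no (100 is the value suggested above; check the pagination docs for the actual maximum):

```python
import requests

url = 'https://api.keeptruckin.com/v1/logs'
header = {'x-api-key': 'API KEY HERE'}

# Ask for 100 records per page instead of the default 25, which cuts the
# number of round trips roughly by a factor of four.
params = {'start_date': '2019-03-09', 'per_page': 100, 'page_no': 1}
r = requests.get(url, headers=header, params=params)
result = r.json()
```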
Second option: you can potentially pull several pages at once by using multiple threads/processes and splitting up which pages each thread/process is responsible for, as sketched below.
https://stackoverflow.com/questions/56370305