I have many URLs. Each URL points to a csv file, and each csv file has its own name.
I want to download the data from the URLs and save it on my computer.
I tried the code from "batch download zipped files in R", but it failed.
So I am wondering whether there is a simple way to batch-download the data from the URLs and save it on my computer.
urls = c(
'http://minio.tapdata.org.cn:9000/tap-bj-1km/input_v3/Tile_162_lonlat.csv.zip?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=24BJXNVDJVVCUTC9CQZ1%2F20220418%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20220418T011215Z&X-Amz-Expires=604800&X-Amz-SignedHeaders=host&X-Amz-Signature=3d203f787748209654fc863992c6b51f206df3146dd8054cf8b4aea1ffc9150f',
'http://minio.tapdata.org.cn:9000/tap-bj-1km/input_v3/2010/1/China_PM25_1km_2010_001_162.csv.zip?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=24BJXNVDJVVCUTC9CQZ1%2F20220418%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20220418T043413Z&X-Amz-Expires=604800&X-Amz-SignedHeaders=host&X-Amz-Signature=f3f8e4bbac9227e30e314dc8dd4dc0802a3e54719a07a7754ccae4609f0df330',
'http://minio.tapdata.org.cn:9000/tap-bj-1km/input_v3/2010/1/China_PM25_1km_2010_002_162.csv.zip?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=24BJXNVDJVVCUTC9CQZ1%2F20220418%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20220418T043413Z&X-Amz-Expires=604800&X-Amz-SignedHeaders=host&X-Amz-Signature=c27655cfbb61a7a6f9c9cb2c2e01624037e84c5bc0c4aecb59bb2975e2c21466',
'http://minio.tapdata.org.cn:9000/tap-bj-1km/input_v3/2010/1/China_PM25_1km_2010_003_162.csv.zip?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=24BJXNVDJVVCUTC9CQZ1%2F20220418%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20220418T043413Z&X-Amz-Expires=604800&X-Amz-SignedHeaders=host&X-Amz-Signature=c7baba0660d28263033dc0df5db4cdb851d1a7b6a36e5b369e3dfe658b8f5305'
)
df_urls = data.frame(url = urls) # all the urls saved in an R data frame
Posted on 2022-04-18 09:30:39
To download the files into your working directory, we can use the downloader package. Extract the zip file names from the urls with gsub().
library(downloader)  # note: download.file() below is actually base R (utils), so this is optional
lapply(urls, function(x){
  # build the zip file name from the last path segment of the URL,
  # keeping the text between the final '/' and the first '.'
  nam = gsub(".*[/]([^.]+)[.].*", "\\1", x)
  nam = paste0(nam, '.zip')
  # download each zip file to the working directory; mode = 'wb' writes binary
  download.file(x, nam, mode = 'wb')
})
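As a sketch of an alternative to the gsub() pattern above (this keeps the original ".csv.zip" suffix instead of rebuilding it with paste0), base R's basename() can extract the file name once the query string is stripped; the helper name zip_name is made up here for illustration:

```r
# Derive a local file name from a pre-signed URL: drop everything from
# the '?' onward, then take the last path component. Pure string work,
# so it can be tested without any network access.
zip_name <- function(url) {
  path <- sub("\\?.*$", "", url)  # remove the query string
  basename(path)                  # last path segment, e.g. "Tile_162_lonlat.csv.zip"
}

zip_name("http://minio.tapdata.org.cn:9000/tap-bj-1km/input_v3/Tile_162_lonlat.csv.zip?X-Amz-Algorithm=AWS4-HMAC-SHA256")
```

Either naming scheme can then be passed as the destfile argument of download.file() inside the lapply() loop.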
https://stackoverflow.com/questions/71909547