I'm running JupyterLab on macOS. Part of the code:
new_list = []
for k in get_matching_s3_keys(bucket='cw-milenko-tests', prefix='Json_gzips/ticr_calculated_2', suffix='.gz'):
    new_list.append(k)

dfs = [spark.read.json(file) for file in new_list]
print([len(df.schema) for df in dfs])  # list comprehension, since printing a bare map object shows nothing useful in Python 3

I download from S3 and then save into a list. I got this error:
AnalysisException: Path does not exist: file:/opt/workspace/Json_gzips/ticr_calculated_2_2020-05-27T00-01-21.json.gz;

This is the Spark cluster I'm using:

I used this repo: spark cluster on docker
How can I check whether my Docker containers can communicate?
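One quick check (a minimal sketch; the exact network name docker-compose generates depends on the project directory, so substitute whatever docker network ls reports):

docker network ls                        # find the network the compose project created
docker network inspect <network-name>    # all four containers should be listed under "Containers"
# resolve a service name from inside a container; assumes the jupyterlab image ships Python
docker exec jupyterlab python -c "import socket; print(socket.gethostbyname('spark-master'))"

If the last command prints an IP address, the containers share a network and can reach each other by name.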
docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
a01477cd9316 andreper/spark-worker:latest "/bin/sh -c 'bin/spa…" 4 days ago Up 3 hours 0.0.0.0:8082->8081/tcp spark-worker-2
f448de886c72 andreper/spark-worker:latest "/bin/sh -c 'bin/spa…" 4 days ago Up 3 hours 0.0.0.0:8081->8081/tcp spark-worker-1
5789c47ef46e andreper/jupyterlab:latest "/bin/sh -c 'jupyter…" 4 days ago Up 3 hours 0.0.0.0:8888->8888/tcp jupyterlab
63e3d3c90ed6 andreper/spark-master:latest "/bin/sh -c 'bin/spa…" 4 days ago Up 3 hours 0.0.0.0:7077->7077/tcp, 0.0.0.0:8080->8080/tcp spark-master

I checked the mounts of jupyterlab and spark-master:
milenko@Cloudwalkers-MacBook-Pro spark-cluster-on-docker % docker inspect -f '{{ .Mounts }}' 5789c47ef46e
[{volume hadoop-distributed-file-system /var/lib/docker/volumes/hadoop-distributed-file-system/_data /opt/workspace local rw true }]
milenko@Cloudwalkers-MacBook-Pro spark-cluster-on-docker % docker inspect -f '{{ .Mounts }}' 63e3d3c90ed6
[{volume hadoop-distributed-file-system /var/lib/docker/volumes/hadoop-distributed-file-system/_data /opt/workspace local rw true }]

How do I upload the file to the corresponding path in HDFS?
Posted on 2020-07-21 19:24:07
You can copy a file from local storage into HDFS with hdfs dfs -copyFromLocal /local/path/to.json /hdfs/path/to.json.
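A minimal sketch of the full sequence, assuming an HDFS namenode is actually running in this cluster and the hdfs CLI is available inside the container (the docker inspect output above shows the containers share the /opt/workspace volume, so the file is visible from spark-master too; the target directory /Json_gzips is just an illustrative choice):

docker exec -it spark-master bash
# inside the container:
hdfs dfs -mkdir -p /Json_gzips
hdfs dfs -copyFromLocal /opt/workspace/Json_gzips/ticr_calculated_2_2020-05-27T00-01-21.json.gz /Json_gzips/
hdfs dfs -ls /Json_gzips    # verify the file landed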
Alternatively, prefix the path with file:///path/to/your.json and check whether Spark can find it on the local filesystem.
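A minimal PySpark sketch of that second suggestion, using the exact path from the error message; it assumes the file really exists at that path inside every container that runs an executor (which it should here, since jupyterlab and the Spark containers mount the same volume at /opt/workspace):

df = spark.read.json("file:///opt/workspace/Json_gzips/ticr_calculated_2_2020-05-27T00-01-21.json.gz")
df.printSchema()         # Spark reads gzipped JSON transparently
print(len(df.schema))    # the same schema-length check as in the question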
https://stackoverflow.com/questions/63011256