
Spark Parquet on S3 error: AmazonS3Exception: Status Code: 403, AWS Service: Amazon S3, AWS Request ID: xxxxx, AWS Error Code: null

Stack Overflow user
Asked on 2017-12-19 19:59:02
1 answer · 4.2K views · 0 followers · 4 votes

I am trying to read Parquet files stored in AWS S3 and I am getting the following error.

17/12/19 11:27:40 DEBUG DAGScheduler: ShuffleMapTask finished on 0
17/12/19 11:27:40 DEBUG DAGScheduler: submitStage(ResultStage 2)
17/12/19 11:27:40 DEBUG DAGScheduler: missing: List(ShuffleMapStage 1)
17/12/19 11:27:40 DEBUG DAGScheduler: submitStage(ShuffleMapStage 1)
17/12/19 11:27:40 DEBUG TaskSchedulerImpl: parentName: , name: TaskSet_1, runningTasks: 2
17/12/19 11:27:40 WARN TaskSetManager: Lost task 2.0 in stage 1.0 (TID 4, ip-xxx-xxx-xxx-xxx.ec2.internal): com.amazonaws.services.s3.model.AmazonS3Exception: Status Code: 403, AWS Service: Amazon S3, AWS Request ID: xxxxxxx, AWS Error Code: null, AWS Error Message: Forbidden, S3 Extended Request ID: xxxxxxxx/xxxxxxxxx=
    at com.amazonaws.http.AmazonHttpClient.handleErrorResponse(AmazonHttpClient.java:798)
    at com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:421)
    at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:232)
    at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3528)
    at com.amazonaws.services.s3.AmazonS3Client.getObjectMetadata(AmazonS3Client.java:976)
    at com.amazonaws.services.s3.AmazonS3Client.getObjectMetadata(AmazonS3Client.java:956)
    at org.apache.hadoop.fs.s3a.S3AFileSystem.getFileStatus(S3AFileSystem.java:688)
    at org.apache.hadoop.fs.s3a.S3AFileSystem.getFileStatus(S3AFileSystem.java:71)
    at org.apache.parquet.hadoop.ParquetFileReader.readFooter(ParquetFileReader.java:385)
    at org.apache.parquet.hadoop.ParquetRecordReader.initializeInternalReader(ParquetRecordReader.java:157)
    at org.apache.parquet.hadoop.ParquetRecordReader.initialize(ParquetRecordReader.java:140)
    at org.apache.spark.rdd.SqlNewHadoopRDD$$anon$1.<init>(SqlNewHadoopRDD.scala:180)
    at org.apache.spark.rdd.SqlNewHadoopRDD.compute(SqlNewHadoopRDD.scala:126)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
    at org.apache.spark.scheduler.Task.run(Task.scala:89)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:213)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)

17/12/19 11:27:40 DEBUG DAGScheduler: submitStage(ResultStage 2)

After enabling debug logging in Spark, I can now also see an NPE when the Spark job that reads from S3 is submitted.

17/12/19 11:27:39 INFO SparkContext: Starting job: count at TestRead.scala:54
17/12/19 11:27:39 INFO DAGScheduler: Registering RDD 4 (count at TestRead.scala:54)
17/12/19 11:27:39 INFO DAGScheduler: Got job 1 (count at TestRead.scala:54) with 1 output partitions
17/12/19 11:27:39 INFO DAGScheduler: Final stage: ResultStage 2 (count at TestRead.scala:54)
17/12/19 11:27:39 INFO DAGScheduler: Parents of final stage: List(ShuffleMapStage 1)
17/12/19 11:27:39 INFO DAGScheduler: Missing parents: List(ShuffleMapStage 1)
17/12/19 11:27:39 DEBUG DAGScheduler: submitStage(ResultStage 2)
17/12/19 11:27:39 DEBUG DAGScheduler: missing: List(ShuffleMapStage 1)
17/12/19 11:27:39 DEBUG DAGScheduler: submitStage(ShuffleMapStage 1)
17/12/19 11:27:39 DEBUG DAGScheduler: missing: List()
17/12/19 11:27:39 INFO DAGScheduler: Submitting ShuffleMapStage 1 (MapPartitionsRDD[4] at count at TestRead.scala:54), which has no missing parents
17/12/19 11:27:39 DEBUG DAGScheduler: submitMissingTasks(ShuffleMapStage 1)
17/12/19 11:27:39 DEBUG ParquetRelation$$anonfun$buildInternalScan$1$$anon$1: Failed to use InputSplit#getLocationInfo.
java.lang.NullPointerException
    at scala.collection.mutable.ArrayOps$ofRef$.length$extension(ArrayOps.scala:114)
    at scala.collection.mutable.ArrayOps$ofRef.length(ArrayOps.scala:114)
    at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:32)
    at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:108)
    at org.apache.spark.rdd.HadoopRDD$.convertSplitLocationInfo(HadoopRDD.scala:412)
    at org.apache.spark.rdd.SqlNewHadoopRDD.getPreferredLocations(SqlNewHadoopRDD.scala:259)
    at org.apache.spark.rdd.RDD$$anonfun$preferredLocations$2.apply(RDD.scala:257)
    at org.apache.spark.rdd.RDD$$anonfun$preferredLocations$2.apply(RDD.scala:257)
    at scala.Option.getOrElse(Option.scala:120)
    at org.apache.spark.rdd.RDD.preferredLocations(RDD.scala:256)
    at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$getPreferredLocsInternal(DAGScheduler.scala:1545)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$org$apache$spark$scheduler$DAGScheduler$$getPreferredLocsInternal$2$$anonfun$apply$1.apply$mcVI$sp(DAGScheduler.scala:1556)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$org$apache$spark$scheduler$DAGScheduler$$getPreferredLocsInternal$2$$anonfun$apply$1.apply(DAGScheduler.scala:1555)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$org$apache$spark$scheduler$DAGScheduler$$getPreferredLocsInternal$2$$anonfun$apply$1.apply(DAGScheduler.scala:1555)
    at scala.collection.immutable.List.foreach(List.scala:318)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$org$apache$spark$scheduler$DAGScheduler$$getPreferredLocsInternal$2.apply(DAGScheduler.scala:1555)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$org$apache$spark$scheduler$DAGScheduler$$getPreferredLocsInternal$2.apply(DAGScheduler.scala:1553)
    at scala.collection.immutable.List.foreach(List.scala:318)
    at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$getPreferredLocsInternal(DAGScheduler.scala:1553)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$org$apache$spark$scheduler$DAGScheduler$$getPreferredLocsInternal$2$$anonfun$apply$1.apply$mcVI$sp(DAGScheduler.scala:1556)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$org$apache$spark$scheduler$DAGScheduler$$getPreferredLocsInternal$2$$anonfun$apply$1.apply(DAGScheduler.scala:1555)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$org$apache$spark$scheduler$DAGScheduler$$getPreferredLocsInternal$2$$anonfun$apply$1.apply(DAGScheduler.scala:1555)
    at scala.collection.immutable.List.foreach(List.scala:318)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$org$apache$spark$scheduler$DAGScheduler$$getPreferredLocsInternal$2.apply(DAGScheduler.scala:1555)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$org$apache$spark$scheduler$DAGScheduler$$getPreferredLocsInternal$2.apply(DAGScheduler.scala:1553)
    at scala.collection.immutable.List.foreach(List.scala:318)
    at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$getPreferredLocsInternal(DAGScheduler.scala:1553)
    at org.apache.spark.scheduler.DAGScheduler.getPreferredLocs(DAGScheduler.scala:1519)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$14.apply(DAGScheduler.scala:969)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$14.apply(DAGScheduler.scala:969)
    at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
    at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
    at scala.collection.Iterator$class.foreach(Iterator.scala:727)
    at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
    at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
    at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
    at scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
    at scala.collection.AbstractTraversable.map(Traversable.scala:105)
    at org.apache.spark.scheduler.DAGScheduler.submitMissingTasks(DAGScheduler.scala:969)
    at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$submitStage(DAGScheduler.scala:921)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$org$apache$spark$scheduler$DAGScheduler$$submitStage$4.apply(DAGScheduler.scala:924)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$org$apache$spark$scheduler$DAGScheduler$$submitStage$4.apply(DAGScheduler.scala:923)
    at scala.collection.immutable.List.foreach(List.scala:318)
    at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$submitStage(DAGScheduler.scala:923)
    at org.apache.spark.scheduler.DAGScheduler.handleJobSubmitted(DAGScheduler.scala:861)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1607)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1599)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1588)
    at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
17/12/19 11:27:39 DEBUG ParquetRelation$$anonfun$buildInternalScan$1$$anon$1: Failed to use InputSplit#getLocationInfo.

I am not sure why this error occurs. Here is my code:

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.{SQLContext, SaveMode}

val conf = new SparkConf(true).setAppName("TestRead")
conf.set(SPARK_CASS_CONN_HOST, config.cassHost)
conf.set("spark.cassandra.connection.timeout_ms","10000")

val sc = new SparkContext(conf)
sc.hadoopConfiguration.set("fs.s3a.impl", "org.apache.hadoop.fs.s3a.S3AFileSystem")
sc.hadoopConfiguration.set("fs.s3a.access.key", config.s3AccessKey)
sc.hadoopConfiguration.set("fs.s3a.secret.key", config.s3SecretKey)

val sqlContext = new SQLContext(sc)

val data = sqlContext.read.parquet(s"s3a://${config.s3bucket}/${config.s3folder}")

println(s"data size is ${data.count()}")
data.show()
if(config.cassWriteToTable){
      println("Writing to Cassandra Table")
      data.write.mode(SaveMode.Append).format("org.apache.spark.sql.cassandra").options(Map("table" -> config.cassTable, "keyspace" -> config.cassKeyspace)).save()
}

println("Stopping TestRead...")
sc.stop()

I have included the following dependencies in my build.sbt file:

"org.apache.spark" % "spark-core_2.10" % "1.6.1" % "provided,test" ,
  "org.apache.spark" % "spark-sql_2.10" % "1.6.1" % "provided",
  "com.typesafe.play" % "play-json_2.10" % "2.4.6" excludeAll(ExclusionRule(organization = "com.fasterxml.jackson.core")),
  "mysql" % "mysql-connector-java" % "5.1.39",
  "com.amazonaws" % "aws-java-sdk-pom" % "1.11.7" exclude("commons-beanutils","commons-beanutils") exclude("commons-collections","commons-collections") excludeAll ExclusionRule(organization = "javax.servlet")

What could be the problem here?


1 Answer

Stack Overflow user

Answered on 2018-01-01 21:25:50

403 / Forbidden: the credentials you are logging in with do not have permission to access the file you are trying to read.
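
One way to narrow this down, independently of Spark, is to issue the same HEAD request that fails in the stack trace (S3AFileSystem.getFileStatus calls getObjectMetadata) directly with the AWS SDK. This is only a sketch with hypothetical placeholder values for the credentials, bucket, and object key; it assumes the same aws-java-sdk 1.x that Spark is already pulling in:

import com.amazonaws.auth.BasicAWSCredentials
import com.amazonaws.services.s3.AmazonS3Client

// Placeholders: substitute the same access key, secret key, bucket and
// object key that the Spark job uses.
val accessKey = "YOUR_ACCESS_KEY"
val secretKey = "YOUR_SECRET_KEY"
val bucket    = "your-bucket"
val key       = "your-folder/part-00000.parquet"

val s3 = new AmazonS3Client(new BasicAWSCredentials(accessKey, secretKey))

// getObjectMetadata issues the same kind of HEAD request seen in the trace;
// a 403 here points at the credentials or bucket policy rather than Spark.
val meta = s3.getObjectMetadata(bucket, key)
println(s"Object found, ${meta.getContentLength} bytes")

If this standalone call also returns 403, the fix is typically on the IAM or bucket-policy side (for example, granting s3:GetObject and s3:ListBucket on that bucket/prefix), not in the Spark code.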

Regarding the NPE: file a bug report against Spark on issues.apache.org; they will work out who is to blame.

Before doing that, make sure you are on the latest Spark release and search for that NPE. There is no need to file a duplicate, especially if it has already been fixed.

Votes: 0
The original content of this page was provided by Stack Overflow; translation support by Tencent Cloud's IT-domain translation engine.
Original link: https://stackoverflow.com/questions/47886524