
Spark jar job submission warning: Initial job has not accepted any resources

Error description

When the job is submitted to Spark, the following warning keeps appearing:

WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources

# The command used to submit the job
./spark-submit --master spark://node1:7077 --class ah.szxy.scalaspark.sql.DataSetAndDataFrame.CreateDataFrameFromHive /root/test/MySpark-1.0-SNAPSHOT-jar-with-dependencies.jar

If the job stays in this state, after 300 seconds it also fails with a timeout:

Exception in thread "main" java.util.concurrent.TimeoutException: Futures timed out after [300 seconds]
	at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:219)
	at scala.concurrent.impl.Promise$DefaultPromise.result(Promise.scala:223)
	at org.apache.spark.util.ThreadUtils$.awaitResult(ThreadUtils.scala:201)
	at org.apache.spark.sql.execution.exchange.BroadcastExchangeExec.doExecuteBroadcast(BroadcastExchangeExec.scala:136)
	at org.apache.spark.sql.execution.InputAdapter.doExecuteBroadcast(WholeStageCodegenExec.scala:367)
	at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeBroadcast$1.apply(SparkPlan.scala:144)
	at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeBroadcast$1.apply(SparkPlan.scala:140)
	at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:155)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
	at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:152)
	at org.apache.spark.sql.execution.SparkPlan.executeBroadcast(SparkPlan.scala:140)
	at org.apache.spark.sql.execution.joins.BroadcastHashJoinExec.prepareBroadcast(BroadcastHashJoinExec.scala:135)
	at org.apache.spark.sql.execution.joins.BroadcastHashJoinExec.codegenInner(BroadcastHashJoinExec.scala:232)
	at org.apache.spark.sql.execution.joins.BroadcastHashJoinExec.doConsume(BroadcastHashJoinExec.scala:102)
	at org.apache.spark.sql.execution.CodegenSupport$class.consume(WholeStageCodegenExec.scala:181)
	at org.apache.spark.sql.execution.FilterExec.consume(basicPhysicalOperators.scala:85)
	at org.apache.spark.sql.execution.FilterExec.doConsume(basicPhysicalOperators.scala:206)
	at org.apache.spark.sql.execution.CodegenSupport$class.consume(WholeStageCodegenExec.scala:181)
	at org.apache.spark.sql.execution.InputAdapter.consume(WholeStageCodegenExec.scala:354)
	at org.apache.spark.sql.execution.InputAdapter.doProduce(WholeStageCodegenExec.scala:383)
	at org.apache.spark.sql.execution.CodegenSupport$$anonfun$produce$1.apply(WholeStageCodegenExec.scala:88)
	at org.apache.spark.sql.execution.CodegenSupport$$anonfun$produce$1.apply(WholeStageCodegenExec.scala:83)
	at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:155)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
	at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:152)
	at org.apache.spark.sql.execution.CodegenSupport$class.produce(WholeStageCodegenExec.scala:83)
	at org.apache.spark.sql.execution.InputAdapter.produce(WholeStageCodegenExec.scala:354)
	at org.apache.spark.sql.execution.FilterExec.doProduce(basicPhysicalOperators.scala:125)
	at org.apache.spark.sql.execution.CodegenSupport$$anonfun$produce$1.apply(WholeStageCodegenExec.scala:88)
	at org.apache.spark.sql.execution.CodegenSupport$$anonfun$produce$1.apply(WholeStageCodegenExec.scala:83)
	at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:155)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
	at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:152)
	at org.apache.spark.sql.execution.CodegenSupport$class.produce(WholeStageCodegenExec.scala:83)
	at org.apache.spark.sql.execution.FilterExec.produce(basicPhysicalOperators.scala:85)
	at org.apache.spark.sql.execution.joins.BroadcastHashJoinExec.doProduce(BroadcastHashJoinExec.scala:97)
	at org.apache.spark.sql.execution.CodegenSupport$$anonfun$produce$1.apply(WholeStageCodegenExec.scala:88)
	at org.apache.spark.sql.execution.CodegenSupport$$anonfun$produce$1.apply(WholeStageCodegenExec.scala:83)
	at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:155)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
	at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:152)
	at org.apache.spark.sql.execution.CodegenSupport$class.produce(WholeStageCodegenExec.scala:83)
	at org.apache.spark.sql.execution.joins.BroadcastHashJoinExec.produce(BroadcastHashJoinExec.scala:39)
	at org.apache.spark.sql.execution.ProjectExec.doProduce(basicPhysicalOperators.scala:45)
	at org.apache.spark.sql.execution.CodegenSupport$$anonfun$produce$1.apply(WholeStageCodegenExec.scala:88)
	at org.apache.spark.sql.execution.CodegenSupport$$anonfun$produce$1.apply(WholeStageCodegenExec.scala:83)
	at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:155)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
	at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:152)
	at org.apache.spark.sql.execution.CodegenSupport$class.produce(WholeStageCodegenExec.scala:83)
	at org.apache.spark.sql.execution.ProjectExec.produce(basicPhysicalOperators.scala:35)
	at org.apache.spark.sql.execution.BaseLimitExec$class.doProduce(limit.scala:70)
	at org.apache.spark.sql.execution.LocalLimitExec.doProduce(limit.scala:97)
	at org.apache.spark.sql.execution.CodegenSupport$$anonfun$produce$1.apply(WholeStageCodegenExec.scala:88)
	at org.apache.spark.sql.execution.CodegenSupport$$anonfun$produce$1.apply(WholeStageCodegenExec.scala:83)
	at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:155)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
	at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:152)
	at org.apache.spark.sql.execution.CodegenSupport$class.produce(WholeStageCodegenExec.scala:83)
	at org.apache.spark.sql.execution.LocalLimitExec.produce(limit.scala:97)
	at org.apache.spark.sql.execution.WholeStageCodegenExec.doCodeGen(WholeStageCodegenExec.scala:524)
	at org.apache.spark.sql.execution.WholeStageCodegenExec.doExecute(WholeStageCodegenExec.scala:576)
	at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:131)
	at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:127)
	at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:155)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
	at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:152)
	at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:127)
	at org.apache.spark.sql.execution.SparkPlan.getByteArrayRdd(SparkPlan.scala:247)
	at org.apache.spark.sql.execution.SparkPlan.executeTake(SparkPlan.scala:337)
	at org.apache.spark.sql.execution.CollectLimitExec.executeCollect(limit.scala:38)
	at org.apache.spark.sql.Dataset.org$apache$spark$sql$Dataset$$collectFromPlan(Dataset.scala:3273)
	at org.apache.spark.sql.Dataset$$anonfun$head$1.apply(Dataset.scala:2484)
	at org.apache.spark.sql.Dataset$$anonfun$head$1.apply(Dataset.scala:2484)
	at org.apache.spark.sql.Dataset$$anonfun$52.apply(Dataset.scala:3254)
	at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:77)
	at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3253)
	at org.apache.spark.sql.Dataset.head(Dataset.scala:2484)
	at org.apache.spark.sql.Dataset.take(Dataset.scala:2698)
	at org.apache.spark.sql.Dataset.showString(Dataset.scala:254)
	at org.apache.spark.sql.Dataset.show(Dataset.scala:723)
	at org.apache.spark.sql.Dataset.show(Dataset.scala:682)
	at com.bjsxt.scalaspark.sql.DataSetAndDataFrame.CreateDataFrameFromHive$.main(CreateDataFrameFromHive.scala:24)
	at com.bjsxt.scalaspark.sql.DataSetAndDataFrame.CreateDataFrameFromHive.main(CreateDataFrameFromHive.scala)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
	at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:894)
	at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:198)
	at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:228)
	at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:137)
	at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)

How to resolve the error

  1. Following the warning's hint, open the Spark Master WebUI.
  2. Two applications are running. The first is the job we just submitted: it has been allocated no cores and sits in the WAITING state because it cannot acquire any resources. All of the resources are held by the second application, a spark-shell session in the RUNNING state, so we can simply kill that application from the WebUI.
  3. As the screenshot shows, the blocked job then starts running again.

Note: you can also avoid this problem by specifying core and memory parameters when submitting the job. The general idea is to make sure the application can actually acquire and use resources; a sketch follows.
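For illustration, here is the same submit command with explicit resource caps. This is a sketch only: --total-executor-cores and --executor-memory are standard spark-submit options in standalone mode, but the values below are assumptions to be adapted to your cluster.

# Illustrative only: cap the job at 2 cores and 1 GB per executor so the
# scheduler can satisfy the request even while other applications are running
./spark-submit --master spark://node1:7077 \
  --total-executor-cores 2 \
  --executor-memory 1g \
  --class ah.szxy.scalaspark.sql.DataSetAndDataFrame.CreateDataFrameFromHive \
  /root/test/MySpark-1.0-SNAPSHOT-jar-with-dependencies.jar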

Summary

  1. A running spark-shell holds cluster resources that a submitted job also needs, so close spark-shell sessions in other windows before submitting (a resource-capped alternative is sketched after this list).
  2. To start a spark-shell, run ./spark-shell --master spark://node1:7077 from Spark's bin directory.
  3. The Spark Master WebUI listens on port 8080.
  4. Ports used in this setup:
     50070  HDFS WebUI
     7077   Spark Master communication (RPC) port
     8080   Spark Master WebUI
     8081   Spark Worker WebUI
     18080  Spark HistoryServer WebUI
     4040   Spark Driver WebUI (job scheduling)
     9083   Hive metastore service port (on node3)
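If you want to keep an interactive shell open alongside batch jobs, capping the shell's resources at startup is one alternative to killing it. A minimal sketch, using the same standard options as above with illustrative values:

# Illustrative only: a shell that claims 1 core and 512 MB per executor,
# leaving the rest of the cluster free for submitted jobs
./spark-shell --master spark://node1:7077 --total-executor-cores 1 --executor-memory 512m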
