
ExecutorLostFailure (executor 4 exited caused by one of the running tasks) Reason: Slave lost Driver stacktrace

Stack Overflow user
Asked on 2022-08-25 18:29:45
1 answer · 289 views · 0 followers · 0 votes

I am trying to run an ALS model, and I keep hitting the same error.

Here is my Spark configuration:

spark_config["spark.executor.memory"] = "32G"
spark_config["spark.executor.memoryOverhead"] = "20G"
spark_config["spark.executor.cores"] = "32"
spark_config["spark.driver.memory"] = "32G"
# spark_config["spark.shuffle.memoryFraction"] = "0" 


# Executor config
spark_config["spark.dynamicAllocation.enabled"] = "true"
spark_config["spark.dynamicAllocation.minExecutors"] = "100"
spark_config["spark.dynamicAllocation.maxExecutors"] = "300"

Here is my model-training code:

df = spark.read.parquet('file.parquet')
df = df.filter(df.item_id.isNotNull())

X_train, X_test = df.randomSplit([0.8, 0.2]) 
als = ALS(userCol= "cid_int", itemCol= "item_id", ratingCol= "score", rank=10, maxIter=10, seed=0)
model = als.fit(X_train)

Here is the error:

Py4JJavaError: An error occurred while calling o194.fit.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 4 in stage 5.0 failed 4 times, most recent failure: Lost task 4.3 in stage 5.0 (TID 160, p13explorpdp01-sw-1zwh.c.wmt-bfdms-p13expprod.internal, executor 4): ExecutorLostFailure (executor 4 exited caused by one of the running tasks) Reason: Slave lost
Driver stacktrace:
    at org.apache.spark.scheduler.DAGScheduler.failJobAndIndependentStages(DAGScheduler.scala:1926)
    at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2(DAGScheduler.scala:1914)
    at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2$adapted(DAGScheduler.scala:1913)
    at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
    at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
    at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1913)
    at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1(DAGScheduler.scala:948)
    at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1$adapted(DAGScheduler.scala:948)
    at scala.Option.foreach(Option.scala:407)
    at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:948)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2147)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2096)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2085)
    at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49)
    at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:759)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:2076)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:2097)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:2116)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:2141)
    at org.apache.spark.rdd.RDD.count(RDD.scala:1213)
    at org.apache.spark.ml.recommendation.ALS$.train(ALS.scala:932)
    at org.apache.spark.ml.recommendation.ALS.$anonfun$fit$1(ALS.scala:676)
    at org.apache.spark.ml.util.Instrumentation$.$anonfun$instrumented$1(Instrumentation.scala:185)
    at scala.util.Try$.apply(Try.scala:213)
    at org.apache.spark.ml.util.Instrumentation$.instrumented(Instrumentation.scala:185)
    at org.apache.spark.ml.recommendation.ALS.fit(ALS.scala:658)
    at org.apache.spark.ml.recommendation.ALS.fit(ALS.scala:569)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
    at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
    at py4j.Gateway.invoke(Gateway.java:282)
    at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
    at py4j.commands.CallCommand.execute(CallCommand.java:79)
    at py4j.GatewayConnection.run(GatewayConnection.java:238)
    at java.lang.Thread.run(Thread.java:748)

I have tried tuning my Spark configuration with different parameters, but nothing has helped.


1 Answer

Stack Overflow user
Answered on 2022-08-26 07:08:34

org.apache.spark.SparkException: Job aborted due to stage failure: Task 4 in stage 5.0 failed 4 times, most recent failure: Lost task 4.3 in stage 5.0 (TID 160, p13explorpdp01-sw-1zwh.c.wmt-bfdms-p13expprod.internal, executor 4): ExecutorLostFailure (executor 4 exited caused by one of the running tasks) Reason: Slave lost

Your Spark application failed at a very early stage. Are you running Spark on YARN? This error shows that one of your tasks failed and caused the executor to exit.

It looks like the failure happened before model training even started, in what should be the data-transformation part. If only this one task fails while the other tasks in the same stage succeed, check whether your data is skewed across partitions. If most tasks in the same stage fail, you should increase the memory for this Spark application.

0 votes
Original content provided by Stack Overflow.
Original link: https://stackoverflow.com/questions/73492208
