I have run my program many times on a 20-node cluster. Now, every time I run it, I suddenly get the following error:
15/04/19 16:52:35 WARN scheduler.TaskSetManager: Lost task 35.0 in stage 9.0 (TID 384, ip-XXX.XXX.compute.internal): java.io.FileNotFoundException: /mnt/spark/spark-local-XXX-ebd3/18/shuffle_2_35_64 (No space left on device)
java.io.FileOutputStream.open(Native Method)
java.io.FileOutputStream.<init>(FileOutputStream.java:221)
org.apache.spark.storage.DiskBlockObjectWriter.open(BlockObjectWriter.scala:123)
org.apache.spark.storage.DiskBlockObjectWriter.write(BlockObjectWriter.scala:192)
org.apache.spark.shuffle.hash.HashShuffleWriter$$anonfun$write$1.apply(HashShuffleWriter.scala:67)
org.apache.spark.shuffle.hash.HashShuffleWriter$$anonfun$write$1.apply(HashShuffleWriter.scala:65)
scala.collection.Iterator$class.foreach(Iterator.scala:727)
scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
org.apache.spark.shuffle.hash.HashShuffleWriter.write(HashShuffleWriter.scala:65)
org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:68)
org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
org.apache.spark.scheduler.Task.run(Task.scala:54)
org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:178)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
java.lang.Thread.run(Thread.java:745)

Checking the UI, it indicates there is practically nothing stored on the nodes. I have probably run the program 15 times already, and only now has this suddenly started happening. Why would it happen out of nowhere, and how do I fix it?
Answered on 2015-04-19 19:25:48
"No space left on device" is a fairly self-explanatory exception: that node has run out of space on the mount where Spark writes its local files: /mnt/spark/.
Solution: go to the node (or nodes) and clean that directory out. rm -rf FTW (see the sketch below).
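As a minimal illustration, the cleanup might look like the following, assuming the local directory is /mnt/spark as in the error above and that no job is currently running on the node:

    # check how full the mount holding Spark's local files is
    df -h /mnt/spark
    # see how much space the leftover shuffle/spill directories occupy
    du -sh /mnt/spark/spark-local-*
    # remove the stale per-application local directories
    rm -rf /mnt/spark/spark-local-*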
Temporary data is typically left behind when a job is interrupted before it terminates, whether through manual intervention or through failure (see the configuration sketch below).
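If the cluster runs in Spark standalone mode, the workers can also be told to sweep old application data periodically. The following spark-env.sh sketch only illustrates that option; it cleans finished applications' work directories and does not replace manually removing orphaned spark-local-* directories left by crashed executors:

    # conf/spark-env.sh on each worker (Spark standalone mode only)
    # enable periodic cleanup of finished applications' work directories,
    # sweep every 30 minutes, and drop data older than one hour
    export SPARK_WORKER_OPTS="-Dspark.worker.cleanup.enabled=true \
      -Dspark.worker.cleanup.interval=1800 \
      -Dspark.worker.cleanup.appDataTtl=3600"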
https://stackoverflow.com/questions/29732899