
Spark Tips 1: The RDD collect action is not suitable when a single element is too large

叶锦鲤 · 2018-03-15

collect is a very handy Spark RDD action: with a single call it retrieves all of the elements in an RDD. When those elements are Strings, the whole RDD can be turned into a List<String> effortlessly; it could hardly be more convenient.
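As a quick illustration, here is a minimal sketch (not from the original article; it assumes an already-created JavaSparkContext named sc, plus the usual imports java.util.Arrays, java.util.List and org.apache.spark.api.java.*):

// "sc" is an assumed, already-initialized JavaSparkContext
JavaRDD<String> words = sc.parallelize(Arrays.asList("spark", "kafka", "rdd"));
// collect() returns every element of the RDD to the driver as a plain List<String>
List<String> collected = words.collect();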

But wait: this convenient action has a weakness. It is not suitable when individual elements are fairly large. Let's look at an example. Consider the following code:

... ...

JavaPairInputDStream<String, String> messages = KafkaUtils.createDirectStream(
        jssc,
        String.class,
        String.class,
        StringDecoder.class,
        StringDecoder.class,
        kafkaParams,
        topicsSet
);

// Keep only the message payload (the value part of each Kafka record)
JavaDStream<String> lines = messages.map(new Function<Tuple2<String, String>, String>() {
    @Override
    public String call(Tuple2<String, String> tuple2) {
        return tuple2._2();
    }
});

lines.foreachRDD(new Function<JavaRDD<String>, Void>() {
    @Override
    public Void call(JavaRDD<String> strJavaRDD) throws Exception {
        // collect() pulls every message of this batch back to the driver
        List<String> collectedMessages = strJavaRDD.collect();
        List<String> sizeStrs = new ArrayList<String>();
        for (String message : collectedMessages) {
            if (message == null)
                continue;
            String logStr = "message size is " + message.length();
            sizeStrs.add(logStr);
        }
        saveToLog(outputLogPath, sizeStrs);
        return null;
    }
});

... ...

The code above runs fine as long as each individual Kafka message (that is, each element of the RDD) is small, say around 200 bytes. But once a single message grows past a certain size (for example 10 MB), the following exception is thrown:

sparkDriver-akka.actor.default-dispatcher-18 2015-10-15 21:52:28,606 ERROR JobScheduler - Error running job streaming job 1444971120000 ms.0
org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 238.0 failed 4 times, most recent failure: Lost task 0.3 in stage 238.0 (TID421, 127.0.0.1): ExecutorLostFailure (executor 123 lost)
Driver stacktrace:
    at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1215)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1204)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1203)
    at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
    at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1203)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:693)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:693)
    at scala.Option.foreach(Option.scala:236)
    at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:693)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1404)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1365)
    at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)

The reason is simple: collect() pulls every element back to the driver, and it cannot cope with data that "big"; single messages around 10 MB are already too much for it. For messages of this size, we can replace the last part of the code above with the following:

lines.foreachRDD(new Function<JavaRDD<String>, Void>() {
    @Override
    public Void call(JavaRDD<String> strJavaRDD) throws Exception {
        // Compute the (short) size string for each message on the executors
        JavaRDD<String> sizeRDD = strJavaRDD.map(new Function<String, String>() {
            @Override
            public String call(String message) throws Exception {
                if (message == null)
                    return null;
                String logStr = "Message size is " + message.length();
                return logStr;
            }
        });
        // Only the small size strings are collected back to the driver
        List<String> sizeStrs = sizeRDD.collect();
        saveToLog(outputLogPath, sizeStrs);
        return null;
    }
});
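The fix works because the per-message computation (taking the length) now runs inside map() on the executors, so only the short "Message size is ..." strings are shipped to the driver instead of the multi-megabyte messages themselves. If even those summary strings grew too large to collect, a further option (a hedged sketch, not from the original article; someOutputDir is an assumed directory path and this replaces the single-file saveToLog() used above) would be to skip collect() entirely and let the executors write the results:

// Each executor writes its own partition; nothing is pulled back to the driver
sizeRDD.saveAsTextFile(someOutputDir);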

Originally published: 2015-10-16
