
Commonly Used Spark Transformations and Actions

Original article by 数据社 | Last modified 2020-06-16

Transformations


map, filter

map and filter are the two most commonly used transformations in Spark, so let's start with them.

First, take a look at the following figure:

(Figure: how map and filter each transform the elements of an RDD)

The figure makes it clear what map and filter each do; let's demonstrate them in code.

    // build an RDD from a local collection
    val input = sc.parallelize(List(1, 2, 3, 4))

    // map applies a function to every element of the RDD
    val result1 = input.map(x => x * x)
    // filter keeps only the elements for which the predicate is true
    val result2 = input.filter(x => x != 1)

    print(result1.collect().mkString(","))
    print("\n")
    print(result2.collect().mkString(","))
    print("\n")

The output:

16/08/17 18:48:31 INFO DAGScheduler: ResultStage 0 (collect at Map.scala:17) finished in 0.093 s
16/08/17 18:48:31 INFO DAGScheduler: Job 0 finished: collect at Map.scala:17, took 0.268871 s
1,4,9,16
........
16/08/17 18:48:31 INFO DAGScheduler: ResultStage 1 (collect at Map.scala:19) finished in 0.000 s
16/08/17 18:48:31 INFO DAGScheduler: Job 1 finished: collect at Map.scala:19, took 0.018291 s
2,3,4

Now look back at the figure above; the behaviour of map and filter should be clear.

flatMap

Another commonly used transformation is flatMap: given a line of text, it splits it and emits each word as a separate element.

(Figure: the difference between map and flatMap)

Let's try it in code:

    val lines = sc.parallelize(List("hello world", "hi"))
    // flatMap splits every line into words and flattens the results into one RDD
    val words = lines.flatMap(line => line.split(" "))
    print(words.first())
    print("\n")

The output:

16/08/17 19:23:24 INFO DAGScheduler: Job 2 finished: first at Map.scala:24, took 0.016987 s
hello
16/08/17 19:23:24 INFO SparkContext: Invoking stop() from shutdown hook

What if we change the delimiter?

    val words = lines.flatMap(line => line.split(","))

What does the output look like now?

16/08/17 19:33:14 INFO TaskSchedulerImpl: Removed TaskSet 2.0, whose tasks have all completed, from pool 
hello world
16/08/17 19:33:14 INFO SparkContext: Invoking stop() from shutdown hook

Just as you would expect: no line contains a comma, so each line stays whole and first() returns "hello world".
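To make the contrast with map concrete, here is a minimal sketch, assuming the same SparkContext sc and the lines RDD defined above:

    // map produces exactly one output element per input element,
    // so here every element is an Array[String]
    val mapped = lines.map(line => line.split(" "))
    println(mapped.collect().map(_.mkString("[", " ", "]")).mkString(", "))  // [hello world], [hi]

    // flatMap flattens those per-line arrays into a single RDD of words
    val flatMapped = lines.flatMap(line => line.split(" "))
    println(flatMapped.collect().mkString(", "))                             // hello, world, hi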

distinct, union, intersection, subtract

A few more commonly used transformations are distinct, union, intersection and subtract.

(Figure: set-style operations on two RDDs: distinct, union, intersection, subtract)

Let's see them in code:

    val rdd1 = sc.parallelize(List("coffee", "coffee", "panda", "monkey", "tea"))
    val rdd2 = sc.parallelize(List("coffee", "monkey", "kitty"))

    // distinct removes duplicate elements from rdd1
    rdd1.distinct().take(100).foreach(println)

Result:

16/08/17 19:52:29 INFO DAGScheduler: ResultStage 4 (take at Map.scala:30) finished in 0.047 s
16/08/17 19:52:29 INFO TaskSchedulerImpl: Removed TaskSet 4.0, whose tasks have all completed, from pool 
16/08/17 19:52:29 INFO DAGScheduler: Job 3 finished: take at Map.scala:30, took 0.152405 s
monkey
coffee
panda
tea
16/08/17 19:52:29 INFO SparkContext: Starting job: take at Map.scala:32

Code:

    // union concatenates rdd1 and rdd2 without removing duplicates
    rdd1.union(rdd2).take(100).foreach(println)

Result:

6/08/17 19:52:29 INFO DAGScheduler: Job 5 finished: take at Map.scala:32, took 0.011825 s
coffee
coffee
panda
monkey
tea
coffee
monkey
kitty
16/08/17 19:52:30 INFO SparkContext: Starting job: take at Map.scala:34
16/08/17 19:52:30 INFO DAGScheduler: Registering RDD 11 (intersection at Map.scala:34)
16/08/17 19:52:30 INFO DAGScheduler: Registering RDD 12 (intersection at Map.scala:34)

Code:

    // intersection keeps only the elements that appear in both RDDs (duplicates removed)
    rdd1.intersection(rdd2).take(100).foreach(println)

Result:

16/08/17 19:52:30 INFO TaskSetManager: Finished task 0.0 in stage 9.0 (TID 9) in 31 ms on localhost (1/1)
16/08/17 19:52:30 INFO TaskSchedulerImpl: Removed TaskSet 9.0, whose tasks have all completed, from pool 
16/08/17 19:52:30 INFO DAGScheduler: ResultStage 9 (take at Map.scala:34) finished in 0.031 s
16/08/17 19:52:30 INFO DAGScheduler: Job 6 finished: take at Map.scala:34, took 0.060785 s
monkey
coffee
16/08/17 19:52:30 INFO SparkContext: Starting job: take at Map.scala:36

Code:

    // subtract keeps the elements of rdd1 that do not appear in rdd2
    rdd1.subtract(rdd2).take(100).foreach(println)

The result this time is panda and tea, the elements of rdd1 that do not appear in rdd2.


Compare these outputs with the figure above and the four operations are easy to understand.

Actions


That covers the common transformations; now let's look at the commonly used actions:

reduce, countByValue, takeOrdered, takeSample, aggregate

First, reduce:

    val rdd5 = sc.parallelize(List(1, 2, 3, 4))
    // reduce combines the elements pairwise with the given function, here summing them
    print("reduce action:" + rdd5.reduce((x, y) => x + y) + "\n")
16/08/18 11:51:16 INFO DAGScheduler: Job 15 finished: reduce at Function.scala:55, took 0.012698 s
reduce action:10
16/08/18 11:51:16 INFO SparkContext: Starting job: aggregate at Function.scala:57

countByValue

    // countByValue returns a Map from each distinct element to the number of times it occurs
    print(rdd1.countByValue() + "\n")
16/08/18 11:51:16 INFO DAGScheduler: Job 11 finished: countByValue at Function.scala:48, took 0.031726 s
Map(monkey -> 1, coffee -> 2, panda -> 1, tea -> 1)
16/08/18 11:51:16 INFO SparkContext: Starting job: takeOrdered at Function.scala:50

takeOrdered

    // takeOrdered returns (at most) the first 10 elements in ascending order
    rdd1.takeOrdered(10).take(100).foreach(println)
16/08/18 11:51:16 INFO DAGScheduler: Job 12 finished: takeOrdered at Function.scala:50, took 0.026160 s
coffee
coffee
monkey
panda
tea
16/08/18 11:51:16 INFO SparkContext: Starting job: takeSample at Function.scala:52
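
takeSample

takeSample appears in the list above but its output was not shown in the run. A minimal sketch, assuming the same rdd1 as before (the sampled elements differ from run to run):

    // draw 3 elements at random; the first argument controls whether
    // the same element may be sampled more than once
    val sampleWithReplacement = rdd1.takeSample(true, 3)
    val sampleWithoutReplacement = rdd1.takeSample(false, 3)
    println(sampleWithReplacement.mkString(","))
    println(sampleWithoutReplacement.mkString(","))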

aggregate

This one deserves a closer look.

The Spark documentation defines aggregate as follows:

def aggregate[U](zeroValue: U)(seqOp: (U, T) ⇒ U, combOp: (U, U) ⇒ U)(implicit arg0: ClassTag[U]): U

Aggregate the elements of each partition, and then the results for all the partitions, using given combine functions and a neutral "zero value". This function can return a different result type, U, than the type of this RDD, T. Thus, we need one operation for merging a T into an U and one operation for merging two U's, as in scala.TraversableOnce. Both of these functions are allowed to modify and return their first argument instead of creating a new U to avoid memory allocation.

seqOp aggregates the elements within each partition, and combOp then aggregates the per-partition results; both operations start from the same initial value, zeroValue. Inside a partition, seqOp walks every element T: the first T is combined with zeroValue, the result becomes the accumulator for the next T, and so on until the partition is exhausted. combOp then merges the per-partition results into a single value. Because aggregate can return a type U different from the RDD's element type T, it needs one operation (seqOp) to fold a T into a U and another (combOp) to merge two U's.

    val rdd5 = sc.parallelize(List(1, 2, 3, 4))
    // zeroValue is (0, 0): the first slot accumulates the sum, the second counts the elements
    val rdd6 = rdd5.aggregate((0, 0))(
      (acc, v) => (acc._1 + v, acc._2 + 1),   // seqOp: fold one element into the accumulator
      (a, b) => (a._1 + b._1, a._2 + b._2))   // combOp: merge two per-partition accumulators
    print("aggregate action : " + rdd6 + "\n")

The result:

16/08/18 11:51:16 INFO DAGScheduler: Job 16 finished: aggregate at Function.scala:57, took 0.011686 s
aggregate action : (10,4)
16/08/18 11:51:16 INFO SparkContext: Invoking stop() from shutdown hook

We can use this run to understand how aggregate works:

Step 1: seqOp walks the elements of rdd5, folding each one into the first slot of the accumulator: 0+1, 1+2, 3+3, 6+4, giving 10. Step 2: at the same time it adds 1 to the second slot for every element: 0+1, 1+1, 2+1, 3+1, giving 4. Step 3: combOp merges the per-partition results. This run used a single node, so the answer is already (10,4). On a cluster the data would first be spread over several partitions, say (1,2) and (3,4); the per-partition results would then be (3,2) and (7,2), and the combine step would merge them into (10,4).
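
To reproduce the multi-partition behaviour described above, here is a minimal sketch; the second argument to parallelize (the number of partitions) is set explicitly just for illustration:

    // force two partitions, so the data is split roughly into (1,2) and (3,4)
    val twoParts = sc.parallelize(List(1, 2, 3, 4), 2)
    val result = twoParts.aggregate((0, 0))(
      (acc, v) => (acc._1 + v, acc._2 + 1),   // seqOp per partition: (3,2) and (7,2)
      (a, b) => (a._1 + b._1, a._2 + b._2))   // combOp merges them into (10,4)
    println(result)  // (10,4)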

With that, the behaviour of aggregate should be clear.

That concludes this overview of the commonly used Transformations and Actions. For beginners, running each of these functions yourself is the best way to understand what they do.

PS: source code

Original content statement: this article was published on the Tencent Cloud Developer Community with the author's authorization and may not be reproduced without permission.

For infringement concerns, please contact cloudcommunity@tencent.com for removal.
