RDD Programming

Notes from the MOOC Spark编程基础 (Fundamentals of Spark Programming).

1. Creating RDDs

  • From a local file
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 2.1.0
      /_/
         
Using Scala version 2.11.8 (OpenJDK 64-Bit Server VM, Java 1.8.0_131)
Type in expressions to have them evaluated.
Type :help for more information.

scala> val lines = sc.textFile("file:///home/hadoop/workspace/word.txt")
lines: org.apache.spark.rdd.RDD[String] = file:////home/hadoop/workspace/word.txt MapPartitionsRDD[1] at textFile at <console>:24
  • From HDFS (either a full hdfs:// URI or a path resolved against the default file system works)
scala> val lines = sc.textFile("hdfs://localhost:9000/user/word.txt")
lines: org.apache.spark.rdd.RDD[String] = hdfs://localhost:9000/user/word.txt MapPartitionsRDD[3] at textFile at <console>:24
scala> val lines = sc.textFile("/user/word.txt")
lines: org.apache.spark.rdd.RDD[String] = /user/word.txt MapPartitionsRDD[9] at textFile at <console>:24
  • From a parallelized collection (an in-memory array or list)
scala> val array = Array(1,2,3,4,5)
array: Array[Int] = Array(1, 2, 3, 4, 5)

scala> val rdd = sc.parallelize(array)
rdd: org.apache.spark.rdd.RDD[Int] = ParallelCollectionRDD[12] at parallelize at <console>:26

2. RDD Transformations

  • filter(func): keep only the elements for which func returns true
scala> val linesWithSpark = lines.filter(line=>line.contains("spark"))
linesWithSpark: org.apache.spark.rdd.RDD[String] = MapPartitionsRDD[13] at filter at <console>:26
  • map(func): apply func to every element and return a new RDD of the results
scala> val rdd2 = rdd.map(x => x+10)
rdd2: org.apache.spark.rdd.RDD[Int] = MapPartitionsRDD[14] at map at <console>:28
scala> val words = lines.map(line => line.split(" "))
words: org.apache.spark.rdd.RDD[Array[String]] = MapPartitionsRDD[15] at map at <console>:26

Output: n elements, each of which is a String array.

  • flatMap(func): like map(func), but flattens the resulting collections into a single RDD
scala> val words = lines.flatMap(line => line.split(" "))
words: org.apache.spark.rdd.RDD[String] = MapPartitionsRDD[16] at flatMap at <console>:26

Output: all the individual words.

  • groupByKey() and reduceByKey(func): group values by key into a value list; the latter additionally reduces each key's value list with func (see the sketch below)
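A minimal sketch of the difference between the two, using a made-up key-value RDD in the shell:

// hypothetical (key, value) pairs for illustration
val pairs = sc.parallelize(Array(("spark", 1), ("hadoop", 1), ("spark", 1)))

// groupByKey(): one entry per key, with all of its values collected,
// e.g. ("spark", Iterable(1, 1)), ("hadoop", Iterable(1))
val grouped = pairs.groupByKey()

// reduceByKey(func): additionally folds each key's values with func,
// e.g. ("spark", 2), ("hadoop", 1)
val reduced = pairs.reduceByKey((a, b) => a + b)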

3. RDD Actions

Spark starts actual computation only when it encounters an RDD action; when it encounters a transformation it merely records it (lazy evaluation) and does not execute anything yet.
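
A small sketch of this laziness, using made-up values:

// map and filter return immediately: only the lineage is recorded
val doubled = sc.parallelize(1 to 5).map(_ * 2).filter(_ > 4)

// the action is what triggers evaluation of the whole chain
doubled.count()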

  • count(): return the number of elements in the RDD
  • collect(): return all elements as an array
  • first(): return the first element
  • take(n): return the first n elements
  • reduce(func): aggregate the elements with func
  • foreach(func): apply func to each element (for side effects such as printing)
scala> val rdd = sc.parallelize(Array(1,2,3,4,5))
rdd: org.apache.spark.rdd.RDD[Int] = ParallelCollectionRDD[0] at parallelize at <console>:24

scala> rdd.count()
res0: Long = 5

scala> rdd.first()
res1: Int = 1

scala> rdd.take(3)
res2: Array[Int] = Array(1, 2, 3)

scala> rdd.reduce((a,b)=>a+b)
res3: Int = 15

scala> rdd.collect()
res4: Array[Int] = Array(1, 2, 3, 4, 5)

scala> rdd.foreach(elem => println(elem))

4. Persistence

  • persist(): mark an RDD for persistence; it is actually persisted only when the first action on it is executed
scala> val list = List("Hadoop","Spark","Hive")
list: List[String] = List(Hadoop, Spark, Hive)

scala> val rdd1 = sc.parallelize(list)
rdd1: org.apache.spark.rdd.RDD[String] = ParallelCollectionRDD[1] at parallelize at <console>:26

scala> println(rdd1.count())
3

scala> println(rdd1.collect().mkString("--"))
Hadoop--Spark--Hive

scala> rdd1.cache() // cache rdd1 so that later uses do not recompute it from scratch
res10: rdd1.type = ParallelCollectionRDD[1] at parallelize at <console>:26
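
For reference, cache() is shorthand for persist(StorageLevel.MEMORY_ONLY). A minimal sketch of the explicit form (used instead of cache(), since Spark does not allow changing the storage level of an RDD once it has been assigned) and of releasing the cache:

import org.apache.spark.storage.StorageLevel

// persist with an explicit storage level; spills partitions to disk if memory is short
rdd1.persist(StorageLevel.MEMORY_AND_DISK)

// ... run the actions that reuse rdd1 ...

// remove rdd1 from the cache once it is no longer needed
rdd1.unpersist()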

5. Partitioning

  • Increase parallelism
  • Reduce communication overhead

Rule of thumb: keep the number of partitions as close as possible to the number of CPU cores in the cluster.
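
As a quick check (assuming the defaults have not been overridden), the parallelism that parallelize() falls back to can be inspected in the shell:

// number of partitions used when no partition count is given
sc.defaultParallelism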

  • Specify the number of partitions when creating an RDD: sc.textFile(path, partitionNum)
scala> val arr = Array(1,2,3,4,5)
arr: Array[Int] = Array(1, 2, 3, 4, 5)

scala> val rdd = sc.parallelize(arr, 2) // 2 partitions
rdd: org.apache.spark.rdd.RDD[Int] = ParallelCollectionRDD[0] at parallelize at <console>:26
  • Changing the number of partitions
scala> rdd.partitions.size
res0: Int = 2

scala> val rdd1 = rdd.repartition(1)
rdd1: org.apache.spark.rdd.RDD[Int] = MapPartitionsRDD[4] at repartition at <console>:28

scala> rdd1.partitions.size
res1: Int = 1
  • Word-count example
scala> val lines = sc.
     | textFile("/user/word.txt") // read the file
lines: org.apache.spark.rdd.RDD[String] = /user/word.txt MapPartitionsRDD[6] at textFile at <console>:25

scala> val wordCount = lines.flatMap(line => line.split(" ")).
     | map(word => (word, 1)).reduceByKey((a, b) => a+b)
wordCount: org.apache.spark.rdd.RDD[(String, Int)] = ShuffledRDD[9] at reduceByKey at <console>:27

scala> wordCount.collect() // collect the results to the driver
res2: Array[(String, Int)] = Array((love,2), (spark,1), (c++,1), (i,2), (michael,1))

scala> wordCount.foreach(println) // print each (word, count) pair
(spark,1)
(c++,1)
(i,2)
(michael,1)
(love,2)
  • Averaging example (note that x._1 / x._2 below is integer division on Int, so the average of 2 and 3 comes out as 2)
scala> val rdd = sc.parallelize(Array(("spark",2),("hadoop",3),("hadoop",7),("spark",3)))
rdd: org.apache.spark.rdd.RDD[(String, Int)] = ParallelCollectionRDD[0] at parallelize at <console>:24

scala> rdd.mapValues(x => (x, 1)).reduceByKey((x,y)=>(x._1+y._1, x._2+y._2)).mapValues(x => (x._1/x._2)).collect()
res0: Array[(String, Int)] = Array((spark,2), (hadoop,5))     

6. Reading and Writing File Data

6.1 Local files

scala> val textFile = sc.
     | textFile("file:///home/hadoop/workspace/word.txt")
textFile: org.apache.spark.rdd.RDD[String] = file:///home/hadoop/workspace/word.txt MapPartitionsRDD[5] at textFile at <console>:25

scala> textFile.
     | saveAsTextFile("file:///home/hadoop/workspace/writeword")
     // the argument is an output directory, not a file name
ls /home/hadoop/workspace/writeword/
part-00000  part-00001  _SUCCESS

hadoop@dblab-VirtualBox:/usr/local/spark/bin$ cat /home/hadoop/workspace/writeword/part-00000
i love programming
it is very interesting
  • Reading the written data back (all files under the directory are read)
scala> val textFile = sc.textFile("file:///home/hadoop/workspace/writeword")
textFile: org.apache.spark.rdd.RDD[String] = file:///home/hadoop/workspace/writeword MapPartitionsRDD[9] at textFile at <console>:24

6.2 HDFS

scala> val textFile = 
     | sc.textFile("hdfs://localhost:9000/user/word.txt")
textFile: org.apache.spark.rdd.RDD[String] = hdfs://localhost:9000/user/word.txt MapPartitionsRDD[11] at textFile at <console>:25

scala> textFile.first()
res6: String = i love programming

Save to HDFS (a relative path is resolved against the current user's home directory, /user/<username>/):

scala> textFile.saveAsTextFile("writeword")

Inspect the result on HDFS:

hadoop@dblab-VirtualBox:/usr/local/hadoop/bin$ ./hdfs dfs -ls -R /user/
drwxr-xr-x   - hadoop supergroup          0 2021-04-22 16:01 /user/hadoop
drwxr-xr-x   - hadoop supergroup          0 2021-04-21 22:48 /user/hadoop/.sparkStaging
drwx------   - hadoop supergroup          0 2021-04-21 22:48 /user/hadoop/.sparkStaging/application_1618998320460_0002
-rw-r--r--   1 hadoop supergroup      73189 2021-04-21 22:48 /user/hadoop/.sparkStaging/application_1618998320460_0002/__spark_conf__.zip
-rw-r--r--   1 hadoop supergroup  120047699 2021-04-21 22:48 /user/hadoop/.sparkStaging/application_1618998320460_0002/__spark_libs__4686608713384839717.zip
drwxr-xr-x   - hadoop supergroup          0 2021-04-22 16:01 /user/hadoop/writeword
-rw-r--r--   1 hadoop supergroup          0 2021-04-22 16:01 /user/hadoop/writeword/_SUCCESS
-rw-r--r--   1 hadoop supergroup         42 2021-04-22 16:01 /user/hadoop/writeword/part-00000
-rw-r--r--   1 hadoop supergroup         20 2021-04-22 16:01 /user/hadoop/writeword/part-00001
drwxr-xr-x   - hadoop supergroup          0 2017-11-05 21:57 /user/hive
drwxr-xr-x   - hadoop supergroup          0 2017-11-05 21:57 /user/hive/warehouse
drwxr-xr-x   - hadoop supergroup          0 2017-11-05 21:57 /user/hive/warehouse/hive.db
-rw-r--r--   1 hadoop supergroup         62 2021-04-21 20:06 /user/word.txt

6.3 JSON files

hadoop@dblab-VirtualBox:/usr/local/hadoop/bin$ cat /usr/local/spark/examples/src/main/resources/people.json 
{"name":"Michael"}
{"name":"Andy", "age":30}
{"name":"Justin", "age":19}
scala> val jsonStr = sc.
     | textFile("file:///usr/local/spark/examples/src/main/resources/people.json")
jsonStr: org.apache.spark.rdd.RDD[String] = file:///usr/local/spark/examples/src/main/resources/people.json MapPartitionsRDD[14] at textFile at <console>:25

scala> jsonStr.foreach(println)
{"name":"Michael"}
{"name":"Andy", "age":30}
{"name":"Justin", "age":19}
  • Parsing the JSON file
scala.util.parsing.json.JSON
JSON.parseFull(jsonString: String)  // returns Some(...) on success, None on failure

The full program:

import org.apache.spark.SparkContext
import org.apache.spark.SparkContext._
import org.apache.spark.SparkConf
import scala.util.parsing.json.JSON

object JSONRead {
  def main(args: Array[String]) {
    val inputFile = "file:///usr/local/spark/examples/src/main/resources/people.json"
    val conf = new SparkConf().setAppName("JSONRead")
    val sc = new SparkContext(conf)
    val jsonStrs = sc.textFile(inputFile)
    val res = jsonStrs.map(s => JSON.parseFull(s))
    res.foreach {
      case Some(map: Map[String, Any]) => println(map)
      case None                        => println("parsing failed")
      case other                       => println("unknown data structure: " + other)
    }
  }
}

Compile and package it into a jar with sbt, then run it with spark-submit --class "JSONRead" <path to jar> (left to try out in practice). Reference: 使用Intellij Idea编写Spark应用程序(Scala+SBT), http://dblab.xmu.edu.cn/blog/1492-2/. A minimal sbt setup is sketched below.
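The sketch assumes the Spark 2.1.0 / Scala 2.11.8 environment shown earlier; the file name simple.sbt, the project layout, and the resulting jar path are only illustrative:

// simple.sbt at the project root, with JSONRead.scala under src/main/scala/
name := "JSONRead Project"
version := "1.0"
scalaVersion := "2.11.8"
libraryDependencies += "org.apache.spark" %% "spark-core" % "2.1.0"
// if scala.util.parsing.json cannot be resolved, the scala-parser-combinators
// module may also be needed, depending on how Spark was packaged:
// libraryDependencies += "org.scala-lang.modules" %% "scala-parser-combinators" % "1.0.4"

// then, roughly:
//   sbt package
//   spark-submit --class "JSONRead" target/scala-2.11/jsonread-project_2.11-1.0.jar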

6.4 HBase

hadoop@dblab-VirtualBox:/usr/local/hbase/bin$ ./hbase shell
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/local/hbase/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/local/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
HBase Shell; enter 'help<RETURN>' for list of supported commands.
Type "exit<RETURN>" to leave the HBase Shell
Version 1.1.5, r239b80456118175b340b2e562a5568b5c744252e, Sun May  8 20:29:26 PDT 2016

hbase(main):001:0> disable "student"
0 row(s) in 3.0730 seconds

hbase(main):002:0> drop "student"
0 row(s) in 1.3530 seconds

hbase(main):003:0> create "student","info"
0 row(s) in 1.3570 seconds

=> Hbase::Table - student
hbase(main):004:0> put "student","1","info:name","michael"
0 row(s) in 0.0920 seconds

hbase(main):005:0> put "student","1","info:gender","M"
0 row(s) in 0.0410 seconds

hbase(main):006:0> put "student","1","info:age","18"
0 row(s) in 0.0080 seconds

Reading this table from Spark likewise requires writing a program and packaging it with sbt; a sketch is given below.
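
A minimal sketch of such a program, assuming the HBase client jars (including TableInputFormat) are on Spark's classpath and using the student table created above; the object name SparkReadHBase is only for illustration:

import org.apache.hadoop.hbase.HBaseConfiguration
import org.apache.hadoop.hbase.client.Result
import org.apache.hadoop.hbase.io.ImmutableBytesWritable
import org.apache.hadoop.hbase.mapreduce.TableInputFormat
import org.apache.hadoop.hbase.util.Bytes
import org.apache.spark.{SparkConf, SparkContext}

object SparkReadHBase {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("SparkReadHBase"))
    val conf = HBaseConfiguration.create()
    conf.set(TableInputFormat.INPUT_TABLE, "student")   // the table to scan

    // each record is (row key, Result); a Result holds the cells of one row
    val stuRDD = sc.newAPIHadoopRDD(conf, classOf[TableInputFormat],
      classOf[ImmutableBytesWritable], classOf[Result])

    stuRDD.foreach { case (_, result) =>
      val key    = Bytes.toString(result.getRow)
      val name   = Bytes.toString(result.getValue("info".getBytes, "name".getBytes))
      val gender = Bytes.toString(result.getValue("info".getBytes, "gender".getBytes))
      val age    = Bytes.toString(result.getValue("info".getBytes, "age".getBytes))
      println(s"row key: $key, name: $name, gender: $gender, age: $age")
    }
    sc.stop()
  }
}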
