Scala 2.11.12 download: https://www.scala-lang.org/download/
Scala 2.11.12 (Linux): scala-2.11.12.tgz
Scala 2.11.12 (Windows): scala-2.11.12.zip
Create a new Maven project in IDEA.
When the project is created successfully, Maven prints:
[INFO] BUILD SUCCESS
For a reference pom.xml, see:
https://cloud.tencent.com/developer/article/1818625
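The linked article has the full file; as a rough sketch, a pom.xml for this setup needs the Scala library, the Spark dependencies, and the scala-maven-plugin (plain Maven only compiles Java). The version numbers below are assumptions matching the Scala 2.11.12 / Spark 2.4.0 combination used in this article, not copied from the reference pom:

```xml
<!-- Sketch only: versions assumed from Scala 2.11.12 / Spark 2.4.0 used in this article -->
<dependencies>
  <dependency>
    <groupId>org.scala-lang</groupId>
    <artifactId>scala-library</artifactId>
    <version>2.11.12</version>
  </dependency>
  <dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-core_2.11</artifactId>
    <version>2.4.0</version>
  </dependency>
  <dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-sql_2.11</artifactId>
    <version>2.4.0</version>
  </dependency>
</dependencies>

<build>
  <plugins>
    <!-- Compiles the Scala sources under src/main/scala -->
    <plugin>
      <groupId>net.alchim31.maven</groupId>
      <artifactId>scala-maven-plugin</artifactId>
      <version>3.4.6</version>
      <executions>
        <execution>
          <goals>
            <goal>compile</goal>
            <goal>testCompile</goal>
          </goals>
        </execution>
      </executions>
    </plugin>
  </plugins>
</build>
```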
Create the class and its companion object:
package com.xtd.spark

import org.apache.spark.sql.SQLContext
import org.apache.spark.{SparkConf, SparkContext}

class Example {
  def sparkSQL(path: String): Unit = {
    // e.g. D:/Hadoop/Spark/spark-2.4.0-bin-without-hadoop/examples/src/main/resources/employees.json
    val sparkConf = new SparkConf()
    sparkConf.setAppName("SparkExample").setMaster("local[2]")
    val context = new SparkContext(sparkConf)
    val sqlContext = new SQLContext(context)
    // Load the JSON file into a DataFrame and print its schema and rows
    val people = sqlContext.read.format("json").load(path)
    people.printSchema()
    people.show()
    context.stop()
  }
}

object Example {
  def main(args: Array[String]): Unit = {
    val path = args(0)
    val example = new Example
    example.sparkSQL(path)
    println("path: " + path)
  }
}
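SQLContext still works on Spark 2.4, but since Spark 2.0 the recommended entry point is SparkSession. A hedged sketch of the same job with the newer API (the object name ExampleSession is mine, not from this article):

```scala
package com.xtd.spark

import org.apache.spark.sql.SparkSession

// Sketch: same job as Example above, using the Spark 2.x SparkSession entry point
object ExampleSession {
  def main(args: Array[String]): Unit = {
    val path = args(0)
    val spark = SparkSession.builder()
      .appName("SparkExample")
      .master("local[2]")
      .getOrCreate()
    val people = spark.read.json(path) // shorthand for read.format("json").load(path)
    people.printSchema()
    people.show()
    spark.stop()
  }
}
```

With SparkSession there is no need to create a SparkContext and SQLContext separately; `spark.sparkContext` is available if the lower-level API is needed.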
Click the object name in the top-right corner, edit the run configuration, and add the program argument (prefix local files with file:///):
file:///D:/Hadoop/Spark/spark-2.4.0-bin-without-hadoop/examples/src/main/resources/employees.json
The employees.json file can be found under the examples directory in the Spark installation root:
{"name":"Michael"}
{"name":"Andy", "age":30}
{"name":"Justin", "age":19}
Running it should succeed and print the schema followed by the rows of the DataFrame.
How do we package it for the cluster?
Right-click the project, choose Open in Terminal to open a command prompt, and run the Maven package command:
mvn clean package -DskipTests
Next, upload the jar to the Linux server and submit it to the cluster with spark-submit.
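As an illustration of the upload step (the hostname and user here are placeholders, and the jar name depends on the artifactId and version in your pom):

```shell
# Placeholder host and user; the jar lands in target/ after `mvn clean package`
scp target/spark2-1.0.jar spark@your-server:/home/spark/jar/
```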
Client mode
spark-submit \
--class com.xtd.spark.Example \
--deploy-mode client \
/home/spark/jar/spark2-1.0.jar \
file:///home/spark/examples/employees.json
Spark on YARN
spark-submit \
--class com.xtd.spark.ExampleHDFS \
--master yarn \
--deploy-mode cluster \
--driver-memory 2g \
--executor-cores 1 \
--executor-memory 1g \
/home/spark/jar/spark-1.0.jar \
/user/spark/examples/resources/employees.json
Notes
- /home/spark/jar/spark-1.0.jar is the path of the jar on the Linux machine; use whatever path you uploaded it to.
- file:///home/spark/examples/employees.json is the program argument; the file:// prefix indicates that employees.json lives on the Linux filesystem.
- For more options, run spark-submit --help.
Run result