Base environment: Apache Hadoop 2.7.1, Apache Spark 1.6.0, Apache Hive 1.2.1, Apache HBase 0.98.12.
(1) Install Scala ahead of time; I am using 2.11.7 here.
(2) Download the spark-1.6.0 source, unpack it, and enter the root directory to build.
(3) Run dev/change-scala-version.sh 2.11, then edit the pom file so the hadoop, hbase, and hive versions match your environment.
(4) Build Spark with Hive support enabled:
mvn -Pyarn -Phive -Phive-thriftserver -Phadoop-2.7.1 -Dscala-2.11 -DskipTests clean package
Three ways to test are shown below; before that, a quick sanity check that Hive support actually made it into the build (see the sketch that follows).
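A minimal smoke test, run from bin/spark-shell of the freshly built Spark (the shell predefines sc; instantiating HiveContext fails fast if the -Phive profile was missing):

// run inside bin/spark-shell of the freshly built Spark
val hiveCtx = new org.apache.spark.sql.hive.HiveContext(sc)
// listing databases exercises the Hive metastore wiring end to end
hiveCtx.sql("show databases").collect().foreach(println)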
Method 1: Debugging through the command-line Spark SQL interface. After the build succeeds, copy the hive-site.xml from your existing hive/conf/ into Spark's conf/ directory, then launch spark-sql, using the --jars flag to bring in the MySQL driver jar, the Hadoop compression jar, and the jars Hive needs to read HBase:
bin/spark-sql --jars \
lib/mysql-connector-java-5.1.31.jar,\
lib/hadoop-lzo-0.4.20-SNAPSHOT.jar,\
/ROOT/server/hive/lib/hive-hbase-handler-1.2.1.jar,\
/ROOT/server/hbase/lib/hbase-client-0.98.12-hadoop2.jar,\
/ROOT/server/hbase/lib/hbase-common-0.98.12-hadoop2.jar,\
/ROOT/server/hbase/lib/hbase-server-0.98.12-hadoop2.jar,\
/ROOT/server/hbase/lib/hbase-hadoop2-compat-0.98.12-hadoop2.jar,\
/ROOT/server/hbase/lib/guava-12.0.1.jar,\
/ROOT/server/hbase/lib/hbase-protocol-0.98.12-hadoop2.jar,\
/ROOT/server/hbase/lib/htrace-core-2.04.jar
Method 2: Debugging the code inside IntelliJ IDEA 15.0. The sbt dependencies:
// Dependencies below that you don't need can be dropped as appropriate
name := "scala-spark"
version := "1.0"
scalaVersion := "2.11.7"
// Use the company's internal Nexus; remove this line to use the default repositories
resolvers += "Local Maven Repository" at "http://xxxx:8080/nexus/content/groups/public/"
// Resolve only from the internal repository (skip Maven Central)
externalResolvers := Resolver.withDefaultResolvers(resolvers.value, mavenCentral = false)
// Hadoop dependency
libraryDependencies += "org.apache.hadoop" % "hadoop-client" % "2.7.1" //% "provided"
// HBase dependencies
libraryDependencies += "org.apache.hbase" % "hbase-client" % "0.98.12-hadoop2" //% "provided"
libraryDependencies += "org.apache.hbase" % "hbase-common" % "0.98.12-hadoop2" //% "provided"
libraryDependencies += "org.apache.hbase" % "hbase-server" % "0.98.12-hadoop2" //% "provided"
// Spark core dependency
libraryDependencies += "org.apache.spark" % "spark-core_2.11" % "1.6.0" //% "provided"
// Spark SQL dependency
libraryDependencies += "org.apache.spark" % "spark-sql_2.11" % "1.6.0" //% "provided"
// Spark Hive support dependency
libraryDependencies += "org.apache.spark" % "spark-hive_2.11" % "1.6.0"
// Java Servlet API dependency
libraryDependencies += "javax.servlet" % "javax.servlet-api" % "3.0.1" //% "provided"
The main Scala code:
package com.tools.hive

import org.apache.spark.{SparkConf, SparkContext}

object SparkHive {
  def main(args: Array[String]): Unit = {
    // Set the user names so the job acts as the right HDFS user
    System.setProperty("user.name", "username")
    System.setProperty("HADOOP_USER_NAME", "username")
    // Deliberately no setMaster here, so the same jar can be tested on the
    // cluster in yarn-client, yarn-cluster, and standalone modes
    val conf = new SparkConf().setAppName("spark sql hive")
    val sct = new SparkContext(conf)
    // Obtain the Hive context
    val hive = new org.apache.spark.sql.hive.HiveContext(sct)
    // Run the SQL and print the output
    hive.sql("show tables").collect().foreach(println)
    // Release resources
    sct.stop()
  }
}
With the code written, I ran it on Windows and hit a bug: /tmp/hive lacks execute permission (https://issues.apache.org/jira/browse/SPARK-10528), so I recommend running it on Linux instead; besides, on Windows you can only debug standalone mode, not yarn-cluster or yarn-client. Remember one hard-learned bug: never call setMaster("...") on the SparkConf in your code, or one careless moment will give you baffling failures when running the various modes on the cluster; a sketch of a safer pattern follows.
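If you do need a local master while debugging in the IDE, a minimal pattern (an illustrative addition, not from the original code; the --local flag name here is hypothetical) is to gate it behind an explicit switch instead of hardcoding it:

// Only set a master when explicitly debugging locally; on the cluster the
// master comes from spark-submit's --master flag, and SparkConf stays untouched.
val conf = new SparkConf().setAppName("spark sql hive")
if (args.contains("--local")) conf.setMaster("local[*]") // hypothetical debug switch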
Querying by submitting the code, in each mode:
# yarn-cluster mode
bin/spark-submit \
--class com.tools.hive.SparkHive \
--master yarn-cluster --files conf/hive-site.xml \
--jars lib/datanucleus-api-jdo-3.2.6.jar,\
lib/datanucleus-rdbms-3.2.9.jar,\
lib/datanucleus-core-3.2.10.jar,\
lib/mysql-connector-java-5.1.31.jar \
scala-spark_2.11-1.0.jar  # the application jar itself; do not list it in --jars, or it will cause problems

# yarn-client mode
bin/spark-submit \
--class com.tools.hive.SparkHive \
--master yarn-client \
--files conf/hive-site.xml \
--jars lib/datanucleus-api-jdo-3.2.6.jar,\
lib/datanucleus-rdbms-3.2.9.jar,\
lib/datanucleus-core-3.2.10.jar,\
lib/mysql-connector-java-5.1.31.jar \
scala-spark_2.11-1.0.jar  # the application jar itself; do not list it in --jars, or it will cause problems

# standalone mode
bin/spark-submit \
--class com.tools.hive.SparkHive \
--master spark://h1:7077 \
--files conf/hive-site.xml \
--jars lib/datanucleus-api-jdo-3.2.6.jar,\
lib/datanucleus-rdbms-3.2.9.jar,\
lib/datanucleus-core-3.2.10.jar,\
lib/mysql-connector-java-5.1.31.jar \
scala-spark_2.11-1.0.jar  # the application jar itself; do not list it in --jars, or it will cause problems
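Because the master is supplied only at submit time, the driver can confirm which mode it actually received; a small sketch (an illustrative addition, not part of the original code) that could go into main() after the SparkContext is created:

// spark-submit's --master value arrives as the "spark.master" property
println("running with master: " + sct.getConf.get("spark.master", "<not set>"))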
Querying in the Spark SQL way does not force you to write code at all; that is the appeal of SQL. spark-sql can likewise go through Hive's metadata and query HDFS data or HBase tables with plain SQL. Note that yarn-cluster mode does not support the Spark SQL shell: Error: Cluster deploy mode is not applicable to Spark SQL shell.
# yarn-client mode
bin/spark-sql \
--master yarn-client \
--files conf/hive-site.xml \
--jars lib/datanucleus-api-jdo-3.2.6.jar,\
lib/datanucleus-rdbms-3.2.9.jar,\
lib/datanucleus-core-3.2.10.jar,\
lib/mysql-connector-java-5.1.31.jar \
-e "select name , count(1) as c from info group by name order by c desc ;"

# standalone mode
bin/spark-sql \
--master spark://h1:7077 \
--files conf/hive-site.xml \
--jars lib/datanucleus-api-jdo-3.2.6.jar,\
lib/datanucleus-rdbms-3.2.9.jar,\
lib/datanucleus-core-3.2.10.jar,\
lib/mysql-connector-java-5.1.31.jar \
-e "select name , count(1) as c from info group by name order by c desc ;"
Integrating Spark SQL + Hive + HBase:
# yarn-client mode
bin/spark-sql --master yarn-client --files conf/hive-site.xml \
--jars lib/datanucleus-api-jdo-3.2.6.jar,\
lib/datanucleus-rdbms-3.2.9.jar,\
lib/datanucleus-core-3.2.10.jar,\
lib/mysql-connector-java-5.1.31.jar,\
lib/hadoop-lzo-0.4.20-SNAPSHOT.jar,\
/ROOT/server/hive/lib/hive-hbase-handler-1.2.1.jar,\
/ROOT/server/hbase/lib/hbase-client-0.98.12-hadoop2.jar,\
/ROOT/server/hbase/lib/hbase-common-0.98.12-hadoop2.jar,\
/ROOT/server/hbase/lib/hbase-server-0.98.12-hadoop2.jar,\
/ROOT/server/hbase/lib/hbase-hadoop2-compat-0.98.12-hadoop2.jar,\
/ROOT/server/hbase/lib/guava-12.0.1.jar,\
/ROOT/server/hbase/lib/hbase-protocol-0.98.12-hadoop2.jar,\
/ROOT/server/hbase/lib/htrace-core-2.04.jar \
-e "select * from dong limit 2 ;"

# standalone mode
bin/spark-sql --master spark://h1:7077 --files conf/hive-site.xml \
--jars lib/datanucleus-api-jdo-3.2.6.jar,\
lib/datanucleus-rdbms-3.2.9.jar,\
lib/datanucleus-core-3.2.10.jar,\
lib/mysql-connector-java-5.1.31.jar,\
lib/hadoop-lzo-0.4.20-SNAPSHOT.jar,\
/ROOT/server/hive/lib/hive-hbase-handler-1.2.1.jar,\
/ROOT/server/hbase/lib/hbase-client-0.98.12-hadoop2.jar,\
/ROOT/server/hbase/lib/hbase-common-0.98.12-hadoop2.jar,\
/ROOT/server/hbase/lib/hbase-server-0.98.12-hadoop2.jar,\
/ROOT/server/hbase/lib/hbase-hadoop2-compat-0.98.12-hadoop2.jar,\
/ROOT/server/hbase/lib/guava-12.0.1.jar,\
/ROOT/server/hbase/lib/hbase-protocol-0.98.12-hadoop2.jar,\
/ROOT/server/hbase/lib/htrace-core-2.04.jar \
-e "select count(*) from dong ;"
Summary: when you are unsure about the parameters of one of the spark commands, run bin/spark-xxx -h to see them explained. When integrating Spark with Hive, or Spark with Hive on HBase, many problems come up; the most common are:
(1) the MySQL driver jar cannot be found
(2) the datanucleus-related classes cannot be found
(3) the job runs successfully but returns no results
(4) ...
When integrating Spark SQL with Hive, always submit the related jars and the hive-site.xml file to the cluster, or you will hit all sorts of puzzling little problems. Most of the fixes found online set the classpath in Spark's spark-env.sh; in my tests that did not take effect, so passing the dependency jars with the --jars parameter is the more reliable approach.
Reference links: winutils.exe download, which you may need if you want to submit jobs to a standalone Spark cluster remotely from Windows:
http://teknosrc.com/spark-error-java-io-ioexception-could-not-locate-executable-null-bin-winutils-exe-hadoop-binaries/
http://zengzhaozheng.blog.51cto.com/8219051/1597902