1. Purpose of This Document
When developing Hadoop MapReduce jobs, repeatedly packaging the code and manually copying it to the cluster to run is tedious. Sometimes we also need to debug directly from the local environment, for example connecting to the cluster from Intellij to submit a job, or we need to submit MapReduce jobs to the cluster across platforms. How is this done? This article describes how to submit jobs to a Hadoop cluster from a local development environment across platforms, again covering both Kerberos and non-Kerberos environments.
Content overview:
1. Environment preparation
2. Connection examples for non-Kerberos and Kerberos environments

Test environment:
1. Kerberos cluster: CDH 5.11.2, OS: Redhat 7.2
2. Non-Kerberos cluster: CDH 5.13, OS: CentOS 6.5
3. Windows + Intellij

Prerequisites:
1. The CDH cluster is running normally
2. The local development environment and the cluster can reach each other over the network, with the required ports open (see the note below)
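Which ports must be open depends on the cluster's configuration; by default this includes at least the NameNode RPC port (8020), the ResourceManager ports (8032, 8030, 8031), the DataNode transfer port (50010 in CDH 5), and the JobHistory Server port (10020). Check the actual values in Cloudera Manager.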
2. Environment Preparation
1. Maven dependencies
<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-client</artifactId>
    <version>2.6.0-cdh5.11.2</version>
</dependency>
<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-common</artifactId>
    <version>2.6.0-cdh5.11.2</version>
</dependency>
Note that, as Fayson mentioned in "How to Access CDH's Solr Service Using Java Code", these CDH-versioned dependencies must be resolved from Cloudera's Maven repository.
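A minimal repository entry for the pom.xml, assuming Cloudera's standard public repository URL (verify it is reachable from your environment):

<repositories>
    <repository>
        <id>cloudera</id>
        <url>https://repository.cloudera.com/artifactory/cloudera-repos/</url>
    </repository>
</repositories>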
2. Create a keytab file for accessing the cluster (non-Kerberos clusters can skip this step)
[ec2-user@ip-172-31-22-86 keytab]$ sudo kadmin.local
Authenticating as principal mapred/admin@CLOUDERA.COM with password.
kadmin.local: listprincs fayson*
fayson@CLOUDERA.COM
kadmin.local: xst -norandkey -k fayson.keytab fayson@CLOUDERA.COM
...
kadmin.local: exit
[ec2-user@ip-172-31-22-86 keytab]$ ll
total 4
-rw------- 1 root root 514 Nov 28 10:54 fayson.keytab
[ec2-user@ip-172-31-22-86 keytab]$
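Before copying the keytab to the development machine, it can be sanity-checked: klist -kt lists the principals stored in the file without contacting the KDC, and kinit -kt confirms that it actually authenticates:

klist -kt fayson.keytab
kinit -kt fayson.keytab fayson@CLOUDERA.COM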
3. Obtain the cluster's krb5.conf file; its contents are shown below (non-Kerberos clusters can skip this step). On the development machine the file can live anywhere readable; its path is passed to the JVM via the java.security.krb5.conf system property (see KBMRTest below).
includedir /etc/krb5.conf.d/

[logging]
 default = FILE:/var/log/krb5libs.log
 kdc = FILE:/var/log/krb5kdc.log
 admin_server = FILE:/var/log/kadmind.log

[libdefaults]
 dns_lookup_realm = false
 ticket_lifetime = 24h
 renew_lifetime = 7d
 forwardable = true
 rdns = false
 default_realm = CLOUDERA.COM
 #default_ccache_name = KEYRING:persistent:%{uid}

[realms]
 CLOUDERA.COM = {
  kdc = ip-172-31-22-86.ap-southeast-1.compute.internal
  admin_server = ip-172-31-22-86.ap-southeast-1.compute.internal
 }
4. Configure the local hosts file (on Windows: C:\Windows\System32\drivers\etc\hosts) with the cluster nodes' mappings:
172.31.22.86 ip-172-31-22-86.ap-southeast-1.compute.internal
172.31.26.102 ip-172-31-26-102.ap-southeast-1.compute.internal
172.31.21.45 ip-172-31-21-45.ap-southeast-1.compute.internal
172.31.26.80 ip-172-31-26-80.ap-southeast-1.compute.internal
5. Download the YARN client configuration through Cloudera Manager. The downloaded archive contains, among other files, core-site.xml, hdfs-site.xml, mapred-site.xml, and yarn-site.xml; place them in the configuration directory read by the ConfigurationUtil class below.
6. Project directory structure
The WordCount example below is used for illustration.
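The original post shows the project layout as a screenshot. A rough sketch, reconstructed from the paths used in the code below (the conf, nonekb-conf, and target names come from the code; everything else is illustrative):

hbase-develop/
├── conf/           (YARN client configuration for the Kerberos cluster)
├── nonekb-conf/    (YARN client configuration for the non-Kerberos cluster)
├── src/main/java/  (WordCountMapper, WordCountReducer, InitMapReduceJob, ConfigurationUtil, test classes)
└── target/hbase-develop-1.0-SNAPSHOT.jar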
3. Common Classes for Kerberos and Non-Kerberos Environments
The WordCountMapper class:
import java.io.IOException;
import org.apache.commons.lang.StringUtils; // commons-lang split(String, String), bundled with hadoop-common
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class WordCountMapper extends Mapper<LongWritable, Text, Text, LongWritable> {
    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        // Read one line of the input file
        String line = value.toString();
        // Split the line into a word array
        String[] words = StringUtils.split(line, " ");
        // Emit a <word, 1> pair for each word
        for (String word : words) {
            context.write(new Text(word), new LongWritable(1));
        }
    }
}
The WordCountReducer class:
import java.io.IOException;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class WordCountReducer extends Reducer<Text, LongWritable, Text, LongWritable> {
    @Override
    protected void reduce(Text key, Iterable<LongWritable> values, Context context)
            throws IOException, InterruptedException {
        long count = 0;
        for (LongWritable value : values) {
            // Pull the long out of the Writable with get() and accumulate
            count += value.get();
        }
        // Emit the <word, count> pair
        context.write(key, new LongWritable(count));
    }
}
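One optional refinement, not part of the original sample: since this reduce function is associative and commutative, it could also be registered as a combiner via wcjob.setCombinerClass(WordCountReducer.class) in the job setup below, cutting the amount of data shuffled between the map and reduce phases.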
The InitMapReduceJob class:
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class InitMapReduceJob {
    public static Job initWordCountJob(Configuration conf) {
        Job wcjob = null;
        try {
            // Enable cross-platform job submission (required when submitting from Windows)
            conf.setBoolean("mapreduce.app-submission.cross-platform", true);
            // Point mapred.jar (the deprecated alias of mapreduce.job.jar) at the packaged job jar
            conf.set("mapred.jar", "C:\\Users\\Administrator\\IdeaProjects\\hbasedevelop\\target\\hbase-develop-1.0-SNAPSHOT.jar");
            // The Job is created from the Configuration, which carries the jar setting above
            wcjob = Job.getInstance(conf);
            wcjob.setMapperClass(WordCountMapper.class);
            wcjob.setReducerClass(WordCountReducer.class);
            // Key/value types emitted by the mapper
            wcjob.setMapOutputKeyClass(Text.class);
            wcjob.setMapOutputValueClass(LongWritable.class);
            // Key/value types emitted by the reducer
            wcjob.setOutputKeyClass(Text.class);
            wcjob.setOutputValueClass(LongWritable.class);
            FileInputFormat.setInputPaths(wcjob, "/fayson");
            FileOutputFormat.setOutputPath(wcjob, new Path("/wc/output"));
        } catch (Exception e) {
            e.printStackTrace();
        }
        return wcjob;
    }
}
Note: the mapreduce.app-submission.cross-platform and mapred.jar settings above are required; if either is not set, the job will fail to run.
The ConfigurationUtil class:
import java.io.File;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.yarn.conf.YarnConfiguration;

public class ConfigurationUtil {
    /**
     * Load the Hadoop configuration from the client config files.
     * @param confPath directory containing the XML files downloaded from Cloudera Manager
     * @return the assembled Configuration
     */
    public static Configuration getConfiguration(String confPath) {
        Configuration configuration = new YarnConfiguration();
        configuration.addResource(new Path(confPath + File.separator + "core-site.xml"));
        configuration.addResource(new Path(confPath + File.separator + "hdfs-site.xml"));
        configuration.addResource(new Path(confPath + File.separator + "mapred-site.xml"));
        configuration.addResource(new Path(confPath + File.separator + "yarn-site.xml"));
        configuration.setBoolean("dfs.support.append", true);
        configuration.set("fs.hdfs.impl", "org.apache.hadoop.hdfs.DistributedFileSystem");
        configuration.setBoolean("fs.hdfs.impl.disable.cache", true);
        return configuration;
    }
}
4. Non-Kerberos Environment
1. Sample code to run in Intellij
import java.io.File;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class NodeKBMRTest {
    private static String confPath = System.getProperty("user.dir") + File.separator + "nonekb-conf";

    public static void main(String[] args) {
        try {
            Configuration conf = ConfigurationUtil.getConfiguration(confPath);
            Job wcjob = InitMapReduceJob.initWordCountJob(conf);
            wcjob.setJarByClass(NodeKBMRTest.class);
            wcjob.setJobName("NodeKBMRTest");
            // Submit the job and wait for it to complete
            boolean res = wcjob.waitForCompletion(true);
            System.exit(res ? 0 : 1);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
2. Run directly in Intellij to submit the MR job to the Hadoop cluster
The job runs successfully:
3. Check the output in HDFS
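Standard HDFS commands work from any cluster node; /wc/output is the output path set in InitMapReduceJob, and part-r-00000 is the default file name for the first reducer's output:

hadoop fs -ls /wc/output
hadoop fs -cat /wc/output/part-r-00000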
5. Kerberos Environment
1. Sample code to run in Intellij
import java.io.File;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.security.UserGroupInformation;

public class KBMRTest {
    private static String confPath = System.getProperty("user.dir") + File.separator + "conf";

    public static void main(String[] args) {
        try {
            System.setProperty("java.security.krb5.conf", "/Volumes/Transcend/keytab/krb5.conf");
            // Allow GSS to acquire credentials outside an explicit JAAS Subject
            System.setProperty("javax.security.auth.useSubjectCredsOnly", "false");
            System.setProperty("sun.security.krb5.debug", "true"); // enable Kerberos debug output
            Configuration conf = ConfigurationUtil.getConfiguration(confPath);
            // Log in with the Kerberos principal and keytab
            UserGroupInformation.setConfiguration(conf);
            UserGroupInformation.loginUserFromKeytab("fayson@CLOUDERA.COM", "/Volumes/Transcend/keytab/fayson.keytab");
            UserGroupInformation userGroupInformation = UserGroupInformation.getCurrentUser();
            Job wcjob = InitMapReduceJob.initWordCountJob(conf);
            wcjob.setJarByClass(KBMRTest.class);
            wcjob.setJobName("KBMRTest");
            // Submit the job and wait for it to complete
            boolean res = wcjob.waitForCompletion(true);
            System.exit(res ? 0 : 1);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
2. Run the code directly in Intellij; the jar is automatically pushed to the cluster for execution
The YARN job page:
3. Check the directories and files created in HDFS
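On the Kerberos cluster the same check requires a valid ticket first; using the keytab created earlier (run from the directory containing it):

kinit -kt fayson.keytab fayson@CLOUDERA.COM
hadoop fs -ls /wc/output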
Note: when submitting the job, if the code has been modified you must re-compile and re-package it, and place the new jar at the path configured for mapred.jar in InitMapReduceJob.
GitHub source code:
https://github.com/javaxsky/cdhproject