
Submitting MR Jobs from Local Eclipse to YARN

Author: sparkle123 · Published 2018-07-04 11:15:18 (originally posted 2018-05-20 on the author's blog)

1. In general, submitting an MR job to YARN from Eclipse on a local Windows machine fails with an error like the following:

Diagnostics: Exception from container-launch.
Container id: container_1526537597068_0006_02_000001
Exit code: 1
Exception message: /bin/bash: line 0: fg: no job control

Stack trace: ExitCodeException exitCode=1: /bin/bash: line 0: fg: no job control

    at org.apache.hadoop.util.Shell.runCommand(Shell.java:561)
    at org.apache.hadoop.util.Shell.run(Shell.java:478)
    at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:738)
    at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:211)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)


Container exited with a non-zero exit code 1

2. Cause: when the client submits the job, YARNRunner.java calls createApplicationSubmissionContext(), which builds the command line and environment used to launch the MRAppMaster container. That launch command is generated according to the conventions of the local (client) operating system, so a Windows client bakes Windows-style paths and %VAR% variable references into it; when the Linux NodeManager runs the resulting script under /bin/bash, a command beginning with %JAVA_HOME% is parsed as a job-control reference (effectively an implicit fg), and since the non-interactive container shell has no job control, it fails with exactly the "fg: no job control" error shown above.

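To see where the OS dependence creeps in, here is a small illustration (not from the original post; it assumes the Hadoop 2.x client jars, where ApplicationConstants.Environment offers both expansion forms): $() expands using the conventions of the JVM that calls it, i.e. the submitting client, while $$() produces the cross-platform {{VAR}} form that the NodeManager expands on the target host.

import org.apache.hadoop.yarn.api.ApplicationConstants.Environment;

// Minimal illustration (assumes Hadoop 2.x client jars on the classpath):
// Environment.$() expands according to the submitting JVM's OS, so a Windows client
// bakes %JAVA_HOME% into the AM launch command; /bin/bash on the NodeManager then
// treats a command starting with '%' as a job-control spec, hence "fg: no job control".
// Environment.$$() emits the cross-platform {{JAVA_HOME}} form, which is expanded
// on the NodeManager side instead.
public class EnvExpansionDemo {
    public static void main(String[] args) {
        System.out.println(Environment.JAVA_HOME.$());   // %JAVA_HOME% on Windows, $JAVA_HOME on Linux
        System.out.println(Environment.JAVA_HOME.$$());  // {{JAVA_HOME}}
    }
}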

3. Workaround: manually modify YARNRunner so that the submission context it builds follows Linux conventions. The call site in question is shown below, followed by a sketch of the kind of rewrite involved.

// Construct necessary information to start the MR AM
        ApplicationSubmissionContext appContext = createApplicationSubmissionContext(conf, jobSubmitDir, ts);
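The post does not reproduce the full patch, so the class below is only a hedged, self-contained sketch of the kind of rewrite a hand-modified YARNRunner typically applies to the AM container's environment and launch command before the context is handed to YARN; the class and method names are made up for illustration.

import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch (not the author's exact patch): rewrite Windows-style %VAR%
// references, ';' classpath separators and backslashes into Linux shell syntax so the
// generated launch script runs under /bin/bash on the NodeManager.
public class CrossPlatformRewrite {

    // Rewrite one value from Windows conventions to Linux shell conventions.
    static String toLinuxStyle(String value) {
        return value
                .replaceAll("%([A-Za-z_][A-Za-z0-9_]*)%", "\\$$1") // %JAVA_HOME% -> $JAVA_HOME
                .replace(';', ':')                                  // classpath separator
                .replace('\\', '/');                                // path separator
    }

    public static void main(String[] args) {
        // What a Windows client would otherwise bake into the ApplicationSubmissionContext.
        Map<String, String> env = new LinkedHashMap<>();
        env.put("CLASSPATH", "%PWD%;%HADOOP_CONF_DIR%;job.jar\\job.jar;job.jar\\classes\\;%PWD%\\*");
        String command = "%JAVA_HOME%/bin/java ... org.apache.hadoop.mapreduce.v2.app.MRAppMaster";

        env.replaceAll((key, value) -> toLinuxStyle(value));
        System.out.println(env.get("CLASSPATH"));   // $PWD:$HADOOP_CONF_DIR:job.jar/job.jar:...
        System.out.println(toLinuxStyle(command));  // $JAVA_HOME/bin/java ... MRAppMaster
    }
}

For reference, Hadoop 2.4 and later also expose a client-side switch, mapreduce.app-submission.cross-platform=true, which aims to make the submission OS-independent without patching YARNRunner; the approach taken in this post is the manual modification.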

4. After the modification, running again under the debugger shows the following value for appContext; note that the command field now uses $JAVA_HOME and Linux-style paths:

application_id { id: 7 cluster_timestamp: 1526537597068 } application_name: "wc-fat.jar" queue: "default" am_container_spec { localResources { key: "job.jar" value { resource { scheme: "hdfs" host: "192.168.92.150" port: 8020 file: "/tmp/hadoop-yarn/staging/hadoop/.staging/job_1526537597068_0007/job.jar" } size: 73165925 timestamp: 1526810933633 type: PATTERN visibility: APPLICATION pattern: "(?:classes/|lib/).*" } } localResources { key: "jobSubmitDir/job.splitmetainfo" value { resource { scheme: "hdfs" host: "192.168.92.150" port: 8020 file: "/tmp/hadoop-yarn/staging/hadoop/.staging/job_1526537597068_0007/job.splitmetainfo" } size: 15 timestamp: 1526810935006 type: FILE visibility: APPLICATION } } localResources { key: "jobSubmitDir/job.split" value { resource { scheme: "hdfs" host: "192.168.92.150" port: 8020 file: "/tmp/hadoop-yarn/staging/hadoop/.staging/job_1526537597068_0007/job.split" } size: 355 timestamp: 1526810934874 type: FILE visibility: APPLICATION } } localResources { key: "job.xml" value { resource { scheme: "hdfs" host: "192.168.92.150" port: 8020 file: "/tmp/hadoop-yarn/staging/hadoop/.staging/job_1526537597068_0007/job.xml" } size: 93151 timestamp: 1526810944696 type: FILE visibility: APPLICATION } } tokens: "HDTS\000\000\001\025MapReduceShuffleToken\b\342\214TYX\031\303\033" environment { key: "SHELL" value: "/bin/bash" } environment { key: "CLASSPATH" value: "$PWD:$HADOOP_CONF_DIR:$HADOOP_COMMON_HOME/share/hadoop/common/*:$HADOOP_COMMON_HOME/share/hadoop/common/lib/*:$HADOOP_HDFS_HOME/share/hadoop/hdfs/*:$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*:$HADOOP_YARN_HOME/share/hadoop/yarn/*:$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*:$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*:$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*:job.jar/job.jar:job.jar/classes/:job.jar/lib/*:$PWD/*" } environment { key: "LD_LIBRARY_PATH" value: "$PWD" } command: "$JAVA_HOME/bin/java -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=<LOG_DIR> -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA  -Xmx1024m org.apache.hadoop.mapreduce.v2.app.MRAppMaster 1><LOG_DIR>/stdout 2><LOG_DIR>/stderr " application_ACLs { accessType: APPACCESS_VIEW_APP acl: " " } application_ACLs { accessType: APPACCESS_MODIFY_APP acl: " " } } cancel_tokens_when_complete: true maxAppAttempts: 2 resource { memory: 1536 virtual_cores: 1 } applicationType: "MAPREDUCE"
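As a programmatic alternative to the debugger view (hypothetical; not shown in the original post), the launch command can also be read back from the context through the public ApplicationSubmissionContext API:

import org.apache.hadoop.yarn.api.records.ApplicationSubmissionContext;

// Hypothetical spot-check: print the AM launch command held by the submission context.
final class AmCommandCheck {
    static void print(ApplicationSubmissionContext appContext) {
        // Expected after the fix:
        //   $JAVA_HOME/bin/java ... org.apache.hadoop.mapreduce.v2.app.MRAppMaster 1><LOG_DIR>/stdout 2><LOG_DIR>/stderr
        appContext.getAMContainerSpec().getCommands().forEach(System.out::println);
    }
}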

[Additional note] Corroboration that the job files were staged on HDFS:

(Two screenshots in the original post show the staged files in the job's HDFS staging directory.)
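For the same check without screenshots, here is a hedged sketch using the HDFS FileSystem API; the NameNode address and staging path are the ones visible in the appContext dump above, and the class name is made up.

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Hypothetical verification: list the job's staging directory on HDFS to confirm that
// job.jar, job.split, job.splitmetainfo and job.xml were uploaded by the client.
public class ListStagingDir {
    public static void main(String[] args) throws Exception {
        // NameNode address and staging path as they appear in the appContext dump above.
        FileSystem fs = FileSystem.get(URI.create("hdfs://192.168.92.150:8020"), new Configuration());
        Path staging = new Path("/tmp/hadoop-yarn/staging/hadoop/.staging/job_1526537597068_0007");
        for (FileStatus status : fs.listStatus(staging)) {
            System.out.println(status.getPath().getName() + "\t" + status.getLen() + " bytes");
        }
    }
}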
