
Hadoop Learning Notes

* vi /etc/hosts
127.0.0.1   localhost localhost4 localhost4.localdomain4
10.204.211.241 JZYH-COLLECTOR-LTEMR3-OSS
* vi /etc/sysconfig/network
# Do not use underscores in the hostname
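A quick sanity check that the name setup took effect (a minimal sketch; the expected values come from the entries above):
$ hostname                               # should print JZYH-COLLECTOR-LTEMR3-OSS
$ ping -c 1 JZYH-COLLECTOR-LTEMR3-OSS    # should resolve to 10.204.211.241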
** Single Node Cluster
* etc/hadoop/core-site.xml:
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://JZYH-COLLECTOR-LTEMR3-OSS:8020</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/opt/disk/backup/soft/hadoop-2.5.0/data/tmp</value>
    </property>
    <property>
        <name>fs.trash.interval</name>
        <value>10080</value>
    </property>
</configuration>
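fs.trash.interval is in minutes, so 10080 keeps deleted files recoverable for 7 days. A sketch of the behavior (file path illustrative):
$ bin/hdfs dfs -rm /user/john/input/wc.input
# the file is moved under /user/<username>/.Trash/Current rather than destroyed, and can be restored:
$ bin/hdfs dfs -mv /user/john/.Trash/Current/user/john/input/wc.input /user/john/input/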
* etc/hadoop/hdfs-site.xml:
<configuration>
    <!-- Remove this property in a fully distributed deployment -->
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>JZYH-COLLECTOR-LTEMR3-OSS:50090</value>
    </property>
</configuration>
* Configure passwordless SSH login
$ ssh localhost
$ ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
$ cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
To copy the public key to a remote machine: $ ssh-copy-id <remote-host>
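To verify the key-based login works (a minimal sketch):
$ ssh localhost     # should open a shell without a password prompt
$ exit
# if a password is still requested, permissions are a common culprit:
$ chmod 700 ~/.ssh && chmod 600 ~/.ssh/authorized_keys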
* Format the filesystem:
  $ bin/hdfs namenode -format
* Start NameNode daemon and DataNode daemon:
  $ sbin/start-dfs.sh
  or
  $ sbin/hadoop-daemon.sh start namenode
  $ sbin/hadoop-daemon.sh start secondarynamenode
  $ sbin/hadoop-daemon.sh start datanode
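To confirm the daemons are running, jps (shipped with the JDK) should list all three; the PIDs below are illustrative:
$ jps
3021 NameNode
3115 DataNode
3298 SecondaryNameNode
3356 Jps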
* Browse the web interface for the NameNode; by default it is available at:
	NameNode - http://localhost:50070/
* Make the HDFS directories required to execute MapReduce jobs:
  $ bin/hdfs dfs -mkdir /user
  $ bin/hdfs dfs -mkdir /user/<username>
* Copy the input files into the distributed filesystem:
  $ bin/hdfs dfs -put etc/hadoop input
* Run some of the examples provided:
  $ bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.5.2.jar grep input output 'dfs[a-z.]+'
* Copy the output files from the distributed filesystem to the local filesystem and examine them:
  $ bin/hdfs dfs -get output output
* View the output files on the distributed filesystem:
  $ bin/hdfs dfs -cat output/*
* When you're done, stop the daemons with:
  $ sbin/stop-dfs.sh

** YARN on a Single Node
* etc/hadoop/mapred-site.xml:
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.address</name>
        <value>JZYH-COLLECTOR-LTEMR3-OSS:10020</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>JZYH-COLLECTOR-LTEMR3-OSS:19888</value>
    </property>
</configuration>
* etc/hadoop/yarn-site.xml:
<configuration>
    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>JZYH-COLLECTOR-LTEMR3-OSS</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
</configuration>
* etc/hadoop/slaves:
JZYH-COLLECTOR-LTEMR3-OSS
* Start ResourceManager daemon and NodeManager daemon:
$ sbin/start-yarn.sh
or
$ sbin/yarn-daemon.sh start resourcemanager
$ sbin/yarn-daemon.sh start nodemanager
* Browse the web interface for the ResourceManager; by default it is available at:
ResourceManager - http://localhost:8088/
* When you're done, stop the daemons with:
  $ sbin/stop-yarn.sh

* Start the MapReduce history daemon:
$ sbin/mr-jobhistory-daemon.sh start historyserver
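Once started, the history server listens on the addresses configured in mapred-site.xml above; a quick check (a sketch, assuming the /etc/hosts mapping from the top of these notes):
$ jps | grep JobHistoryServer
$ curl -s http://JZYH-COLLECTOR-LTEMR3-OSS:19888/ | head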
* Log aggregation (after an application finishes, its logs are uploaded to HDFS)
  etc/hadoop/yarn-site.xml:
<configuration>
    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>JZYH-COLLECTOR-LTEMR3-OSS</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.log-aggregation-enable</name>
        <value>true</value>
    </property>
    <property>
        <name>yarn.log-aggregation.retain-seconds</name>
        <value>3600</value>
    </property>
</configuration>
Restart all the services:
$ sbin/yarn-daemon.sh stop resourcemanager
$ sbin/yarn-daemon.sh stop nodemanager
$ sbin/mr-jobhistory-daemon.sh stop historyserver

$ sbin/yarn-daemon.sh start resourcemanager
$ sbin/yarn-daemon.sh start nodemanager
$ sbin/mr-jobhistory-daemon.sh start historyserver
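With aggregation enabled, the logs of a finished application can be fetched straight from HDFS with the yarn logs command (a sketch; substitute an application ID from your own cluster):
$ bin/yarn logs -applicationId application_1481432311518_0003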


** Read the local filesystem: bin/hdfs dfs -Dfs.defaultFS=file:/// -ls /
** Cluster status report: bin/hdfs dfsadmin -report
** Safe mode (safemode)
Enter:
bin/hdfs dfsadmin -safemode enter
Check status:
bin/hdfs dfsadmin -safemode get
Leave:
bin/hdfs dfsadmin -safemode leave
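While in safe mode the namespace is read-only: reads succeed but writes are rejected. A minimal sketch of the effect (paths are illustrative):
$ bin/hdfs dfsadmin -safemode enter
$ bin/hdfs dfs -put etc/hadoop/core-site.xml /tmp   # fails: Name node is in safe mode
$ bin/hdfs dfs -ls /                                # still works
$ bin/hdfs dfsadmin -safemode leave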

Run wordcount:
hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.5.1.jar wordcount input output
The job fails with:
16/12/11 13:08:01 INFO client.RMProxy: Connecting to ResourceManager at mac/192.168.1.119:8032
16/12/11 13:08:03 INFO input.FileInputFormat: Total input paths to process : 1
16/12/11 13:08:03 INFO mapreduce.JobSubmitter: number of splits:1
16/12/11 13:08:03 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1481432311518_0003
16/12/11 13:08:03 INFO impl.YarnClientImpl: Submitted application application_1481432311518_0003
16/12/11 13:08:03 INFO mapreduce.Job: The url to track the job: http://mac:8088/proxy/application_1481432311518_0003/
16/12/11 13:08:03 INFO mapreduce.Job: Running job: job_1481432311518_0003
16/12/11 13:08:12 INFO mapreduce.Job: Job job_1481432311518_0003 running in uber mode : false
16/12/11 13:08:12 INFO mapreduce.Job:  map 0% reduce 0%
16/12/11 13:08:12 INFO mapreduce.Job: Job job_1481432311518_0003 failed with state FAILED due to: Application application_1481432311518_0003 failed 2 times due to AM Container for appattempt_1481432311518_0003_000002 exited with  exitCode: 127 due to: Exception from container-launch: ExitCodeException exitCode=127: 
ExitCodeException exitCode=127: 
	at org.apache.hadoop.util.Shell.runCommand(Shell.java:538)
	at org.apache.hadoop.util.Shell.run(Shell.java:455)
	at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:702)
	at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
	at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
	at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:745)
Inspect the logs:
hdfs dfs -ls -R /
drwxrwx---   - john supergroup          0 2016-12-11 12:13 /tmp
drwxrwx---   - john supergroup          0 2016-12-10 18:02 /tmp/hadoop-yarn
drwxrwx---   - john supergroup          0 2016-12-11 12:12 /tmp/hadoop-yarn/staging
drwxrwx---   - john supergroup          0 2016-12-10 18:02 /tmp/hadoop-yarn/staging/history
drwxrwx---   - john supergroup          0 2016-12-10 18:02 /tmp/hadoop-yarn/staging/history/done
drwxrwxrwt   - john supergroup          0 2016-12-10 18:02 /tmp/hadoop-yarn/staging/history/done_intermediate
drwx------   - john supergroup          0 2016-12-11 12:12 /tmp/hadoop-yarn/staging/john
drwx------   - john supergroup          0 2016-12-11 13:08 /tmp/hadoop-yarn/staging/john/.staging
drwx------   - john supergroup          0 2016-12-11 12:12 /tmp/hadoop-yarn/staging/john/.staging/job_1481426363207_0001
-rw-r--r--  10 john supergroup     270368 2016-12-11 12:12 /tmp/hadoop-yarn/staging/john/.staging/job_1481426363207_0001/job.jar
-rw-r--r--  10 john supergroup        112 2016-12-11 12:12 /tmp/hadoop-yarn/staging/john/.staging/job_1481426363207_0001/job.split
-rw-r--r--   1 john supergroup         17 2016-12-11 12:12 /tmp/hadoop-yarn/staging/john/.staging/job_1481426363207_0001/job.splitmetainfo
-rw-r--r--   1 john supergroup      80320 2016-12-11 12:13 /tmp/hadoop-yarn/staging/john/.staging/job_1481426363207_0001/job.xml
drwx------   - john supergroup          0 2016-12-11 12:19 /tmp/hadoop-yarn/staging/john/.staging/job_1481426363207_0002
-rw-r--r--  10 john supergroup     270368 2016-12-11 12:19 /tmp/hadoop-yarn/staging/john/.staging/job_1481426363207_0002/job.jar
-rw-r--r--  10 john supergroup        112 2016-12-11 12:19 /tmp/hadoop-yarn/staging/john/.staging/job_1481426363207_0002/job.split
-rw-r--r--   1 john supergroup         17 2016-12-11 12:19 /tmp/hadoop-yarn/staging/john/.staging/job_1481426363207_0002/job.splitmetainfo
-rw-r--r--   1 john supergroup      80320 2016-12-11 12:19 /tmp/hadoop-yarn/staging/john/.staging/job_1481426363207_0002/job.xml
drwx------   - john supergroup          0 2016-12-11 12:51 /tmp/hadoop-yarn/staging/john/.staging/job_1481426363207_0003
-rw-r--r--  10 john supergroup     270368 2016-12-11 12:51 /tmp/hadoop-yarn/staging/john/.staging/job_1481426363207_0003/job.jar
-rw-r--r--  10 john supergroup        112 2016-12-11 12:51 /tmp/hadoop-yarn/staging/john/.staging/job_1481426363207_0003/job.split
-rw-r--r--   1 john supergroup         17 2016-12-11 12:51 /tmp/hadoop-yarn/staging/john/.staging/job_1481426363207_0003/job.splitmetainfo
-rw-r--r--   1 john supergroup      80320 2016-12-11 12:51 /tmp/hadoop-yarn/staging/john/.staging/job_1481426363207_0003/job.xml
drwx------   - john supergroup          0 2016-12-11 12:52 /tmp/hadoop-yarn/staging/john/.staging/job_1481426363207_0004
-rw-r--r--  10 john supergroup     270368 2016-12-11 12:52 /tmp/hadoop-yarn/staging/john/.staging/job_1481426363207_0004/job.jar
-rw-r--r--  10 john supergroup        112 2016-12-11 12:52 /tmp/hadoop-yarn/staging/john/.staging/job_1481426363207_0004/job.split
-rw-r--r--   1 john supergroup         17 2016-12-11 12:52 /tmp/hadoop-yarn/staging/john/.staging/job_1481426363207_0004/job.splitmetainfo
-rw-r--r--   1 john supergroup      80320 2016-12-11 12:52 /tmp/hadoop-yarn/staging/john/.staging/job_1481426363207_0004/job.xml
drwx------   - john supergroup          0 2016-12-11 12:53 /tmp/hadoop-yarn/staging/john/.staging/job_1481426363207_0005
-rw-r--r--  10 john supergroup     270368 2016-12-11 12:53 /tmp/hadoop-yarn/staging/john/.staging/job_1481426363207_0005/job.jar
-rw-r--r--  10 john supergroup        112 2016-12-11 12:53 /tmp/hadoop-yarn/staging/john/.staging/job_1481426363207_0005/job.split
-rw-r--r--   1 john supergroup         17 2016-12-11 12:53 /tmp/hadoop-yarn/staging/john/.staging/job_1481426363207_0005/job.splitmetainfo
-rw-r--r--   1 john supergroup      80320 2016-12-11 12:53 /tmp/hadoop-yarn/staging/john/.staging/job_1481426363207_0005/job.xml
drwx------   - john supergroup          0 2016-12-11 12:59 /tmp/hadoop-yarn/staging/john/.staging/job_1481432311518_0001
-rw-r--r--  10 john supergroup     270368 2016-12-11 12:59 /tmp/hadoop-yarn/staging/john/.staging/job_1481432311518_0001/job.jar
-rw-r--r--  10 john supergroup        112 2016-12-11 12:59 /tmp/hadoop-yarn/staging/john/.staging/job_1481432311518_0001/job.split
-rw-r--r--   1 john supergroup         17 2016-12-11 12:59 /tmp/hadoop-yarn/staging/john/.staging/job_1481432311518_0001/job.splitmetainfo
-rw-r--r--   1 john supergroup      80315 2016-12-11 12:59 /tmp/hadoop-yarn/staging/john/.staging/job_1481432311518_0001/job.xml
drwx------   - john supergroup          0 2016-12-11 13:07 /tmp/hadoop-yarn/staging/john/.staging/job_1481432311518_0002
-rw-r--r--  10 john supergroup     270368 2016-12-11 13:07 /tmp/hadoop-yarn/staging/john/.staging/job_1481432311518_0002/job.jar
-rw-r--r--  10 john supergroup        112 2016-12-11 13:07 /tmp/hadoop-yarn/staging/john/.staging/job_1481432311518_0002/job.split
-rw-r--r--   1 john supergroup         17 2016-12-11 13:07 /tmp/hadoop-yarn/staging/john/.staging/job_1481432311518_0002/job.splitmetainfo
-rw-r--r--   1 john supergroup      80320 2016-12-11 13:07 /tmp/hadoop-yarn/staging/john/.staging/job_1481432311518_0002/job.xml
drwx------   - john supergroup          0 2016-12-11 13:08 /tmp/hadoop-yarn/staging/john/.staging/job_1481432311518_0003
-rw-r--r--  10 john supergroup     270368 2016-12-11 13:08 /tmp/hadoop-yarn/staging/john/.staging/job_1481432311518_0003/job.jar
-rw-r--r--  10 john supergroup        112 2016-12-11 13:08 /tmp/hadoop-yarn/staging/john/.staging/job_1481432311518_0003/job.split
-rw-r--r--   1 john supergroup         17 2016-12-11 13:08 /tmp/hadoop-yarn/staging/john/.staging/job_1481432311518_0003/job.splitmetainfo
-rw-r--r--   1 john supergroup      80320 2016-12-11 13:08 /tmp/hadoop-yarn/staging/john/.staging/job_1481432311518_0003/job.xml
drwxrwxrwt   - john supergroup          0 2016-12-11 12:13 /tmp/logs
drwxrwx---   - john supergroup          0 2016-12-11 12:13 /tmp/logs/john
drwxrwx---   - john supergroup          0 2016-12-11 13:08 /tmp/logs/john/logs
drwxrwx---   - john supergroup          0 2016-12-11 12:13 /tmp/logs/john/logs/application_1481426363207_0001
-rw-r-----   1 john supergroup        509 2016-12-11 12:13 /tmp/logs/john/logs/application_1481426363207_0001/mac_51897
drwxrwx---   - john supergroup          0 2016-12-11 12:19 /tmp/logs/john/logs/application_1481426363207_0002
-rw-r-----   1 john supergroup        509 2016-12-11 12:19 /tmp/logs/john/logs/application_1481426363207_0002/mac_51897
drwxrwx---   - john supergroup          0 2016-12-11 12:51 /tmp/logs/john/logs/application_1481426363207_0003
-rw-r-----   1 john supergroup        509 2016-12-11 12:51 /tmp/logs/john/logs/application_1481426363207_0003/mac_51897
drwxrwx---   - john supergroup          0 2016-12-11 12:52 /tmp/logs/john/logs/application_1481426363207_0004
-rw-r-----   1 john supergroup        509 2016-12-11 12:52 /tmp/logs/john/logs/application_1481426363207_0004/mac_51897
drwxrwx---   - john supergroup          0 2016-12-11 12:53 /tmp/logs/john/logs/application_1481426363207_0005
-rw-r-----   1 john supergroup        509 2016-12-11 12:53 /tmp/logs/john/logs/application_1481426363207_0005/mac_51897
drwxrwx---   - john supergroup          0 2016-12-11 12:59 /tmp/logs/john/logs/application_1481432311518_0001
-rw-r-----   1 john supergroup        509 2016-12-11 12:59 /tmp/logs/john/logs/application_1481432311518_0001/mac_53087
drwxrwx---   - john supergroup          0 2016-12-11 13:07 /tmp/logs/john/logs/application_1481432311518_0002
-rw-r-----   1 john supergroup        509 2016-12-11 13:07 /tmp/logs/john/logs/application_1481432311518_0002/mac_53087
drwxrwx---   - john supergroup          0 2016-12-11 13:08 /tmp/logs/john/logs/application_1481432311518_0003
-rw-r-----   1 john supergroup        509 2016-12-11 13:08 /tmp/logs/john/logs/application_1481432311518_0003/mac_53087
drwx------   - john supergroup          0 2016-12-11 11:50 /user
drwx------   - john supergroup          0 2016-12-11 12:12 /user/john
drwx------   - john supergroup          0 2016-12-11 11:50 /user/john/.Trash
drwx------   - john supergroup          0 2016-12-11 12:12 /user/john/.Trash/Current
drwx------   - john supergroup          0 2016-12-11 11:50 /user/john/.Trash/Current/test
-rw-r--r--   1 john supergroup      15458 2016-12-10 18:43 /user/john/.Trash/Current/test/LICENSE.txt
drwxr-xr-x   - john supergroup          0 2016-12-11 12:08 /user/john/.Trash/Current/test1481429418166
-rw-r--r--   1 john supergroup        101 2016-12-10 19:15 /user/john/.Trash/Current/test1481429418166/NOTICE.txt
drwx------   - john supergroup          0 2016-12-11 12:12 /user/john/.Trash/Current/user
drwx------   - john supergroup          0 2016-12-11 12:12 /user/john/.Trash/Current/user/john
drwxr-xr-x   - john supergroup          0 2016-12-11 12:08 /user/john/.Trash/Current/user/john/output
drwxr-xr-x   - john supergroup          0 2016-12-11 12:08 /user/john/input
-rw-r--r--   1 john supergroup         11 2016-12-11 11:53 /user/john/input/wc.input
hdfs dfs -cat /tmp/logs/john/logs/application_1481432311518_0003/mac_53087
(binary TFile output; the recoverable stderr of both container attempts reads:)
container_1481432311518_0003_01_000001 stderr: /bin/bash: /bin/java: No such file or directory
container_1481432311518_0003_02_000001 stderr: /bin/bash: /bin/java: No such file or directory
Root cause found: the containers launch java via /bin/java, which does not exist. Create a symlink to fix it:
sudo ln -s /usr/bin/java /bin/java
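An alternative to the symlink (my suggestion, not from the original notes) is to point Hadoop at the JDK explicitly, so containers stop looking for /bin/java:
# in etc/hadoop/hadoop-env.sh -- the JDK path is an example, use your own location
export JAVA_HOME=/usr/lib/jvm/java-7-openjdk
Then restart the YARN daemons for the change to take effect.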

Run your own jar:
yarn jar jars/mr-wordcount.jar /user/john/input /user/john/wordcount
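To inspect the result (a sketch; MRv2 reducers write part-r-* files under the job's output directory):
$ bin/hdfs dfs -cat /user/john/wordcount/part-r-*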