
Setting Up a Hadoop Client: Accessing Hadoop from a Host Outside the Cluster


1. Add the host mapping (the same mapping used for the namenode):

Add the last line shown below:

[root@localhost ~]# su - root

[root@localhost ~]# vi /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.48.129    hadoop-master
[root@localhost ~]#
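To confirm the mapping took effect before continuing, a quick lookup (either command works on a stock CentOS install) looks like this:

# Both should resolve hadoop-master to 192.168.48.129
[root@localhost ~]# getent hosts hadoop-master
[root@localhost ~]# ping -c 1 hadoop-master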

2. Create the hadoop user

Create the hadoop group.

Create the user: useradd -d /usr/hadoop -g hadoop -m hadoop (creates the user hadoop with home directory /usr/hadoop and primary group hadoop).

Set the password with passwd hadoop (the password used here is hadoop).

[root@localhost ~]# groupadd hadoop
[root@localhost ~]# useradd -d /usr/hadoop -g hadoop -m hadoop
[root@localhost ~]# passwd hadoop
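As an optional double-check (not part of the original steps), id and /etc/passwd show the account's group and home directory; the exact UID/GID will vary by system:

# Verify the group is hadoop and the home directory is /usr/hadoop
[root@localhost ~]# id hadoop
[root@localhost ~]# grep '^hadoop:' /etc/passwd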

3. Set up the JDK environment

This article installs hadoop-2.7.5, which requires JDK 7 or later. If the JDK is already installed, skip this step.

For JDK installation, see http://www.linuxidc.com/Linux/2017-01/139874.htm, or the CentOS 7.2 JDK 1.7 guide at http://www.linuxidc.com/Linux/2016-11/137398.htm.

Alternatively, copy the JDK files directly from the master, which also helps keep versions consistent.

[root@localhost Java]# su - root
[root@localhost java]# mkdir -p /usr/java
[root@localhost java]# scp -r hadoop@hadoop-master:/usr/java/jdk1.7.0_79 /usr/java
[root@localhost java]# ll
total 12
drwxr-xr-x. 8 root root 4096 Feb 13 01:34 default
drwxr-xr-x. 8 root root 4096 Feb 13 01:34 jdk1.7.0_79
drwxr-xr-x. 8 root root 4096 Feb 13 01:34 latest

Set the Java and Hadoop environment variables:

Make sure /usr/java/jdk1.7.0_79 exists.

su - root

vi /etc/profile

Append the following after the existing lines at the end of the file:

unset i
unset -f pathmunge
JAVA_HOME=/usr/java/jdk1.7.0_79
CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
PATH=/usr/hadoop/hadoop-2.7.5/bin:$JAVA_HOME/bin:$PATH
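Depending on how the base /etc/profile ends, variables assigned this way may not be exported to child shells. If java or hadoop later complains that JAVA_HOME is not set, adding an explicit export line (a common safeguard, not part of the original write-up) is enough:

export JAVA_HOME CLASSPATH PATH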

Apply the changes (important):

[root@localhost ~]# source /etc/profile
[root@localhost ~]#

Confirm the JDK installation:

[hadoop@localhost ~]$ java -version
java version "1.7.0_79"
Java(TM) SE Runtime Environment (build 1.7.0_79-b15)
Java HotSpot(TM) 64-Bit Server VM (build 24.79-b02, mixed mode)
[hadoop@localhost ~]$

4. Set up the Hadoop environment

Copy the already-configured hadoop directory from the namenode to this host.

[root@localhost ~]# su - hadoop
Last login: Sat Feb 24 14:04:55 CST 2018 on pts/1
[hadoop@localhost ~]$ pwd
/usr/hadoop
[hadoop@localhost ~]$ scp -r hadoop@hadoop-master:/usr/hadoop/hadoop-2.7.5 .
The authenticity of host 'hadoop-master (192.168.48.129)' can't be established.
ECDSA key fingerprint is 1e:cd:d1:3d:b0:5b:62:45:a3:63:df:c7:7a:0f:b8:7c.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'hadoop-master,192.168.48.129' (ECDSA) to the list of known hosts.
hadoop@hadoop-master's password:

[hadoop@localhost ~]$ ll
total 0
drwxr-xr-x  2 hadoop hadoop   6 Feb 24 11:32 Desktop
drwxr-xr-x  2 hadoop hadoop   6 Feb 24 11:32 Documents
drwxr-xr-x  2 hadoop hadoop   6 Feb 24 11:32 Downloads
drwxr-xr-x 10 hadoop hadoop 150 Feb 24 14:30 hadoop-2.7.5
drwxr-xr-x  2 hadoop hadoop   6 Feb 24 11:32 Music
drwxr-xr-x  2 hadoop hadoop   6 Feb 24 11:32 Pictures
drwxr-xr-x  2 hadoop hadoop   6 Feb 24 11:32 Public
drwxr-xr-x  2 hadoop hadoop   6 Feb 24 11:32 Templates
drwxr-xr-x  2 hadoop hadoop   6 Feb 24 11:32 Videos
[hadoop@localhost ~]$
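Because the directory was copied from the namenode, its configuration already points at the cluster. As an optional sanity check, fs.defaultFS in core-site.xml should reference hadoop-master (the port shown below is illustrative; it depends on how the master was configured):

[hadoop@localhost ~]$ grep -A 1 'fs.defaultFS' hadoop-2.7.5/etc/hadoop/core-site.xml
        <name>fs.defaultFS</name>
        <value>hdfs://hadoop-master:9000</value>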

At this point the Hadoop client installation is complete and ready to use.

Running the hadoop command produces the following output:

[hadoop@localhost ~]$ hadoop
Usage: hadoop [--config confdir] [COMMAND | CLASSNAME]
  CLASSNAME            run the class named CLASSNAME
 or
  where COMMAND is one of:
  fs                   run a generic filesystem user client
  version              print the version
  jar <jar>            run a jar file
                       note: please use "yarn jar" to launch
                             YARN applications, not this command.
  checknative [-a|-h]  check native hadoop and compression libraries availability
  distcp <srcurl> <desturl> copy file or directories recursively
  archive -archiveName NAME -p <parent path> <src>* <dest> create a hadoop archive
  classpath            prints the class path needed to get the
                       Hadoop jar and the required libraries
  credential           interact with credential providers
  daemonlog            get/set the log level for each daemon
  trace                view and modify Hadoop tracing settings

Most commands print help when invoked w/o parameters.
[hadoop@localhost ~]$
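Printing the usage only proves the scripts are on the PATH. To confirm the client actually reaches the cluster, ask for the version and list the HDFS root, for example:

[hadoop@localhost ~]$ hadoop version     # should report 2.7.5, the same as the master
[hadoop@localhost ~]$ hdfs dfs -ls /     # contacts the namenode; fails if hadoop-master is unreachable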

5. Use Hadoop

Create a local file:

[hadoop@localhost ~]$ hdfs dfs -ls
Found 1 items
drwxr-xr-x   - hadoop supergroup          0 2018-02-22 23:41 output
[hadoop@localhost ~]$ vi my-local.txt
hello boy!
yehyeh

Upload the local file to the cluster:

[hadoop@localhost ~]$ hdfs dfs -mkdir upload
[hadoop@localhost ~]$ hdfs dfs -ls upload
[hadoop@localhost ~]$ hdfs dfs -ls
Found 2 items
drwxr-xr-x   - hadoop supergroup          0 2018-02-22 23:41 output
drwxr-xr-x   - hadoop supergroup          0 2018-02-23 22:38 upload
[hadoop@localhost ~]$ hdfs dfs -ls upload
[hadoop@localhost ~]$ hdfs dfs -put my-local.txt upload
[hadoop@localhost ~]$ hdfs dfs -ls upload
Found 1 items
-rw-r--r--   3 hadoop supergroup         18 2018-02-23 22:45 upload/my-local.txt
[hadoop@localhost ~]$ hdfs dfs -cat upload/my-local.txt
hello boy!
yehyeh
[hadoop@localhost ~]$
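Downloading and cleanup work the same way from the client. A few more hdfs dfs commands, using only the paths created above:

[hadoop@localhost ~]$ hdfs dfs -get upload/my-local.txt my-local-copy.txt   # copy the file back from HDFS
[hadoop@localhost ~]$ hdfs dfs -rm upload/my-local.txt                      # delete the file in HDFS
[hadoop@localhost ~]$ hdfs dfs -rm -r upload                                # delete the upload directory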

PS: Whether the local Java version must match the JAVA_HOME configured in etc/hadoop/hadoop-env.sh of the files copied from the master has not been verified; in this article the two are kept consistent.
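If you do want to check or align it, the setting lives inside the copied distribution. Assuming the master points it at the same JDK path used in step 3, a quick look would be:

[hadoop@localhost ~]$ grep 'export JAVA_HOME' hadoop-2.7.5/etc/hadoop/hadoop-env.sh
export JAVA_HOME=/usr/java/jdk1.7.0_79    # should match the locally installed JDK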
