Hostname | IP | Role |
---|---|---|
hadoop01 | 192.168.56.10 | NameNode ResourceManager |
hadoop02 | 192.168.56.11 | DataNode NodeManager |
hadoop03 | 192.168.56.12 | DataNode NodeManager |
hadoop04 | 192.168.56.13 | DataNode NodeManager |
Create a dedicated group and user for the cluster on every node:
groupadd hadoop;            # create the hadoop group
useradd -G hadoop cluster;  # create user "cluster" with hadoop as a supplementary group
passwd cluster;             # set its password
1.0 VM Setup (Network)
1.1 Change the hostname
vim /etc/sysconfig/network
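On CentOS 6 this file holds the persistent hostname; a minimal sketch for hadoop01 (use the matching name on each node):
NETWORKING=yes
HOSTNAME=hadoop01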
1.2 Change the IP address
vim /etc/sysconfig/network-scripts/ifcfg-eth0
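A static-IP sketch for hadoop01; IPADDR comes from the table above, and ONBOOT brings the interface up at boot (adjust DEVICE and NETMASK to your environment):
DEVICE=eth0
BOOTPROTO=static
ONBOOT=yes
IPADDR=192.168.56.10
NETMASK=255.255.255.0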
1.3 Map hostnames to IP addresses
vim /etc/hosts
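Append the cluster entries from the table above on every node:
192.168.56.10 hadoop01
192.168.56.11 hadoop02
192.168.56.12 hadoop03
192.168.56.13 hadoop04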
1.4 Disable the firewall
# Check firewall status
service iptables status
# Stop the firewall
service iptables stop
# Check whether the firewall is set to start at boot
chkconfig iptables --list
# Disable firewall autostart
chkconfig iptables off
1.5 Disable SELinux
# Switch to permissive mode immediately
setenforce 0
# Make the change persistent across reboots (note -i to edit the file in place)
sed -i "s@^SELINUX=enforcing@SELINUX=disabled@g" /etc/sysconfig/selinux
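You can check the runtime state, which should now print Permissive (and Disabled after the reboot below):
getenforce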
1.6 Reboot Linux
reboot
2.0 Set environment variables (append to /etc/profile for all users, or to the cluster user's ~/.bash_profile):
export JAVA_HOME=/usr/java/latest
export HADOOP_HOME=/home/cluster/hadoop
export CLASSPATH=.:$JAVA_HOME/lib
export PATH=$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$JAVA_HOME/bin:$PATH
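Reload the profile so the variables take effect in the current shell:
source /etc/profile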
Note: *the Hadoop 2.x configuration files live in $HADOOP_HOME/etc/hadoop*
3.1 Configure Hadoop
First: hadoop-env.sh
# set to the root of your Java installation
export JAVA_HOME=/usr/java/latest
# Assuming your installation directory is /home/cluster/hadoop
export HADOOP_PREFIX=/home/cluster/hadoop
Second: core-site.xml
<!-- URI of the default filesystem (the NameNode address) -->
<property>
<name>fs.defaultFS</name>
<value>hdfs://hadoop01:9000</value>
</property>
<!-- Base directory for Hadoop's runtime/temporary files -->
<property>
<name>hadoop.tmp.dir</name>
<value>/home/cluster/hadoop/tmp</value>
</property>
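Optionally pre-create that directory so it is owned by the cluster user (a precaution, not strictly required):
mkdir -p /home/cluster/hadoop/tmp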
Third: hdfs-site.xml
<!-- Number of HDFS block replicas (HDFS defaults to 3) -->
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
Fourth: mapred-site.xml
mv mapred-site.xml.template mapred-site.xml
vim mapred-site.xml
<!-- Run MapReduce on YARN -->
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
Fifth: yarn-site.xml
<!-- Hostname of the YARN ResourceManager -->
<property>
<name>yarn.resourcemanager.hostname</name>
<value>hadoop01</value>
</property>
<!-- Auxiliary service reducers use to fetch map output (shuffle) -->
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
Sixth: slaves
vi slaves
hadoop02
hadoop03
hadoop04
3.2 Distribute the installation package
This step can be done with scp over ssh; see the passwordless SSH setup at the end of this article.
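Assuming Hadoop was unpacked to /home/cluster/hadoop (matching HADOOP_HOME above), copy it to each worker:
scp -r /home/cluster/hadoop cluster@hadoop02:/home/cluster/
scp -r /home/cluster/hadoop cluster@hadoop03:/home/cluster/
scp -r /home/cluster/hadoop cluster@hadoop04:/home/cluster/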
3.3 Format the NameNode (this initializes HDFS metadata)
hdfs namenode -format  # or the older equivalent: hadoop namenode -format
3.4 Start Hadoop
Start HDFS first:
sbin/start-dfs.sh
Then start YARN:
sbin/start-yarn.sh
3.5 Verify the startup
Verify with the jps command:
27408 NameNode
28218 Jps
27643 SecondaryNameNode
28066 NodeManager
27803 ResourceManager
27512 DataNode
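Besides jps, you can confirm from hadoop01 that the DataNodes have registered with the NameNode:
hdfs dfsadmin -report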
3.6 Web UIs
http://192.168.56.10:50070 (HDFS NameNode web UI)
http://192.168.56.10:8088 (YARN ResourceManager web UI)
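As an end-to-end smoke test, you can run the bundled pi example (the jar path assumes Hadoop 2.6.0, the version referenced in the native-library fix below):
hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.0.jar pi 2 10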
>On every machine, run ssh-keygen -t rsa and press Enter through all the prompts;
>On the master (hadoop01), run: cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys;
>scp the file to the other machines:
scp ~/.ssh/authorized_keys cluster@hadoop02:~/.ssh/;
scp ~/.ssh/authorized_keys cluster@hadoop03:~/.ssh/;
scp ~/.ssh/authorized_keys cluster@hadoop04:~/.ssh/;
On hadoop02-04, run the following (it appends each node's public key to hadoop01's authorized_keys):
ssh-copy-id -i ~/.ssh/id_rsa.pub cluster@hadoop01;
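If the setup worked, these commands print each worker's hostname without prompting for a password:
ssh cluster@hadoop02 hostname
ssh cluster@hadoop03 hostname
ssh cluster@hadoop04 hostname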
Problem encountered:
15/05/01 09:56:48 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Fix: download the 64-bit native libraries from http://dl.bintray.com/sequenceiq/sequenceiq-bin/ and overwrite the bundled ones:
tar -xvf hadoop-native-64-2.6.0.tar -C /home/cluster/hadoop/lib/native