This environment uses three machines: master (10.0.0.2), slave1 (10.0.0.3), and slave2 (10.0.0.4).
useradd -d /data/hadoop -u 600 -g root hadoop
# set a password for the hadoop user
passwd hadoop
vi /etc/hostname
master
**Note: on slave1 put `slave1` here, and on slave2 put `slave2`.**
vi /etc/hosts
10.0.0.2 master
10.0.0.3 slave1
10.0.0.4 slave2
127.0.0.1 localhost localhost.localdomain
vi /etc/sysconfig/network
# Created by anaconda
NETWORKING=yes
NETWORKING_IPV6=no
HOSTNAME=master
Run `ssh-keygen -t rsa` and press Enter at every prompt to accept the defaults.
Then run `ssh-copy-id -i ~/.ssh/id_rsa.pub slave1` (and repeat for slave2 and master) so the hadoop user can log in to every node without a password.
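Passwordless SSH is needed from master to every node, including master itself, because the start scripts log in over SSH. A sketch of the full key distribution, assuming the hostnames from /etc/hosts above:

```shell
# Run as the hadoop user on master.
# Generate a key pair non-interactively (no passphrase).
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa

# Copy the public key to every node, master included.
for host in master slave1 slave2; do
  ssh-copy-id -i ~/.ssh/id_rsa.pub "$host"
done

# Verify: this should print "slave1" without asking for a password.
ssh slave1 hostname
```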
tar -xzvf jdk-8u91-linux-x64.tar.gz
vi ~/.bash_profile
The .bash_profile file should read as follows:
# .bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi
# User specific environment and startup programs
PATH=$PATH:$HOME/.local/bin:$HOME/bin:/data/hadoop/hadoop-2.6.4/share/
export PATH
JAVA_HOME=/data/hadoop/jdk1.8.0_91
CLASSPATH=.:$JAVA_HOME/lib
PATH=$JAVA_HOME/bin:$PATH
export JAVA_HOME CLASSPATH PATH
tar -xzvf hadoop-2.6.4.tar.gz
vi ~/.bashrc
# .bashrc
# Source global definitions
if [ -f /etc/bashrc ]; then
. /etc/bashrc
fi
# Uncomment the following line if you don't like systemctl's auto-paging feature:
# export SYSTEMD_PAGER=
# User specific aliases and functions
export HADOOP_PREFIX=$HOME/hadoop-2.6.4
export HADOOP_COMMON_HOME=$HADOOP_PREFIX
export HADOOP_HDFS_HOME=$HADOOP_PREFIX
export HADOOP_MAPRED_HOME=$HADOOP_PREFIX
export HADOOP_YARN_HOME=$HADOOP_PREFIX
export HADOOP_CONF_DIR=$HADOOP_PREFIX/etc/hadoop
export PATH=$PATH:$HADOOP_PREFIX/bin:$HADOOP_PREFIX/sbin
Run `source ~/.bashrc` to make the settings take effect.
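After sourcing, a quick sanity check that the JDK and Hadoop binaries actually landed on PATH (paths as configured above):

```shell
# Each command prints a path if found; a missing one means the
# profile edits above did not take effect in this shell.
command -v java   || echo "java not on PATH"
command -v hadoop || echo "hadoop not on PATH"
echo "JAVA_HOME=$JAVA_HOME"
echo "HADOOP_PREFIX=$HADOOP_PREFIX"
```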
cd /data/hadoop/hadoop-2.6.4/etc/hadoop
Edit hadoop-env.sh and make sure JAVA_HOME is set:
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Set Hadoop-specific environment variables here.
# The only required environment variable is JAVA\_HOME. All others are
# optional. When running a distributed configuration it is best to
# set JAVA\_HOME in this file, so that it is correctly defined on
# remote nodes.
# The java implementation to use.
export JAVA_HOME=/data/hadoop/jdk1.8.0_91
Edit the slaves file to list the DataNode hosts:
#localhost
slave1
slave2
Edit core-site.xml:
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. See accompanying LICENSE file.
-->
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://master:9000</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>/data/hadoop/tmp/hadoop-master</value>
<description>Abase for other temporary directories.</description>
</property>
</configuration>
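The hadoop.tmp.dir path is not created automatically in every case; a sketch that pre-creates it, together with the HDFS directories used later in hdfs-site.xml, on every node (hostnames assumed from /etc/hosts):

```shell
# Run from master as the hadoop user; relies on the passwordless
# SSH set up earlier.
for host in master slave1 slave2; do
  ssh "$host" mkdir -p /data/hadoop/tmp/hadoop-master \
                       /data/hadoop/tmp/hdfs/namenode \
                       /data/hadoop/tmp/hdfs/datanode \
                       /data/hadoop/tmp/hdfs/namesecondary
done
```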
Edit hdfs-site.xml:
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. See accompanying LICENSE file.
-->
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
<name>dfs.namenode.secondary.http-address</name>
<value>master:50090</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>file:///data/hadoop/tmp/hdfs/datanode</value>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>file:///data/hadoop/tmp/hdfs/namenode</value>
</property>
<property>
<name>dfs.namenode.checkpoint.dir</name>
<value>file:///data/hadoop/tmp/hdfs/namesecondary</value>
</property>
<property>
<name>dfs.replication</name>
<value>2</value>
</property>
</configuration>
**Note: dfs.replication is the block replication factor, not the node count; with two DataNodes in this setup, 2 is the highest value that can actually be satisfied.**
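A small consistency check, assuming the config layout above: extract dfs.replication from hdfs-site.xml and compare it against the number of hosts in the slaves file.

```shell
conf=/data/hadoop/hadoop-2.6.4/etc/hadoop

# dfs.replication: the line after the matching <name> holds the <value>.
repl=$(sed -n '/<name>dfs.replication<\/name>/{n;s/.*<value>\(.*\)<\/value>.*/\1/p;}' \
       "$conf/hdfs-site.xml")

# Count DataNodes: non-empty, non-comment lines of the slaves file.
nodes=$(grep -cv -e '^[[:space:]]*#' -e '^[[:space:]]*$' "$conf/slaves")

if [ "$repl" -gt "$nodes" ]; then
  echo "warning: dfs.replication ($repl) exceeds DataNode count ($nodes)"
else
  echo "ok: replication $repl on $nodes DataNodes"
fi
```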
Edit mapred-site.xml (in Hadoop 2.6 you may need to copy mapred-site.xml.template to mapred-site.xml first):
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. See accompanying LICENSE file.
-->
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
<property>
<name>mapreduce.jobtracker.staging.root.dir</name>
<value>/user</value>
</property>
<property>
<name>mapreduce.jobhistory.address</name>
<value>master:10020</value>
</property>
<property>
<name>mapreduce.jobhistory.webapp.address</name>
<value>master:19888</value>
</property>
</configuration>
Edit yarn-site.xml:
<?xml version="1.0"?>
<!--
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. See accompanying LICENSE file.
-->
<configuration>
<!-- Site specific YARN configuration properties -->
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.resourcemanager.hostname</name>
<value>master</value>
</property>
</configuration>
With the configuration in place, format the NameNode once on master (`hdfs namenode -format`) and start the cluster with `start-all.sh`.
Then check the cluster state with `hdfs dfsadmin -report`:
[hadoop@master hadoop]$ hdfs dfsadmin -report
Configured Capacity: 20867301376 (19.43 GB)
Present Capacity: 16041099264 (14.94 GB)
DFS Remaining: 15645147136 (14.57 GB)
DFS Used: 395952128 (377.61 MB)
DFS Used%: 2.47%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0
-------------------------------------------------
Live datanodes (2):
Name: 10.0.0.3:50010 (slave1)
Hostname: slave1
Decommission Status : Normal
Configured Capacity: 10433650688 (9.72 GB)
DFS Used: 197976064 (188.80 MB)
Non DFS Used: 2413105152 (2.25 GB)
DFS Remaining: 7822569472 (7.29 GB)
DFS Used%: 1.90%
DFS Remaining%: 74.97%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Thu Jun 23 11:03:48 CST 2016
Name: 10.0.0.4:50010 (slave2)
Hostname: slave2
Decommission Status : Normal
Configured Capacity: 10433650688 (9.72 GB)
DFS Used: 197976064 (188.80 MB)
Non DFS Used: 2413096960 (2.25 GB)
DFS Remaining: 7822577664 (7.29 GB)
DFS Used%: 1.90%
DFS Remaining%: 74.97%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Thu Jun 23 11:03:49 CST 2016
**Note: since this cluster has two DataNodes, seeing both of them listed as live here means everything is working.**
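Instead of eyeballing the report, the live-node count can be checked in a script; a sketch based on the `Live datanodes (N):` line of the report format above:

```shell
expected=2   # number of DataNodes in this cluster

# Pull N out of the "Live datanodes (N):" line of the report.
live=$(hdfs dfsadmin -report 2>/dev/null \
       | sed -n 's/^Live datanodes (\([0-9][0-9]*\)).*/\1/p')

if [ "${live:-0}" -eq "$expected" ]; then
  echo "cluster healthy: $live live datanodes"
else
  echo "expected $expected live datanodes, report shows: ${live:-0}" >&2
fi
```

Running `jps` on each node is another quick check: master should show NameNode, SecondaryNameNode, and ResourceManager, while each slave should show DataNode and NodeManager.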
Original-content notice: this article was published on the Tencent Cloud Developer Community with the author's authorization; reproduction without permission is prohibited.
For infringement concerns, contact cloudcommunity@tencent.com for removal.