Fun Build: A Hadoop Cluster

A Detailed Guide to Installing a Big Data Cluster (Hadoop 2.x)

Environment: RedHat 5.5

Hadoop: hadoop-2.6.0-cdh5.5.2

JDK: jdk-7u25-linux-i586

Three VMware virtual machines:

Master node: hdp-01, IP 192.168.6.2

Slave node: hdp-02, IP 192.168.6.4

Slave node: hdp-03, IP 192.168.6.6

Copy jdk-7u25-linux-i586.tar.gz to /tmp on each of the machines.

Copy hadoop-2.6.0-cdh5.5.2.tar.gz to /tmp on the master node.

Note: the firewall and SELinux must be turned off first:

1. /etc/init.d/iptables stop (or: service iptables stop)

2. chkconfig iptables off

3. setenforce 0

4. vim /etc/sysconfig/selinux and set SELINUX=disabled (leave SELINUXTYPE=targeted as shipped; "disabled" is not a valid SELINUXTYPE value).
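To confirm the firewall and SELinux really are off, a quick check with the standard RHEL tooling:

service iptables status                 # should report the firewall is not running
getenforce                              # Permissive now; Disabled after a reboot
grep ^SELINUX= /etc/sysconfig/selinux   # should print SELINUX=disabled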

Install the JDK (required on all three machines)

hdp-01

Master node

vim 1.sh

#!/bin/bash

cat <<EOF >/etc/sysconfig/network

NETWORKING=yes

NETWORKING_IPV6=no

HOSTNAME=hdp-01

EOF

hostname hdp-01

cat <<EOF > /etc/hosts

# Do not remove the following line, or various programs

# that require network functionality will fail.

127.0.0.1 localhost.localdomain localhost

::1 localhost6.localdomain6 localhost6

192.168.6.2 hdp-01

192.168.6.4 hdp-02

192.168.6.6 hdp-03

EOF

useradd hadoop

echo "123456" | passwd --stdin hadoop

cd /tmp

ls

tar -zxvf jdk-7u25-linux-i586.tar.gz -C /usr/

cd /usr/

ls

rpm -e --nodeps java-1.4.2-gcj-compat-1.4.2.0-40jpp.115

cd

cat <<EOF >> /etc/profile

export JAVA_HOME=/usr/jdk1.7.0_25

export JAVA_BIN=/usr/jdk1.7.0_25/bin

export PATH=\$PATH:\$JAVA_HOME/bin

export CLASSPATH=.:\$JAVA_HOME/lib/dt.jar:\$JAVA_HOME/lib/tools.jar

export JAVA_HOME JAVA_BIN PATH CLASSPATH

EOF

source /etc/profile

java -version

tail -n 6 /etc/profile

su - hadoop
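If you saved the commands above as 1.sh, run it once as root. One caveat: the script's source /etc/profile only affects the script's own shell, so re-source (or log in again) afterwards:

sh 1.sh                # run as root
source /etc/profile    # make the new JDK variables visible in this shell
java -version          # should report java version "1.7.0_25"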

hdp-02

Slave node

vim 1.sh

cat <<EOF >/etc/sysconfig/network

NETWORKING=yes

NETWORKING_IPV6=no

HOSTNAME=hdp-02

EOF

hostname hdp-02

cat <<EOF > /etc/hosts

# Do not remove the following line, or various programs

# that require network functionality will fail.

127.0.0.1 localhost.localdomain localhost

::1 localhost6.localdomain6 localhost6

192.168.6.2 hdp-01

192.168.6.4 hdp-02

192.168.6.6 hdp-03

EOF

useradd hadoop

echo "123456" | passwd --stdin hadoop

cd /tmp

ls

tar -zxvf jdk-7u25-linux-i586.tar.gz -C /usr/

cd /usr/

cd

cat <<EOF >> /etc/profile

export JAVA_HOME=/usr/jdk1.7.0_25

export JAVA_BIN=/usr/jdk1.7.0_25/bin

export PATH=\$PATH:\$JAVA_HOME/bin

export CLASSPATH=.:\$JAVA_HOME/lib/dt.jar:\$JAVA_HOME/lib/tools.jar

export JAVA_HOME JAVA_BIN PATH CLASSPATH

EOF

source /etc/profile

/usr/jdk1.7.0_25/bin/java -version

tail -n 6 /etc/profile

su - hadoop

hdp-03

Slave node

vim 1.sh

cat <<EOF >/etc/sysconfig/network

NETWORKING=yes

NETWORKING_IPV6=no

HOSTNAME=hdp-03

EOF

hostname hdp-03

cat <<EOF > /etc/hosts

# Do not remove the following line, or various programs

# that require network functionality will fail.

127.0.0.1 localhost.localdomain localhost

::1 localhost6.localdomain6 localhost6

192.168.6.2 hdp-01

192.168.6.4 hdp-02

192.168.6.6 hdp-03

EOF

useradd hadoop

echo "123456" | passwd --stdin hadoop

cd /tmp

ls

tar -zxvf jdk-7u25-linux-i586.tar.gz -C /usr/

cd /usr/

ls

cd

cat <<EOF >> /etc/profile

export JAVA_HOME=/usr/jdk1.7.0_25

export JAVA_BIN=/usr/jdk1.7.0_25/bin

export PATH=\$PATH:\$JAVA_HOME/bin

export CLASSPATH=.:\$JAVA_HOME/lib/dt.jar:\$JAVA_HOME/lib/tools.jar

export JAVA_HOME JAVA_BIN PATH CLASSPATH

EOF

source /etc/profile

/usr/jdk1.7.0_25/bin/java -version

echo $?

tail -n 6 /etc/profile

su - hadoop
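With /etc/hosts written on all three machines, verify that the nodes can resolve and reach one another before continuing (run from any node):

ping -c 1 hdp-01
ping -c 1 hdp-02
ping -c 1 hdp-03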

Once the installs finish, each node should look like the screenshots in the original post (master node shown): the hadoop user created with password 123456, an automatic su to the hadoop user, the Java version and environment variables, and the hadoop username.

Install CDH on the master node

Switch to the root user:

cd /tmp

tar -zxvf hadoop-2.6.0-cdh5.5.2.tar.gz -C /usr/local

cd /usr/local

mv hadoop-2.6.0-cdh5.5.2 hadoop-2.6.0

cd

vim .bash_profile

export JAVA_HOME=/usr/jdk1.7.0_25

export JAVA_BIN=/usr/jdk1.7.0_25/bin

export PATH=$PATH:$JAVA_HOME/bin

export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar

export JAVA_HOME JAVA_BIN PATH CLASSPATH

HADOOP_HOME=/usr/local/hadoop-2.6.0

HADOOP_BIN=/usr/local/hadoop-2.6.0/bin

HADOOP_SBIN=/usr/local/hadoop-2.6.0/sbin

HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop

PATH=$HADOOP_HOME/bin:$PATH

export HADOOP_HOME HADOOP_CONF_DIR PATH

source .bash_profile
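A quick sanity check that the variables are in place (hadoop version ships in the distribution's bin directory, which is now on PATH):

echo $JAVA_HOME     # /usr/jdk1.7.0_25
echo $HADOOP_HOME   # /usr/local/hadoop-2.6.0
hadoop version      # should print Hadoop 2.6.0-cdh5.5.2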

cd /usr/local/hadoop-2.6.0/etc/hadoop

vim core-site.xml

The properties below, and those in the XML files that follow, all go inside each file's existing <configuration> ... </configuration> element:

<property>

<name>fs.defaultFS</name>

<value>hdfs://hdp-01:9000</value> <!--主机名-->

<description>NameNode URI.</description>

</property>

<property>

<name>io.file.buffer.size</name>

<value>131072</value>

<description>Size of read/write buffer used in SequenceFiles.</description>

</property>

cd /usr/local/hadoop-2.6.0/

mkdir -p dfs/name

mkdir -p dfs/data

mkdir -p dfs/namesecondary

cd etc/hadoop

vim hdfs-site.xml

<property>

<name>dfs.namenode.secondary.http-address</name>

<value>hdp-01:50090</value>

<description>The secondary namenode http server address and port.</description>

</property>

<property>

<name>dfs.namenode.name.dir</name>

<value>file:///usr/local/hadoop-2.6.0/dfs/name</value>

<description>Path on the local filesystem where the NameNode stores the namespace and transaction logs persistently.</description>

</property>

<property>

<name>dfs.datanode.data.dir</name>

<value>file:///usr/local/hadoop-2.6.0/dfs/data</value>

<description>Comma separated list of paths on the local filesystem of a DataNode where it should store its blocks.</description>

</property>

<property>

<name>dfs.namenode.checkpoint.dir</name>

<value>file:///usr/local/hadoop-2.6.0/dfs/namesecondary</value>

<description>Determines where on the local filesystem the DFS secondary name node should store the temporary images to merge. If this is a comma-delimited list of directories then the image is replicated in all of the directories for redundancy.</description>

</property>

<property>

<name>dfs.replication</name>

<value>2</value>

</property>

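dfs.replication is set to 2 to match the two DataNodes (hdp-02 and hdp-03); with the default of 3, the NameNode would permanently report under-replicated blocks. Once the cluster is up, you can confirm both DataNodes registered:

bin/hdfs dfsadmin -report    # run from /usr/local/hadoop-2.6.0; should show 2 live datanodes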

cp mapred-site.xml.template mapred-site.xml

vim mapred-site.xml

<property>

<name>mapreduce.framework.name</name>

<value>yarn</value>

<description>The runtime framework for executing MapReduce jobs. Can be one of local, classic or yarn.</description>

</property>

<property>

<name>mapreduce.jobhistory.address</name>

<value>hdp-01:10020</value>

<description>MapReduce JobHistory Server IPC host:port</description>

</property>

<property>

<name>mapreduce.jobhistory.webapp.address</name>

<value>hdp-01:19888</value>

<description>MapReduce JobHistory Server Web UI host:port</description>

</property>
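Note that start-all.sh does not launch the JobHistory server configured above; if you want the history UI on port 19888, start it separately with the stock Hadoop 2.x daemon script:

sbin/mr-jobhistory-daemon.sh start historyserver    # run from /usr/local/hadoop-2.6.0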

vim yarn-site.xml

<property>

<name>yarn.resourcemanager.hostname</name>

<value>hdp-01</value>

<description>The hostname of the RM.</description>

</property>

<property>

<name>yarn.nodemanager.aux-services</name>

<value>mapreduce_shuffle</value>

<description>Shuffle service that needs to be set for Map Reduce applications.</description>

</property>

vim hadoop-env.sh

export JAVA_HOME=/usr/jdk1.7.0_25

vim slaves

hdp-02

hdp-03

cd /usr/local/

scp -r ./hadoop-2.6.0/ hdp-02:/usr/local/

scp -r ./hadoop-2.6.0/ hdp-03:/usr/local/

Wait for the copy above to complete.

On the slave nodes, change ownership as root (do this on both; the directory copied over is /usr/local/hadoop-2.6.0, since it was renamed on the master before the scp):

chown -R hadoop.hadoop /usr/local/hadoop-2.6.0

su - hadoop

Set up passwordless SSH login (so the three machines can switch between one another).

All three machines need this (as the hadoop user):

ssh-keygen -t rsa

ssh-copy-id -i /home/hadoop/.ssh/id_rsa.pub hdp-01

ssh-copy-id -i /home/hadoop/.ssh/id_rsa.pub hdp-02

ssh-copy-id -i /home/hadoop/.ssh/id_rsa.pub hdp-03
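Each machine should now reach the others without a password prompt; a quick check from each node (printing the remote hostname without a prompt means the key exchange worked):

ssh hdp-01 hostname
ssh hdp-02 hostname
ssh hdp-03 hostname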

Format the cluster (only once, and only on the master node):

cd /usr/local/hadoop-2.6.0/

bin/hadoop namenode -format

sbin/start-all.sh

jps
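If startup succeeded, jps on the master should list NameNode, SecondaryNameNode, and ResourceManager, while jps on hdp-02 and hdp-03 should list DataNode and NodeManager. (start-all.sh still works in Hadoop 2.x, though it merely delegates to start-dfs.sh and start-yarn.sh.) A minimal HDFS smoke test, where the /test path is just an example:

bin/hadoop fs -mkdir /test    # create a directory in HDFS
bin/hadoop fs -ls /           # should list /test
bin/hdfs dfsadmin -report     # should show two live DataNodes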
