
Setting Up a Highly Available (HA) Hadoop 2.7.6 Cluster on CentOS

By 星哥玩云 · Published 2022-07-13 15:25:41

1. Hadoop's HA Mechanism

  Preface: the HA mechanism was formally introduced in Hadoop 2.0; earlier versions had no HA mechanism.

1.1. How HA Works

(1) Overview of how a Hadoop HA cluster operates

  HA means high availability (uninterrupted 24/7 service).

  The key to achieving high availability is eliminating single points of failure.

  Strictly speaking, Hadoop HA breaks down into per-component HA mechanisms: HDFS HA and YARN HA.

(2) HDFS HA in detail

  The single point of failure is eliminated by running two NameNodes.

  Key points for coordinating the two NameNodes:

     A. The metadata management scheme must change:

        Each NameNode keeps its own copy of the metadata in memory.

        There can be only one edits log, and only the NameNode in Active state may write to it.

        Both NameNodes can read the edits.

        The shared edits are kept in shared storage (qjournal and NFS are the two mainstream implementations).

      B. A state-management module is needed:

        A zkfailover (ZKFC) process runs permanently on each NameNode host.

        Each zkfailover monitors the NameNode on its own host and marks its state in ZooKeeper (zk).

        When a state transition is needed, zkfailover performs the switch.

        During the switch, split-brain ("brain split") must be prevented (see the sketch below).
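Once the cluster is running (sections 6–8 below), the state marker that ZKFC keeps in ZooKeeper can be inspected directly. This is a minimal sketch, assuming the nameservice id bi configured later in this article and the ActiveStandbyElectorLock znode name used by Hadoop's ActiveStandbyElector:

/app/zookeeper/bin/zkCli.sh -server mini03:2181
# Inside the zkCli shell: list the HA znode for nameservice "bi"
ls /hadoop-ha/bi
# The lock holder identifies the currently Active NameNode
get /hadoop-ha/bi/ActiveStandbyElectorLock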

1.2. HDFS HA Diagram

(The original diagram is not reproduced here.)

2. Host Planning

Hostname | Public IP  | Private IP   | OS         | Notes        | Installed software     | Running processes
mini01   | 10.0.0.111 | 172.16.1.111 | CentOS 7.4 | ssh port: 22 | jdk, hadoop            | NameNode, DFSZKFailoverController (zkfc)
mini02   | 10.0.0.112 | 172.16.1.112 | CentOS 7.4 | ssh port: 22 | jdk, hadoop            | NameNode, DFSZKFailoverController (zkfc)
mini03   | 10.0.0.113 | 172.16.1.113 | CentOS 7.4 | ssh port: 22 | jdk, hadoop, zookeeper | ResourceManager
mini04   | 10.0.0.114 | 172.16.1.114 | CentOS 7.4 | ssh port: 22 | jdk, hadoop, zookeeper | ResourceManager
mini05   | 10.0.0.115 | 172.16.1.115 | CentOS 7.4 | ssh port: 22 | jdk, hadoop, zookeeper | DataNode, NodeManager, JournalNode, QuorumPeerMain
mini06   | 10.0.0.116 | 172.16.1.116 | CentOS 7.4 | ssh port: 22 | jdk, hadoop, zookeeper | DataNode, NodeManager, JournalNode, QuorumPeerMain
mini07   | 10.0.0.117 | 172.16.1.117 | CentOS 7.4 | ssh port: 22 | jdk, hadoop, zookeeper | DataNode, NodeManager, JournalNode, QuorumPeerMain

Note: in HA mode there is no SecondaryNameNode, because the NameNode in STANDBY state performs the checkpoints.

Add hosts entries on Linux so that every host can ping every other host:

[root@mini01 ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

10.0.0.111    mini01
10.0.0.112    mini02
10.0.0.113    mini03
10.0.0.114    mini04
10.0.0.115    mini05
10.0.0.116    mini06
10.0.0.117    mini07

Modify the Windows hosts file as well:

# File location: C:\Windows\System32\drivers\etc
# Append the following to the hosts file
…………………………………………
10.0.0.111    mini01
10.0.0.112    mini02
10.0.0.113    mini03
10.0.0.114    mini04
10.0.0.115    mini05
10.0.0.116    mini06
10.0.0.117    mini07
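To confirm that name resolution works from every node, a quick check can be run on each host; a minimal sketch using the host list from the plan above:

# Ping every host once; report failures
for h in mini01 mini02 mini03 mini04 mini05 mini06 mini07; do
  ping -c 1 -W 2 "$h" >/dev/null && echo "$h OK" || echo "$h FAILED"
done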

3. Add a User Account

# Use a dedicated user rather than operating directly as root
# Add the user, specify its home directory, and set its password
useradd -d /app yun && echo '123456' | /usr/bin/passwd --stdin yun
# Grant sudo privileges
echo "yun  ALL=(ALL)      NOPASSWD: ALL" >> /etc/sudoers
# Allow other regular users to enter the directory and view information
chmod 755 /app/

4. Passwordless SSH Login for the yun User

Requirement: per the plan, each of mini01, mini02, mini03, mini04, mini05, mini06, and mini07 must be able to log in to all seven hosts (mini01 through mini07) without a password.

# Either IPs or hostnames may be used; since the plan is to interact via hostnames, distribute keys by hostname.
# Keys distributed by hostname still allow remote login by either hostname or IP.

The detailed procedure is not repeated here; see https://www.linuxidc.com/Linux/2018-08/153353.htm
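For reference, a minimal sketch of one way to set up the passwordless logins (assuming ssh-copy-id is available; run as the yun user on each of the seven hosts):

# Generate a key pair with an empty passphrase (skip if ~/.ssh/id_rsa already exists)
ssh-keygen -t rsa -f ~/.ssh/id_rsa -N ''
# Push the public key to every host, including this one
for h in mini01 mini02 mini03 mini04 mini05 mini06 mini07; do
  ssh-copy-id "yun@$h"
done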

5. JDK (Java 8)

The detailed procedure is not repeated here; see https://www.linuxidc.com/Linux/2018-08/153353.htm

6. ZooKeeper Deployment

Per the plan, ZooKeeper is deployed on mini03, mini04, mini05, mini06, and mini07.

6.1. Configuration

[yun@mini03 conf]$ pwd
/app/zookeeper/conf
[yun@mini03 conf]$ vim zoo.cfg
# Limit on connections between a single client and a single server, per IP; the default is 60, and 0 means no limit.
maxClientCnxns=1500
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
# dataDir=/tmp/zookeeper
dataDir=/app/bigdata/zookeeper/data
# the port at which the clients will connect
clientPort=2181
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1

# Ports for leader/follower communication and for leader election
server.3=mini03:2888:3888
server.4=mini04:2888:3888
server.5=mini05:2888:3888
server.6=mini06:2888:3888
server.7=mini07:2888:3888

6.2. Add the myid File

[yun@mini03 data]$ pwd
/app/bigdata/zookeeper/data
[yun@mini03 data]$ vim myid
# The myid is 3 on mini03, 4 on mini04, 5 on mini05, 6 on mini06, and 7 on mini07
3
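If the passwordless SSH from section 4 is in place, the five myid files can also be written in one pass; a sketch assuming the dataDir above exists on every ZooKeeper host:

# mini03..mini07 get myid 3..7, matching the server.N ids in zoo.cfg
for i in 3 4 5 6 7; do
  ssh "yun@mini0$i" "echo $i > /app/bigdata/zookeeper/data/myid"
done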

6.3. Start the ZooKeeper Service

# Start the zk service on mini03, mini04, mini05, mini06, and mini07 in turn
[yun@mini03 ~]$ cd zookeeper/bin/
[yun@mini03 bin]$ pwd
/app/zookeeper/bin
[yun@mini03 bin]$ ll
total 56
-rwxr-xr-x 1 yun yun   238 Oct  1  2012 README.txt
-rwxr-xr-x 1 yun yun  1909 Oct  1  2012 zkCleanup.sh
-rwxr-xr-x 1 yun yun  1049 Oct  1  2012 zkCli.cmd
-rwxr-xr-x 1 yun yun  1512 Oct  1  2012 zkCli.sh
-rwxr-xr-x 1 yun yun  1333 Oct  1  2012 zkEnv.cmd
-rwxr-xr-x 1 yun yun  2599 Oct  1  2012 zkEnv.sh
-rwxr-xr-x 1 yun yun  1084 Oct  1  2012 zkServer.cmd
-rwxr-xr-x 1 yun yun  5467 Oct  1  2012 zkServer.sh
-rw-rw-r-- 1 yun yun 17522 Jun 28 21:01 zookeeper.out
[yun@mini03 bin]$ ./zkServer.sh start
JMX enabled by default
Using config: /app/zookeeper/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED

6.4. Check the Running Status

# Status on mini03, mini04, mini06, and mini07:
[yun@mini03 bin]$ ./zkServer.sh status
JMX enabled by default
Using config: /app/zookeeper/bin/../conf/zoo.cfg
Mode: follower

# Status on mini05:
[yun@mini05 bin]$ ./zkServer.sh status
JMX enabled by default
Using config: /app/zookeeper/bin/../conf/zoo.cfg
Mode: leader

PS: four followers and one leader.
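To verify all five nodes from a single shell, a loop such as the following can be used (a sketch, assuming the same installation path on every host):

for h in mini03 mini04 mini05 mini06 mini07; do
  echo "== $h ==" && ssh "yun@$h" "/app/zookeeper/bin/zkServer.sh status 2>/dev/null | grep Mode"
done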

7. Hadoop Deployment and Configuration

  Note: the Hadoop installation and its configuration are identical on every machine.

7.1. Deployment

[yun@mini01 software]$ pwd
/app/software
[yun@mini01 software]$ ll
total 194152
-rw-r--r-- 1 yun yun 198811365 Jun  8 16:36 CentOS-7.4_hadoop-2.7.6.tar.gz
[yun@mini01 software]$ tar xf CentOS-7.4_hadoop-2.7.6.tar.gz
[yun@mini01 software]$ mv hadoop-2.7.6/ /app/
[yun@mini01 software]$ cd
[yun@mini01 ~]$ ln -s hadoop-2.7.6/ hadoop
[yun@mini01 ~]$ ll
total 4
lrwxrwxrwx  1 yun yun  13 Jun  9 16:21 hadoop -> hadoop-2.7.6/
drwxr-xr-x  9 yun yun  149 Jun  8 16:36 hadoop-2.7.6
lrwxrwxrwx  1 yun yun  12 May 26 11:18 jdk -> jdk1.8.0_112
drwxr-xr-x  8 yun yun  255 Sep 23  2016 jdk1.8.0_112

7.2. Environment Variables

[root@mini01 profile.d]# pwd
/etc/profile.d
[root@mini01 profile.d]# vim hadoop.sh
export HADOOP_HOME="/app/hadoop"
export PATH=$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH
[root@mini01 profile.d]# source /etc/profile  # take effect

7.3. core-site.xml

[yun@mini01 hadoop]$ pwd
/app/hadoop/etc/hadoop
[yun@mini01 hadoop]$ vim core-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- …………………… -->

<!-- Put site-specific property overrides in this file. -->

<configuration>
  <!-- Set the HDFS nameservice to bi -->
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://bi/</value>
  </property>

  <!-- Specify the Hadoop temp directory -->
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/app/hadoop/tmp</value>
  </property>

  <!-- Specify the ZooKeeper addresses -->
  <property>
    <name>ha.zookeeper.quorum</name>
    <value>mini03:2181,mini04:2181,mini05:2181,mini06:2181,mini07:2181</value>
  </property>

</configuration>

7.4. hdfs-site.xml

[yun@mini01 hadoop]$ pwd
/app/hadoop/etc/hadoop
[yun@mini01 hadoop]$ vim hdfs-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--   …………………… -->

<!-- Put site-specific property overrides in this file. -->

<configuration>
  <!-- Set the HDFS nameservice to bi; this must match core-site.xml -->
  <property>
    <name>dfs.nameservices</name>
    <value>bi</value>
  </property>

  <!-- The bi nameservice has two NameNodes: nn1 and nn2 -->
  <property>
    <name>dfs.ha.namenodes.bi</name>
    <value>nn1,nn2</value>
  </property>

  <!-- RPC address of nn1 -->
  <property>
    <name>dfs.namenode.rpc-address.bi.nn1</name>
    <value>mini01:9000</value>
  </property>
  <!-- HTTP address of nn1 -->
  <property>
    <name>dfs.namenode.http-address.bi.nn1</name>
    <value>mini01:50070</value>
  </property>

  <!-- RPC address of nn2 -->
  <property>
    <name>dfs.namenode.rpc-address.bi.nn2</name>
    <value>mini02:9000</value>
  </property>
  <!-- HTTP address of nn2 -->
  <property>
    <name>dfs.namenode.http-address.bi.nn2</name>
    <value>mini02:50070</value>
  </property>

  <!-- Where the NameNode edits metadata is stored on the JournalNodes -->
  <property>
    <name>dfs.namenode.shared.edits.dir</name>
    <value>qjournal://mini05:8485;mini06:8485;mini07:8485/bi</value>
  </property>

  <!-- Where the JournalNodes store data on local disk -->
  <property>
    <name>dfs.journalnode.edits.dir</name>
    <value>/app/hadoop/journaldata</value>
  </property>

  <!-- Enable automatic failover when a NameNode fails -->
  <property>
    <name>dfs.ha.automatic-failover.enabled</name>
    <value>true</value>
  </property>

  <!-- Failover implementation used by clients to locate the active NameNode -->
  <property>
    <name>dfs.client.failover.proxy.provider.bi</name>
    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
  </property>

  <!-- Fencing methods; multiple methods are separated by newlines, one per line -->
  <!-- shell(/bin/true) means a script can be executed, e.g. shell(/app/yunwei/hadoop_fence.sh) -->
  <property>
    <name>dfs.ha.fencing.methods</name>
    <value>
      sshfence
      shell(/bin/true)
    </value>
  </property>

  <!-- The sshfence fencing method requires passwordless SSH -->
  <property>
    <name>dfs.ha.fencing.ssh.private-key-files</name>
    <value>/app/.ssh/id_rsa</value>
  </property>

  <!-- Timeout for the sshfence method, in milliseconds -->
  <property>
    <name>dfs.ha.fencing.ssh.connect-timeout</name>
    <value>30000</value>
  </property>

</configuration>

7.5. mapred-site.xml

[yun@mini01 hadoop]$ pwd
/app/hadoop/etc/hadoop
[yun@mini01 hadoop]$ cp -a mapred-site.xml.template mapred-site.xml
[yun@mini01 hadoop]$ vim mapred-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--   …………………… -->

<!-- Put site-specific property overrides in this file. -->

<configuration>
  <!-- Run MapReduce on YARN -->
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>

</configuration>

7.6. yarn-site.xml

[yun@mini01 hadoop]$ pwd
/app/hadoop/etc/hadoop
[yun@mini01 hadoop]$ vim yarn-site.xml
<?xml version="1.0"?>
<!--   …………………… -->
<configuration>

<!-- Site specific YARN configuration properties -->
  <!-- Enable ResourceManager high availability -->
  <property>
    <name>yarn.resourcemanager.ha.enabled</name>
    <value>true</value>
  </property>

  <!-- The RM cluster id -->
  <property>
    <name>yarn.resourcemanager.cluster-id</name>
    <value>yrc</value>
  </property>

  <!-- Logical names of the RMs -->
  <property>
    <name>yarn.resourcemanager.ha.rm-ids</name>
    <value>rm1,rm2</value>
  </property>

  <!-- Hostname of each RM -->
  <property>
    <name>yarn.resourcemanager.hostname.rm1</name>
    <value>mini03</value>
  </property>
  <property>
    <name>yarn.resourcemanager.hostname.rm2</name>
    <value>mini04</value>
  </property>

  <!-- ZooKeeper cluster addresses -->
  <property>
    <name>yarn.resourcemanager.zk-address</name>
    <value>mini03:2181,mini04:2181,mini05:2181,mini06:2181,mini07:2181</value>
  </property>

  <!-- How reducers fetch data -->
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>

</configuration>

7.7. Edit slaves

      The slaves file specifies the worker nodes. Because HDFS is started on mini01 and YARN on mini03, the slaves file on mini01 specifies where the DataNodes run, while the slaves file on mini03 specifies where the NodeManagers run.

[yun@mini01 hadoop]$ pwd
/app/hadoop/etc/hadoop
[yun@mini01 hadoop]$ vim slaves
mini05
mini06
mini07

PS: after making these changes, copy the configuration to the other Hadoop machines, e.g. with the loop below.
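A sketch of one way to distribute the configuration (assumes the passwordless SSH from section 4 and identical paths on every machine):

# Copy the configuration directory from mini01 to the other six hosts
for h in mini02 mini03 mini04 mini05 mini06 mini07; do
  scp -r /app/hadoop/etc/hadoop/* "yun@$h:/app/hadoop/etc/hadoop/"
done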

8. Start the Services

  Note: on the first startup, follow the steps below strictly!

8.1. Start the ZooKeeper Cluster

Already started above (section 6.3), so this is not repeated here.

8.2. Start the JournalNodes

# Per the plan, start on mini05, mini06, and mini07
# The JournalNodes must be started before the first format; after that this step is unnecessary
[yun@mini05 ~]$ hadoop-daemon.sh start journalnode  # environment variables are configured, so no need to cd into the corresponding directory
starting journalnode, logging to /app/hadoop-2.7.6/logs/hadoop-yun-journalnode-mini05.out
[yun@mini05 ~]$ jps
1281 QuorumPeerMain
1817 Jps
1759 JournalNode

8.3. Format HDFS

# Run on mini01
[yun@mini01 ~]$ hdfs namenode -format
18/06/30 18:29:12 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:  host = mini01/10.0.0.111
STARTUP_MSG:  args = [-format]
STARTUP_MSG:  version = 2.7.6
STARTUP_MSG:  classpath = ………………
STARTUP_MSG:  build = Unknown -r Unknown; compiled by 'root' on 2018-06-08T08:30Z
STARTUP_MSG:  java = 1.8.0_112
************************************************************/
18/06/30 18:29:12 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
18/06/30 18:29:12 INFO namenode.NameNode: createNameNode [-format]
Formatting using clusterid: CID-2385f26e-72e6-4935-aa09-47848b5ba4be
18/06/30 18:29:13 INFO namenode.FSNamesystem: No KeyProvider found.
18/06/30 18:29:13 INFO namenode.FSNamesystem: fsLock is fair: true
18/06/30 18:29:13 INFO namenode.FSNamesystem: Detailed lock hold time metrics enabled: false
18/06/30 18:29:13 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
18/06/30 18:29:13 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
18/06/30 18:29:13 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
18/06/30 18:29:13 INFO blockmanagement.BlockManager: The block deletion will start around 2018 Jun 30 18:29:13
18/06/30 18:29:13 INFO util.GSet: Computing capacity for map BlocksMap
18/06/30 18:29:13 INFO util.GSet: VM type       = 64-bit
18/06/30 18:29:13 INFO util.GSet: 2.0% max memory 966.7 MB = 19.3 MB
18/06/30 18:29:13 INFO util.GSet: capacity      = 2^21 = 2097152 entries
18/06/30 18:29:13 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false
18/06/30 18:29:13 INFO blockmanagement.BlockManager: defaultReplication         = 3
18/06/30 18:29:13 INFO blockmanagement.BlockManager: maxReplication             = 512
18/06/30 18:29:13 INFO blockmanagement.BlockManager: minReplication             = 1
18/06/30 18:29:13 INFO blockmanagement.BlockManager: maxReplicationStreams      = 2
18/06/30 18:29:13 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000
18/06/30 18:29:13 INFO blockmanagement.BlockManager: encryptDataTransfer        = false
18/06/30 18:29:13 INFO blockmanagement.BlockManager: maxNumBlocksToLog          = 1000
18/06/30 18:29:13 INFO namenode.FSNamesystem: fsOwner             = yun (auth:SIMPLE)
18/06/30 18:29:13 INFO namenode.FSNamesystem: supergroup          = supergroup
18/06/30 18:29:13 INFO namenode.FSNamesystem: isPermissionEnabled = true
18/06/30 18:29:13 INFO namenode.FSNamesystem: Determined nameservice ID: bi
18/06/30 18:29:13 INFO namenode.FSNamesystem: HA Enabled: true
18/06/30 18:29:13 INFO namenode.FSNamesystem: Append Enabled: true
18/06/30 18:29:13 INFO util.GSet: Computing capacity for map INodeMap
18/06/30 18:29:13 INFO util.GSet: VM type       = 64-bit
18/06/30 18:29:13 INFO util.GSet: 1.0% max memory 966.7 MB = 9.7 MB
18/06/30 18:29:13 INFO util.GSet: capacity      = 2^20 = 1048576 entries
18/06/30 18:29:13 INFO namenode.FSDirectory: ACLs enabled? false
18/06/30 18:29:13 INFO namenode.FSDirectory: XAttrs enabled? true
18/06/30 18:29:13 INFO namenode.FSDirectory: Maximum size of an xattr: 16384
18/06/30 18:29:13 INFO namenode.NameNode: Caching file names occuring more than 10 times
18/06/30 18:29:13 INFO util.GSet: Computing capacity for map cachedBlocks
18/06/30 18:29:13 INFO util.GSet: VM type       = 64-bit
18/06/30 18:29:13 INFO util.GSet: 0.25% max memory 966.7 MB = 2.4 MB
18/06/30 18:29:13 INFO util.GSet: capacity      = 2^18 = 262144 entries
18/06/30 18:29:13 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
18/06/30 18:29:13 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
18/06/30 18:29:13 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension     = 30000
18/06/30 18:29:13 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets = 10
18/06/30 18:29:13 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users = 10
18/06/30 18:29:13 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
18/06/30 18:29:13 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
18/06/30 18:29:13 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
18/06/30 18:29:13 INFO util.GSet: Computing capacity for map NameNodeRetryCache
18/06/30 18:29:13 INFO util.GSet: VM type       = 64-bit
18/06/30 18:29:13 INFO util.GSet: 0.029999999329447746% max memory 966.7 MB = 297.0 KB
18/06/30 18:29:13 INFO util.GSet: capacity      = 2^15 = 32768 entries
18/06/30 18:29:14 INFO namenode.FSImage: Allocated new BlockPoolId: BP-1178102935-10.0.0.111-1530354554626
18/06/30 18:29:14 INFO common.Storage: Storage directory /app/hadoop/tmp/dfs/name has been successfully formatted.
18/06/30 18:29:14 INFO namenode.FSImageFormatProtobuf: Saving image file /app/hadoop/tmp/dfs/name/current/fsimage.ckpt_0000000000000000000 using no compression
18/06/30 18:29:14 INFO namenode.FSImageFormatProtobuf: Image file /app/hadoop/tmp/dfs/name/current/fsimage.ckpt_0000000000000000000 of size 320 bytes saved in 0 seconds.
18/06/30 18:29:15 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
18/06/30 18:29:15 INFO util.ExitUtil: Exiting with status 0
18/06/30 18:29:15 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at mini01/10.0.0.111
************************************************************/

Copy to mini02

# Formatting creates files under the hadoop.tmp.dir configured in core-site.xml; here that is /app/hadoop/tmp.
# Copy /app/hadoop/tmp to /app/hadoop/ on mini02.
# Method 1:
[yun@mini01 hadoop]$ pwd
/app/hadoop
[yun@mini01 hadoop]$ scp -r tmp/ yun@mini02:/app/hadoop
VERSION                          100%  202  189.4KB/s  00:00
seen_txid                        100%    2    1.0KB/s  00:00
fsimage_0000000000000000000.md5  100%   62   39.7KB/s  00:00
fsimage_0000000000000000000      100%  320  156.1KB/s  00:00

# Method 2: alternatively, run hdfs namenode -bootstrapStandby on mini02 (recommended)
# Note: this requires the freshly formatted NameNode on mini01 to already be running; see the sketch below
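A sketch of method 2, using commands that appear elsewhere in this article (hdfs namenode -bootstrapStandby pulls the formatted metadata from the running active NameNode):

# On mini01: start the freshly formatted NameNode
hadoop-daemon.sh start namenode
# On mini02: initialize the standby NameNode from mini01
hdfs namenode -bootstrapStandby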

8.4. Format ZKFC

# Run once on mini01
[yun@mini01 ~]$ hdfs zkfc -formatZK
18/06/30 18:54:30 INFO tools.DFSZKFailoverController: Failover controller configured for NameNode NameNode at mini01/10.0.0.111:9000
18/06/30 18:54:30 INFO zookeeper.ZooKeeper: Client environment:zookeeper.version=3.4.6-1569965, built on 02/20/2014 09:09 GMT
18/06/30 18:54:30 INFO zookeeper.ZooKeeper: Client environment:host.name=mini01
18/06/30 18:54:30 INFO zookeeper.ZooKeeper: Client environment:java.version=1.8.0_112
18/06/30 18:54:30 INFO zookeeper.ZooKeeper: Client environment:java.vendor=Oracle Corporation
18/06/30 18:54:30 INFO zookeeper.ZooKeeper: Client environment:java.home=/app/jdk1.8.0_112/jre
18/06/30 18:54:30 INFO zookeeper.ZooKeeper: Client environment:java.class.path=……………………
18/06/30 18:54:30 INFO zookeeper.ZooKeeper: Client environment:java.library.path=/app/hadoop-2.7.6/lib/native
18/06/30 18:54:30 INFO zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/tmp
18/06/30 18:54:30 INFO zookeeper.ZooKeeper: Client environment:java.compiler=<NA>
18/06/30 18:54:30 INFO zookeeper.ZooKeeper: Client environment:os.name=Linux
18/06/30 18:54:30 INFO zookeeper.ZooKeeper: Client environment:os.arch=amd64
18/06/30 18:54:30 INFO zookeeper.ZooKeeper: Client environment:os.version=3.10.0-693.el7.x86_64
18/06/30 18:54:30 INFO zookeeper.ZooKeeper: Client environment:user.name=yun
18/06/30 18:54:30 INFO zookeeper.ZooKeeper: Client environment:user.home=/app
18/06/30 18:54:30 INFO zookeeper.ZooKeeper: Client environment:user.dir=/app/hadoop-2.7.6
18/06/30 18:54:30 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=mini03:2181,mini04:2181,mini05:2181,mini06:2181,mini07:2181 sessionTimeout=5000 watcher=org.apache.hadoop.ha.ActiveStandbyElector$WatcherWithClientRef@7f3b84b8
18/06/30 18:54:30 INFO zookeeper.ClientCnxn: Opening socket connection to server mini04/10.0.0.114:2181. Will not attempt to authenticate using SASL (unknown error)
18/06/30 18:54:30 INFO zookeeper.ClientCnxn: Socket connection established to mini04/10.0.0.114:2181, initiating session
18/06/30 18:54:30 INFO zookeeper.ClientCnxn: Session establishment complete on server mini04/10.0.0.114:2181, sessionid = 0x4644fff9cb80000, negotiated timeout = 5000
18/06/30 18:54:30 INFO ha.ActiveStandbyElector: Session connected.
18/06/30 18:54:30 INFO ha.ActiveStandbyElector: Successfully created /hadoop-ha/bi in ZK.
18/06/30 18:54:30 INFO zookeeper.ZooKeeper: Session: 0x4644fff9cb80000 closed
18/06/30 18:54:30 INFO zookeeper.ClientCnxn: EventThread shut down

8.5. Start HDFS

# Run on mini01
[yun@mini01 ~]$ start-dfs.sh
Starting namenodes on [mini01 mini02]
mini01: starting namenode, logging to /app/hadoop-2.7.6/logs/hadoop-yun-namenode-mini01.out
mini02: starting namenode, logging to /app/hadoop-2.7.6/logs/hadoop-yun-namenode-mini02.out
mini07: starting datanode, logging to /app/hadoop-2.7.6/logs/hadoop-yun-datanode-mini07.out
mini06: starting datanode, logging to /app/hadoop-2.7.6/logs/hadoop-yun-datanode-mini06.out
mini05: starting datanode, logging to /app/hadoop-2.7.6/logs/hadoop-yun-datanode-mini05.out
Starting journal nodes [mini05 mini06 mini07]
mini07: journalnode running as process 1691. Stop it first.
mini06: journalnode running as process 1665. Stop it first.
mini05: journalnode running as process 1759. Stop it first.
Starting ZK Failover Controllers on NN hosts [mini01 mini02]
mini01: starting zkfc, logging to /app/hadoop-2.7.6/logs/hadoop-yun-zkfc-mini01.out
mini02: starting zkfc, logging to /app/hadoop-2.7.6/logs/hadoop-yun-zkfc-mini02.out

8.6. Start YARN

##### Note #####: run start-yarn.sh on mini03. The NameNode and ResourceManager are separated for performance reasons:
# both consume a lot of resources, so they run on different machines and must be started separately.
[yun@mini03 ~]$ start-yarn.sh
starting yarn daemons
starting resourcemanager, logging to /app/hadoop-2.7.6/logs/yarn-yun-resourcemanager-mini03.out
mini06: starting nodemanager, logging to /app/hadoop-2.7.6/logs/yarn-yun-nodemanager-mini06.out
mini07: starting nodemanager, logging to /app/hadoop-2.7.6/logs/yarn-yun-nodemanager-mini07.out
mini05: starting nodemanager, logging to /app/hadoop-2.7.6/logs/yarn-yun-nodemanager-mini05.out

################################
# Start the resourcemanager on mini04
[yun@mini04 ~]$ yarn-daemon.sh start resourcemanager  # start-yarn.sh also works
starting resourcemanager, logging to /app/hadoop-2.7.6/logs/yarn-yun-resourcemanager-mini04.out

8.7. Startup Notes

# On the first startup, strictly follow the steps above (the first startup involves formatting).

# For the second and subsequent startups, the steps are simply: start ZooKeeper, then HDFS, then YARN. A sketch follows.
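A sketch of such a subsequent startup, run from mini01 and following the roles in the host plan (assumes the Hadoop PATH from section 7.2 is also available in non-interactive SSH shells):

# 1. ZooKeeper on mini03..mini07
for h in mini03 mini04 mini05 mini06 mini07; do
  ssh "yun@$h" "/app/zookeeper/bin/zkServer.sh start"
done
# 2. HDFS (NameNodes, DataNodes, JournalNodes, zkfc) from mini01
start-dfs.sh
# 3. YARN: ResourceManager and NodeManagers from mini03, standby RM on mini04
ssh yun@mini03 "start-yarn.sh"
ssh yun@mini04 "yarn-daemon.sh start resourcemanager"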

9. Web Access

9.1. HDFS Access

9.1.1. Normal Access

http://mini01:50070   

http://mini02:50070

9.1.2. Automatic Failover When mini01 Goes Down

# On mini01
[yun@mini01 ~]$ jps
3584 DFSZKFailoverController
3283 NameNode
5831 Jps
[yun@mini01 ~]$ kill 3283
[yun@mini01 ~]$ jps
3584 DFSZKFailoverController
5893 Jps

With its NameNode killed, the web UI on mini01 is no longer reachable; visit http://mini02:50070 instead.

Hadoop has switched over to mini02; even if mini01 comes back up afterwards, its NameNode can only be in standby state. The sketch below restarts it and verifies this.
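To bring the killed NameNode back and confirm the claim, the single-process commands from section 10.2 can be used on mini01 (nn1 and nn2 are the ids defined in hdfs-site.xml):

# Restart the NameNode that was killed
hadoop-daemon.sh start namenode
# Verify the HA states
hdfs haadmin -getServiceState nn1   # expected: standby
hdfs haadmin -getServiceState nn2   # expected: active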

9.2. YARN Access

http://mini03:8088

http://mini04:8088 redirects directly to http://mini03:8088/

# Access from Linux
[yun@mini01 ~]$ curl mini04:8088
This is standby RM. The redirect url is: http://mini03:8088/

This completes the HA setup.

10. Cluster Operations Testing

10.1. haadmin and State Transition Management

[yun@mini01 ~]$ hdfs haadmin
Usage: haadmin
    [-transitionToActive [--forceactive] <serviceId>]
    [-transitionToStandby <serviceId>]
    [-failover [--forcefence] [--forceactive] <serviceId> <serviceId>]
    [-getServiceState <serviceId>]
    [-checkHealth <serviceId>]
    [-help <command>]

Generic options supported are
-conf <configuration file>     specify an application configuration file
-D <property=value>            use value for given property
-fs <local|namenode:port>      specify a namenode
-jt <local|resourcemanager:port>    specify a ResourceManager
-files <comma separated list of files>    specify comma separated files to be copied to the map reduce cluster
-libjars <comma separated list of jars>    specify comma separated jar files to include in the classpath.
-archives <comma separated list of archives>    specify comma separated archives to be unarchived on the compute machines.

The general command line syntax is
bin/hadoop command [genericOptions] [commandOptions]

As the usage above shows, example state-management commands:

# Check a NameNode's working state
hdfs haadmin -getServiceState nn1

# Transition a standby NameNode to active
hdfs haadmin -transitionToActive nn1

# Transition an active NameNode to standby
hdfs haadmin -transitionToStandby nn2
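The usage output above also lists a -failover subcommand, which asks the framework to coordinate the switch between two NameNodes rather than transitioning each one manually; a hedged example based on that usage line:

# Initiate a failover so that nn2 becomes active
hdfs haadmin -failover nn1 nn2
# Confirm the result
hdfs haadmin -getServiceState nn2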

10.2. Commands for Testing Cluster State

Some commands for testing the cluster's working state:

hdfs dfsadmin -report                 # view status information for each HDFS node
hdfs haadmin -getServiceState nn1     # get the HA state of a NameNode (likewise for nn2)
hadoop-daemon.sh start namenode       # start a single NameNode process
hadoop-daemon.sh start zkfc           # start a single zkfc process

10.3. Dynamic DataNode Addition and Removal

Dynamically adding or removing a DataNode is simple; the steps are:

  a)  Prepare a server and set up its environment.

  b)  Deploy the Hadoop package and sync the cluster configuration.

  c)  Bring it online; the new DataNode joins the cluster automatically.

  d)  If a large batch of DataNodes is added at once, the cluster load should also be rebalanced (see the next section).

10.4. Block Balancing

Command to start the balancer:

start-balancer.sh -threshold 8

While it runs, a Balancer process appears.

The command above sets the threshold to 8%: when the balancer runs, it first computes the average disk utilization across all DataNodes, and any DataNode whose utilization exceeds that average by more than the threshold has blocks moved to DataNodes with lower utilization. This is especially useful when new nodes join. The threshold ranges from 1 to 100; if not set explicitly, the default is 10. A usage sketch follows.
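A hedged usage sketch; the utilization figures come from the dfsadmin report mentioned in section 10.2, and Balancer is the process name noted above:

# Inspect per-DataNode utilization before and after balancing
hdfs dfsadmin -report | grep 'DFS Used%'
# Run with the default 10% threshold
start-balancer.sh
# While it runs, jps shows the Balancer process
jps | grep Balancer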
