
Setting up a Hadoop cluster with Docker

千羽 · 2023-08-28


Prerequisites: the JDK and ZooKeeper environments need to be installed beforehand; see the earlier articles for both.

Setup: three machines, with 10.8.46.35 and 10.8.46.197 as master nodes and 10.8.46.190 as the slave node. The three ZooKeeper instances built in the previous step must stay healthy (a quick check is sketched below). Unless stated otherwise, run the commands that follow on all three machines, and each server needs a JDK environment.
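
A quick way to confirm ZooKeeper is healthy on each node before starting (a minimal check, assuming zkServer.sh is on the PATH from the earlier ZooKeeper article):

zkServer.sh status   # one node should report Mode: leader, the others Mode: follower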

First, set each machine's hostname with hostnamectl set-hostname (hadoop-01, hadoop-02, hadoop-03):

10.8.46.35

hostnamectl set-hostname hadoop-01 

[root@zookeeper-01-test opt]# hostname -f
hadoop-01

10.8.46.197

hostnamectl set-hostname hadoop-02

10.8.46.190

hostnamectl set-hostname hadoop-03

1. Configure the JDK environment (all three servers)

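# pull the JDK out of the existing test-jdk-01 container onto the host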
docker cp test-jdk-01:/usr/local/jdk1.8 /usr/local/
vim /etc/profile
# add these two lines at the end
export JAVA_HOME=/usr/local/jdk1.8
export PATH=$JAVA_HOME/bin:$PATH

Apply the changes:

source /etc/profile
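
To confirm the JDK is now visible (a quick check):

java -version       # should report version 1.8
echo $JAVA_HOME     # should print /usr/local/jdk1.8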

Copy the downloaded Hadoop tarball (hadoop-3.2.2.tar.gz) into the /opt directory on each host with a tool such as Xftp.

(1) Unpack the tarball

mkdir -p /usr/local/hadoop
cd /opt
tar -zxvf hadoop-3.2.2.tar.gz -C /usr/local/hadoop

(2) Edit the global environment variables

vim /etc/profile
# append the following environment variables
export HADOOP_HOME=/usr/local/hadoop/hadoop-3.2.2
export PATH=$PATH:$HADOOP_HOME/bin
export PATH=$PATH:$HADOOP_HOME/sbin
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export YARN_HOME=$HADOOP_HOME
export HADOOP_LOG_DIR=$HADOOP_HOME/logs   # define this first so YARN_LOG_DIR below is not empty
export YARN_LOG_DIR=$HADOOP_LOG_DIR
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib"
export JAVA_LIBRARY_PATH=$HADOOP_HOME/lib/native:$JAVA_LIBRARY_PATH

Apply the changes immediately:

source /etc/profile
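
If the variables are set correctly, the hadoop binary now resolves:

hadoop version   # should report Hadoop 3.2.2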

Passwordless SSH setup:

ssh-keygen -t rsa -P "" -f /root/.ssh/id_rsa
ssh-copy-id -i /root/.ssh/id_rsa.pub root@10.8.46.35
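
The key has to be copied to every node (including the local one) so the start scripts and the sshfence mechanism configured below can log in anywhere; a sketch:

for host in 10.8.46.35 10.8.46.197 10.8.46.190; do
  ssh-copy-id -i /root/.ssh/id_rsa.pub root@$host
done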

With passwordless SSH in place, files can then be pushed between nodes, for example:

[root@hadoop-01 sbin]# scp ./start-yarn.sh 10.8.46.190:/usr/local/hadoop/hadoop-3.2.2/sbin
start-yarn.sh                                                                                                                                                                         100% 3427     4.7MB/s   00:00    
[root@hadoop-01 sbin]# scp ./start-yarn.sh 10.8.46.197:/usr/local/hadoop/hadoop-3.2.2/sbin
start-yarn.sh

(3) Configure Hadoop

cd /usr/local/hadoop/hadoop-3.2.2/etc/hadoop

1. Configure hadoop-env.sh
vim hadoop-env.sh

Change export JAVA_HOME=${JAVA_HOME} to the installed JDK path:

export JAVA_HOME=/usr/local/jdk1.8

2. Configure core-site.xml

cd /usr/local/hadoop/hadoop-3.2.2/etc/hadoop
vim core-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>
<!-- Logical name of the HDFS nameservice -->
<property>
   <name>fs.defaultFS</name>
   <value>hdfs://hadoop-local</value>
</property>
<property>
   <name>io.file.buffer.size</name>
   <value>131072</value>
</property>
<!-- Where Hadoop stores temporary files -->
<property>
   <name>hadoop.tmp.dir</name>
   <value>/home/hadoop/tmp</value>
   <description>Abase for other temporary directories.</description>
</property>
<!-- ZooKeeper ensemble used for HA -->
<property>
   <name>ha.zookeeper.quorum</name>
   <value>zookeeper-01-test:2181,zookeeper-02-test:2181,zookeeper-03-test:2181</value>
</property>
<property>
   <name>fs.trash.interval</name>
   <value>1440</value>
</property>
<property>
   <name>fs.trash.checkpoint.interval</name>
   <value>1440</value>
</property>
<property>
   <name>hadoop.proxyuser.root.hosts</name>
   <value>*</value>
</property>
<property>
   <name>hadoop.proxyuser.root.groups</name>
   <value>*</value>
</property>
<property>
    <name>hadoop.proxyuser.hduser.hosts</name>
    <value>*</value>
</property>
<property>
    <name>hadoop.proxyuser.hduser.groups</name>
    <value>*</value>
</property>
</configuration>
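
Note that fs.defaultFS points at the logical nameservice hadoop-local rather than a single host; clients find the active NameNode through the failover proxy provider configured in hdfs-site.xml below.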

3. Configure hdfs-site.xml

vim hdfs-site.xml

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>                                      
<property>
   <name>dfs.nameservices</name>
   <value>hadoop-local</value>
</property>
<!-- Number of replicas; set according to the number of DataNodes (default 3) -->
<property>
   <name>dfs.replication</name>
   <value>3</value>
</property>
<!-- Where HDFS stores block data; multiple comma-separated paths on different disks can get past a single disk's throughput limit -->
<property>
   <name>dfs.datanode.data.dir</name>
   <value>file:/home/hadoop/hdfs/data</value>
</property>
<property>
   <name>dfs.permissions.enabled</name>
   <value>false</value>
</property>
<!-- NameNode IDs within the nameservice (at most two here) -->
<property>
   <name>dfs.ha.namenodes.hadoop-local</name>
   <value>test-cluster-hap-master-01,test-cluster-hap-master-02</value>
</property>
<property>
   <name>dfs.namenode.rpc-address.hadoop-local.test-cluster-hap-master-01</name>
   <value>test-cluster-hap-master-01:9820</value>
</property>
<property>
   <name>dfs.namenode.rpc-address.hadoop-local.test-cluster-hap-master-02</name>
   <value>test-cluster-hap-master-02:9820</value>
</property>
<property>
   <name>dfs.namenode.http-address.hadoop-local.test-cluster-hap-master-01</name>
   <value>test-cluster-hap-master-01:9870</value>
</property>
<property>
   <name>dfs.namenode.http-address.hadoop-local.test-cluster-hap-master-02</name>
   <value>test-cluster-hap-master-02:9870</value>
</property>
<!-- Enable automatic NameNode failover -->
<property>
   <name>dfs.ha.automatic-failover.enabled</name>
   <value>true</value>
</property>
<!-- Where the NameNode stores metadata and edit logs -->
<property>
   <name>dfs.namenode.name.dir</name>
   <value>file:/home/hadoop/name</value>
</property>
<!-- Also store NameNode metadata and edits on the JournalNodes (/home/hadoop/journal/hcluster) -->
<property>
   <name>dfs.namenode.shared.edits.dir</name>
   <!-- <value>qjournal://test-cluster-hap-slave-001:8485;test-cluster-hap-slave-002:8485;test-cluster-hap-slave-003:8485;test-cluster-hap-slave-004:8485;test-cluster-hap-slave-005:8485;test-cluster-hap-slave-006:8485;test-cluster-hap-slave-007:8485/hadoop-local</value> -->
   <value>qjournal://test-cluster-hap-slave-001:8485/hadoop-local</value>
</property>
<!-- Where the JournalNodes store metadata and edits -->
<property>
   <name>dfs.journalnode.edits.dir</name>
   <value>/home/hadoop/journal</value>
</property>
<!-- How clients locate the active NameNode during failover -->
<property>
   <name>dfs.client.failover.proxy.provider.hadoop-local</name>
   <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
<!-- Fencing methods: ensure only one NameNode is active at any time -->
<property>
   <name>dfs.ha.fencing.methods</name>
   <value>sshfence(hdfs)
               shell(/bin/true)</value>
</property>
<!-- sshfence requires passwordless SSH -->
<property>
   <name>dfs.ha.fencing.ssh.private-key-files</name>
   <value>/root/.ssh/id_rsa</value>
</property>
<property>
   <name>dfs.ha.fencing.ssh.connect-timeout</name>
   <value>30000</value>
</property>
<property>
   <name>dfs.namenode.handler.count</name>
   <value>100</value>
</property>
<property>
   <name>dfs.namenode.datanode.registration.ip-hostname-check</name>
   <value>false</value>
</property>
</configuration>                                        
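
A note on the JournalNodes: the commented-out value shows a seven-node qjournal URI, while the active value uses a single JournalNode (test-cluster-hap-slave-001). One JournalNode is enough for a test setup, but it provides no quorum; production clusters normally run an odd number of at least three.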

4. Configure yarn-site.xml

vim yarn-site.xml

<?xml version="1.0"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->
<configuration>
<property>
        <name>yarn.resourcemanager.connect.retry-interval.ms</name>
        <value>2000</value>
</property>
<!-- Enable ResourceManager HA -->
<property>
        <name>yarn.resourcemanager.ha.enabled</name>
        <value>true</value>
</property>
<!-- Automatic RM failover -->
<property>
        <name>yarn.resourcemanager.ha.automatic-failover.enabled</name>
        <value>true</value>
</property>
<!-- Use the embedded elector for automatic RM failover -->
<property>
        <name>yarn.resourcemanager.ha.automatic-failover.embedded</name>
        <value>true</value>
</property>
<!-- RM cluster identifier -->
<property>
        <name>yarn.resourcemanager.cluster-id</name>
        <value>rm-cluster</value>
</property>
<!-- IDs for the two RM hosts -->
<property>
        <name>yarn.resourcemanager.ha.rm-ids</name>
        <value>rm1,rm2</value>
</property>
<!-- RM host 1 -->
<property>
        <name>yarn.resourcemanager.hostname.rm1</name>
        <value>test-cluster-hap-master-01</value>
</property>
<!-- RM host 2 -->
<property>
        <name>yarn.resourcemanager.hostname.rm2</name>
        <value>test-cluster-hap-master-02</value>
</property>
<property>
        <name>yarn.resourcemanager.recovery.enabled</name>
        <value>true</value>
</property>
<!-- How RM state is persisted: in memory (MemStore) or in ZooKeeper (ZKStore) -->
<property>
   <description>The class to use as the persistent store.</description>
   <name>yarn.resourcemanager.store.class</name>
   <value>org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore</value>
 </property>
<property>
        <name>yarn.resourcemanager.zk.state-store.address</name>
        <value>zookeeper-01-test:2181,zookeeper-02-test:2181,zookeeper-03-test:2181</value>
</property>
<!-- Use the ZooKeeper ensemble to store RM state -->
<property>
        <name>yarn.resourcemanager.zk-address</name>
        <value>zookeeper-01-test:2181,zookeeper-02-test:2181,zookeeper-03-test:2181</value>
</property>
<property>
        <name>yarn.resourcemanager.address.rm1</name>
        <value>test-cluster-hap-master-01:8032</value>
</property>
<!-- Scheduler address exposed by the RM -->
<property>
        <name>yarn.resourcemanager.scheduler.address.rm1</name>
        <value>test-cluster-hap-master-01:8034</value>
</property>
<property>
        <name>yarn.resourcemanager.webapp.address.rm1</name>
        <value>test-cluster-hap-master-01:8088</value>
</property>
<property>
        <name>yarn.resourcemanager.address.rm2</name>
        <value>test-cluster-hap-master-02:8032</value>
</property>
<property>
        <name>yarn.resourcemanager.scheduler.address.rm2</name>
        <value>test-cluster-hap-master-02:8034</value>
</property>
<property>
        <name>yarn.resourcemanager.webapp.address.rm2</name>
        <value>test-cluster-hap-master-02:8088</value>
</property>
<property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
</property>
<property>
        <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
        <value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
<property>
    <name>yarn.application.classpath</name>
    <value>
        /usr/local/hadoop/hadoop-3.2.2/etc/hadoop,
        /usr/local/hadoop/hadoop-3.2.2/share/hadoop/common/*,
        /usr/local/hadoop/hadoop-3.2.2/share/hadoop/common/lib/*,
        /usr/local/hadoop/hadoop-3.2.2/share/hadoop/hdfs/*,
        /usr/local/hadoop/hadoop-3.2.2/share/hadoop/hdfs/lib/*,
        /usr/local/hadoop/hadoop-3.2.2/share/hadoop/mapreduce/*,
        /usr/local/hadoop/hadoop-3.2.2/share/hadoop/mapreduce/lib/*,
        /usr/local/hadoop/hadoop-3.2.2/share/hadoop/yarn/*,
        /usr/local/hadoop/hadoop-3.2.2/share/hadoop/yarn/lib/*
    </value>
  </property>
</configuration>
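
Note that the scheduler addresses above use port 8034 rather than YARN's default 8030; any free port works as long as the two RM entries stay consistent.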

5. Configure mapred-site.xml

Hadoop 3.2.2 ships mapred-site.xml directly (there is no mapred-site.xml.template to copy), so edit it in place:

vim mapred-site.xml
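mapred-site.xml mainly needs to tell MapReduce to run on YARN. A minimal version (a sketch; it assumes the HADOOP_MAPRED_HOME export from /etc/profile above):

<?xml version="1.0"?>
<configuration>
<!-- Run MapReduce jobs on YARN -->
<property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
</property>
<!-- Hadoop 3.x requires the MapReduce classpath to be set explicitly -->
<property>
        <name>mapreduce.application.classpath</name>
        <value>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*:$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*</value>
</property>
</configuration>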

6. Edit workers

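In Hadoop 3 the workers file (which replaces the old slaves file) lists the DataNode/NodeManager hosts, one hostname per line; it lives in the same etc/hadoop directory:

vim /usr/local/hadoop/hadoop-3.2.2/etc/hadoop/workers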
test-cluster-hap-slave-001

7. Edit the start/stop scripts

Add the same lines to both start-dfs.sh and stop-dfs.sh, on all three servers!

vim /usr/local/hadoop/hadoop-3.2.2/sbin/start-dfs.sh
vim /usr/local/hadoop/hadoop-3.2.2/sbin/stop-dfs.sh
# append the following:
HDFS_DATANODE_USER=root
HDFS_NAMENODE_USER=root
HDFS_ZKFC_USER=root
HDFS_JOURNALNODE_USER=root

Similarly, add to start-yarn.sh (stop-yarn.sh needs the same lines):

#!/usr/bin/env bash
YARN_RESOURCEMANAGER_USER=root
HADOOP_SECURE_DN_USER=yarn
YARN_NODEMANAGER_USER=root

Alternatively, edit the scripts once on one server and push them to the others with scp; this relies on the passwordless SSH configured earlier.

vim start-dfs.sh 
scp ./start-dfs.sh 10.8.46.197:/usr/local/hadoop/hadoop-3.2.2/sbin
scp ./start-dfs.sh 10.8.46.190:/usr/local/hadoop/hadoop-3.2.2/sbin
vim start-yarn.sh 
scp ./start-yarn.sh 10.8.46.190:/usr/local/hadoop/hadoop-3.2.2/sbin
scp ./start-yarn.sh 10.8.46.197:/usr/local/hadoop/hadoop-3.2.2/sbin

8. Create the directories

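# paths referenced by core-site.xml and hdfs-site.xml above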
mkdir -p /home/hadoop/hdfs/data
mkdir -p /home/hadoop/name
mkdir -p /home/hadoop/journal
mkdir -p /home/hadoop/tmp

9. Add hosts entries (every server)

vim /etc/hosts

10.8.46.35 test-cluster-hap-master-01
10.8.46.197 test-cluster-hap-master-02
10.8.46.190 test-cluster-hap-slave-001

10. Start Hadoop

1. Run the command below on every slave node machine; here that is just the 10.8.46.190 server (the slave node).

hadoop-daemon.sh start journalnode
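
hadoop-daemon.sh still works on Hadoop 3.x but prints a deprecation warning; the modern equivalent is:

hdfs --daemon start journalnode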

2. Format the NameNode on the test-cluster-hap-master-01 node (before formatting, make sure ZooKeeper is running and the hosts entries are configured):

hadoop namenode -format

3. Start the NameNode on test-cluster-hap-master-01 (the active node):

hadoop-daemon.sh start namenode

4. On test-cluster-hap-master-02, sync the metadata from test-cluster-hap-master-01:

hdfs namenode -bootstrapStandby  # effectively copies the current directory over from test-cluster-hap-master-01

5. Start the NameNode on test-cluster-hap-master-02 (the standby node):

hadoop-daemon.sh start namenode

6. Format ZKFC on test-cluster-hap-master-01:

hdfs zkfc -formatZK
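
This creates the /hadoop-ha znode in the ZooKeeper ensemble from ha.zookeeper.quorum; the ZKFC processes use it to elect which NameNode is active.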

7. Start the HDFS cluster from test-cluster-hap-master-01:

start-dfs.sh

8. Start YARN (the ResourceManagers) from test-cluster-hap-master-01:

start-yarn.sh
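
A quick way to verify is jps on each node: the masters should roughly show NameNode, DFSZKFailoverController and ResourceManager, while the slave should show DataNode, JournalNode and NodeManager:

jps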

Once everything is up:

Visit the test-cluster-hap-master-01 node: http://10.8.46.35:9870/

Visit the test-cluster-hap-master-02 node: http://10.8.46.197:9870/

To see test-cluster-hap-slave-001 (10.8.46.190) registered as a NodeManager, open the ResourceManager UI: http://10.8.46.197:8088/cluster/nodes

11. Verify the master nodes

Stop the NameNode on the test-cluster-hap-master-02 (standby) node:

hadoop-daemon.sh stop namenode

Refresh the web UI to confirm the node is down, then bring it back with hadoop-daemon.sh start namenode and check the page again.
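
The NameNode states can also be checked from the command line, using the NameNode IDs defined in hdfs-site.xml:

hdfs haadmin -getServiceState test-cluster-hap-master-01
hdfs haadmin -getServiceState test-cluster-hap-master-02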

done~

END

The revolution has not yet succeeded; comrades must keep striving. Onward!
