
0480 - How to Migrate In-Place from HDP 2.6.5 to CDH 5.16.1

Author: Fayson
Published: 2018-12-27
Column: Hadoop实操

Note: if the images look blurry on a computer, open the article on a phone and tap an image to view the original at full resolution.

Fayson's GitHub: https://github.com/fayson/cdhproject

Tip: code blocks can be scrolled horizontally.

1

Document Purpose

The Hadoop platforms we commonly use include Apache Hadoop, CDH and HDP, and occasionally we need to migrate from one to another. For example, you may have been running Apache Hadoop 2.4 for a long time and, now that CDH6 (shipping Hadoop 3) has been released, want to move to CDH and upgrade every component of the platform at the same time. A platform migration works essentially the same way as a platform upgrade, and there are generally two broad approaches. The first is an in-place upgrade performed directly on the existing cluster: it is efficient and shows results immediately, but the risk is higher, for example the rollback plan may be inadequate if the upgrade fails, and across major versions (say Hadoop 2 to Hadoop 3) there is even a risk of losing HDFS data. The second is to upgrade by copying data: it requires additional server resources, a new cluster is built, the data is copied over from the old platform, and after the copy completes the old machines are decommissioned and gradually added to the new cluster; this usually takes longer to carry out, but the risk is smaller. Choose whichever approach fits your situation, and each one can be refined further, for example the in-place approach can be combined with backing up all of the data or just the critical data beforehand; Fayson will not go into those refinements in this article.

In this article Fayson walks through an in-place upgrade from HDP 2.6.5 to CDH 5.16.1. The migration is carried out directly on the existing HDP cluster: the main steps are to uninstall the existing HDP and then install the new CDH, and at the end the HDFS, HBase and Hive data must all still be present and accessible. The migration steps are illustrated in the figure below:

Note that the first step, disabling HDFS HA on HDP, was already covered by Fayson in the previous article and is omitted here; for details see "0479 - How to Disable HDFS HA on HDP 2.6.5".

  • Content overview

1. Test environment description

2. Save relevant metadata

3. Stop HDP and Ambari services

4. Uninstall Ambari and HDP

5. Install Cloudera Manager

6. Install CDH

7. Other issues

  • Test environment

1. HDP 2.6.5

2. Ambari 2.6.2.2

3. CDH 5.16.1

4. Redhat 7.4

5. Kerberos is not enabled on the cluster

6. All operations are performed as the root user

2

Test Environment Description

1. Fayson has pre-installed HDP with the common services on 4 machines; HDFS HA has already been disabled, and data has been loaded into HDFS, Hive tables and HBase.

2. The main role layout of the cluster is shown below. Since the platform migration mainly involves the Ambari, HDFS, HBase and Hive services, the other roles are not described.

Hostname | IP | Roles
ip-172-31-1-163.ap-southeast-1.compute.internal | 172.31.1.163 | DataNode, RegionServer, Zookeeper
ip-172-31-12-114.ap-southeast-1.compute.internal | 172.31.12.114 | Secondary NameNode, DataNode, RegionServer, Zookeeper
ip-172-31-13-13.ap-southeast-1.compute.internal | 172.31.13.13 | DataNode, RegionServer, Zookeeper
ip-172-31-4-109.ap-southeast-1.compute.internal | 172.31.4.109 | NameNode, Ambari, Hive Metastore, HMaster

3. Record the HDFS usage.
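As a sketch, roughly the same usage information can also be captured from the command line on a cluster node (run as the hdfs superuser); these commands are illustrative and not part of the original steps:

sudo -u hdfs hdfs dfsadmin -report    # capacity, remaining space and per-DataNode usage
sudo -u hdfs hdfs dfs -du -h /        # space consumed by each top-level HDFS directory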

4. Record the HBase usage.

The row count of this table is:

[root@ip-172-31-4-109 ~]# hbase org.apache.hadoop.hbase.mapreduce.RowCounter TestTable

632212 rows
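An alternative sketch that counts the rows from the HBase shell instead of the RowCounter MapReduce job (noticeably slower on large tables):

hbase shell <<'EOF'
count 'TestTable', INTERVAL => 100000
EOF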

5. Record the Hive usage.

3 databases, 40 tables in total.

Pick any table, inspect it and record its row count:

50000 rows
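A sketch of recording the same Hive information from the command line; the database and table names below (default, test_table) are placeholders, not the ones used in the original test:

hive -e "show databases;"                             # list all databases
hive -e "use default; show tables;"                   # list the tables in one database
hive -e "select count(*) from default.test_table;"    # row count of a sample table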

6. Compare the component versions of HDP 2.6.5 and CDH 5.16.1. Because this migration only needs to ensure that the HDFS, Hive and HBase data are preserved, comparing other components such as Spark or Zookeeper adds little value; it is enough that they work normally once CDH is reinstalled.

Component | HDP 2.6.5 | CDH 5.16.1
Hadoop | 2.7.3 | 2.6
Hive | 1.2.1 | 1.1
HBase | 1.1.2 | 1.2

7. Record some key directories.

NameNode metadata directory:

/hadoop/hdfs/namenode

DataNode data directory:

/hadoop/hdfs/data

Secondary NameNode checkpoint directory:

/hadoop/hdfs/namesecondary

HBase root directory on HDFS:

/apps/hbase/data

Zookeeper data directory:

/hadoop/zookeeper
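As a sketch, these paths can also be read back from the live HDP configuration before anything is uninstalled (the HBase root directory lives in hbase-site.xml rather than in the HDFS configuration):

hdfs getconf -confKey dfs.namenode.name.dir
hdfs getconf -confKey dfs.datanode.data.dir
hdfs getconf -confKey dfs.namenode.checkpoint.dir
grep -A1 'hbase.rootdir' /etc/hbase/conf/hbase-site.xml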

3

Disable HDFS HA on HDP

This step is omitted here; for details see "0479 - How to Disable HDFS HA on HDP 2.6.5".

4

Save Relevant Metadata

1. Stop the HBase service from Ambari.

2. Save the HDFS metadata by running the following commands on the NameNode:

sudo -u hdfs hdfs dfsadmin -rollEdits       # close the current edit log segment and start a new one
sudo -u hdfs hdfs dfsadmin -safemode enter  # put HDFS into safe mode so the namespace cannot change
sudo -u hdfs hdfs dfsadmin -saveNamespace   # write the current namespace to a fresh fsimage checkpoint

3. Back up the metadata files on the NameNode:

[root@ip-172-31-4-109 ~]# cd /hadoop/
[root@ip-172-31-4-109 hadoop]# ls
hdfs  mapreduce  yarn  zookeeper
[root@ip-172-31-4-109 hadoop]# cd hdfs/
[root@ip-172-31-4-109 hdfs]# ls
journal  namenode
[root@ip-172-31-4-109 hdfs]# cd namenode/
[root@ip-172-31-4-109 namenode]# ls
current  in_use.lock  namenode-formatted
[root@ip-172-31-4-109 namenode]# cd ..
[root@ip-172-31-4-109 hdfs]# ls
journal  namenode
[root@ip-172-31-4-109 hdfs]# tar czf nn.tar.gz namenode/
[root@ip-172-31-4-109 hdfs]# ll
total 2484
drwxr-xr-x. 3 hdfs hadoop      26 Dec 11 16:03 journal
drwxr-xr-x. 4 hdfs hadoop      66 Dec 12 03:28 namenode
-rw-r--r--  1 root root   2539571 Dec 12 10:03 nn.tar.gz
[root@ip-172-31-4-109 hdfs]# mv nn.tar.gz /root/migration_bak/
[root@ip-172-31-4-109 hdfs]#

4. Back up the checkpoint data on the Secondary NameNode:

[root@ip-172-31-12-114 ~]# mkdir migration_bak
[root@ip-172-31-12-114 ~]# cd /hadoop/hdfs/
[root@ip-172-31-12-114 hdfs]# ls
data  journal  namesecondary
[root@ip-172-31-12-114 hdfs]# tar czf snn.tar.gz namesecondary/
[root@ip-172-31-12-114 hdfs]# ll
total 812
drwxr-x--- 3 hdfs hadoop     40 Dec 12 03:28 data
drwxr-xr-x 3 hdfs hadoop     26 Dec 11 16:03 journal
drwxr-xr-x 3 hdfs hadoop     40 Dec 12 03:29 namesecondary
-rw-r--r-- 1 root root   828980 Dec 12 10:05 snn.tar.gz
[root@ip-172-31-12-114 hdfs]# mv snn.tar.gz /root/migration_bak/
[root@ip-172-31-12-114 hdfs]#

5. Back up the Hive metadata:

[root@ip-172-31-4-109 migration_bak]# mysqldump -u root -p metastore > metastore.sql

Enter password: 
[root@ip-172-31-4-109 migration_bak]# ls
metastore.sql  nn.tar.gz
[root@ip-172-31-4-109 migration_bak]# ll
total 2600
-rw-r--r-- 1 root root  115457 Dec 12 10:14 metastore.sql
-rw-r--r-- 1 root root 2539571 Dec 12 10:03 nn.tar.gz
[root@ip-172-31-4-109 migration_bak]# vim metastore.sql 
[root@ip-172-31-4-109 migration_bak]#
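In this migration the new CDH cluster will keep pointing its Hive Metastore at the same database, so the dump above is only a safety net. If it ever needs to be restored, a minimal sketch (assuming the same MySQL root account and an existing, empty metastore database) would be:

mysql -u root -p metastore < /root/migration_bak/metastore.sql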

5

Stop HDP and Ambari Services

1. Stop all Hadoop services.

2. Stop the Ambari Server service and the Ambari Agent service on every machine:

[root@ip-172-31-4-109 ~]# ambari-server stop
[root@ip-172-31-4-109 shell]# sh ssh_do_all.sh node.list "ambari-agent stop"
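The ssh_do_all.sh helper used throughout this article simply runs the same command on every host listed in node.list. A minimal sketch of such a script, assuming passwordless SSH as root and one hostname per line in node.list:

#!/bin/bash
# ssh_do_all.sh <node_list_file> "<command>"
node_list=$1
cmd=$2
while read -r host; do
  [ -z "$host" ] && continue          # skip blank lines
  echo "==== $host ===="
  ssh -o StrictHostKeyChecking=no "$host" "$cmd"
done < "$node_list"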

6

Uninstall Ambari and HDP

1. Remove the Ambari-related packages on all nodes:

[root@ip-172-31-4-109 shell]# sh ssh_do_all.sh node.list "yum -y remove ambari\*"

2. Remove the other HDP components:

sh ssh_do_all.sh node.list "yum -y remove hcatalog\*"
sh ssh_do_all.sh node.list "yum -y remove hive\*"
sh ssh_do_all.sh node.list "yum -y remove tez\*"
sh ssh_do_all.sh node.list "yum -y remove hbase\*"
sh ssh_do_all.sh node.list "yum -y remove zookeeper\*"
sh ssh_do_all.sh node.list "yum -y remove pig\*"
sh ssh_do_all.sh node.list "yum -y remove hadoop-lzo\*"
sh ssh_do_all.sh node.list "yum -y remove hadoop\*"
sh ssh_do_all.sh node.list "yum -y remove hue\*"
sh ssh_do_all.sh node.list "yum -y remove sqoop\*"
sh ssh_do_all.sh node.list "yum -y remove oozie\*"
sh ssh_do_all.sh node.list "yum -y remove ranger\*"
sh ssh_do_all.sh node.list "yum -y remove knox\*"
sh ssh_do_all.sh node.list "yum -y remove storm\*"
sh ssh_do_all.sh node.list "yum -y remove accumulo\*"
sh ssh_do_all.sh node.list "yum -y remove falcon\*"
sh ssh_do_all.sh node.list "yum -y remove smartsense\*"
sh ssh_do_all.sh node.list "yum -y remove slider\*"
sh ssh_do_all.sh node.list "yum -y remove spark\*"

3. Delete the log directories on all nodes:

sh ssh_do_all.sh node.list "rm -rf /var/log/ambari-agent"
sh ssh_do_all.sh node.list "rm -rf /var/log/ambari-metrics-grafana"
sh ssh_do_all.sh node.list "rm -rf /var/log/ambari-metrics-monitor"
sh ssh_do_all.sh node.list "rm -rf /var/log/ambari-server/"
sh ssh_do_all.sh node.list "rm -rf /var/log/falcon"
sh ssh_do_all.sh node.list "rm -rf /var/log/flume"
sh ssh_do_all.sh node.list "rm -rf /var/log/hadoop"
sh ssh_do_all.sh node.list "rm -rf /var/log/hadoop-mapreduce"
sh ssh_do_all.sh node.list "rm -rf /var/log/hadoop-yarn"
sh ssh_do_all.sh node.list "rm -rf /var/log/hive"
sh ssh_do_all.sh node.list "rm -rf /var/log/hive-hcatalog"
sh ssh_do_all.sh node.list "rm -rf /var/log/hive2"
sh ssh_do_all.sh node.list "rm -rf /var/log/hst"
sh ssh_do_all.sh node.list "rm -rf /var/log/knox"
sh ssh_do_all.sh node.list "rm -rf /var/log/oozie"
sh ssh_do_all.sh node.list "rm -rf /var/log/solr"
sh ssh_do_all.sh node.list "rm -rf /var/log/zookeeper"
sh ssh_do_all.sh node.list "rm -rf /var/log/hadoop-hdfs"
sh ssh_do_all.sh node.list "rm -rf /var/log/livy2"     
sh ssh_do_all.sh node.list "rm -rf /var/log/spark"
sh ssh_do_all.sh node.list "rm -rf /var/log/spark2"
sh ssh_do_all.sh node.list "rm -rf /var/log/webcat"

4. Delete the configuration directories on all nodes:

sh ssh_do_all.sh node.list "rm -rf /etc/ambari-agent"
sh ssh_do_all.sh node.list "rm -rf /etc/ambari-metrics-grafana"
sh ssh_do_all.sh node.list "rm -rf /etc/ambari-server"
sh ssh_do_all.sh node.list "rm -rf /etc/ams-hbase"
sh ssh_do_all.sh node.list "rm -rf /etc/falcon"
sh ssh_do_all.sh node.list "rm -rf /etc/flume"
sh ssh_do_all.sh node.list "rm -rf /etc/hadoop"
sh ssh_do_all.sh node.list "rm -rf /etc/hadoop-httpfs"
sh ssh_do_all.sh node.list "rm -rf /etc/hbase"
sh ssh_do_all.sh node.list "rm -rf /etc/hive" 
sh ssh_do_all.sh node.list "rm -rf /etc/hive-hcatalog"
sh ssh_do_all.sh node.list "rm -rf /etc/hive-webhcat"
sh ssh_do_all.sh node.list "rm -rf /etc/hive2"
sh ssh_do_all.sh node.list "rm -rf /etc/hst"
sh ssh_do_all.sh node.list "rm -rf /etc/knox" 
sh ssh_do_all.sh node.list "rm -rf /etc/livy"
sh ssh_do_all.sh node.list "rm -rf /etc/mahout" 
sh ssh_do_all.sh node.list "rm -rf /etc/oozie"
sh ssh_do_all.sh node.list "rm -rf /etc/phoenix"
sh ssh_do_all.sh node.list "rm -rf /etc/pig" 
sh ssh_do_all.sh node.list "rm -rf /etc/ranger-admin"
sh ssh_do_all.sh node.list "rm -rf /etc/ranger-usersync"
sh ssh_do_all.sh node.list "rm -rf /etc/spark2"
sh ssh_do_all.sh node.list "rm -rf /etc/tez"
sh ssh_do_all.sh node.list "rm -rf /etc/tez_hive2"
sh ssh_do_all.sh node.list "rm -rf /etc/zookeeper"

5. Delete the PID directories on all nodes:

sh ssh_do_all.sh node.list "rm -rf /var/run/ambari-agent"
sh ssh_do_all.sh node.list "rm -rf /var/run/ambari-metrics-grafana"
sh ssh_do_all.sh node.list "rm -rf /var/run/ambari-server"
sh ssh_do_all.sh node.list "rm -rf /var/run/falcon"
sh ssh_do_all.sh node.list "rm -rf /var/run/flume"
sh ssh_do_all.sh node.list "rm -rf /var/run/hadoop" 
sh ssh_do_all.sh node.list "rm -rf /var/run/hadoop-mapreduce"
sh ssh_do_all.sh node.list "rm -rf /var/run/hadoop-yarn"
sh ssh_do_all.sh node.list "rm -rf /var/run/hbase"
sh ssh_do_all.sh node.list "rm -rf /var/run/hive"
sh ssh_do_all.sh node.list "rm -rf /var/run/hive-hcatalog"
sh ssh_do_all.sh node.list "rm -rf /var/run/hive2"
sh ssh_do_all.sh node.list "rm -rf /var/run/hst"
sh ssh_do_all.sh node.list "rm -rf /var/run/knox"
sh ssh_do_all.sh node.list "rm -rf /var/run/oozie" 
sh ssh_do_all.sh node.list "rm -rf /var/run/webhcat"
sh ssh_do_all.sh node.list "rm -rf /var/run/zookeeper"

6. Delete the library directories on all nodes:

sh ssh_do_all.sh node.list "rm -rf /usr/lib/ambari-agent"
sh ssh_do_all.sh node.list "rm -rf /usr/lib/ambari-infra-solr-client"
sh ssh_do_all.sh node.list "rm -rf /usr/lib/ambari-metrics-hadoop-sink"
sh ssh_do_all.sh node.list "rm -rf /usr/lib/ambari-metrics-kafka-sink"
sh ssh_do_all.sh node.list "rm -rf /usr/lib/ambari-server-backups"
sh ssh_do_all.sh node.list "rm -rf /usr/lib/ams-hbase"
sh ssh_do_all.sh node.list "rm -rf /var/lib/ambari-agent"
sh ssh_do_all.sh node.list "rm -rf /var/lib/ambari-metrics-grafana"
sh ssh_do_all.sh node.list "rm -rf /var/lib/ambari-server"
sh ssh_do_all.sh node.list "rm -rf /var/lib/flume"
sh ssh_do_all.sh node.list "rm -rf /var/lib/hadoop-hdfs"
sh ssh_do_all.sh node.list "rm -rf /var/lib/hadoop-mapreduce"
sh ssh_do_all.sh node.list "rm -rf /var/lib/hadoop-yarn"
sh ssh_do_all.sh node.list "rm -rf /var/lib/hive2"
sh ssh_do_all.sh node.list "rm -rf /var/lib/knox"
sh ssh_do_all.sh node.list "rm -rf /var/lib/smartsense"
sh ssh_do_all.sh node.list "rm -rf /var/lib/storm"

7. Delete the symlinks on all nodes:

sh ssh_do_all.sh node.list "rm -rf /usr/bin/accumulo"
sh ssh_do_all.sh node.list "rm -rf /usr/bin/atlas-start"
sh ssh_do_all.sh node.list "rm -rf /usr/bin/atlas-stop"
sh ssh_do_all.sh node.list "rm -rf /usr/bin/beeline"
sh ssh_do_all.sh node.list "rm -rf /usr/bin/falcon"
sh ssh_do_all.sh node.list "rm -rf /usr/bin/flume-ng"
sh ssh_do_all.sh node.list "rm -rf /usr/bin/hbase"
sh ssh_do_all.sh node.list "rm -rf /usr/bin/hcat"
sh ssh_do_all.sh node.list "rm -rf /usr/bin/hdfs"
sh ssh_do_all.sh node.list "rm -rf /usr/bin/hive"
sh ssh_do_all.sh node.list "rm -rf /usr/bin/hiveserver2"
sh ssh_do_all.sh node.list "rm -rf /usr/bin/kafka"
sh ssh_do_all.sh node.list "rm -rf /usr/bin/mahout"
sh ssh_do_all.sh node.list "rm -rf /usr/bin/mapred"
sh ssh_do_all.sh node.list "rm -rf /usr/bin/oozie"
sh ssh_do_all.sh node.list "rm -rf /usr/bin/oozied.sh"
sh ssh_do_all.sh node.list "rm -rf /usr/bin/phoenix-psql"
sh ssh_do_all.sh node.list "rm -rf /usr/bin/phoenix-queryserver"
sh ssh_do_all.sh node.list "rm -rf /usr/bin/phoenix-sqlline"
sh ssh_do_all.sh node.list "rm -rf /usr/bin/phoenix-sqlline-thin"
sh ssh_do_all.sh node.list "rm -rf /usr/bin/pig"
sh ssh_do_all.sh node.list "rm -rf /usr/bin/python-wrap"
sh ssh_do_all.sh node.list "rm -rf /usr/bin/ranger-admin"
sh ssh_do_all.sh node.list "rm -rf /usr/bin/ranger-admin-start"
sh ssh_do_all.sh node.list "rm -rf /usr/bin/ranger-admin-stop"
sh ssh_do_all.sh node.list "rm -rf /usr/bin/ranger-kms"
sh ssh_do_all.sh node.list "rm -rf /usr/bin/ranger-usersync"
sh ssh_do_all.sh node.list "rm -rf /usr/bin/ranger-usersync-start"
sh ssh_do_all.sh node.list "rm -rf /usr/bin/ranger-usersync-stop"
sh ssh_do_all.sh node.list "rm -rf /usr/bin/slider"
sh ssh_do_all.sh node.list "rm -rf /usr/bin/sqoop"
sh ssh_do_all.sh node.list "rm -rf /usr/bin/sqoop-codegen"
sh ssh_do_all.sh node.list "rm -rf /usr/bin/sqoop-create-hive-table"
sh ssh_do_all.sh node.list "rm -rf /usr/bin/sqoop-eval"
sh ssh_do_all.sh node.list "rm -rf /usr/bin/sqoop-export"
sh ssh_do_all.sh node.list "rm -rf /usr/bin/sqoop-help"
sh ssh_do_all.sh node.list "rm -rf /usr/bin/sqoop-import"
sh ssh_do_all.sh node.list "rm -rf /usr/bin/sqoop-import-all-tables"
sh ssh_do_all.sh node.list "rm -rf /usr/bin/sqoop-job"
sh ssh_do_all.sh node.list "rm -rf /usr/bin/sqoop-list-databases"
sh ssh_do_all.sh node.list "rm -rf /usr/bin/sqoop-list-tables"
sh ssh_do_all.sh node.list "rm -rf /usr/bin/sqoop-merge"
sh ssh_do_all.sh node.list "rm -rf /usr/bin/sqoop-metastore"
sh ssh_do_all.sh node.list "rm -rf /usr/bin/sqoop-version"
sh ssh_do_all.sh node.list "rm -rf /usr/bin/storm"
sh ssh_do_all.sh node.list "rm -rf /usr/bin/storm-slider"
sh ssh_do_all.sh node.list "rm -rf /usr/bin/worker-lanucher"
sh ssh_do_all.sh node.list "rm -rf /usr/bin/yarn"
sh ssh_do_all.sh node.list "rm -rf /usr/bin/zookeeper-client"
sh ssh_do_all.sh node.list "rm -rf /usr/bin/zookeeper-server"
sh ssh_do_all.sh node.list "rm -rf /usr/bin/zookeeper-server-cleanup"

8. Reboot all nodes (details omitted).
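As a sketch, the reboot can be issued with the same batch helper; expect each SSH session to drop as its node goes down:

sh ssh_do_all.sh node.list "reboot"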

7

Install Cloudera Manager

The installation itself is omitted; see Fayson's earlier article "0470 - How to Install CDH 5.16.1 on Redhat 7.4". Once the installation succeeds, log in to Cloudera Manager.

8

Install CDH

The earlier wizard steps are skipped here and we jump straight to the host inspection. For the full CDH installation procedure, again refer to "0470 - How to Install CDH 5.16.1 on Redhat 7.4".

1. The host inspection reports warnings that some users have lost their own groups.

Copy the following content to replace the corresponding user groups in /etc/group on all nodes; be careful not to touch the system groups or the groups of ordinary users.

cloudera-scm:x:994:
apache:x:48:
hadoop:x:993:hdfs,mapred,yarn
flume:x:992:
hdfs:x:991:
solr:x:990:
sentry:x:989:
hue:x:988:
zookeeper:x:987:
mapred:x:986:
httpfs:x:985:
sqoop:x:984:sqoop2
hive:x:983:impala
kafka:x:982:
kms:x:981:
yarn:x:980:
oozie:x:979:
kudu:x:978:
hbase:x:977:
impala:x:976:
spark:x:975:
mysql:x:27:
llama:x:974:llama
sqoop2:x:973:

Copy the following content to replace the corresponding users in /etc/passwd on all nodes; again, do not touch the system users or ordinary users.

cloudera-scm:x:997:994:Cloudera Manager:/var/lib/cloudera-scm-server:/sbin/nologin
apache:x:48:48:Apache:/usr/share/httpd:/sbin/nologin
flume:x:996:992:Flume:/var/lib/flume-ng:/sbin/nologin
hdfs:x:995:991:Hadoop HDFS:/var/lib/hadoop-hdfs:/sbin/nologin
solr:x:994:990:Solr:/var/lib/solr:/sbin/nologin
sentry:x:993:989:Sentry:/var/lib/sentry:/sbin/nologin
hue:x:992:988:Hue:/usr/lib/hue:/sbin/nologin
zookeeper:x:991:987:ZooKeeper:/var/lib/zookeeper:/sbin/nologin
mapred:x:990:986:Hadoop MapReduce:/var/lib/hadoop-mapreduce:/sbin/nologin
httpfs:x:989:985:Hadoop HTTPFS:/var/lib/hadoop-httpfs:/sbin/nologin
sqoop:x:988:984:Sqoop:/var/lib/sqoop:/sbin/nologin
hive:x:987:983:Hive:/var/lib/hive:/sbin/nologin
kafka:x:986:982:Kafka:/var/lib/kafka:/sbin/nologin
kms:x:985:981:Hadoop KMS:/var/lib/hadoop-kms:/sbin/nologin
yarn:x:984:980:Hadoop Yarn:/var/lib/hadoop-yarn:/sbin/nologin
oozie:x:983:979:Oozie User:/var/lib/oozie:/sbin/nologin
kudu:x:982:978:Kudu:/var/lib/kudu:/sbin/nologin
hbase:x:981:977:HBase:/var/lib/hbase:/sbin/nologin
impala:x:980:976:Impala:/var/lib/impala:/sbin/nologin
spark:x:979:975:Spark:/var/lib/spark:/sbin/nologin
mysql:x:27:27:MariaDB Server:/var/lib/mysql:/sbin/nologin
llama:x:978:974:Llama:/var/lib/llama:/bin/bash
sqoop2:x:977:973:Sqoop 2 User:/var/lib/sqoop2:/sbin/nologin

Synchronize /etc/passwd and /etc/group to all nodes:

sh bk_cp.sh node.list /etc/group /etc
sh bk_cp.sh node.list /etc/passwd /etc
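bk_cp.sh is another small helper that copies a local file to the same directory on every host in node.list; a minimal sketch, assuming passwordless SSH as root:

#!/bin/bash
# bk_cp.sh <node_list_file> <local_file> <remote_dir>
node_list=$1
src=$2
dest_dir=$3
while read -r host; do
  [ -z "$host" ] && continue          # skip blank lines
  echo "==== $host ===="
  scp -o StrictHostKeyChecking=no "$src" "$host:$dest_dir/"
done < "$node_list"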

Run the host inspection again.

2. Select the services to install; the core set that includes HBase is sufficient.

3. When assigning hosts to each component's roles, take particular care with the following:

NameNode/HMaster must stay on the same host as in the original HDP cluster: ip-172-31-4-109.ap-southeast-1.compute.internal

DataNode/RegionServer must stay on the same hosts as in the original HDP cluster: ip-172-31-12-114.ap-southeast-1.compute.internal, ip-172-31-13-13.ap-southeast-1.compute.internal, ip-172-31-1-163.ap-southeast-1.compute.internal

SecondaryNameNode must stay on the same host as in the original HDP cluster: ip-172-31-12-114.ap-southeast-1.compute.internal

4. Database setup: the Hive metastore still connects directly to the same database that the old HDP cluster used.

5. For the cluster's key parameters, be sure to change them to match the old HDP cluster's configuration:

hbase.rootdir: /apps/hbase/data

dfs.datanode.data.dir: /hadoop/hdfs/data

dfs.namenode.name.dir: /hadoop/hdfs/namenode

dfs.namenode.checkpoint.dir: /hadoop/hdfs/namesecondary

6. The HDFS service fails to start, which is expected.

7. Click the Cloudera logo in the top-left corner to go straight back to the home page.

8. Go to the HDFS service and select "Upgrade HDFS Metadata".

Click "Upgrade HDFS Metadata" to confirm.

9. The upgrade fails; the detailed role log shows the following:

Encountered exception loading fsimage
org.apache.hadoop.hdfs.server.common.IncorrectVersionException: Unexpected version of storage directory /hadoop/hdfs/namenode. Reported: -63. Expecting = -60.

10. For comparison, we check the metadata layoutVersion on the old HDP NameNode, as shown below.

Fayson happened to have two other cluster environments on hand, which made a side-by-side comparison of the HDFS layoutVersion possible: the layoutVersion of HDP 2.6.5 (-63) is newer than that of CDH 5.16.1 (-60), so there is no way to migrate the metadata to CDH 5.16.1, because HDFS metadata can be upgraded but never downgraded.
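As a sketch, the layoutVersion can be read directly from the VERSION file in the NameNode metadata directory recorded earlier:

grep layoutVersion /hadoop/hdfs/namenode/current/VERSION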

Given this limitation, Fayson will instead upgrade this cluster directly to C6, which will be covered in a follow-up article.

9

Other Issues

Because Fayson reworked the users and groups earlier in this article (in the Install CDH chapter), in effect deleting the old user groups and users, the HDFS metadata upgrade failed with the following error:

The root cause is that the owner and group of the metadata directories are wrong, as shown below:

Manually fix the owner and group of the files:

[root@ip-172-31-4-109 hdfs]# chown hdfs:hadoop namenode/
[root@ip-172-31-4-109 namenode]# chown hdfs:hdfs current/
[root@ip-172-31-4-109 current]# chown hdfs:hdfs *

The HDP uninstallation steps in this article are based on:

https://community.hortonworks.com/articles/97489/completely-uninstall-hdp-and-ambari.html

For HDFS upgrade and downgrade, see:

https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/HdfsRollingUpgrade.html

Note that a downgrade or rollback is only possible before the upgrade is finalized; once the metadata upgrade has been finalized, HDFS can no longer be downgraded or rolled back. See:

Note also that downgrade and rollback are possible only after a rolling upgrade is started and before the upgrade is terminated. An upgrade can be terminated by either finalize, downgrade or rollback. Therefore, it may not be possible to perform rollback after finalize or downgrade, or to perform downgrade after finalize.


To ordain conscience for Heaven and Earth, to secure life and fortune for the people, to continue the lost teachings of past sages, and to establish peace for all future generations.

Follow the Hadoop实操 WeChat official account to get more hands-on Hadoop material as soon as it is published; forwarding and sharing are welcome.
