
kubernetes-26: Upgrading the kubeadm version from v1.13.3 to v1.19.3

Author: 千里行走 | Published: 2020-11-12 (originally published 2020-11-06 on the 千里行走 WeChat official account)

kubeadm version before the upgrade: v1.13.3

kubeadm version

Target version for this article (the latest release at the time of writing): v1.19.3
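
Before starting, it is worth recording both the kubeadm version and what the cluster is actually running. A minimal pre-upgrade check (assuming kubectl is configured on this node; output formats vary slightly by version):

kubeadm version -o short
kubectl version --short
kubectl get nodes -o wide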

Contents:

(1). Upgrading Kubernetes from v1.13.3 to v1.14.0

(2). Upgrading Kubernetes from v1.14.0 to v1.15.0

(3). Upgrading Kubernetes from v1.15.0 to v1.16.0

(4). Upgrading Kubernetes from v1.16.0 to v1.17.0

(5). Upgrading Kubernetes from v1.17.0 to v1.18.0

(6). Upgrading Kubernetes from v1.18.0 to v1.19.3

(7). References

(1). Upgrading Kubernetes from v1.13.3 to v1.14.0

kubeadm upgrade plan

Check which versions you can upgrade to and verify whether the current cluster is upgradable.

Run: kubeadm upgrade plan

The latest version is v1.19.3.

Run the following to get the upgrade commands for a specific target: kubeadm upgrade plan v1.19.3

The plan output shows the gap between the current version and the target version, together with the upgrade commands.

Run the kubeadm upgrade command: kubeadm upgrade apply v1.19.3

As you can see, the version gap is too large and the upgrade is refused: kubeadm only supports moving up one minor version at a time, so we have to upgrade one version after another.

Specified version to upgrade to "v1.14.1" is at least one minor release higher than the kubeadm minor release (14 > 13). Such an upgrade is not supported

First set up a yum repository for kubeadm. The repository given in the official Kubernetes docs is packages.cloud.google.com, which is not reachable from mainland China, so we use the Alibaba Cloud mirror instead.

Alibaba Cloud does not publish a help page for this mirror; after a little experimentation the following configuration works on CentOS. Note that GPG checking is left disabled.

Create the file /etc/yum.repos.d/kubernetes.repo with the following content:

[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
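
After saving the file, it may help to refresh the yum metadata so the new repository is picked up (a minimal sketch; flags may differ slightly depending on your yum version):

yum clean all
yum makecache fast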

Find the latest stable 1.14 release:

yum list --showduplicates kubeadm --disableexcludes=kubernetes

I chose version 1.14.0-0; since the final target is v1.19.3, it does not matter which 1.14 patch release we pass through.

Upgrade kubeadm:

yum install -y kubeadm-1.14.0-0 --disableexcludes=kubernetes

Check the version to confirm the upgrade succeeded:

kubeadm version

Drain the node that is about to be upgraded (cordon it and evict its workloads):

kubectl drain future --ignore-daemonsets

(future is the name of this node, as shown by kubectl get nodes.)

The command fails at first because the node has pods using local persistent volumes:

Adding the extra flag and running it again succeeds:

kubectl drain future --ignore-daemonsets --delete-local-data=true

Run kubectl get nodes again and you can see the node status has changed.

kubectl get pods --all-namespaces shows coredns as Pending, which means the node has been fully cordoned off.
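
For reference, a drained node stays in the cluster but is marked unschedulable; a quick way to confirm (a sketch of the expected state rather than exact output):

kubectl get nodes
# the drained node should show STATUS "Ready,SchedulingDisabled"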

The drain step is required; otherwise kubeadm upgrade fails with an error similar to:

Failed to upgrade etcd: couldn't upgrade control plane. kubeadm has tried to recover everything into the earlier state. Errors faced.

The upgrade needs the following images. Because of the firewall they cannot be pulled from k8s.gcr.io directly; pull them through a mirror (or a VPN):

k8s.gcr.io/etcd:3.3.10

k8s.gcr.io/kube-apiserver:v1.14.0

k8s.gcr.io/kube-controller-manager:v1.14.0

k8s.gcr.io/kube-scheduler:v1.14.0

k8s.gcr.io/coredns:1.3.1

I pulled them through a mirror:

docker pull mirrorgooglecontainers/etcd:3.3.10

docker pull mirrorgooglecontainers/kube-apiserver:v1.14.0

docker pull mirrorgooglecontainers/kube-controller-manager:v1.14.0

docker pull mirrorgooglecontainers/kube-scheduler:v1.14.0

docker pull coredns/coredns:1.3.1

Then retag them to the names kubeadm expects:

docker tag mirrorgooglecontainers/etcd:3.3.10 k8s.gcr.io/etcd:3.3.10

docker tag mirrorgooglecontainers/kube-apiserver:v1.14.0 k8s.gcr.io/kube-apiserver:v1.14.0

docker tag mirrorgooglecontainers/kube-controller-manager:v1.14.0 k8s.gcr.io/kube-controller-manager:v1.14.0

docker tag mirrorgooglecontainers/kube-scheduler:v1.14.0 k8s.gcr.io/kube-scheduler:v1.14.0

docker tag coredns/coredns:1.3.1 k8s.gcr.io/coredns:1.3.1
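
The pull-and-retag steps above follow an obvious pattern, so they can also be scripted. A minimal sketch for this v1.14.0 step, using exactly the mirror repositories shown above (substitute whichever mirror is reachable from your network):

VERSION=v1.14.0
# control-plane components come from the mirrorgooglecontainers mirror
for component in kube-apiserver kube-controller-manager kube-scheduler; do
  docker pull mirrorgooglecontainers/${component}:${VERSION}
  docker tag mirrorgooglecontainers/${component}:${VERSION} k8s.gcr.io/${component}:${VERSION}
done
# etcd and coredns are versioned independently of Kubernetes
docker pull mirrorgooglecontainers/etcd:3.3.10
docker tag mirrorgooglecontainers/etcd:3.3.10 k8s.gcr.io/etcd:3.3.10
docker pull coredns/coredns:1.3.1
docker tag coredns/coredns:1.3.1 k8s.gcr.io/coredns:1.3.1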

Upgrade the cluster with kubeadm:

kubeadm upgrade apply v1.14.0

The command hung at this point; kubectl get nodes showed the node was still marked unschedulable:

The node's spec.unschedulable field has to be set back to false so that the node can be scheduled by the cluster again.

kubectl patch node future -p "{\"spec\":{\"unschedulable\":false}}"
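
Equivalently, kubectl has a dedicated subcommand that flips the same field back:

kubectl uncordon future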

The cluster is back to normal:

kubeadm upgrade reports success:

Restart the kubelet:

systemctl daemon-reload

systemctl restart kubelet

After the restart the node became NotReady; inspect the cause with: kubectl describe node future

The CSIMigration feature gate needs to be disabled.

Add the following to the /var/lib/kubelet/config.yaml configuration file:

featureGates:
  CSIMigration: false
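
Since this file is YAML, the indentation matters; a quick way to confirm the fragment is in place:

grep -A1 featureGates /var/lib/kubelet/config.yaml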

Then restart:

systemctl daemon-reload

systemctl restart kubelet

From here we repeat the same procedure one minor version at a time until we reach the latest version, 1.19.3; a rough outline of each round is sketched below.
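
Each of the remaining sections follows this sequence (the image mirror differs from version to version, as shown in each section):

TARGET=1.15.0   # then 1.16.0, 1.17.0, 1.18.0, 1.19.3
yum install -y kubeadm-${TARGET}-0 --disableexcludes=kubernetes
# pre-pull and retag the required k8s.gcr.io images through a reachable mirror
kubeadm upgrade apply v${TARGET}
systemctl daemon-reload
systemctl restart kubelet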

(2). Upgrading Kubernetes from v1.14.0 to v1.15.0

yum install -y kubeadm-1.15.0-0 --disableexcludes=kubernetes

The upgrade needs the following images; again they have to be pulled through a mirror (or a VPN):

k8s.gcr.io/kube-proxy:v1.15.0

k8s.gcr.io/kube-apiserver:v1.15.0

k8s.gcr.io/kube-controller-manager:v1.15.0

k8s.gcr.io/kube-scheduler:v1.15.0

I pulled them through a mirror:

docker pull mirrorgooglecontainers/kube-proxy:v1.15.0

docker pull mirrorgooglecontainers/kube-apiserver:v1.15.0

docker pull mirrorgooglecontainers/kube-controller-manager:v1.15.0

docker pull mirrorgooglecontainers/kube-scheduler:v1.15.0

Then retag them:

docker tag mirrorgooglecontainers/kube-proxy:v1.15.0 k8s.gcr.io/kube-proxy:v1.15.0

docker tag mirrorgooglecontainers/kube-apiserver:v1.15.0 k8s.gcr.io/kube-apiserver:v1.15.0

docker tag mirrorgooglecontainers/kube-controller-manager:v1.15.0 k8s.gcr.io/kube-controller-manager:v1.15.0

docker tag mirrorgooglecontainers/kube-scheduler:v1.15.0 k8s.gcr.io/kube-scheduler:v1.15.0

Upgrade the cluster to v1.15.0:

kubeadm upgrade apply v1.15.0

Restart the kubelet:

systemctl daemon-reload

systemctl restart kubelet

After the restart the node became NotReady again; inspect the cause with: kubectl describe node future

The CSIMigration feature gate needs to be disabled.

Add the following to the /var/lib/kubelet/config.yaml configuration file:

featureGates:
  CSIMigration: false

Then restart:

systemctl daemon-reload

systemctl restart kubelet

(3). Upgrading Kubernetes from v1.15.0 to v1.16.0

yum install -y kubeadm-1.16.0-0 --disableexcludes=kubernetes

The upgrade needs the following images; again they have to be pulled through a mirror (or a VPN):

k8s.gcr.io/etcd:3.3.15-0

k8s.gcr.io/kube-apiserver:v1.16.0

k8s.gcr.io/kube-controller-manager:v1.16.0

k8s.gcr.io/kube-scheduler:v1.16.0

k8s.gcr.io/kube-proxy:v1.16.0

k8s.gcr.io/coredns:1.6.2

I pulled them through a mirror:

docker pull mirrorgooglecontainers/etcd:3.3.15-0

docker pull kubesphere/kube-apiserver:v1.16.0

docker pull kubesphere/kube-controller-manager:v1.16.0

docker pull kubesphere/kube-scheduler:v1.16.0

docker pull kubesphere/kube-proxy:v1.16.0

docker pull coredns/coredns:1.6.2

Then retag them:

docker tag mirrorgooglecontainers/etcd:3.3.15-0 k8s.gcr.io/etcd:3.3.15-0

docker tag kubesphere/kube-apiserver:v1.16.0 k8s.gcr.io/kube-apiserver:v1.16.0

docker tag kubesphere/kube-controller-manager:v1.16.0 k8s.gcr.io/kube-controller-manager:v1.16.0

docker tag kubesphere/kube-scheduler:v1.16.0 k8s.gcr.io/kube-scheduler:v1.16.0

docker tag kubesphere/kube-proxy:v1.16.0 k8s.gcr.io/kube-proxy:v1.16.0

docker tag coredns/coredns:1.6.2 k8s.gcr.io/coredns:1.6.2

Upgrade the cluster to v1.16.0:

kubeadm upgrade apply v1.16.0

An error is reported:

According to the upstream issue it can be ignored: https://github.com/kubernetes/kubernetes/issues/82889

Run the upgrade again, adding a flag to ignore that preflight check:

kubeadm upgrade apply v1.16.0 --ignore-preflight-errors=CoreDNSUnsupportedPlugins
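
The check being ignored here inspects the CoreDNS Corefile for plugins that the migration to the newer CoreDNS cannot handle. If you want to see what your Corefile actually contains before skipping the check (assuming the default kubeadm setup, where it lives in the coredns ConfigMap), you can dump it with:

kubectl -n kube-system get configmap coredns -o yaml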

Restart the kubelet:

systemctl daemon-reload

systemctl restart kubelet

After the restart the node became NotReady again; inspect the cause with: kubectl describe node future

The CSIMigration feature gate needs to be disabled.

Add the following to the /var/lib/kubelet/config.yaml configuration file:

featureGates:
  CSIMigration: false

Then restart:

systemctl daemon-reload

systemctl restart kubelet

The upgrade to 1.16.0 is complete:

(4). Upgrading Kubernetes from v1.16.0 to v1.17.0

yum install -y kubeadm-1.17.0-0 --disableexcludes=kubernetes

The upgrade needs the following images; again they have to be pulled through a mirror (or a VPN):

k8s.gcr.io/etcd:3.4.3-0

k8s.gcr.io/kube-apiserver:v1.17.0

k8s.gcr.io/kube-controller-manager:v1.17.0

k8s.gcr.io/kube-scheduler:v1.17.0

k8s.gcr.io/kube-proxy:v1.17.0

k8s.gcr.io/coredns:1.6.5

I pulled them through a mirror:

docker pull gotok8s/etcd:3.4.3-0

docker pull kubesphere/kube-apiserver:v1.17.0

docker pull kubesphere/kube-controller-manager:v1.17.0

docker pull kubesphere/kube-scheduler:v1.17.0

docker pull kubesphere/kube-proxy:v1.17.0

docker pull coredns/coredns:1.6.5

Then retag them:

docker tag gotok8s/etcd:3.4.3-0 k8s.gcr.io/etcd:3.4.3-0

docker tag kubesphere/kube-apiserver:v1.17.0 k8s.gcr.io/kube-apiserver:v1.17.0

docker tag kubesphere/kube-controller-manager:v1.17.0 k8s.gcr.io/kube-controller-manager:v1.17.0

docker tag kubesphere/kube-scheduler:v1.17.0 k8s.gcr.io/kube-scheduler:v1.17.0

docker tag kubesphere/kube-proxy:v1.17.0 k8s.gcr.io/kube-proxy:v1.17.0

docker tag coredns/coredns:1.6.5 k8s.gcr.io/coredns:1.6.5

Upgrade the cluster to v1.17.0:

kubeadm upgrade apply v1.17.0

Restart the kubelet:

systemctl daemon-reload

systemctl restart kubelet

The upgrade to 1.17.0 is complete:

This time there is no CSIMigration error: from v1.17.0 onward this feature is supported natively, so it no longer needs to be disabled.

(5). Upgrading Kubernetes from v1.17.0 to v1.18.0

yum install -y kubeadm-1.18.0-0 --disableexcludes=kubernetes

The upgrade needs the following images; again they have to be pulled through a mirror (or a VPN):

k8s.gcr.io/kube-apiserver:v1.18.0

k8s.gcr.io/kube-controller-manager:v1.18.0

k8s.gcr.io/kube-scheduler:v1.18.0

k8s.gcr.io/kube-proxy:v1.18.0

k8s.gcr.io/coredns:1.6.7

I pulled them through a mirror:

docker pull gotok8s/kube-apiserver:v1.18.0

docker pull gotok8s/kube-controller-manager:v1.18.0

docker pull gotok8s/kube-scheduler:v1.18.0

docker pull gotok8s/kube-proxy:v1.18.0

docker pull gotok8s/coredns:1.6.7

Then retag them:

docker tag gotok8s/kube-apiserver:v1.18.0 k8s.gcr.io/kube-apiserver:v1.18.0

docker tag gotok8s/kube-controller-manager:v1.18.0 k8s.gcr.io/kube-controller-manager:v1.18.0

docker tag gotok8s/kube-scheduler:v1.18.0 k8s.gcr.io/kube-scheduler:v1.18.0

docker tag gotok8s/kube-proxy:v1.18.0 k8s.gcr.io/kube-proxy:v1.18.0

docker tag gotok8s/coredns:1.6.7 k8s.gcr.io/coredns:1.6.7

Upgrade the cluster to v1.18.0:

kubeadm upgrade apply v1.18.0

Restart the kubelet:

systemctl daemon-reload

systemctl restart kubelet

The upgrade to 1.18.0 is complete:

Again there is no CSIMigration error, since from v1.17.0 onward the feature is supported natively and does not need to be disabled.

(6). Upgrading Kubernetes from v1.18.0 to v1.19.3

yum install -y kubeadm-1.19.3-0 --disableexcludes=kubernetes

The upgrade needs the following images; again they have to be pulled through a mirror (or a VPN):

k8s.gcr.io/kube-apiserver:v1.19.3

k8s.gcr.io/kube-controller-manager:v1.19.3

k8s.gcr.io/kube-scheduler:v1.19.3

k8s.gcr.io/kube-proxy:v1.19.3

k8s.gcr.io/pause:3.2

k8s.gcr.io/etcd:3.4.13-0

k8s.gcr.io/coredns:1.7.0

I pulled them through a mirror:

docker pull gotok8s/kube-apiserver:v1.19.3

docker pull gotok8s/kube-controller-manager:v1.19.3

docker pull gotok8s/kube-scheduler:v1.19.3

docker pull gotok8s/kube-proxy:v1.19.3

docker pull gotok8s/pause:3.2

docker pull gotok8s/etcd:3.4.13-0

docker pull gotok8s/coredns:1.7.0

Then retag them:

docker tag gotok8s/kube-apiserver:v1.19.3 k8s.gcr.io/kube-apiserver:v1.19.3

docker tag gotok8s/kube-controller-manager:v1.19.3 k8s.gcr.io/kube-controller-manager:v1.19.3

docker tag gotok8s/kube-scheduler:v1.19.3 k8s.gcr.io/kube-scheduler:v1.19.3

docker tag gotok8s/kube-proxy:v1.19.3 k8s.gcr.io/kube-proxy:v1.19.3

docker tag gotok8s/pause:3.2 k8s.gcr.io/pause:3.2

docker tag gotok8s/etcd:3.4.13-0 k8s.gcr.io/etcd:3.4.13-0

docker tag gotok8s/coredns:1.7.0 k8s.gcr.io/coredns:1.7.0

Upgrade the cluster to v1.19.3:

kubeadm upgrade apply v1.19.3

Restart the kubelet:

systemctl daemon-reload

systemctl restart kubelet

This completes the goal of the upgrade: the cluster is now on the latest version, 1.19.3.

Again there is no CSIMigration error, since from v1.17.0 onward the feature is supported natively and does not need to be disabled.
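
A few optional post-upgrade checks (a sketch; note that the VERSION column of kubectl get nodes reports the kubelet version, which only changes if the kubelet package itself is upgraded, since kubeadm upgrade apply only upgrades the control-plane components):

kubeadm version -o short
kubectl get nodes -o wide
kubectl -n kube-system get pods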

(7). References

1. kubeadm upgrade

https://kubernetes.io/zh/docs/reference/setup-tools/kubeadm/kubeadm-upgrade/

2. Upgrading kubeadm clusters

https://kubernetes.io/zh/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/

3. Domestic (China) yum and apt mirrors for kubeadm

https://www.jianshu.com/p/4b5f960a5bea
