Hello everyone, and welcome to 运维有术.
Welcome to the cloud-native operations hands-on series: Playing with K8s based on KubeSphere, Season 2.
Hands-on server configuration (the architecture is a 1:1 replica of a small-scale production environment; the resource specs differ slightly)

| Hostname | IP | CPU (cores) | Memory (GB) | System Disk (GB) | Data Disk (GB) | Purpose |
|---|---|---|---|---|---|---|
| ks-master-0 | 192.168.9.91 | 2 | 4 | 50 | 100 | KubeSphere/k8s-master |
| ks-master-1 | 192.168.9.92 | 2 | 4 | 50 | 100 | KubeSphere/k8s-master |
| ks-master-2 | 192.168.9.93 | 2 | 4 | 50 | 100 | KubeSphere/k8s-master |
| ks-worker-0 | 192.168.9.95 | 2 | 4 | 50 | 100 | k8s-worker/CI |
| ks-worker-1 | 192.168.9.96 | 2 | 4 | 50 | 100 | k8s-worker |
| ks-worker-2 | 192.168.9.97 | 2 | 4 | 50 | 100 | k8s-worker |
| storage-0 | 192.168.9.81 | 2 | 4 | 50 | 100+ | ElasticSearch/GlusterFS/Ceph/Longhorn/NFS |
| storage-1 | 192.168.9.82 | 2 | 4 | 50 | 100+ | ElasticSearch/GlusterFS/Ceph/Longhorn |
| storage-2 | 192.168.9.83 | 2 | 4 | 50 | 100+ | ElasticSearch/GlusterFS/Ceph/Longhorn |
| registry | 192.168.9.80 | 2 | 4 | 50 | 200 | Sonatype Nexus 3 |
| Total | 10 hosts | 20 | 40 | 500 | 1100+ | |
Software versions used in this environment: openEuler 22.03 LTS SP2, Kubernetes v1.26.0, containerd 1.6.4, and KubeSphere v3.3.2 (deployed with KubeKey).
In the previous installment, we walked through using KubeKey, the deployment tool developed by the KubeSphere team, to automatically deploy a Kubernetes cluster with 3 Master nodes and 1 Worker node together with KubeSphere.
In this installment, we simulate a real production scenario and demonstrate how to use KubeKey to add new Worker nodes to an existing Kubernetes cluster.
The operating system base configuration of the newly added Worker nodes must match the configuration applied to the Worker nodes during the initial deployment.
Notes on configuring the other nodes:
This article only uses the ks-worker-1 node for the demonstration; configure the remaining new Worker nodes in exactly the same way.
Set the hostname:
hostnamectl hostname ks-worker-1
Edit the /etc/hosts file and add the planned server IPs and hostnames to it.
192.168.9.91 ks-master-0
192.168.9.92 ks-master-1
192.168.9.93 ks-master-2
192.168.9.95 ks-worker-0
192.168.9.96 ks-worker-1
192.168.9.97 ks-worker-2
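If you prefer a single command to editing the file by hand, here is a minimal sketch that appends the entries (the grep guard on the first hostname is just an illustrative way to avoid duplicate appends):

```bash
# Append the planned entries only if they are not already present.
grep -q "ks-master-0" /etc/hosts || cat >> /etc/hosts <<'EOF'
192.168.9.91 ks-master-0
192.168.9.92 ks-master-1
192.168.9.93 ks-master-2
192.168.9.95 ks-worker-0
192.168.9.96 ks-worker-1
192.168.9.97 ks-worker-2
EOF
```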
Configure the DNS resolver for the node:
echo "nameserver 114.114.114.114" > /etc/resolv.conf
Set the server time zone to Asia/Shanghai.
timedatectl set-timezone Asia/Shanghai
Verify the server time zone; correct output looks like the following.
[root@ks-worker-1 ~]# timedatectl
Local time: Wed 2023-07-12 07:29:15 CST
Universal time: Tue 2023-07-11 23:29:15 UTC
RTC time: Thu 2023-07-13 05:50:36
Time zone: Asia/Shanghai (CST, +0800)
System clock synchronized: no
NTP service: active
RTC in local TZ: no
Install chrony as the time synchronization service.
yum install chrony
Edit the configuration file /etc/chrony.conf and change the NTP server settings.
vi /etc/chrony.conf
# Delete all existing pool entries
pool pool.ntp.org iburst
# Add a China-local NTP pool, or any other time server you prefer
pool cn.pool.ntp.org iburst
# Instead of editing by hand, the same change can be made automatically with sed
sed -i 's/^pool pool.*/pool cn.pool.ntp.org iburst/g' /etc/chrony.conf
Restart chronyd and enable it to start at boot.
systemctl restart chronyd && systemctl enable chronyd
Verify the chrony synchronization status.
# Run the check command
chronyc sourcestats -v
# Normal output looks like the following
[root@ks-worker-1 ~]# chronyc sourcestats -v
.- Number of sample points in measurement set.
/ .- Number of residual runs with same sign.
| / .- Length of measurement set (time).
| | / .- Est. clock freq error (ppm).
| | | / .- Est. error in freq.
| | | | / .- Est. offset.
| | | | | | On the -.
| | | | | | samples. \
| | | | | | |
Name/IP Address NP NR Span Frequency Freq Skew Offset Std Dev
==============================================================================
time.cloudflare.com 5 4 72 -62.561 1522.322 +10ms 6299us
tick.ntp.infomaniak.ch 4 4 7 -2356.686 17029.674 -124ms 2317us
a.chl.la 5 3 72 -6.494 287.851 -16ms 1678us
Stop and disable the firewall:
systemctl stop firewalld && systemctl disable firewalld
A minimal installation of openEuler 22.03 SP2 enables SELinux by default. To avoid unnecessary trouble, we disable SELinux on all nodes.
# Use sed to edit the config file so that SELinux stays disabled permanently
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
# Disable it for the current session; strictly speaking this step is optional, because KubeKey configures it automatically
setenforce 0
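To confirm the change took effect, a quick check can be run (an optional extra step, not required by KubeKey):

```bash
# getenforce should report Permissive right after setenforce 0, or Disabled after the next reboot.
getenforce
grep '^SELINUX=' /etc/selinux/config
```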
On all nodes, log in as root and run the following commands to install the basic dependency packages required by Kubernetes.
# Install the Kubernetes system dependencies
yum install curl socat conntrack ebtables ipset ipvsadm
# Install other required packages; oddly, openEuler does not even install tar by default, and later steps fail without it
yum install tar
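As an optional sanity check (not something KubeKey asks you to run), the following sketch reports any of the required tools that are still missing from the PATH:

```bash
# Report any required command that is still missing.
for cmd in curl socat conntrack ebtables ipset ipvsadm tar; do
  command -v "$cmd" >/dev/null || echo "missing: $cmd"
done
```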
Note: this subsection is optional. If your deployment uses only IP addresses and no hostnames, you can skip it.
Edit the /etc/hosts file and update it with the IP and hostname entries of the newly added Worker nodes.
192.168.9.91 ks-master-0
192.168.9.92 ks-master-1
192.168.9.93 ks-master-2
192.168.9.95 ks-worker-0
192.168.9.96 ks-worker-1
192.168.9.97 ks-worker-2
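If several existing nodes need the same update, a sketch like the one below pushes the two new entries from ks-master-0 to each of them; it assumes root SSH access to those nodes is already available and skips nodes that already contain the entries:

```bash
# Append the new Worker entries on every existing node unless they are already present.
for h in ks-master-0 ks-master-1 ks-master-2 ks-worker-0; do
  ssh root@"$h" 'grep -q "ks-worker-1" /etc/hosts || cat >> /etc/hosts' <<'EOF'
192.168.9.96 ks-worker-1
192.168.9.97 ks-worker-2
EOF
done
```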
This subsection is also optional. If you use password-only authentication for remote server connections, you can skip it.
Run the following commands to copy the SSH public key from the ks-master-0 node to the new nodes. Type yes when prompted to accept the server's SSH fingerprint, then enter the root user's password.
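These commands assume ks-master-0 already has an SSH key pair (the output below shows ~/.ssh/id_ed25519.pub being used). If one does not exist yet, a minimal sketch to generate it first:

```bash
# Generate an ed25519 key pair without a passphrase, only if it does not already exist.
[ -f ~/.ssh/id_ed25519 ] || ssh-keygen -t ed25519 -N '' -f ~/.ssh/id_ed25519
```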
ssh-copy-id root@ks-worker-1
ssh-copy-id root@ks-worker-2
Below is the expected output when the key is copied successfully.
[root@ks-master-0 ~]# ssh-copy-id root@ks-worker-1
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_ed25519.pub"
The authenticity of host 'ks-worker-1 (192.168.9.96)' can't be established.
ED25519 key fingerprint is SHA256:xri+nP+7NfGgMG7kSl+ZNVWJvvvJmHyWN6ZHdh0x3jI.
This host key is known by the following other names/addresses:
~/.ssh/known_hosts:1: ks-master-0
~/.ssh/known_hosts:3: ks-master-1
~/.ssh/known_hosts:4: ks-master-2
~/.ssh/known_hosts:5: ks-worker-0
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
Authorized users only. All activities may be monitored and reported.
root@ks-worker-1's password:
Number of key(s) added: 1
Now try logging into the machine, with: "ssh 'root@ks-worker-1'"
and check to make sure that only the key(s) you wanted were added.
With the SSH public key added and uploaded, you can now run the command below to verify that root can connect to every server without password authentication.
[root@ks-master-0 ~]# ssh root@ks-worker-1
# Login output omitted
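To check all new nodes in one pass without risking an interactive password prompt, a small sketch:

```bash
# BatchMode makes ssh fail instead of falling back to password authentication.
for h in ks-worker-1 ks-worker-2; do
  ssh -o BatchMode=yes root@"$h" hostname
done
```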
Next, we use KubeKey to add the new nodes to the Kubernetes cluster. Following the official documentation, the whole process is straightforward and takes only two steps: update the cluster configuration file, then run kk add nodes.
Log in to the ks-master-0 node over SSH, change to the original kubekey directory, and edit the existing cluster configuration file. In this walkthrough the file is named kubesphere-v3.3.2.yaml; adjust the name to match your environment.
Main changes: add the two new nodes, ks-worker-1 and ks-worker-2, to the spec.hosts list, and add their names to the worker list under roleGroups.
A modified example looks like this:
```yaml
apiVersion: kubekey.kubesphere.io/v1alpha2
kind: Cluster
metadata:
  name: sample
spec:
  hosts:
  - {name: ks-master-0, address: 192.168.9.91, internalAddress: 192.168.9.91, user: root, password: "P@88w0rd"}
  - {name: ks-master-1, address: 192.168.9.92, internalAddress: 192.168.9.92, user: root, privateKeyPath: "~/.ssh/id_ed25519"}
  - {name: ks-master-2, address: 192.168.9.93, internalAddress: 192.168.9.93, user: root, privateKeyPath: "~/.ssh/id_ed25519"}
  - {name: ks-worker-0, address: 192.168.9.95, internalAddress: 192.168.9.95, user: root, privateKeyPath: "~/.ssh/id_ed25519"}
  - {name: ks-worker-1, address: 192.168.9.96, internalAddress: 192.168.9.96, user: root, privateKeyPath: "~/.ssh/id_ed25519"}
  - {name: ks-worker-2, address: 192.168.9.97, internalAddress: 192.168.9.97, user: root, privateKeyPath: "~/.ssh/id_ed25519"}
  roleGroups:
    etcd:
    - ks-master-0
    - ks-master-1
    - ks-master-2
    control-plane:
    - ks-master-0
    - ks-master-1
    - ks-master-2
    worker:
    - ks-worker-0
    - ks-worker-1
    - ks-worker-2
  ....
# The rest of the file stays unchanged
```
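Before running kk, a quick sanity check that the two new nodes are really in the file can save a failed run; a simple sketch:

```bash
# Each new hostname should appear twice: once in spec.hosts and once under roleGroups.worker.
grep -n 'ks-worker-[12]' kubesphere-v3.3.2.yaml
```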
Before adding the nodes, let's confirm the current cluster node information one more time.
[root@ks-master-0 kubekey]# kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
ks-master-0 Ready control-plane 8h v1.26.0 192.168.9.91 <none> openEuler 22.03 (LTS-SP2) 5.10.0-153.12.0.92.oe2203sp2.x86_64 containerd://1.6.4
ks-master-1 Ready control-plane 8h v1.26.0 192.168.9.92 <none> openEuler 22.03 (LTS-SP2) 5.10.0-153.12.0.92.oe2203sp2.x86_64 containerd://1.6.4
ks-master-2 Ready control-plane 8h v1.26.0 192.168.9.93 <none> openEuler 22.03 (LTS-SP2) 5.10.0-153.12.0.92.oe2203sp2.x86_64 containerd://1.6.4
ks-worker-0 Ready worker 8h v1.26.0 192.168.9.95 <none> openEuler 22.03 (LTS-SP2) 5.10.0-153.12.0.92.oe2203sp2.x86_64 containerd://1.6.4
Next, run the following commands on ks-master-0 to add the new Worker nodes to the cluster using the modified configuration file. Setting KKZONE=cn makes KubeKey pull images and binaries from mirrors that are reachable from mainland China.
export KKZONE=cn
./kk add nodes -f kubesphere-v3.3.2.yaml
After the commands above are executed, kk first checks the dependencies and other requirements for deploying Kubernetes. Once the checks pass, you are prompted to confirm the installation. Type yes and press ENTER to continue the deployment.
[root@ks-master-0 kubekey]# export KKZONE=cn
[root@ks-master-0 kubekey]# ./kk add nodes -f kubesphere-v3.3.2.yaml
_ __ _ _ __
| | / / | | | | / /
| |/ / _ _| |__ ___| |/ / ___ _ _
| \| | | | '_ \ / _ \ \ / _ \ | | |
| |\ \ |_| | |_) | __/ |\ \ __/ |_| |
\_| \_/\__,_|_.__/ \___\_| \_/\___|\__, |
__/ |
|___/
08:22:11 CST [GreetingsModule] Greetings
08:22:12 CST message: [ks-master-2]
Greetings, KubeKey!
08:22:12 CST message: [ks-worker-0]
Greetings, KubeKey!
08:22:13 CST message: [ks-worker-1]
Greetings, KubeKey!
08:22:13 CST message: [ks-worker-2]
Greetings, KubeKey!
08:22:14 CST message: [ks-master-0]
Greetings, KubeKey!
08:22:14 CST message: [ks-master-1]
Greetings, KubeKey!
08:22:14 CST success: [ks-master-2]
08:22:14 CST success: [ks-worker-0]
08:22:14 CST success: [ks-worker-1]
08:22:14 CST success: [ks-worker-2]
08:22:14 CST success: [ks-master-0]
08:22:14 CST success: [ks-master-1]
08:22:14 CST [NodePreCheckModule] A pre-check on nodes
08:22:28 CST success: [ks-worker-1]
08:22:28 CST success: [ks-worker-2]
08:22:28 CST success: [ks-worker-0]
08:22:28 CST success: [ks-master-1]
08:22:28 CST success: [ks-master-2]
08:22:28 CST success: [ks-master-0]
08:22:28 CST [ConfirmModule] Display confirmation form
+-------------+------+------+---------+----------+-------+-------+---------+-----------+--------+--------+------------+------------+-------------+------------------+--------------+
| name | sudo | curl | openssl | ebtables | socat | ipset | ipvsadm | conntrack | chrony | docker | containerd | nfs client | ceph client | glusterfs client | time |
+-------------+------+------+---------+----------+-------+-------+---------+-----------+--------+--------+------------+------------+-------------+------------------+--------------+
| ks-worker-0 | y | y | y | y | y | y | y | y | y | | v1.6.4 | | | | CST 08:12:46 |
| ks-worker-1 | y | y | y | y | y | y | y | y | y | | | | | | CST 08:12:26 |
| ks-worker-2 | y | y | y | y | y | y | y | y | y | | | | | | CST 08:12:35 |
| ks-master-0 | y | y | y | y | y | y | y | y | y | | v1.6.4 | | | | CST 08:22:27 |
| ks-master-1 | y | y | y | y | y | y | y | y | y | | v1.6.4 | | | | CST 08:11:52 |
| ks-master-2 | y | y | y | y | y | y | y | y | y | | v1.6.4 | | | | CST 08:12:35 |
+-------------+------+------+---------+----------+-------+-------+---------+-----------+--------+--------+------------+------------+-------------+------------------+--------------+
This is a simple check of your environment.
Before installation, ensure that your machines meet all requirements specified at
https://github.com/kubesphere/kubekey#requirements-and-recommendations
Continue this installation? [yes/no]:
The installation produces a lot of log output, which is omitted here to save space. The deployment takes roughly 15 minutes, depending on network speed and machine configuration.
When the deployment completes, you should see output similar to the following in your terminal.
...
08:37:14 CST [AutoRenewCertsModule] Generate k8s certs renew script
08:37:20 CST success: [ks-master-1]
08:37:20 CST success: [ks-master-2]
08:37:20 CST success: [ks-master-0]
08:37:20 CST [AutoRenewCertsModule] Generate k8s certs renew service
08:37:25 CST success: [ks-master-0]
08:37:25 CST success: [ks-master-1]
08:37:25 CST success: [ks-master-2]
08:37:25 CST [AutoRenewCertsModule] Generate k8s certs renew timer
08:37:30 CST success: [ks-master-0]
08:37:30 CST success: [ks-master-1]
08:37:30 CST success: [ks-master-2]
08:37:30 CST [AutoRenewCertsModule] Enable k8s certs renew service
08:37:32 CST success: [ks-master-2]
08:37:32 CST success: [ks-master-0]
08:37:32 CST success: [ks-master-1]
08:37:32 CST Pipeline[AddNodesPipeline] execute successfully
Open a browser, go to the ks-master-0 node's IP address on port 30880, and log in to the KubeSphere management console.
From the cluster management view, click the Nodes menu on the left and then Cluster Nodes to see detailed information about the cluster's nodes.
On the ks-master-0 node, run kubectl to get the list of available nodes in the Kubernetes cluster.
kubectl get nodes -o wide
The output shows that the Kubernetes cluster now has six nodes, together with each node's internal IP, role, Kubernetes version, container runtime and version, operating system, kernel version, and other details.
[root@ks-master-0 kubekey]# kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
ks-master-0 Ready control-plane 8h v1.26.0 192.168.9.91 <none> openEuler 22.03 (LTS-SP2) 5.10.0-153.12.0.92.oe2203sp2.x86_64 containerd://1.6.4
ks-master-1 Ready control-plane 8h v1.26.0 192.168.9.92 <none> openEuler 22.03 (LTS-SP2) 5.10.0-153.12.0.92.oe2203sp2.x86_64 containerd://1.6.4
ks-master-2 Ready control-plane 8h v1.26.0 192.168.9.93 <none> openEuler 22.03 (LTS-SP2) 5.10.0-153.12.0.92.oe2203sp2.x86_64 containerd://1.6.4
ks-worker-0 Ready worker 8h v1.26.0 192.168.9.95 <none> openEuler 22.03 (LTS-SP2) 5.10.0-153.12.0.92.oe2203sp2.x86_64 containerd://1.6.4
ks-worker-1 NotReady worker 13m v1.26.0 192.168.9.96 <none> openEuler 22.03 (LTS-SP2) 5.10.0-153.12.0.92.oe2203sp2.x86_64 containerd://1.6.4
ks-worker-2 NotReady worker 13m v1.26.0 192.168.9.97 <none> openEuler 22.03 (LTS-SP2) 5.10.0-153.12.0.92.oe2203sp2.x86_64 containerd://1.6.4
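The two new nodes may briefly report NotReady right after joining, as in the output above; they normally turn Ready once the calico-node and nodelocaldns Pods on them are up. One way to watch the transition:

```bash
# Watch the node list until ks-worker-1 and ks-worker-2 turn Ready (Ctrl+C to stop).
kubectl get nodes -w
```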
Run the following command to list the Pods running in the Kubernetes cluster, sorted by the node they are scheduled on.
kubectl get pods -o wide -A | sort -k 8
The output shows that the two new Worker nodes are already running the five required base components: calico-node, kube-proxy, nodelocaldns, haproxy, and node-exporter.
[root@ks-master-0 kubekey]# kubectl get pods -o wide -A | sort -k 8
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kubesphere-system ks-controller-manager-b677bdb48-85fbt 1/1 Running 4 (4h26m ago) 8h 10.233.115.13 ks-worker-0 <none> <none>
kube-system calico-kube-controllers-7f576895dd-gqgx7 1/1 Running 3 (3h44m ago) 8h 10.233.115.1 ks-worker-0 <none> <none>
kube-system openebs-localpv-provisioner-5c7fdd7bd9-m4p7d 1/1 Running 3 (3h44m ago) 8h 10.233.115.2 ks-worker-0 <none> <none>
kube-system kube-apiserver-ks-master-0 1/1 Running 5 (4h21m ago) 8h 192.168.9.91 ks-master-0 <none> <none>
kube-system kube-controller-manager-ks-master-0 1/1 Running 1 (4h30m ago) 8h 192.168.9.91 ks-master-0 <none> <none>
kube-system kube-scheduler-ks-master-0 1/1 Running 1 (4h30m ago) 8h 192.168.9.91 ks-master-0 <none> <none>
kubesphere-monitoring-system node-exporter-ccxbp 2/2 Running 0 8h 192.168.9.91 ks-master-0 <none> <none>
kube-system calico-node-69fjx 1/1 Running 0 8h 192.168.9.91 ks-master-0 <none> <none>
kube-system kube-proxy-lvdvd 1/1 Running 0 8h 192.168.9.91 ks-master-0 <none> <none>
kube-system nodelocaldns-428h9 1/1 Running 0 8h 192.168.9.91 ks-master-0 <none> <none>
kubesphere-monitoring-system node-exporter-dnkbx 2/2 Running 0 8h 192.168.9.92 ks-master-1 <none> <none>
kube-system calico-node-scncc 1/1 Running 0 8h 192.168.9.92 ks-master-1 <none> <none>
kube-system coredns-d9d84b5bf-pgs75 1/1 Running 0 8h 10.233.103.1 ks-master-1 <none> <none>
kube-system coredns-d9d84b5bf-vt8gk 1/1 Running 0 8h 10.233.103.2 ks-master-1 <none> <none>
kube-system kube-apiserver-ks-master-1 1/1 Running 0 8h 192.168.9.92 ks-master-1 <none> <none>
kube-system kube-controller-manager-ks-master-1 1/1 Running 0 8h 192.168.9.92 ks-master-1 <none> <none>
kube-system kube-proxy-4x5jt 1/1 Running 0 8h 192.168.9.92 ks-master-1 <none> <none>
kube-system kube-scheduler-ks-master-1 1/1 Running 0 8h 192.168.9.92 ks-master-1 <none> <none>
kube-system nodelocaldns-gcmqw 1/1 Running 0 8h 192.168.9.92 ks-master-1 <none> <none>
kubesphere-monitoring-system node-exporter-5wxf4 2/2 Running 0 8h 192.168.9.93 ks-master-2 <none> <none>
kube-system calico-node-hcnwb 1/1 Running 0 8h 192.168.9.93 ks-master-2 <none> <none>
kube-system kube-apiserver-ks-master-2 1/1 Running 0 8h 192.168.9.93 ks-master-2 <none> <none>
kube-system kube-controller-manager-ks-master-2 1/1 Running 0 8h 192.168.9.93 ks-master-2 <none> <none>
kube-system kube-proxy-28wwm 1/1 Running 0 8h 192.168.9.93 ks-master-2 <none> <none>
kube-system kube-scheduler-ks-master-2 1/1 Running 0 8h 192.168.9.93 ks-master-2 <none> <none>
kube-system nodelocaldns-p65sd 1/1 Running 0 8h 192.168.9.93 ks-master-2 <none> <none>
kubesphere-controls-system default-http-backend-767cdb5fdc-69ttd 1/1 Running 0 8h 10.233.115.5 ks-worker-0 <none> <none>
kubesphere-monitoring-system kube-state-metrics-5b8c487d5-v7pwg 3/3 Running 0 8h 10.233.115.8 ks-worker-0 <none> <none>
kubesphere-monitoring-system node-exporter-qjqd8 2/2 Running 0 8h 192.168.9.95 ks-worker-0 <none> <none>
kubesphere-monitoring-system prometheus-operator-6fb9967754-vlbrw 2/2 Running 0 8h 10.233.115.7 ks-worker-0 <none> <none>
kubesphere-system ks-console-b5855d9f5-fwkxq 1/1 Running 0 8h 10.233.115.6 ks-worker-0 <none> <none>
kubesphere-system ks-installer-5b5cc7f6c5-ghmj5 1/1 Running 0 8h 10.233.115.3 ks-worker-0 <none> <none>
kube-system calico-node-9dfp6 1/1 Running 0 8h 192.168.9.95 ks-worker-0 <none> <none>
kube-system haproxy-ks-worker-0 1/1 Running 0 13m 192.168.9.95 ks-worker-0 <none> <none>
kube-system kube-proxy-zsj2m 1/1 Running 0 8h 192.168.9.95 ks-worker-0 <none> <none>
kube-system nodelocaldns-dgf49 1/1 Running 0 8h 192.168.9.95 ks-worker-0 <none> <none>
kube-system snapshot-controller-0 1/1 Running 0 8h 10.233.115.4 ks-worker-0 <none> <none>
kubesphere-controls-system kubectl-admin-5656cd6dfc-2cxmp 1/1 Running 0 8h 10.233.115.15 ks-worker-0 <none> <none>
kubesphere-monitoring-system prometheus-k8s-0 2/2 Running 0 8h 10.233.115.11 ks-worker-0 <none> <none>
kubesphere-monitoring-system prometheus-k8s-1 2/2 Running 0 8h 10.233.115.12 ks-worker-0 <none> <none>
kubesphere-system ks-apiserver-579fb669c7-5m6ds 1/1 Running 0 8h 10.233.115.14 ks-worker-0 <none> <none>
kubesphere-monitoring-system node-exporter-92m6p 2/2 Running 0 24m 192.168.9.96 ks-worker-1 <none> <none>
kube-system calico-node-vgshr 1/1 Running 0 24m 192.168.9.96 ks-worker-1 <none> <none>
kube-system haproxy-ks-worker-1 1/1 Running 0 14m 192.168.9.96 ks-worker-1 <none> <none>
kube-system kube-proxy-b6vn8 1/1 Running 0 24m 192.168.9.96 ks-worker-1 <none> <none>
kube-system nodelocaldns-bsccj 1/1 Running 0 24m 192.168.9.96 ks-worker-1 <none> <none>
kubesphere-monitoring-system node-exporter-vhbct 2/2 Running 0 24m 192.168.9.97 ks-worker-2 <none> <none>
kube-system calico-node-ccgbr 1/1 Running 0 24m 192.168.9.97 ks-worker-2 <none> <none>
kube-system haproxy-ks-worker-2 1/1 Running 0 14m 192.168.9.97 ks-worker-2 <none> <none>
kube-system kube-proxy-fp8fr 1/1 Running 0 24m 192.168.9.97 ks-worker-2 <none> <none>
kube-system nodelocaldns-kzrvg 1/1 Running 0 24m 192.168.9.97 ks-worker-2 <none> <none>
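To look at just one of the new nodes instead of the whole cluster, a narrower query can be used (an optional extra, not part of the original steps):

```bash
# List only the Pods scheduled on ks-worker-1.
kubectl get pods -A -o wide --field-selector spec.nodeName=ks-worker-1
```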
Run the following command to view the list of images that have already been pulled onto a Worker node.
crictl images ls
Run it on one of the newly added Worker nodes; the output looks like this:
[root@ks-worker-1 ~]# crictl images ls
IMAGE TAG IMAGE ID SIZE
docker.io/calico/cni v3.23.2 a87d3f6f1b8fd 111MB
registry.cn-beijing.aliyuncs.com/kubesphereio/cni v3.23.2 a87d3f6f1b8fd 111MB
docker.io/calico/kube-controllers v3.23.2 ec95788d0f725 56.4MB
docker.io/calico/node v3.23.2 a3447b26d32c7 77.8MB
registry.cn-beijing.aliyuncs.com/kubesphereio/node v3.23.2 a3447b26d32c7 77.8MB
docker.io/calico/pod2daemon-flexvol v3.23.2 b21e2d7408a79 8.67MB
docker.io/coredns/coredns 1.9.3 5185b96f0becf 14.8MB
docker.io/kubesphere/k8s-dns-node-cache 1.15.12 5340ba194ec91 42.1MB
registry.cn-beijing.aliyuncs.com/kubesphereio/k8s-dns-node-cache 1.15.12 5340ba194ec91 42.1MB
docker.io/kubesphere/kube-proxy v1.26.0 556768f31eb1d 21.5MB
registry.cn-beijing.aliyuncs.com/kubesphereio/kube-proxy v1.26.0 556768f31eb1d 21.5MB
docker.io/kubesphere/pause 3.8 4873874c08efc 311kB
docker.io/library/haproxy 2.3 7ecd3fda00f4e 38.5MB
registry.cn-beijing.aliyuncs.com/kubesphereio/kube-rbac-proxy v0.11.0 29589495df8d9 19.2MB
registry.cn-beijing.aliyuncs.com/kubesphereio/node-exporter v1.3.1 1dbe0e9319764 10.3MB
Note: a newly added Worker node starts out with 16 images.
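If you only want the count rather than the full table, a quick sketch:

```bash
# Count the image rows, skipping the header line.
crictl images | tail -n +2 | wc -l
```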
At this point, we have completed all the tasks involved in adding 2 Worker nodes to an existing Kubernetes cluster with three Master nodes and one Worker node.
This article demonstrated in detail how to use KubeKey to automatically add Worker nodes to an existing Kubernetes cluster.
Although the operations here were performed on openEuler 22.03 LTS SP2, the same workflow applies to scaling out Kubernetes clusters deployed with KubeKey on other operating systems.
In the next installment, we will walk through how to integrate GlusterFS as persistent storage in KubeSphere. Stay tuned.
The "Playing with K8s based on KubeSphere, Season 2" series is the hands-on documentation produced by 运维有术 for its Season 2 practical training camp of the same name.
Original statement: this article is published on the Tencent Cloud Developer Community with the author's authorization and may not be reproduced without permission.
In case of infringement, please contact cloudcommunity@tencent.com for removal.