
Deploying highly available Kubernetes with kubeadm

By 全栈程序员站长, published 2022-09-15

Hello everyone, nice to see you again. I'm your friend, 全栈君.

1 Prepare the environment (run on all hosts)

1.1 Host list
cat >> /etc/hosts <<EOF
192.168.3.71   k8s-master01
192.168.3.72   k8s-master02
192.168.3.73   k8s-master03
192.168.3.74   k8s-worker01
192.168.3.75   k8s-worker02
192.168.3.76   k8s-worker03
192.168.3.77   k8s-worker04
EOF

1.2 Disable the firewall
systemctl disable firewalld --now

1.3 Disable SELinux
setenforce 0
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config

1.4 Disable swap
swapoff -a
sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
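Whether swap is really off can be checked immediately, and again after a reboot; this verification step is an addition, not part of the original procedure:

```shell
# No output from swapon means no active swap; free's Swap line should read 0.
swapon --show
free -h
```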

1.5 Upgrade the kernel

(1) Update the system packages:
yum -y update

(2) Enable the ELRepo repository. ELRepo is a community-based repository for Enterprise Linux that supports RedHat Enterprise Linux (RHEL) and RHEL-based distributions (CentOS, Scientific Linux, Fedora, etc.). It focuses on hardware-related packages: filesystem drivers, graphics drivers, network drivers, sound drivers, webcam drivers, and so on.

Import the ELRepo repository's public key:
rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org

Install the ELRepo yum repository:
rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-3.el7.elrepo.noarch.rpm

(3) List the available kernel packages. Two versions show up: 5.4 (kernel-lt, long-term support) and 5.15 (kernel-ml, mainline).
yum --disablerepo="*" --enablerepo="elrepo-kernel" list available
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * elrepo-kernel: hkg.mirror.rackspace.com
elrepo-kernel                                                                                                                                | 3.0 kB  00:00:00     
elrepo-kernel/primary_db                                                                                                                     | 2.0 MB  00:00:01     
Available Packages
elrepo-release.noarch                                                             7.0-5.el7.elrepo                                                     elrepo-kernel
kernel-lt.x86_64                                                                  5.4.157-1.el7.elrepo                                                 elrepo-kernel
kernel-lt-devel.x86_64                                                            5.4.157-1.el7.elrepo                                                 elrepo-kernel
kernel-lt-doc.noarch                                                              5.4.157-1.el7.elrepo                                                 elrepo-kernel
kernel-lt-headers.x86_64                                                          5.4.157-1.el7.elrepo                                                 elrepo-kernel
kernel-lt-tools.x86_64                                                            5.4.157-1.el7.elrepo                                                 elrepo-kernel
kernel-lt-tools-libs.x86_64                                                       5.4.157-1.el7.elrepo                                                 elrepo-kernel
kernel-lt-tools-libs-devel.x86_64                                                 5.4.157-1.el7.elrepo                                                 elrepo-kernel
kernel-ml.x86_64                                                                  5.15.0-1.el7.elrepo                                                  elrepo-kernel
kernel-ml-devel.x86_64                                                            5.15.0-1.el7.elrepo                                                  elrepo-kernel
kernel-ml-doc.noarch                                                              5.15.0-1.el7.elrepo                                                  elrepo-kernel
kernel-ml-headers.x86_64                                                          5.15.0-1.el7.elrepo                                                  elrepo-kernel
kernel-ml-tools.x86_64                                                            5.15.0-1.el7.elrepo                                                  elrepo-kernel
kernel-ml-tools-libs.x86_64                                                       5.15.0-1.el7.elrepo                                                  elrepo-kernel
kernel-ml-tools-libs-devel.x86_64                                                 5.15.0-1.el7.elrepo                                                  elrepo-kernel
perf.x86_64                                                                       5.15.0-1.el7.elrepo                                                  elrepo-kernel
python-perf.x86_64                                                                5.15.0-1.el7.elrepo                                                  elrepo-kernel

(4) Install the latest mainline kernel:

yum --enablerepo=elrepo-kernel install kernel-ml

The --enablerepo option enables the specified repository for the command. Only elrepo is enabled by default, so it is replaced with elrepo-kernel here.

(5) Configure grub2. Once the new kernel is installed, it must be set as the default boot entry and the machine rebooted before it takes effect. List all available kernels on the system:
# sudo awk -F\' '$1=="menuentry " {print i++ " : " $2}' /etc/grub2.cfg
0 : CentOS Linux (5.15.0-1.el7.elrepo.x86_64) 7 (Core)
1 : CentOS Linux (3.10.0-862.el7.x86_64) 7 (Core)
2 : CentOS Linux (0-rescue-94ed1d2d30f041468d148d9dd88524dc) 7 (Core)

Set the new kernel as the grub2 default. The listing above shows three kernel entries; we want the 5.15 kernel, which is entry 0. The default can be set either with the grub2-set-default command or by editing /etc/default/grub.

Set it with grub2-set-default, where 0 is the index of the desired kernel from the query above:
grub2-set-default 0

Generate the grub configuration file:
grub2-mkconfig -o /boot/grub2/grub.cfg
Generating grub configuration file ...
Found linux image: /boot/vmlinuz-5.15.0-1.el7.elrepo.x86_64
Found initrd image: /boot/initramfs-5.15.0-1.el7.elrepo.x86_64.img
Found linux image: /boot/vmlinuz-3.10.0-862.el7.x86_64
Found initrd image: /boot/initramfs-3.10.0-862.el7.x86_64.img
Found linux image: /boot/vmlinuz-0-rescue-94ed1d2d30f041468d148d9dd88524dc
Found initrd image: /boot/initramfs-0-rescue-94ed1d2d30f041468d148d9dd88524dc.img
done

Reboot the server:
reboot

(6) Verify the running kernel:
# uname -r
5.15.0-1.el7.elrepo.x86_64

Kernel upgrade reference: https://www.cnblogs.com/xzkzzz/p/9627658.html

1.6 Update the yum repositories

Replace CentOS-Base.repo:
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo

Add docker-ce.repo:
wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo

Add kubernetes.repo:
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
yum repolist

1.7 Install required tools
yum install wget jq psmisc vim net-tools telnet yum-utils device-mapper-persistent-data lvm2 git -y

1.8 Configure time synchronization
yum install -y ntpdate
crontab -e
0 */1 * * * /usr/sbin/ntpdate ntp1.aliyun.com

1.9 Configure resource limits
ulimit -SHn 65535
vi /etc/security/limits.conf
* soft nofile 655360
* hard nofile 655360
* soft nproc 655350
* hard nproc 655350
* soft memlock unlimited
* hard memlock unlimited
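These limits apply only to new login sessions. After reconnecting, they can be verified with ulimit (an optional check, not in the original text):

```shell
# Run in a fresh session; values should match limits.conf.
ulimit -n   # max open files (nofile)
ulimit -u   # max user processes (nproc)
ulimit -l   # max locked memory (memlock)
```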

1.10 Set up passwordless SSH

# Run on k8s-master01:

ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa): 
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:Q+2HrmD3D5NGiM9uJ8FJ8eqqKG+rK31qVbyJ8Et3grE root@k8s-master01
The key's randomart image is:
+---[RSA 2048]----+
|                 |
|        ..       |
|     .  .o.      |
|  . . oo.o..     |
|   o *.=So+ .    |
|    E =o*+ o     |
| . o oo++.*      |
|o +.o. +++.o     |
|+B=+...ooo...    |
+----[SHA256]-----+

ssh-copy-id k8s-master02
ssh-copy-id k8s-master03
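With seven nodes, running ssh-copy-id once per host is tedious. A loop over the host list is one way to script it; this sketch assumes sshpass is installed and that all nodes temporarily share the same root password (both are assumptions, not part of the original steps):

```shell
# Hypothetical helper: push the public key to every node in one pass.
# Replace ROOT_PASSWORD with the actual password for your environment.
for host in k8s-master02 k8s-master03 k8s-worker01 k8s-worker02 k8s-worker03 k8s-worker04; do
  sshpass -p 'ROOT_PASSWORD' ssh-copy-id -o StrictHostKeyChecking=no "root@${host}"
done
```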

1.11 Install ipvsadm
yum install ipvsadm ipset sysstat conntrack libseccomp -y

Load the ipvs kernel modules:
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack

vi /etc/modules-load.d/ipvs.conf
ip_vs
ip_vs_lc
ip_vs_wlc
ip_vs_rr
ip_vs_wrr
ip_vs_lblc
ip_vs_lblcr
ip_vs_dh
ip_vs_sh
ip_vs_fo
ip_vs_nq
ip_vs_sed
ip_vs_ftp
ip_vs_sh
nf_conntrack
ip_tables
ip_set
xt_set
ipt_set
ipt_rpfilter
ipt_REJECT
ipip

Enable the systemd module-load service so the modules load at boot:
systemctl enable --now systemd-modules-load.service

Verify that the ipvs modules are loaded:
lsmod | grep -e ip_vs -e nf_conntrack
ip_vs_ftp              16384  0 
nf_nat                 49152  1 ip_vs_ftp
ip_vs_sed              16384  0 
ip_vs_nq               16384  0 
ip_vs_fo               16384  0 
ip_vs_dh               16384  0 
ip_vs_lblcr            16384  0 
ip_vs_lblc             16384  0 
ip_vs_wlc              16384  0 
ip_vs_lc               16384  0 
ip_vs_sh               16384  0 
ip_vs_wrr              16384  0 
ip_vs_rr               16384  0 
ip_vs                 159744  24 ip_vs_wlc,ip_vs_rr,ip_vs_dh,ip_vs_lblcr,ip_vs_sh,ip_vs_fo,ip_vs_nq,ip_vs_lblc,ip_vs_wrr,ip_vs_lc,ip_vs_sed,ip_vs_ftp
nf_conntrack          155648  2 nf_nat,ip_vs
nf_defrag_ipv6         24576  2 nf_conntrack,ip_vs
nf_defrag_ipv4         16384  1 nf_conntrack
libcrc32c              16384  4 nf_conntrack,nf_nat,xfs,ip_vs

1.12 Pass bridged IPv4 traffic to iptables chains
cat > /etc/sysctl.d/kubernetes.conf <<EOF
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward=1
vm.swappiness=0
EOF
sysctl --system
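To confirm the settings are active, load br_netfilter (required for the bridge-nf keys to exist) and query sysctl directly; this check is an addition to the original steps:

```shell
modprobe br_netfilter
sysctl net.bridge.bridge-nf-call-iptables \
       net.bridge.bridge-nf-call-ip6tables \
       net.ipv4.ip_forward
# Each key should report 1.
```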

1.13 Installation scripts (everything except NTP and passwordless SSH)

k8s-install1.sh

# 1.1 Host list
cat >> /etc/hosts <<EOF
10.160.103.101 hz-kubesphere-master01
10.160.103.102 hz-kubesphere-master02
10.160.103.103 hz-kubesphere-master03
EOF

# 1.2 Disable the firewall
systemctl disable firewalld --now

# 1.3 Disable SELinux
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config

# 1.4 Disable swap
swapoff -a
sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab

# 1.6 Update the yum repositories
yum install -y wget
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
yum repolist

# 1.5 Upgrade the kernel
yum -y update
rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-3.el7.elrepo.noarch.rpm
yum --enablerepo=elrepo-kernel install kernel-ml -y
grub2-set-default 0
grub2-mkconfig -o /boot/grub2/grub.cfg
reboot

k8s-install2.sh

# 1.7 Install required tools
yum install wget jq psmisc vim net-tools telnet yum-utils device-mapper-persistent-data lvm2 git -y

# 1.9 Configure resource limits
ulimit -SHn 65535
echo "* soft nofile 655360" >> /etc/security/limits.conf
echo "* hard nofile 655360" >> /etc/security/limits.conf
echo "* soft nproc 655350" >> /etc/security/limits.conf
echo "* hard nproc 655350" >> /etc/security/limits.conf
echo "* soft memlock unlimited" >> /etc/security/limits.conf
echo "* hard memlock unlimited" >> /etc/security/limits.conf

# 1.11 Install ipvsadm
yum install ipvsadm ipset sysstat conntrack libseccomp -y

# 1.12 Pass bridged IPv4 traffic to iptables chains
cat > /etc/sysctl.d/kubernetes.conf <<EOF
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward=1
vm.swappiness=0
EOF
sysctl --system

# 2 Install Docker
yum install docker-ce-19.03.* -y
systemctl enable docker --now
cat > /etc/docker/daemon.json <<EOF
{
  "registry-mirrors": ["https://v16stybc.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
EOF
systemctl restart docker

2 Install base components (run on all nodes)

2.1 Install Docker

Install Docker 19.03:
yum install docker-ce-19.03.* -y

Start and enable the service:

systemctl enable docker --now

2.2 Configure a Docker registry mirror and the systemd cgroup driver
cat > /etc/docker/daemon.json <<EOF
 {
 "registry-mirrors": ["https://v16stybc.mirror.aliyuncs.com"],
 "exec-opts":["native.cgroupdriver=systemd"],
 "log-driver":"json-file",
 "log-opts": {
  "max-size": "100m"
  },
 "storage-driver":"overlay2",
 "storage-opts": [
  "overlay2.override_kernel_check=true"
  ]
}
EOF
systemctl restart docker

2.3 Install the Kubernetes 1.18.6 packages
yum install -y kubelet-1.18.6 kubeadm-1.18.6 kubectl-1.18.6

To keep the cgroup driver used by the kubelet consistent with Docker's, modify the following file:
cat > /etc/sysconfig/kubelet << EOF
KUBELET_EXTRA_ARGS="--cgroup-driver=systemd --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:3.2"
EOF

Start and enable the kubelet:
systemctl enable kubelet --now
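Before initializing the cluster it is worth confirming that Docker actually reports the systemd cgroup driver, since a kubelet/Docker mismatch is a common cause of init failures (an optional check, not in the original text):

```shell
docker info 2>/dev/null | grep -i 'cgroup driver'
# Expected: Cgroup Driver: systemd
```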

3 Install the high-availability components

3.1 Install haproxy and keepalived

(1) Install the packages (run on all master nodes)

Official HA guide: https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/high-availability
yum install keepalived haproxy psmisc -y
cp /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.bak
echo "#" > /etc/keepalived/keepalived.conf
cp -p /etc/haproxy/haproxy.cfg /etc/haproxy/haproxy.cfg.bak
echo "#" > /etc/haproxy/haproxy.cfg

(2) Edit the configuration files

haproxy:

k8s-master01/k8s-master02/k8s-master03:
cat /etc/haproxy/haproxy.cfg
#
global
    log /dev/log  local0 warning
    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon
   
   stats socket /var/lib/haproxy/stats
   
defaults
  log global
  option  httplog
  option  dontlognull
  timeout connect 5000
  timeout client 50000
  timeout server 50000
   
frontend kube-apiserver
  bind *:16443
  mode tcp
  option tcplog
  default_backend kube-apiserver
   
backend kube-apiserver
    mode tcp
    option tcplog
    option tcp-check
    balance roundrobin
    default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
    server kube-apiserver-1 192.168.3.71:6443 check # Replace the IP address with your own.
    server kube-apiserver-2 192.168.3.72:6443 check # Replace the IP address with your own.
    server kube-apiserver-3 192.168.3.73:6443 check # Replace the IP address with your own.

keepalived:

k8s-master01:
cat /etc/keepalived/keepalived.conf
#
global_defs {
  notification_email {
  }
  router_id LVS_DEVEL
  vrrp_skip_check_adv_addr
  vrrp_garp_interval 0
  vrrp_gna_interval 0
}
   
vrrp_script chk_haproxy {
  script "killall -0 haproxy"
  interval 2
  weight 2
}
   
vrrp_instance haproxy-vip {
  state BACKUP
  priority 100
  interface eth0                       # Network card
  virtual_router_id 77
  advert_int 1
  authentication {
    auth_type PASS
    auth_pass wangjinxiong
  }
  mcast_src_ip 192.168.3.71      # The IP address of this machine
   
  virtual_ipaddress {
    192.168.3.70                  # The VIP address
  }
   
  track_script {
    chk_haproxy
  }
}

k8s-master02:

cat /etc/keepalived/keepalived.conf
#
global_defs {
  notification_email {
  }
  router_id LVS_DEVEL
  vrrp_skip_check_adv_addr
  vrrp_garp_interval 0
  vrrp_gna_interval 0
}
   
vrrp_script chk_haproxy {
  script "killall -0 haproxy"
  interval 2
  weight 2
}
   
vrrp_instance haproxy-vip {
  state BACKUP
  priority 99
  interface eth0                       # Network card
  virtual_router_id 77
  advert_int 1
  authentication {
    auth_type PASS
    auth_pass wangjinxiong
  }
  mcast_src_ip 192.168.3.72      # The IP address of this machine
   
  virtual_ipaddress {
    192.168.3.70                  # The VIP address
  }
   
  track_script {
    chk_haproxy
  }
}

k8s-master03:

cat /etc/keepalived/keepalived.conf
#
global_defs {
  notification_email {
  }
  router_id LVS_DEVEL
  vrrp_skip_check_adv_addr
  vrrp_garp_interval 0
  vrrp_gna_interval 0
}
   
vrrp_script chk_haproxy {
  script "killall -0 haproxy"
  interval 2
  weight 2
}
   
vrrp_instance haproxy-vip {
  state BACKUP
  priority 98
  interface eth0                       # Network card
  virtual_router_id 77
  advert_int 1
  authentication {
    auth_type PASS
    auth_pass wangjinxiong
  }
  mcast_src_ip 192.168.3.73      # The IP address of this machine
   
  virtual_ipaddress {
    192.168.3.70                  # The VIP address
  }
   
  track_script {
    chk_haproxy
  }
}
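The three configurations above differ only in priority and mcast_src_ip. Once haproxy and keepalived are started on all masters, exactly one node (the highest priority, k8s-master01 here) should hold the VIP. A quick way to verify this and to exercise failover is sketched below; the original text does not show these steps explicitly:

```shell
# Start the services (run on each master).
systemctl enable --now haproxy keepalived

# The VIP should appear on exactly one master's interface.
ip addr show eth0 | grep 192.168.3.70

# Failover test: stop haproxy on the VIP holder; chk_haproxy then fails,
# lowering that node's effective priority, and the VIP should move to the
# next master within a few seconds.
systemctl stop haproxy
```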

3.2 Deploy the HA cluster

3.2.1 Generate and modify the configuration file
kubeadm config print init-defaults > kubeadm-config.yaml

Modify (or write directly) the following content:
cat kubeadm-config.yaml 
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24000h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.3.71     # IP of this node
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: k8s-master01        # hostname of this node
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  certSANs:
  - 192.168.3.70
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: "192.168.3.70:16443"    # VIP and haproxy port
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers    # adjust the image repository to your environment
kind: ClusterConfiguration
kubernetesVersion: v1.18.6     # Kubernetes version
networking:
  dnsDomain: cluster.local
  podSubnet: "10.255.0.0/16"
  serviceSubnet: 10.254.0.0/16
scheduler: {}

Copy the configuration file to k8s-master02 and k8s-master03:
scp kubeadm-config.yaml root@k8s-master02:/root
scp kubeadm-config.yaml root@k8s-master03:/root

Pull the images on each of the three master nodes:
kubeadm config images pull --config kubeadm-config.yaml

The pulled images are:
# kubeadm config images pull --config kubeadm-config.yaml
W1114 11:56:27.286372    4411 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.18.6
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.18.6
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.18.6
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.18.6
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.2
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.4.3-0
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.6.7
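The same image list can also be printed without pulling anything, which is useful for pre-staging images in an offline registry (an optional step, standard kubeadm usage):

```shell
kubeadm config images list --config kubeadm-config.yaml
```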

3.2.2 Initialize the cluster

Run on k8s-master01:
# kubeadm init --config kubeadm-config.yaml
W1114 12:04:56.768490    5707 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.18.6
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.254.0.1 192.168.3.71 192.168.3.70 192.168.3.70]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master01 localhost] and IPs [192.168.3.71 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master01 localhost] and IPs [192.168.3.71 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "admin.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W1114 12:04:59.684854    5707 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W1114 12:04:59.685634    5707 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 28.238942 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-master01 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s-master01 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join 192.168.3.70:16443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:a842f419cb57aea185507844755a487814bacd1b34dd7d9ca6507944f963118e \
    --control-plane 

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.3.70:16443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:a842f419cb57aea185507844755a487814bacd1b34dd7d9ca6507944f963118e 

If initialization fails, reset and retry:
kubeadm reset

Copy the admin kubeconfig on k8s-master01:
[root@k8s-master01 ~]# mkdir -p $HOME/.kube
[root@k8s-master01 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-master01 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config

Check the cluster status:
# kubectl get nodes
NAME           STATUS     ROLES    AGE     VERSION
k8s-master01   NotReady   master   2m36s   v1.18.6

Create the following directory on the other two master nodes:
mkdir -p /etc/kubernetes/pki/etcd

Copy the certificates from the first master to the other master nodes:
scp /etc/kubernetes/pki/ca.* root@k8s-master02:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/sa.* root@k8s-master02:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/front-proxy-ca.* root@k8s-master02:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/etcd/ca.* root@k8s-master02:/etc/kubernetes/pki/etcd/
scp /etc/kubernetes/admin.conf root@k8s-master02:/etc/kubernetes/
scp /etc/kubernetes/pki/ca.* root@k8s-master03:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/sa.* root@k8s-master03:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/front-proxy-ca.* root@k8s-master03:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/etcd/ca.* root@k8s-master03:/etc/kubernetes/pki/etcd/
scp /etc/kubernetes/admin.conf root@k8s-master03:/etc/kubernetes/

Join k8s-master02 and k8s-master03 to the cluster as control-plane nodes:
 kubeadm join 192.168.3.70:16443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:a842f419cb57aea185507844755a487814bacd1b34dd7d9ca6507944f963118e \
    --control-plane 

Join the worker nodes to the cluster:
kubeadm join 192.168.3.70:16443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:a842f419cb57aea185507844755a487814bacd1b34dd7d9ca6507944f963118e
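The bootstrap token in these join commands expires (the default TTL is 24h; the kubeadm-config.yaml above extends it). If it has expired, a fresh worker join command can be generated on any existing master; for an extra control-plane node the certificates must also be re-uploaded. Both are standard kubeadm commands:

```shell
# Print a complete worker join command with a newly created token.
kubeadm token create --print-join-command

# For an additional control-plane node, re-upload the certificates and use
# the printed key with: kubeadm join ... --control-plane --certificate-key <key>
kubeadm init phase upload-certs --upload-certs
```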

Check the cluster:
# kubectl get pod --all-namespaces -o wide
NAMESPACE     NAME                                   READY   STATUS    RESTARTS   AGE     IP             NODE           NOMINATED NODE   READINESS GATES
kube-system   coredns-546565776c-5pb6d               0/1     Pending   0          7m43s   <none>         <none>         <none>           <none>
kube-system   coredns-546565776c-7fbvc               0/1     Pending   0          7m43s   <none>         <none>         <none>           <none>
kube-system   etcd-k8s-master01                      1/1     Running   0          7m51s   192.168.3.71   k8s-master01   <none>           <none>
kube-system   etcd-k8s-master02                      1/1     Running   0          5m55s   192.168.3.72   k8s-master02   <none>           <none>
kube-system   etcd-k8s-master03                      1/1     Running   0          3m44s   192.168.3.73   k8s-master03   <none>           <none>
kube-system   kube-apiserver-k8s-master01            1/1     Running   0          7m51s   192.168.3.71   k8s-master01   <none>           <none>
kube-system   kube-apiserver-k8s-master02            1/1     Running   0          5m56s   192.168.3.72   k8s-master02   <none>           <none>
kube-system   kube-apiserver-k8s-master03            1/1     Running   1          3m50s   192.168.3.73   k8s-master03   <none>           <none>
kube-system   kube-controller-manager-k8s-master01   1/1     Running   1          7m51s   192.168.3.71   k8s-master01   <none>           <none>
kube-system   kube-controller-manager-k8s-master02   1/1     Running   0          5m56s   192.168.3.72   k8s-master02   <none>           <none>
kube-system   kube-controller-manager-k8s-master03   1/1     Running   0          4m2s    192.168.3.73   k8s-master03   <none>           <none>
kube-system   kube-proxy-79jxw                       1/1     Running   0          7m43s   192.168.3.71   k8s-master01   <none>           <none>
kube-system   kube-proxy-qq4td                       1/1     Running   0          5m57s   192.168.3.72   k8s-master02   <none>           <none>
kube-system   kube-proxy-z42k4                       1/1     Running   0          5m10s   192.168.3.73   k8s-master03   <none>           <none>
kube-system   kube-proxy-zk628                       1/1     Running   0          85s     192.168.3.74   k8s-worker01   <none>           <none>
kube-system   kube-scheduler-k8s-master01            1/1     Running   1          7m51s   192.168.3.71   k8s-master01   <none>           <none>
kube-system   kube-scheduler-k8s-master02            1/1     Running   0          5m56s   192.168.3.72   k8s-master02   <none>           <none>
kube-system   kube-scheduler-k8s-master03            1/1     Running   0          4m26s   192.168.3.73   k8s-master03   <none>           <none>
# kubectl get nodes
NAME           STATUS     ROLES    AGE     VERSION
k8s-master01   NotReady   master   9m29s   v1.18.6
k8s-master02   NotReady   master   7m28s   v1.18.6
k8s-master03   NotReady   master   6m41s   v1.18.6
k8s-worker01   NotReady   <none>   2m56s   v1.18.6
k8s-worker02   NotReady   <none>   9s      v1.18.6

Note: the nodes show NotReady because no network plugin has been installed yet.

3.2.3 Install the Calico network plugin

Calico manifest download: https://docs.projectcalico.org/v3.15/manifests/calico.yaml
kubectl apply -f calico3.20.1.yaml 
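Calico can take a minute or two to become Running on every node; the rollout can be watched with the following (the k8s-app=calico-node label is the one used in the standard Calico manifest):

```shell
kubectl -n kube-system get pods -l k8s-app=calico-node -w
```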

Watch the cluster come up:
# kubectl get nodes
NAME           STATUS   ROLES    AGE     VERSION
k8s-master01   Ready    master   3h20m   v1.18.6
k8s-master02   Ready    master   3h18m   v1.18.6
k8s-master03   Ready    master   3h17m   v1.18.6
k8s-worker01   Ready    <none>   3h13m   v1.18.6
k8s-worker02   Ready    <none>   3h11m   v1.18.6
[root@k8s-master01 ~]# kubectl get pod --all-namespaces
NAMESPACE     NAME                                      READY   STATUS    RESTARTS   AGE
default       nginx0920                                 1/1     Running   0          66m
kube-system   calico-kube-controllers-7c4bc6fff-5lzmv   1/1     Running   0          3h10m
kube-system   calico-node-2gfcb                         1/1     Running   0          3h10m
kube-system   calico-node-88cq9                         1/1     Running   0          3h10m
kube-system   calico-node-8wrcf                         1/1     Running   0          3h10m
kube-system   calico-node-fmpmv                         1/1     Running   0          3h10m
kube-system   calico-node-qw7nf                         1/1     Running   0          3h10m
kube-system   coredns-546565776c-5pb6d                  1/1     Running   0          3h20m
kube-system   coredns-546565776c-7fbvc                  1/1     Running   0          3h20m
kube-system   etcd-k8s-master01                         1/1     Running   0          3h20m
kube-system   etcd-k8s-master02                         1/1     Running   0          3h18m
kube-system   etcd-k8s-master03                         1/1     Running   0          3h16m
kube-system   kube-apiserver-k8s-master01               1/1     Running   0          3h20m
kube-system   kube-apiserver-k8s-master02               1/1     Running   0          3h18m
kube-system   kube-apiserver-k8s-master03               1/1     Running   1          3h16m
kube-system   kube-controller-manager-k8s-master01      1/1     Running   1          3h20m
kube-system   kube-controller-manager-k8s-master02      1/1     Running   0          3h18m
kube-system   kube-controller-manager-k8s-master03      1/1     Running   0          3h16m
kube-system   kube-proxy-79jxw                          1/1     Running   0          3h20m
kube-system   kube-proxy-7wv9q                          1/1     Running   0          3h11m
kube-system   kube-proxy-qq4td                          1/1     Running   0          3h18m
kube-system   kube-proxy-z42k4                          1/1     Running   0          3h17m
kube-system   kube-proxy-zk628                          1/1     Running   0          3h14m
kube-system   kube-scheduler-k8s-master01               1/1     Running   1          3h20m
kube-system   kube-scheduler-k8s-master02               1/1     Running   0          3h18m
kube-system   kube-scheduler-k8s-master03               1/1     Running   0          3h17m
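Once calico is running, the quick sanity check is that every node reports Ready. A small sketch of that check (the helper name `count_ready` is ours, not a kubectl feature), filtering plain `kubectl get nodes --no-headers` output:

```shell
# count_ready: count nodes whose STATUS column is exactly "Ready"
# (nodes in NotReady or Ready,SchedulingDisabled are not counted).
count_ready() {
    awk '$2 == "Ready"' | wc -l
}
# On a live cluster: kubectl get nodes --no-headers | count_ready
```

Comparing the count against the expected node total makes the check scriptable, e.g. in a post-install validation step.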

3.2.4 Install the etcd client

Download:

wget https://github.com/etcd-io/etcd/releases/download/v3.4.3/etcd-v3.4.3-linux-amd64.tar.gz

Extract it and move the binary onto the PATH:

tar zxvf etcd-v3.4.3-linux-amd64.tar.gz 
mv etcd-v3.4.3-linux-amd64/etcdctl /usr/local/bin
chmod +x /usr/local/bin/etcdctl
rm -rf etcd-v3.4.3-linux-amd64

Check the health of the etcd HA cluster:

 ETCDCTL_API=3 etcdctl --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/peer.crt --key=/etc/kubernetes/pki/etcd/peer.key --write-out=table --endpoints=192.168.3.71:2379,192.168.3.72:2379,192.168.3.73:2379 endpoint health
+-------------------+--------+-------------+-------+
|     ENDPOINT      | HEALTH |    TOOK     | ERROR |
+-------------------+--------+-------------+-------+
| 192.168.3.73:2379 |   true | 14.763006ms |       |
| 192.168.3.72:2379 |   true | 14.539845ms |       |
| 192.168.3.71:2379 |   true | 15.906503ms |       |
+-------------------+--------+-------------+-------+

Find the etcd HA cluster leader:

ETCDCTL_API=3 etcdctl --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/peer.crt --key=/etc/kubernetes/pki/etcd/peer.key --write-out=table --endpoints=192.168.3.71:2379,192.168.3.72:2379,192.168.3.73:2379 endpoint status
+-------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
|     ENDPOINT      |        ID        | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
+-------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| 192.168.3.71:2379 | 676a68a7e2db62d3 |   3.4.3 |  5.0 MB |      true |      false |         3 |      47931 |              47931 |        |
| 192.168.3.72:2379 | 8250ee8843dcbebf |   3.4.3 |  5.0 MB |     false |      false |         3 |      47931 |              47931 |        |
| 192.168.3.73:2379 | f1d7b52439f4b031 |   3.4.3 |  5.0 MB |     false |      false |         3 |      47931 |              47931 |        |
+-------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
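The certificate flags above are verbose and easy to mistype; a small wrapper (the function name `etcdctl_ha` is our own, and the certificate paths assume the default kubeadm layout used in this guide) keeps the commands readable:

```shell
# etcdctl_ha: run etcdctl against all three members with the kubeadm-managed certs.
ETCD_ENDPOINTS=192.168.3.71:2379,192.168.3.72:2379,192.168.3.73:2379
etcdctl_ha() {
    ETCDCTL_API=3 etcdctl \
        --cacert=/etc/kubernetes/pki/etcd/ca.crt \
        --cert=/etc/kubernetes/pki/etcd/peer.crt \
        --key=/etc/kubernetes/pki/etcd/peer.key \
        --endpoints="$ETCD_ENDPOINTS" "$@"
}
# Usage: etcdctl_ha endpoint health --write-out=table
#        etcdctl_ha endpoint status --write-out=table
```

Dropping the function into /etc/profile.d/ makes it available in every shell on the masters.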

Reference: https://www.cnblogs.com/lfl17718347843/p/13417304.html

4 Extend the certificate validity to 100 years

4.1 Install the Go compiler

(1) Download the binary package

wget https://dl.google.com/go/go1.13.4.linux-amd64.tar.gz

(2) Extract it

tar -zxvf go1.13.4.linux-amd64.tar.gz -C /usr/local

(3) Set the environment variables

mkdir -p /home/gopath
cat >> /etc/profile <<EOF
export GOROOT=/usr/local/go
export GOPATH=/home/gopath
export PATH=\$PATH:\$GOROOT/bin
EOF
source /etc/profile

(4) Verify the version

# go version
go version go1.13.4 linux/amd64

4.2 Build and replace kubeadm

(1) Clone the source

git clone https://github.com/kubernetes/kubernetes.git

(2) Check out the tag that matches your cluster version (v1.18.6 here) and modify the source

cd /root/kubernetes
git checkout v1.18.6

Edit cmd/kubeadm/app/constants/constants.go, locate CertificateValidity, and change it as follows:

....
const (
        // KubernetesDir is the directory Kubernetes owns for storing various configuration files
        KubernetesDir = "/etc/kubernetes"
        // ManifestsSubDirName defines directory name to store manifests
        ManifestsSubDirName = "manifests"
        // TempDirForKubeadm defines temporary directory for kubeadm
        // should be joined with KubernetesDir.
        TempDirForKubeadm = "tmp"

        // CertificateValidity defines the validity for all the signed certificates generated by kubeadm
        CertificateValidity = time.Hour * 24 * 365 * 100
....
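The edit can also be scripted instead of done by hand. A sketch (the helper name `patch_validity` is ours) that assumes the constant is written exactly as `time.Hour * 24 * 365` in the v1.18.x source:

```shell
# patch_validity: rewrite the CertificateValidity line on stdin to 100 years.
patch_validity() {
    sed 's/CertificateValidity = time\.Hour \* 24 \* 365$/CertificateValidity = time.Hour * 24 * 365 * 100/'
}
# In the kubeadm source tree you would apply it in place, e.g.:
#   f=cmd/kubeadm/app/constants/constants.go
#   patch_validity < "$f" > "$f.new" && mv "$f.new" "$f"
```

Verify with `grep 'CertificateValidity =' cmd/kubeadm/app/constants/constants.go` before building.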

(3) Build kubeadm

make WHAT=cmd/kubeadm

The build produces the following binaries:

# ll _output/bin/
total 76708
-rwxr-xr-x 1 root root  6819840 Dec 11 00:24 conversion-gen
-rwxr-xr-x 1 root root  6795264 Dec 11 00:24 deepcopy-gen
-rwxr-xr-x 1 root root  6766592 Dec 11 00:24 defaulter-gen
-rwxr-xr-x 1 root root  4887622 Dec 11 00:24 go2make
-rwxr-xr-x 1 root root  2109440 Dec 11 00:25 go-bindata
-rwxr-xr-x 1 root root 39731200 Dec 11 00:26 kubeadm
-rwxr-xr-x 1 root root 11436032 Dec 11 00:24 openapi-gen
[root@k8s-master01 kubernetes]# pwd
/root/kubernetes

(4) Back up the original kubeadm binary and certificate files

cp -r /etc/kubernetes/pki /root/ca-backup
cp -p /usr/bin/kubeadm /root/ca-backup

(5) Replace kubeadm with the newly built binary

cd /root/kubernetes
cp _output/bin/kubeadm /usr/bin/kubeadm

(6) Renew the certificates:

# cd /etc/kubernetes/pki
# kubeadm alpha certs renew all

(7) The output looks like this:

# kubeadm alpha certs renew all
[renew] Reading configuration from the cluster...
[renew] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'

certificate embedded in the kubeconfig file for the admin to use and for kubeadm itself renewed
certificate for serving the Kubernetes API renewed
certificate the apiserver uses to access etcd renewed
certificate for the API server to connect to kubelet renewed
certificate embedded in the kubeconfig file for the controller manager to use renewed
certificate for liveness probes to healthcheck etcd renewed
certificate for etcd nodes to communicate with each other renewed
certificate for serving etcd renewed
certificate for the front proxy client renewed
certificate embedded in the kubeconfig file for the scheduler manager to use renewed

(8) Check the certificate validity:

[root@k8s-master01 pki]# kubeadm alpha certs check-expiration
[check-expiration] Reading configuration from the cluster...
[check-expiration] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'

CERTIFICATE                EXPIRES                  RESIDUAL TIME   CERTIFICATE AUTHORITY   EXTERNALLY MANAGED
admin.conf                 Nov 16, 2121 16:29 UTC   99y                                     no      
apiserver                  Nov 16, 2121 16:29 UTC   99y             ca                      no      
apiserver-etcd-client      Nov 16, 2121 16:29 UTC   99y             etcd-ca                 no      
apiserver-kubelet-client   Nov 16, 2121 16:29 UTC   99y             ca                      no      
controller-manager.conf    Nov 16, 2121 16:29 UTC   99y                                     no      
etcd-healthcheck-client    Nov 16, 2121 16:29 UTC   99y             etcd-ca                 no      
etcd-peer                  Nov 16, 2121 16:29 UTC   99y             etcd-ca                 no      
etcd-server                Nov 16, 2121 16:29 UTC   99y             etcd-ca                 no      
front-proxy-client         Nov 16, 2121 16:29 UTC   99y             front-proxy-ca          no      
scheduler.conf             Nov 16, 2121 16:29 UTC   99y                                     no      

CERTIFICATE AUTHORITY   EXPIRES                  RESIDUAL TIME   EXTERNALLY MANAGED
ca                      Nov 12, 2031 04:04 UTC   9y              no      
etcd-ca                 Nov 12, 2031 04:04 UTC   9y              no      
front-proxy-ca          Nov 12, 2031 04:04 UTC   9y              no      
[root@k8s-master01 pki]# kubectl get nodes
NAME           STATUS   ROLES    AGE   VERSION
k8s-master01   Ready    master   26d   v1.18.6
k8s-master02   Ready    master   26d   v1.18.6
k8s-master03   Ready    master   26d   v1.18.6
k8s-worker01   Ready    <none>   26d   v1.18.6
k8s-worker02   Ready    <none>   26d   v1.18.6
k8s-worker03   Ready    <none>   26d   v1.18.6
k8s-worker04   Ready    <none>   26d   v1.18.6
[root@k8s-master01 pki]# 
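Expiration can also be verified independently of kubeadm; a sketch using openssl directly against a certificate file (the helper name `cert_enddate` is ours):

```shell
# cert_enddate: print the notAfter date of a PEM certificate file,
# as a line of the form "notAfter=Nov 16 16:29:00 2121 GMT".
cert_enddate() {
    openssl x509 -enddate -noout -in "$1"
}
# e.g. cert_enddate /etc/kubernetes/pki/apiserver.crt
```

Running it over each file under /etc/kubernetes/pki is a useful cross-check that the renewal really took effect on disk.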

Reference blogs: https://zhuanlan.zhihu.com/p/150001642 and https://blog.csdn.net/sinat_28371057/article/details/90442805

5 Minimal installation of KubeSphere 3.1.1 on Kubernetes

Prerequisite: the cluster has a default StorageClass. Run the following commands to start the installation:

kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.1.1/kubesphere-installer.yaml
   
kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.1.1/cluster-configuration.yaml

Check the installation logs:

kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f

Make sure port 30880 is open in the security group, then access the web console via NodePort (IP:30880) with the default account and password (admin/P@88w0rd).

Reference: https://v3-1.docs.kubesphere.io/zh/docs/quick-start/minimal-kubesphere-on-k8s/

Deleting a k8s node: https://blog.csdn.net/inrgihc/article/details/109628471
Fixing the `kubectl get cs` error: https://blog.csdn.net/a772304419/article/details/112228226
Installing the calicoctl client: https://www.cnblogs.com/reatual/p/14366009.html

Change the KubeSphere web admin login password (the trailing `-` on the annotate command removes the `password-encrypted` annotation, so the new plaintext value gets re-hashed):

kubectl patch users <username> -p '{"spec":{"password":"<password>"}}' --type='merge' && kubectl annotate users <username> iam.kubesphere.io/password-encrypted-

6 Adding and removing nodes with kubeadm

6.1 Generate a new token on a master

[root@node1 ~]# kubeadm token create --print-join-command
W1111 17:50:25.985706  292853 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
kubeadm join apiserver.cluster.local:6443 --token sc2ty3.ej38ceisi5lmt9ad     --discovery-token-ca-cert-hash sha256:42bf6e526b795854b61b7c0ca875f9a8292b989d44f0f51a4d8dec450711b89e

6.2 Upload certificates on a master for a new master to join

[root@node1 ~]# kubeadm init phase upload-certs --upload-certs
I1111 17:50:52.634857  293705 version.go:252] remote version is much newer: v1.19.3; falling back to: stable-1.18
W1111 17:50:53.498664  293705 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
c5c77a2b5989c75c0ec98fae91f771c569e5764523fd8daa102a1cb074c07e2f
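The two outputs above combine into the full control-plane join command. A sketch of that composition (the function name `compose_master_join` is ours):

```shell
# compose_master_join: append the control-plane flags to a worker join command.
# $1 = output of `kubeadm token create --print-join-command`
# $2 = certificate key from `kubeadm init phase upload-certs --upload-certs`
compose_master_join() {
    echo "$1 --control-plane --certificate-key $2"
}
# On a live master:
#   compose_master_join "$(kubeadm token create --print-join-command)" \
#       "$(kubeadm init phase upload-certs --upload-certs 2>/dev/null | tail -1)"
```

The printed command is what you then run on the new master node.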

6.3 Add new nodes

6.3.1 Add a worker node

[root@node3 ~]kubeadm join apiserver.cluster.local:6443 --token sc2ty3.ej38ceisi5lmt9ad     --discovery-token-ca-cert-hash sha256:42bf6e526b795854b61b7c0ca875f9a8292b989d44f0f51a4d8dec450711b89e

6.3.2 Add a master node

Append the certificate key from step 6.2 after the `--control-plane --certificate-key` flags:

[root@node2 ~]kubeadm join apiserver.cluster.local:6443 --token sc2ty3.ej38ceisi5lmt9ad \
  --discovery-token-ca-cert-hash sha256:42bf6e526b795854b61b7c0ca875f9a8292b989d44f0f51a4d8dec450711b89e \
  --control-plane --certificate-key c5c77a2b5989c75c0ec98fae91f771c569e5764523fd8daa102a1cb074c07e2f

6.4 Remove a worker node

(1) Drain the pods from the node

# kubectl drain node2 --delete-local-data --force --ignore-daemonsets

Check the node status; it should now be marked unschedulable:

# kubectl get nodes

(2) Delete the node

kubectl delete node node2
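Steps (1) and (2) on the master side can be wrapped in one helper. A sketch with an optional dry-run mode (the function name `remove_node` is ours; pass `echo` as the second argument to print the commands instead of running them):

```shell
# remove_node: drain a node and delete it from the cluster (run on a master).
remove_node() {
    local node="$1" run="${2:-}"   # run="echo" prints instead of executing
    $run kubectl drain "$node" --delete-local-data --force --ignore-daemonsets && \
    $run kubectl delete node "$node"
}
# Dry run: remove_node node2 echo
# Real:    remove_node node2
```

The `&&` makes deletion happen only if the drain succeeds, so pods are always evicted first.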

(3) On the removed node itself, run the following commands to reset it

kubeadm reset
systemctl stop kubelet
systemctl stop docker
rm -rf /var/lib/cni/
rm -rf /var/lib/kubelet/*
rm -rf /etc/cni/
ifconfig cni0 down
ifconfig flannel.1 down
ifconfig docker0 down
ip link delete cni0
ip link delete flannel.1
systemctl start docker
systemctl start kubelet


Published by 全栈程序员栈长; when reposting, please credit the source: https://javaforall.cn/164227.html
