
Deploying Kubernetes 1.21.10 with kubeadm [2022 update]


Last updated: 2022-03

0. Environment

Hostname | Role          | OS                   | IP
---------|---------------|----------------------|----------------
node1    | master        | Ubuntu 20.04 (ARM64) | 192.168.50.11
node2    | worker        | Ubuntu 20.04 (ARM64) | 192.168.50.12
node3    | worker        | Ubuntu 20.04 (ARM64) | 192.168.50.13
node4    | storage (NFS) | Ubuntu 20.04 (ARM64) | 192.168.50.147

1. Disable swap and configure kernel parameters

Run on all nodes (excluding the Storage node).

Code language: shell
sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab

sudo swapoff -a
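
As a quick check that swap is really off (swapon should print nothing and free should report 0B of swap):

Code language: shell
# No output means no active swap devices
swapon --show
# The Swap line should show 0B
free -h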

Modify the kernel parameters (choose one of the two methods; Method 2 persists the settings across reboots).

Method 1

Code language: shell
# Enable kernel modules
sudo modprobe overlay
sudo modprobe br_netfilter

sudo sysctl --system

Method 2

Code language: shell
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF

# Add some settings to sysctl
sudo tee /etc/sysctl.d/kubernetes.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF

# Reload sysctl
sudo sysctl --system
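
A quick check that the module is loaded and the settings took effect (all three values should report 1):

Code language: shell
lsmod | grep br_netfilter
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward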

2. Deploy the Container Runtime

Run on all nodes (except node4).

Code language: shell
sudo apt-get install -y apt-transport-https ca-certificates curl

sudo apt install -y docker.io

Modify the Docker daemon settings

Code language: shell
sudo tee /etc/docker/daemon.json <<EOF

{

  "exec-opts": ["native.cgroupdriver=systemd"],

  "registry-mirrors": ["https://cymwrg5y.mirror.aliyuncs.com"],

  "log-driver": "json-file",

  "log-opts": {

    "max-size": "100m"

  },

  "storage-driver": "overlay2"

}

EOF

Restart Docker

Code language: shell
sudo systemctl daemon-reload
sudo systemctl restart docker
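
Since kubeadm expects the kubelet and the container runtime to use the same cgroup driver, it is worth confirming that Docker is now on systemd:

Code language: shell
# Should print: systemd
sudo docker info --format '{{.CgroupDriver}}'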

3. Add the Kubernetes repository

Run on all nodes (except node4).

Code language: shell
curl -s https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | sudo apt-key add -

sudo tee /etc/apt/sources.list.d/kubernetes.list <<-'EOF'

deb https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial main

EOF
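
The available package versions can then be listed to confirm that 1.21.10-00 is present:

Code language: shell
sudo apt-get update
apt-cache madison kubeadm | grep 1.21.10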

4. Install kubeadm

Master node

Code language: shell
sudo apt-get update

sudo apt-get install -y kubectl=1.21.10-00 kubeadm=1.21.10-00 kubelet=1.21.10-00

sudo apt-mark hold kubectl kubeadm kubelet

Worker nodes

Code language: shell
# Worker nodes don't need kubectl
sudo apt-get install -y kubeadm=1.21.10-00 kubelet=1.21.10-00

# Prevent automatic upgrades (the packages are ignored by apt upgrade).
# To upgrade later, unhold first, then hold again afterwards.
sudo apt-mark hold kubeadm kubelet
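
For reference, a later upgrade follows the unhold/hold pattern described above (a sketch; the target version 1.21.11-00 is only an example):

Code language: shell
sudo apt-mark unhold kubeadm kubelet
sudo apt-get update && sudo apt-get install -y kubeadm=1.21.11-00 kubelet=1.21.11-00
sudo apt-mark hold kubeadm kubelet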

Confirm the installed versions

Code language: shell
kubectl version --client && kubeadm version

5. Deploy the master components

There are two ways to deploy the master components: with an init configuration file, or with command-line flags. The init configuration file is recommended.

Using an init configuration file

Generate the default kubeadm init configuration file and modify the following fields:

  • advertiseAddress: the IP address the API server advertises
  • imagePullPolicy: IfNotPresent
  • name: the master's hostname
  • imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
  • kubernetesVersion: 1.21.10

Code language: shell
sudo kubeadm config print init-defaults > init-config.yaml

The modified YAML file:

Code language: yaml
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.50.11
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  imagePullPolicy: IfNotPresent
  name: master
  taints: null
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: 1.21.10
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
scheduler: {}
---
# Choose according to the actual cgroup driver; if Docker is configured with systemd, this section is not needed
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
#cgroupDriver: systemd
cgroupDriver: cgroupfs

A note on cgroupDriver: kubeadm sets the kubelet's cgroupDriver to systemd by default. If the kubelet's cgroup driver does not match Docker's, the kubelet will fail to start, so keep Docker and the kubelet on the same driver. Since the daemon.json above already configures Docker with systemd, the KubeletConfiguration section can be omitted; it is only needed when Docker uses cgroupfs.
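
With the file prepared, the control plane is initialized from it (a minimal sketch, assuming the file generated above is named init-config.yaml; note that kubeadm 1.21 expects the kubeadm.k8s.io/v1beta2 API in this file, so change any v1beta3 apiVersion to v1beta2 if kubeadm rejects it):

Code language: shell
# Optionally pre-pull the control-plane images first
sudo kubeadm config images pull --config init-config.yaml

# Initialize the control plane from the config file
sudo kubeadm init --config init-config.yaml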

Using command-line flags

Code language: shell
# Pod network CIDR, matching the value used in the script in section 8
POD_NETWORK=10.244.0.0/16

sudo kubeadm init \
    --image-repository registry.aliyuncs.com/google_containers \
    --kubernetes-version 1.21.10 \
    --pod-network-cidr=$POD_NETWORK \
    --service-cidr=10.96.0.0/16 \
    --apiserver-advertise-address=192.168.50.11

After the master components are created, configure the kubeconfig file:

Code language: shell
# For a regular user

mkdir -p $HOME/.kube

sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

sudo chown $(id -u):$(id -g) $HOME/.kube/config

# For the root user

export KUBECONFIG=/etc/kubernetes/admin.conf
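
At this point kubectl should reach the API server; the master will show NotReady until the network plugin from step 7 is installed:

Code language: shell
kubectl get nodes
kubectl get pods -n kube-system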

6. Join the worker nodes to the cluster

On each worker node, run the join command that was generated on the master:

Code language: shell
sudo kubeadm join MASTERIP:6443 --token TOKEN \

    --discovery-token-ca-cert-hash sha256:SHA256

The token is valid for 24 hours; once it expires, a new one can be generated:

Code language: shell
# Method 1
kubeadm token create --print-join-command

# Method 2
# Create a new token
kubeadm token create

# List tokens and print the last one
kubeadm token list | awk -F" " '{print $1}' | tail -n 1

# Get the SHA-256 hash of the CA public key
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'

# Join the worker node to the cluster
kubeadm join masterIP:6443 --token TOKEN --discovery-token-ca-cert-hash sha256:HASHNUMBER

For kubeadm usage details, run kubeadm --help.
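
Back on the master, the new nodes should appear shortly (they remain NotReady until the network plugin is deployed):

Code language: shell
kubectl get nodes -o wide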

7. Deploy a network plugin

Plugins such as flannel and calico are available; calico is used here.

Option 1

Code language: shell
kubectl apply -f https://docs.projectcalico.org/v3.21/manifests/calico.yaml

Option 2

Code language: shell
kubectl create -f https://docs.projectcalico.org/manifests/tigera-operator.yaml

wget https://docs.projectcalico.org/manifests/custom-resources.yaml

# Edit custom-resources.yaml and change the pod network CIDR to 10.244.0.0/16
kubectl apply -f custom-resources.yaml

Once the network plugin is running, the coredns pods will move from Pending to Running.
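
Progress can be watched until all pods are Running and every node reports Ready:

Code language: shell
kubectl get pods -n kube-system -w
kubectl get nodes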

8. Script

The steps above combined into a single script; call master or worker at the bottom depending on the node's role.

Code language: shell
#!/bin/bash
set -eux
set -o pipefail

K8S_VERSION=1.21.10
MASTER_IP=192.168.50.11
POD_NETWORK=10.244.0.0/16
PACKAGE_VERSION=$K8S_VERSION-00

swapoff(){
    sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
    sudo swapoff -a
    echo "Disable swap successfully"
}

sysctl_config(){
    cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF
    sudo tee /etc/sysctl.d/kubernetes.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
    sudo sysctl --system
    echo "Reload sysctl successfully"
}

set_timezone(){
    sudo timedatectl set-timezone Asia/Shanghai
    sudo systemctl restart rsyslog
}

config_container(){
    sudo cp -a /etc/apt/sources.list /etc/apt/sources.list.bak
    sudo sed -i 's/ports.ubuntu.com/mirrors.aliyun.com/g' /etc/apt/sources.list
    sudo apt update
    sudo apt-get install -y apt-transport-https ca-certificates curl nfs-common
    sudo apt -y install docker.io
    sudo tee /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "registry-mirrors": ["https://cymwrg5y.mirror.aliyuncs.com"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF
    # sudo may prompt for a password here; if it is entered incorrectly, the script stops
    sudo systemctl daemon-reload
    sudo systemctl restart docker
}

add_k8s_repo(){
    #curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
    #echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
    curl -s https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | sudo apt-key add -
    sudo tee /etc/apt/sources.list.d/kubernetes.list <<-'EOF'
deb https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial main
EOF
}

install_master_package(){
    sudo apt-get update
    sudo apt-get install -y kubectl=$PACKAGE_VERSION kubeadm=$PACKAGE_VERSION kubelet=$PACKAGE_VERSION
    sudo apt-mark hold kubectl kubeadm kubelet
}

install_worker_package(){
    sudo apt-get update
    sudo apt-get install -y kubeadm=$PACKAGE_VERSION kubelet=$PACKAGE_VERSION
    sudo apt-mark hold kubeadm kubelet
}

bootstrap(){
    # Work around the coredns image name difference in the aliyun mirror
    sudo docker pull registry.aliyuncs.com/google_containers/coredns:1.8.0
    sudo docker tag \
    registry.aliyuncs.com/google_containers/coredns:1.8.0 \
    registry.aliyuncs.com/google_containers/coredns/coredns:v1.8.0
    sudo docker rmi registry.aliyuncs.com/google_containers/coredns:1.8.0
    sudo kubeadm config images pull --kubernetes-version v$K8S_VERSION --image-repository registry.aliyuncs.com/google_containers
    #sudo kubeadm config images pull --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.23.1
    sudo kubeadm init \
     --image-repository registry.aliyuncs.com/google_containers \
     --kubernetes-version $K8S_VERSION \
     --pod-network-cidr=$POD_NETWORK \
     --service-cidr=10.96.0.0/16 \
     --apiserver-advertise-address=$MASTER_IP
    #sudo kubeadm config images pull --kubernetes-version $K8S_VERSION
    #sudo kubeadm init \
    # --kubernetes-version $K8S_VERSION \
    # --pod-network-cidr=$POD_NETWORK \
    # --service-cidr=10.96.0.0/16 \
    # --apiserver-advertise-address=$MASTER_IP
}

kubeconfig(){
    # Run this script with bash or ./, not with sudo; otherwise the kubeconfig is copied to root's home
    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config
    #sudo echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> /root/.bashrc
}

install_calico(){
    kubectl create -f https://docs.projectcalico.org/manifests/tigera-operator.yaml
    sudo wget https://docs.projectcalico.org/manifests/custom-resources.yaml
    sudo sed -i 's/192.168.0.0/10.244.0.0/g' custom-resources.yaml
    #sudo sed 's/[0-9]\+.[0-9]\+.[0-9]\+.[0-9]\+/$POD_NETWORK/g' custom-resources.yaml
    kubectl apply -f custom-resources.yaml
}

master(){
    swapoff
    set_timezone
    sysctl_config
    config_container
    add_k8s_repo
    install_master_package
    bootstrap
    kubeconfig
    install_calico
}

worker(){
    swapoff
    set_timezone
    sysctl_config
    config_container
    add_k8s_repo
    install_worker_package
}

# Uncomment the role that matches this node
#master
worker
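
To use it, save the script on the target node, uncomment the matching role at the bottom, and run it as a regular user, as the kubeconfig() comment requires (a sketch; the file name k8s-install.sh is arbitrary):

Code language: shell
bash k8s-install.sh
# or
chmod +x k8s-install.sh && ./k8s-install.sh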

9. Convenience configuration

zsh

Code language: shell
sudo apt-get install zsh -y
chsh -s $(which zsh)

wget https://github.com/robbyrussell/oh-my-zsh/raw/master/tools/install.sh -O - | sh

git clone https://github.com/zsh-users/zsh-autosuggestions ${ZSH_CUSTOM:-~/.oh-my-zsh}/plugins/zsh-autosuggestions
git clone https://github.com/zsh-users/zsh-syntax-highlighting.git ${ZSH_CUSTOM:-~/.oh-my-zsh}/plugins/zsh-syntax-highlighting

sudo apt-get install -y autojump

# Edit ~/.zshrc and set:
vim ~/.zshrc
ZSH_THEME="ys"
plugins=(git zsh-autosuggestions zsh-syntax-highlighting)
. /usr/share/autojump/autojump.sh

source ~/.zshrc

Auto-completion

Code language: shell
echo 'source <(kubectl completion zsh)' >> ~/.zshrc

echo 'compdef __start_kubectl k kc' >>~/.zshrc

Aliases

Code language: shell
echo 'alias k=kubectl' >>~/.zshrc

echo 'alias kc="kubectl"' >> ~/.zshrc

echo 'alias ka="kubectl apply -f"' >> ~/.zshrc

echo 'alias kd="kubectl delete -f"' >> ~/.zshrc

10. Configure storage

NFS

The manifests below deploy the NFS subdir external provisioner against the NFS export /nfs on node4 (192.168.50.147).

RBAC

Code language: yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: kube-system
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: kube-system
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: kube-system
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: kube-system
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: kube-system
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io

Deployment

Code language: yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: kube-system
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: registry.cn-beijing.aliyuncs.com/kubesphereio/nfs-subdir-external-provisioner:v4.0.2
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: k8s-sigs.io/nfs-subdir-external-provisioner
            - name: NFS_SERVER
              value: 192.168.50.147
            - name: NFS_PATH
              value: /nfs
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.50.147
            path: /nfs

Class

Code language: yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner # or choose another name, must match deployment's env PROVISIONER_NAME
parameters:
  archiveOnDelete: "false"

Verify
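
A quick command-line check that the provisioner pod is running and the StorageClass exists:

Code language: shell
kubectl get pods -n kube-system | grep nfs-client-provisioner
kubectl get storageclass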

Set the default StorageClass

Code language: shell
kubectl patch storageclass managed-nfs-storage -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
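
To confirm that dynamic provisioning works end to end, a throwaway PVC can be created and checked for binding (a minimal sketch; the claim name test-claim is arbitrary):

Code language: shell
kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: managed-nfs-storage
  resources:
    requests:
      storage: 1Mi
EOF

# The claim should reach the Bound state
kubectl get pvc test-claim

# Clean up
kubectl delete pvc test-claim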

Reclaim policy

https://blog.csdn.net/zenglingmin8/article/details/121737953

11. Manage the cluster remotely

a. Copy the kubeconfig file from the master node to $HOME/.kube/config on the local machine and change the server field to server: https://kubernetes.default.svc:6443

Domain names that can be used here: kubernetes, kubernetes.default, kubernetes.default.svc, kubernetes.default.svc.cluster, kubernetes.default.svc.cluster.local

b. Adjust the kubeconfig file permissions

Code language: shell
chmod 600 $HOME/.kube/config

c. Adjust name resolution

Resolve kubernetes.default.svc to the public IP address mapped to the cluster:

Code language: shell
cat /etc/hosts

X.X.X.X kubernetes.default.svc
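
After these steps, the cluster should be reachable from the local machine:

Code language: shell
kubectl cluster-info
kubectl get nodes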

12. Kuboard

Code language: shell
curl -o kuboard-v3.yaml https://addons.kuboard.cn/kuboard/kuboard-v3-storage-class.yaml

After modifying the node IP and the StorageClass name in the file, apply it:

Code language: shell
kubectl apply -f kuboard-v3.yaml
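
Once applied, the Kuboard pods and Service can be checked (assuming the manifest's default kuboard namespace; the web console is exposed on a NodePort, which the Service listing will show):

Code language: shell
kubectl get pods -n kuboard
kubectl get svc -n kuboard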

13. References

https://k8s.iswbm.com/c01/p01_depoly-kubernetes-cluster-with-kubelet.html

https://www.cnblogs.com/cosmos-wong/p/15709412.html

https://kubernetes.io/zh/docs/tasks/tools/included/optional-kubectl-configs-zsh/

https://computingforgeeks.com/deploy-and-use-openebs-container-storage-on-kubernetes/

Original statement: This article was published on the Tencent Cloud Developer Community with the author's authorization and may not be reproduced without permission. For infringement concerns, please contact cloudcommunity@tencent.com for removal.

原创声明:本文系作者授权腾讯云开发者社区发表,未经许可,不得转载。

如有侵权,请联系 cloudcommunity@tencent.com 删除。

评论
登录后参与评论
0 条评论
热度
最新
推荐阅读
目录
  • 0. 环境
  • 1. 关闭交换分区并配置内核参数
  • 2. 部署 Container Runtime
  • 3. 添加 Kubernetes 仓库
  • 4. 安装 kubeadm
  • 5. 部署 Master 相关组件
  • 6. 配置 Worker node 加入集群
  • 7. 部署网络插件
  • 8. 脚本
  • 9. 优化配置
    • zsh
      • 自动补全
        • 别名
        • 10. 配置存储
          • NFS
            • 回收策略
            • 11. 远程管理集群
            • 12. Kuboard
            • 13. 参考文档
            相关产品与服务
            容器服务
            腾讯云容器服务(Tencent Kubernetes Engine, TKE)基于原生 kubernetes 提供以容器为核心的、高度可扩展的高性能容器管理服务,覆盖 Serverless、边缘计算、分布式云等多种业务部署场景,业内首创单个集群兼容多种计算节点的容器资源管理模式。同时产品作为云原生 Finops 领先布道者,主导开源项目Crane,全面助力客户实现资源优化、成本控制。
            领券
            问题归档专栏文章快讯文章归档关键词归档开发者手册归档开发者手册 Section 归档