
Deploying Kubernetes (K8S) with kubeadm


Installing K8S

Environment

CentOS 7.6
Memory > 2 GB
Two or more servers

IP              Role         Installed software
192.168.10.39   k8s-master   kube-apiserver, kube-scheduler, kube-controller-manager, docker, flannel, kubelet
192.168.10.40   k8s-node01   kubelet, kube-proxy, docker, flannel

Mirror sources

Back up the existing repo files first:

cd /etc/yum.repos.d
mkdir bak
mv *.repo bak/

CentOS-Base.repo

vi CentOS-Base.repo

Contents:

# CentOS-Base.repo
#
# The mirror system uses the connecting IP address of the client and the
# update status of each mirror to pick mirrors that are updated to and
# geographically close to the client.  You should use this for CentOS updates
# unless you are manually picking other mirrors.
#
# If the mirrorlist= does not work for you, as a fall back you can try the 
# remarked out baseurl= line instead.
#
 
[base]
name=CentOS-$releasever - Base - mirrors.aliyun.com
failovermethod=priority
baseurl=http://mirrors.aliyun.com/centos/$releasever/os/$basearch/
        http://mirrors.aliyuncs.com/centos/$releasever/os/$basearch/
        http://mirrors.cloud.aliyuncs.com/centos/$releasever/os/$basearch/
gpgcheck=1
gpgkey=http://mirrors.aliyun.com/centos/RPM-GPG-KEY-CentOS-7
 
#released updates 
[updates]
name=CentOS-$releasever - Updates - mirrors.aliyun.com
failovermethod=priority
baseurl=http://mirrors.aliyun.com/centos/$releasever/updates/$basearch/
        http://mirrors.aliyuncs.com/centos/$releasever/updates/$basearch/
        http://mirrors.cloud.aliyuncs.com/centos/$releasever/updates/$basearch/
gpgcheck=1
gpgkey=http://mirrors.aliyun.com/centos/RPM-GPG-KEY-CentOS-7
 
#additional packages that may be useful
[extras]
name=CentOS-$releasever - Extras - mirrors.aliyun.com
failovermethod=priority
baseurl=http://mirrors.aliyun.com/centos/$releasever/extras/$basearch/
        http://mirrors.aliyuncs.com/centos/$releasever/extras/$basearch/
        http://mirrors.cloud.aliyuncs.com/centos/$releasever/extras/$basearch/
gpgcheck=1
gpgkey=http://mirrors.aliyun.com/centos/RPM-GPG-KEY-CentOS-7
 
#additional packages that extend functionality of existing packages
[centosplus]
name=CentOS-$releasever - Plus - mirrors.aliyun.com
failovermethod=priority
baseurl=http://mirrors.aliyun.com/centos/$releasever/centosplus/$basearch/
        http://mirrors.aliyuncs.com/centos/$releasever/centosplus/$basearch/
        http://mirrors.cloud.aliyuncs.com/centos/$releasever/centosplus/$basearch/
gpgcheck=1
enabled=0
gpgkey=http://mirrors.aliyun.com/centos/RPM-GPG-KEY-CentOS-7
 
#contrib - packages by Centos Users
[contrib]
name=CentOS-$releasever - Contrib - mirrors.aliyun.com
failovermethod=priority
baseurl=http://mirrors.aliyun.com/centos/$releasever/contrib/$basearch/
        http://mirrors.aliyuncs.com/centos/$releasever/contrib/$basearch/
        http://mirrors.cloud.aliyuncs.com/centos/$releasever/contrib/$basearch/
gpgcheck=1
enabled=0
gpgkey=http://mirrors.aliyun.com/centos/RPM-GPG-KEY-CentOS-7

epel-7.repo

vi epel-7.repo

Contents:

[epel]
name=Extra Packages for Enterprise Linux 7 - $basearch
baseurl=http://mirrors.aliyun.com/epel/7/$basearch
failovermethod=priority
enabled=1
gpgcheck=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7
 
[epel-debuginfo]
name=Extra Packages for Enterprise Linux 7 - $basearch - Debug
baseurl=http://mirrors.aliyun.com/epel/7/$basearch/debug
failovermethod=priority
enabled=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7
gpgcheck=0
 
[epel-source]
name=Extra Packages for Enterprise Linux 7 - $basearch - Source
baseurl=http://mirrors.aliyun.com/epel/7/SRPMS
failovermethod=priority
enabled=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7
gpgcheck=0

docker-ce.repo

vi docker-ce.repo

Contents:

[docker-ce-stable]
name=Docker CE Stable - $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/$basearch/stable
enabled=1
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-stable-debuginfo]
name=Docker CE Stable - Debuginfo $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/debug-$basearch/stable
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-stable-source]
name=Docker CE Stable - Sources
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/source/stable
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-edge]
name=Docker CE Edge - $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/$basearch/edge
enabled=1
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-edge-debuginfo]
name=Docker CE Edge - Debuginfo $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/debug-$basearch/edge
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-edge-source]
name=Docker CE Edge - Sources
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/source/edge
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-test]
name=Docker CE Test - $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/$basearch/test
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-test-debuginfo]
name=Docker CE Test - Debuginfo $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/debug-$basearch/test
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-test-source]
name=Docker CE Test - Sources
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/source/test
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-nightly]
name=Docker CE Nightly - $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/$basearch/nightly
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-nightly-debuginfo]
name=Docker CE Nightly - Debuginfo $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/debug-$basearch/nightly
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-nightly-source]
name=Docker CE Nightly - Sources
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/source/nightly
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

kubernetes.repo

vi kubernetes.repo

Contents:

[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg

Installing Docker

Install on all hosts:

# Remove old versions (if previously installed)
yum remove docker  docker-common docker-selinux docker-engine
yum makecache
yum list docker-ce --showduplicates
yum -y install docker-ce-18.06.1.ce-3.el7
systemctl enable docker && systemctl start docker

# Check the version
docker --version
# or
docker version

Docker version output:

Docker version 18.06.1-ce, build e68fc7a

If you see this error:

Failed to get GPG key: [Errno 14] curl#7 - "Failed to connect to 2600:9000:2003:ca00:3:db06:4200:93a1: Network is unreachable"

Fix:

Check the OS release:

cat /etc/redhat-release

Find the key matching your release on mirrors.163.com and import it:

rpm --import http://mirrors.163.com/centos/RPM-GPG-KEY-CentOS-7

For Docker client versions above 1.10.0, create or edit /etc/docker/daemon.json to configure a registry mirror:

vi /etc/docker/daemon.json

Add or modify:

{
    "registry-mirrors": ["https://tiaudqrq.mirror.aliyuncs.com"]
}

Restart Docker:

systemctl daemon-reload
systemctl restart docker.service
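
To confirm the mirror is active, you can check the daemon info (a quick verification, not part of the original steps):

docker info | grep -A1 "Registry Mirrors"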

Environment initialization

Perform these steps on all hosts.

Load the ipvs modules on all nodes:

modprobe ip_vs
modprobe ip_vs_rr
modprobe ip_vs_wrr
modprobe ip_vs_sh
modprobe nf_conntrack_ipv4
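
modprobe only loads modules for the current boot. A common CentOS 7 pattern for making them persist across reboots, added here as a hedged sketch rather than part of the original article, is a script under /etc/sysconfig/modules/:

# Persist the ipvs modules across reboots (CentOS 7 convention)
cat > /etc/sysconfig/modules/ipvs.modules << EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules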

Disable the firewall and SELinux:

systemctl stop firewalld && systemctl disable firewalld
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config  && setenforce 0

Disable the swap partition:

swapoff -a    # temporary
sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab    # permanent
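
To verify that swap is fully off (the Swap line should show all zeros):

free -m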

Set hostnames and configure /etc/hosts

On 192.168.10.39:

hostnamectl set-hostname k8s-master

On 192.168.10.40:

hostnamectl set-hostname k8s-node01

Add the following on all hosts:

cat >> /etc/hosts << EOF
192.168.10.39 k8s-master
192.168.10.40 k8s-node01
EOF

Kernel tuning: pass bridged IPv4 traffic to iptables chains:

cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sudo sysctl --system
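
You can confirm the settings took effect:

sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables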

Set the system time zone and sync with a time server:

yum install -y ntpdate && \
ntpdate time.windows.com

Installing kubeadm, kubelet and kubectl

Install on all hosts.

Ubuntu:

apt-get update && apt-get install -y apt-transport-https
curl https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add - 
cat <<EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
EOF
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl

CentOS:

cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

yum install -y kubelet-1.20.4 kubeadm-1.20.4 kubectl-1.20.4
systemctl enable kubelet && systemctl start kubelet

If an older version was installed previously, remove it first:

yum remove -y kubelet-1.15.0 kubeadm-1.15.0 kubectl-1.15.0

Check:

kubectl version --client

You need the following packages on every machine:

  • kubeadm: the command used to bootstrap the cluster.
  • kubelet: runs on every node in the cluster; it starts Pods and containers.
  • kubectl: the command-line tool for talking to the cluster.
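
A quick sanity check of the installed versions:

kubeadm version -o short
kubelet --version
kubectl version --client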

Preparing the images

List all the images kubeadm needs:

kubeadm config images list

Output:

k8s.gcr.io/kube-apiserver:v1.20.4
k8s.gcr.io/kube-controller-manager:v1.20.4
k8s.gcr.io/kube-scheduler:v1.20.4
k8s.gcr.io/kube-proxy:v1.20.4
k8s.gcr.io/pause:3.2
k8s.gcr.io/etcd:3.4.13-0
k8s.gcr.io/coredns:1.7.0

Write a script to pull them in batch:

vi kubeadm_config_images_list.sh

Contents:

#!/bin/bash
images=(
kube-apiserver:v1.20.4
kube-controller-manager:v1.20.4
kube-scheduler:v1.20.4
kube-proxy:v1.20.4
pause:3.2
etcd:3.4.13-0
coredns:1.7.0
)

# Pull each image from the Aliyun mirror, then retag it under k8s.gcr.io
for imageName in ${images[@]} ; do
  docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName
  docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName k8s.gcr.io/$imageName
done

Here docker tag tags the local image, assigning it to the k8s.gcr.io repository names that kubeadm expects.

Save the script as kubeadm_config_images_list.sh, then make it executable:

sudo chmod +x kubeadm_config_images_list.sh

Then run it from the current directory:

./kubeadm_config_images_list.sh

That's it.
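
Once the script finishes, the retagged images should be visible locally:

docker images | grep k8s.gcr.io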

Deploying the Kubernetes Master

Run the following only on the Master node; change the apiserver advertise address to your own master address:

# Remove any previous configuration
rm -rf /var/lib/etcd && \
rm -rf /etc/systemd/system/kubelet.service.d/10-kubeadm.conf && \
rm -rf /etc/kubernetes/manifests/* && \
kubeadm init \
--apiserver-advertise-address=192.168.10.39 \
--image-repository registry.aliyuncs.com/google_containers \
--kubernetes-version v1.20.4 \
--service-cidr=10.1.0.0/16 \
--pod-network-cidr=10.244.0.0/16

On success you will see something like:

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.10.39:6443 --token am6y16.jfvsunhm28h7nn92 \
    --discovery-token-ca-cert-hash sha256:21c1254f9da1bf4c46a6841b603ca1c21193eb32c27639ebf775f6fab8ba39de

As instructed, run:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Or, as the root user:

export KUBECONFIG=/etc/kubernetes/admin.conf
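
At this point kubectl should reach the API server; a quick check:

kubectl cluster-info
kubectl get nodes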

Possible error:

The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error

Cause: if minikube was previously used to set up a test environment, it modifies kubeadm's configuration files.

Fix: delete the old configuration files:

rm -rf /var/lib/etcd
rm -rf /etc/systemd/system/kubelet.service.d/10-kubeadm.conf

Inspect errors with:

systemctl status kubelet
systemctl list-units --failed

Regenerating the token

Run only on the Master node, when the token has expired.

A token is valid for 24 hours by default; once it expires it can no longer be used. If more nodes need to join after that, generate a new token:

kubeadm token create

Output:

vyaztf.snjf2za5636nekcv

List tokens:

kubeadm token list

Get the sha256 hash of the CA certificate:

openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'

Output:

21c1254f9da1bf4c46a6841b603ca1c21193eb32c27639ebf775f6fab8ba39de

A Node can then join with:

kubeadm join 192.168.10.39:6443 --token vyaztf.snjf2za5636nekcv \
    --discovery-token-ca-cert-hash sha256:21c1254f9da1bf4c46a6841b603ca1c21193eb32c27639ebf775f6fab8ba39de
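
kubeadm can also generate a new token and print the complete join command in one step, which saves the openssl step above:

kubeadm token create --print-join-command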

Joining a Kubernetes Node

Run on the Node. Use kubeadm join to register the Node with the Master; the full command was already printed by kubeadm init above:

kubeadm join 192.168.10.39:6443 --token am6y16.jfvsunhm28h7nn92 \
    --discovery-token-ca-cert-hash sha256:21c1254f9da1bf4c46a6841b603ca1c21193eb32c27639ebf775f6fab8ba39de

Verify on the Master node:

kubectl get nodes

Installing the network plugin

Run the following only on the Master node:

wget https://raw.githubusercontent.com/coreos/flannel/a70459be0084506e4ec919aa1c114638878db11b/Documentation/kube-flannel.yml

Modify the image address (the default image may not be pullable; make sure you can reach the quay.io registry, otherwise change it as follows):

vi kube-flannel.yml

In the editor, replace the image on lines 106 and 120 with the following, then verify the change as shown below:

image: lizhenliang/flannel:v0.11.0-amd64
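
The edit can also be scripted; a sketch, assuming the stock manifest references quay.io/coreos/flannel:v0.11.0-amd64:

# Swap the amd64 flannel image for the mirror copy
sed -i 's#quay.io/coreos/flannel:v0.11.0-amd64#lizhenliang/flannel:v0.11.0-amd64#g' kube-flannel.yml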

cat -n  kube-flannel.yml|grep lizhenliang/flannel:v0.11.0-amd64

Expected output:

   106              image: lizhenliang/flannel:v0.11.0-amd64
   120              image: lizhenliang/flannel:v0.11.0-amd64

Apply it:

kubectl apply -f kube-flannel.yml
ps -ef|grep flannel

Expected output:

root      2032  2013  0 21:00 ?        00:00:00 /opt/bin/flanneld --ip-masq --kube-subnet-mgr

Check the node status. After the network plugin is installed, continue only once every node shows Ready:

kubectl get nodes

Output:

NAME         STATUS   ROLES    AGE     VERSION
k8s-master   Ready    master   37m     v1.15.0
k8s-node01   Ready    <none>   5m22s   v1.15.0

Check all system pods:

kubectl get pod -n kube-system

Continue only when every pod shows 1/1. If flannel is not ready, check the network and redo the following:

kubectl delete -f kube-flannel.yml

Then re-download the manifest with wget, modify the image address again, and apply:

kubectl apply -f kube-flannel.yml

Error handling

If you installed a 1.20.x version, i.e.:

yum install -y kubelet-1.20.4 kubeadm-1.20.4 kubectl-1.20.4

you will get this error:

unable to recognize "kube-flannel.yml": no matches for kind "DaemonSet" in version "extensions/v1beta1"

Cause: as of v1.16, DaemonSet, Deployment, StatefulSet and ReplicaSet are no longer served from extensions/v1beta1, apps/v1beta1 or apps/v1beta2.

The fix: change the API versions in the yml to apps/v1. The manifest was written for Kubernetes 1.15.x, and 1.16.x and later dropped support for those APIs.

So there are two options:

  1. Use an older version of k8s
  2. Modify the configuration

That is, in the configuration above, change

apiVersion: extensions/v1beta1

to

apiVersion: apps/v1

and change

apiVersion: rbac.authorization.k8s.io/v1beta1

to

apiVersion: rbac.authorization.k8s.io/v1

Each DaemonSet spec should then look like:

spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel

That is, in every DaemonSet, add the following at the same level as template:

selector:
  matchLabels:
    app: flannel

The full configuration after the changes:

---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
rules:
  - apiGroups:
      - ""
    resources:
      - pods
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes/status
    verbs:
      - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-amd64
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      hostNetwork: true
      nodeSelector:
        beta.kubernetes.io/arch: amd64
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: lizhenliang/flannel:v0.11.0-amd64
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: lizhenliang/flannel:v0.11.0-amd64
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: true
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-arm64
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      hostNetwork: true
      nodeSelector:
        beta.kubernetes.io/arch: arm64
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.11.0-arm64
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.11.0-arm64
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: true
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-arm
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      hostNetwork: true
      nodeSelector:
        beta.kubernetes.io/arch: arm
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.11.0-arm
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.11.0-arm
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: true
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-ppc64le
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      hostNetwork: true
      nodeSelector:
        beta.kubernetes.io/arch: ppc64le
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.11.0-ppc64le
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.11.0-ppc64le
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: true
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-s390x
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      hostNetwork: true
      nodeSelector:
        beta.kubernetes.io/arch: s390x
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.11.0-s390x
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.11.0-s390x
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: true
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg

Testing the cluster

Create a pod in the Kubernetes cluster, expose a port, and verify access:

kubectl create deployment nginx --image=nginx

Output:

deployment.apps/nginx created

Expose it as a service:

kubectl expose deployment nginx --port=80 --type=NodePort

Output:

service/nginx exposed

Check:

kubectl get pods,svc

[Screenshot: kubectl get pods,svc output showing the nginx pod and the nginx NodePort service]

Access address: http://NodeIP:Port

In this example: http://192.168.10.39:32338
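
The NodePort (32338 in this run) is assigned at random; it can be read back with jsonpath and tested from the shell:

NODE_PORT=$(kubectl get svc nginx -o jsonpath='{.spec.ports[0].nodePort}')
curl http://192.168.10.39:${NODE_PORT}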

Deploying the Dashboard

wget https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml

vi kubernetes-dashboard.yaml

Changes to make:

109     spec:
110       containers:
111       - name: kubernetes-dashboard
112         image: lizhenliang/kubernetes-dashboard-amd64:v1.10.1   # change this line
……
157     spec:
158       type: NodePort        # add this line
159       ports:
160       - port: 443
161         targetPort: 8443
162         nodePort: 30001     # add this line
163       selector:
164         k8s-app: kubernetes-dashboard

Apply it:

kubectl apply -f kubernetes-dashboard.yaml

Open in a browser: https://NodeIP:30001

In this example: https://192.168.10.39:30001

Create a service account and bind it to the default cluster-admin cluster role:

kubectl create serviceaccount dashboard-admin -n kube-system

Output:

serviceaccount/dashboard-admin created

Then:

kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin

Output:

clusterrolebinding.rbac.authorization.k8s.io/dashboard-admin created

Get the login token:

kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')

Output:

Name:         dashboard-admin-token-hblz2
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: dashboard-admin
              kubernetes.io/service-account.uid: f6da436d-421b-4393-83ed-fc5ea8a6abc5

Type:  kubernetes.io/service-account-token

Data
====
namespace:  11 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IlRhUTNvVnNxVng4YnhKSnBuTUNIRm9lYkpkUlllOEpSTE9yRGtpd1NCc1UifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4taGJsejIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZjZkYTQzNmQtNDIxYi00MzkzLTgzZWQtZmM1ZWE4YTZhYmM1Iiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmUtc3lzdGVtOmRhc2hib2FyZC1hZG1pbiJ9.GZjCQhd71wCBMOvi7bKIjXP9uD4vrRxa9epZTve3I8g2V9AjbGZHFkHzStYrly1crqXbE9iEa71fwPcMWsUtzdYZKbOZXHovq7kg13VU_bBf_9A1TV0A-397wzL294KED7V0-WdmP1UnejDFN_gK0p40Iy3VcNW-KTr2opuZLE-7F2tJsZFHqmW9vd8RsEgWon8jp9OAdj0BCINJuZBof7cUJwNQUD75F5rJO6zNV_Po2pxEmxCryPCG2OUEbQ6o7-e-sImSrr6DpvUatYGkLC5coSs1YlJotrDK2WwmmtE0dzf4PLHOQdpZqML524I_Qfvh7Uwei3FkJKhKwsqvHw
ca.crt:     1025 bytes

Fixing access problems in other browsers

[root@k8s-master ~]# cd /etc/kubernetes/pki/
[root@k8s-master pki]# mkdir ui
[root@k8s-master pki]# cp apiserver.crt  ui/
[root@k8s-master pki]# cp apiserver.key  ui/
[root@k8s-master pki]# cd ui/
[root@k8s-master ui]# mv apiserver.crt dashboard.pem
[root@k8s-master ui]# mv  apiserver.key   dashboard-key.pem
[root@k8s-master ui]# kubectl delete secret kubernetes-dashboard-certs -n kube-system
[root@k8s-master ui]# kubectl create secret generic kubernetes-dashboard-certs --from-file=./ -n kube-system

Set the certificates:

[root@k8s-master ~]# vim kubernetes-dashboard.yaml   # back in the directory containing this yaml

In the dashboard controller section, add two certificate lines under args:

- --tls-key-file=dashboard-key.pem
- --tls-cert-file=dashboard.pem

Repeat the earlier steps:

[root@k8s-master ~]# kubectl apply -f kubernetes-dashboard.yaml
[root@k8s-master ~]# kubectl create serviceaccount dashboard-admin -n kube-system
serviceaccount/dashboard-admin created
[root@k8s-master ~]# kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin  --serviceaccount=kube-system:dashboard-admin
[root@k8s-master ~]# kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')

Success output:

Name:         dashboard-admin-token-zbn9f
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: dashboard-admin
              kubernetes.io/service-account.uid: 40259d83-3b4f-4acc-a4fb-43018de7fc19

Type:  kubernetes.io/service-account-token

Data
====
namespace:  11 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4temJuOWYiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiNDAyNTlkODMtM2I0Zi00YWNjLWE0ZmItNDMwMThkZTdmYzE5Iiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmUtc3lzdGVtOmRhc2hib2FyZC1hZG1pbiJ9.E0hGAkeQxd6K-YpPgJmNTv7Sn_P_nzhgCnYXGc9AeXd9k9qAcO97vBeOV-pH518YbjrOAx_D6CKIyP07aCi_3NoPlbbyHtcpRKFl-lWDPdg8wpcIefcpbtS6uCOrpaJdCJjWFcAEHdvcfmiFpdVVT7tUZ2-eHpRTUQ5MDPF-c2IOa9_FC9V3bf6XW6MSCZ_7-fOF4MnfYRa8ucltEIhIhCAeDyxlopSaA5oEbopjaNiVeJUGrKBll8Edatc7-wauUIJXAN-dZRD0xTULPNJ1BsBthGQLyFe8OpL5n_oiHM40tISJYU_uQRlMP83SfkOpbiOpzuDT59BBJB57OQtl3w
ca.crt:     1025 bytes

Uninstalling

Reset the cluster:

kubeadm reset

Stop all containers:

docker stop `docker ps -a -q`

Remove all containers:

docker rm `docker ps -a -q`

Remove all images:

docker rmi --force `docker images -q`
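
kubeadm reset does not clean up everything: CNI configuration, the kubeconfig and iptables rules are left behind. A hedged cleanup sketch, matching the paths used earlier:

# Remove leftover CNI config and kubeconfig, and flush iptables rules
rm -rf /etc/cni/net.d
rm -rf $HOME/.kube
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X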