Setting Up a K8S Cluster: Deploying the Node Components


In the previous article we completed the Master node deployment for our Kubernetes cluster; in this one we move on to the Node components. Each node needs two Kubernetes components, kubelet and kube-proxy, as well as a Docker runtime, the flannel CNI network plugin, and the coredns service.

According to our architecture, the two servers 10.4.7.21 and 10.4.7.22 host both the Master and the Node components, so each of them acts as a Master node and as a Node at the same time.

「I. Deploying the kubelet Service」

Install the Docker environment:

[root@k8s7-21 ~]# curl -fsSL https://get.docker.com | bash -s docker --mirror Aliyun
[root@k8s7-21 ~]# mkdir /etc/docker
[root@k8s7-21 ~]# vim /etc/docker/daemon.json 
[root@k8s7-21 ~]# cat /etc/docker/daemon.json 
{
  "graph": "/data/docker",
  "storage-driver": "overlay2",
  "insecure-registries": ["registry.access.redhat.com","quay.io","harbor.od.com"],
  "registry-mirrors": ["https://q2gr04ke.mirror.aliyuncs.com"],
  "bip": "172.16.21.1/24",       # on 10.4.7.22 set this to 172.16.22.1/24 (annotation only, not part of the JSON)
  "exec-opts": ["native.cgroupdriver=systemd"],
  "live-restore": true
}
[root@k8s7-21 ~]# mkdir /data/docker
[root@k8s7-21 ~]# systemctl start docker
[root@k8s7-21 ~]# systemctl enable docker
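
As a quick sanity check (not part of the original steps), you can confirm that Docker picked up the storage driver, cgroup driver, data root and bridge address we just configured; these are standard Docker CLI and iproute2 commands:

[root@k8s7-21 ~]# docker info 2>/dev/null | grep -Ei 'storage driver|cgroup driver|docker root dir'
[root@k8s7-21 ~]# ip addr show docker0 | grep 'inet '    # expect 172.16.21.1/24 on this host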

The deployment process is largely the same on both Node servers, so we again use 10.4.7.21 as the example and describe it in detail. The kubelet is the component that actually carries out the scheduling work published by the API Server; in that direction the kubelet acts as the server side that the API Server connects to, so we need to issue a server certificate for it.

On 10.4.7.200, create the kubelet certificate signing request file and issue the certificate:

[root@k8s7-200 certs]# pwd
/opt/certs
[root@k8s7-200 certs]# cat kubelet-csr.json 
{
    "CN": "k8s-kubelet",
    "hosts": [
    "127.0.0.1",
    "10.4.7.10",
    "10.4.7.21",
    "10.4.7.22",
    "10.4.7.23",
    "10.4.7.24",
    "10.4.7.25",
    "10.4.7.26",
    "10.4.7.27",
    "10.4.7.28"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "beijing",
            "L": "beijing",
            "O": "od",
            "OU": "ops"
        }
    ]
}
[root@k8s7-200 certs]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=server kubelet-csr.json | cfssl-json -bare kubelet
2019/12/18 11:21:29 [INFO] generate received request
2019/12/18 11:21:29 [INFO] received CSR
2019/12/18 11:21:29 [INFO] generating key: rsa-2048
2019/12/18 11:21:29 [INFO] encoded CSR
2019/12/18 11:21:29 [INFO] signed certificate with serial number 667302028502029837499232472372152752250108397208
2019/12/18 11:21:29 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
[root@k8s7-200 certs]# ll -h kubelet*
-rw-r--r-- 1 root root 1.1K 12月 18 11:21 kubelet.csr
-rw-r--r-- 1 root root  452 12月 18 11:19 kubelet-csr.json
-rw------- 1 root root 1.7K 12月 18 11:21 kubelet-key.pem
-rw-r--r-- 1 root root 1.5K 12月 18 11:21 kubelet.pem

Besides that, the kubelet also has to call the API Server, which makes it an API Server client as well, so it needs a client certificate too. We could reuse the client certificate issued earlier for the API Server to access etcd, but then the cluster user we create later would have to use the name from that certificate's CN field. To keep things distinct we issue a separate certificate. The process is the same as for the server certificate: first create the certificate signing request file.

[root@k8s7-200 certs]# cat kubelet-client-csr.json 
{
    "CN": "k8s-node",
    "hosts": [
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "beijing",
            "L": "beijing",
            "O": "od",
            "OU": "ops"
        }
    ]
}

Issue the certificate:

[root@k8s7-200 certs]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=client kubelet-client-csr.json | cfssl-json -bare kubelet-client
2019/12/18 17:11:11 [INFO] generate received request
2019/12/18 17:11:11 [INFO] received CSR
2019/12/18 17:11:11 [INFO] generating key: rsa-2048
2019/12/18 17:11:12 [INFO] encoded CSR
2019/12/18 17:11:12 [INFO] signed certificate with serial number 563915587426574985965139906231834454879742318048
2019/12/18 17:11:12 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
[root@k8s7-200 certs]# ll -h kubelet-client*
-rw-r--r-- 1 root root  993 12月 18 17:11 kubelet-client.csr
-rw-r--r-- 1 root root  280 12月 18 17:09 kubelet-client-csr.json
-rw------- 1 root root 1.7K 12月 18 17:11 kubelet-client-key.pem
-rw-r--r-- 1 root root 1.4K 12月 18 17:11 kubelet-client.pem

Copy the kubelet certificates onto 10.4.7.21:

[root@k8s7-21 ~]# cd /opt/kubernetes/server/bin/cert/
[root@k8s7-21 cert]# scp k8s7-200:/opt/certs/kubelet.pem ./ 
[root@k8s7-21 cert]# scp k8s7-200:/opt/certs/kubelet-key.pem ./
[root@k8s7-21 cert]# scp k8s7-200:/opt/certs/kubelet-client.pem ./   
[root@k8s7-21 cert]# scp k8s7-200:/opt/certs/kubelet-client-key.pem ./

Create the kubelet configuration file. When the kubelet starts it needs a kubeconfig file, kubelet.kubeconfig, which contains the startup information and certificates it requires. We generate this file with the kubectl config command, in four steps:

a. Set the cluster parameters. In this step we define a cluster named myk8s.

[root@k8s7-21 ~]# cd /opt/kubernetes/server/bin/conf/
[root@k8s7-21 conf]# kubectl config set-cluster myk8s \    # define the myk8s cluster
>  --certificate-authority=/opt/kubernetes/server/bin/cert/ca.pem \   # CA certificate used to verify the cluster
>  --embed-certs=true \
>  --server=https://10.4.7.10:7443 \             # the API Server (VIP) address
>  --kubeconfig=kubelet.kubeconfig               # write kubelet.kubeconfig into the current directory
Cluster "myk8s" set.

b. Set the client credentials. In this step we create the user k8s-node that will access the API Server.

[root@k8s7-21 conf]# kubectl config set-credentials k8s-node \     # create the Kubernetes user k8s-node
>  --client-certificate=/opt/kubernetes/server/bin/cert/kubelet-client.pem \  # client certificate presented to the API Server
>  --client-key=/opt/kubernetes/server/bin/cert/kubelet-client-key.pem \
>  --embed-certs=true \
>  --kubeconfig=kubelet.kubeconfig 
User "k8s-node" set.

c. Set the context parameters.

[root@k8s7-21 conf]# kubectl config set-context myk8s-context \
>  --cluster=myk8s \
>  --user=k8s-node \
>  --kubeconfig=kubelet.kubeconfig
Context "myk8s-context" created.

d. Use the context.

[root@k8s7-21 conf]# kubectl config use-context myk8s-context --kubeconfig=kubelet.kubeconfig
Switched to context "myk8s-context".

Now create the RBAC resource manifest. Kubernetes authorization is based on RBAC, so the cluster user k8s-node we just created has to be bound to a cluster role before the kubelet can work properly. RBAC itself will be covered in a later article, so we will not go into detail here.

[root@k8s7-21 conf]# pwd
/opt/kubernetes/server/bin/conf
[root@k8s7-21 conf]# cat k8s-node.yaml 
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: k8s-node
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:node
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: k8s-node

Create the binding. Note that this creates a cluster-wide binding, so it does not need to be run on every node; running it once on a single node is enough, and running it again will report an error. Here we run it once on 10.4.7.21.

[root@k8s7-21 conf]# kubectl create -f k8s-node.yaml
clusterrolebinding.rbac.authorization.k8s.io/k8s-node created

The kubelet relies on a base (pause) image when it manages pods, so we first pull this image and push it to our own registry. Before doing so, create a public project named public in the Harbor registry, and log in to the registry with docker login before pushing.

[root@k8s7-21 ~]# docker pull kubernetes/pause
Using default tag: latest
latest: Pulling from kubernetes/pause
4f4fb700ef54: Pull complete 
b9c8ec465f6b: Pull complete 
Digest: sha256:b31bfb4d0213f254d361e0079deaaebefa4f82ba7aa76ef82e90b4935ad5b105
Status: Downloaded newer image for kubernetes/pause:latest
docker.io/kubernetes/pause:latest
[root@k8s7-21 ~]# docker image ls
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
kubernetes/pause    latest              f9d5de079539        5 years ago         240kB
[root@k8s7-21 ~]# docker tag f9d5de079539 harbor.od.com/public/pause:latest
[root@k8s7-21 ~]# docker login harbor.od.com
Username: admin
Password: 
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded
[root@k8s7-21 ~]# docker push harbor.od.com/public/pause:latest
The push refers to repository [harbor.od.com/public/pause]
5f70bf18a086: Pushed 
e16a89738269: Pushed 
latest: digest: sha256:b31bfb4d0213f254d361e0079deaaebefa4f82ba7aa76ef82e90b4935ad5b105 size: 938

Create the kubelet startup script and the related directories:

[root@k8s7-21 ~]# cd /opt/kubernetes/server/bin/
[root@k8s7-21 bin]# vim kubelet.sh
[root@k8s7-21 bin]# cat kubelet.sh 
#!/bin/sh
./kubelet \
  --anonymous-auth=false \
  --cgroup-driver systemd \
  --cluster-dns 192.168.0.2 \
  --cluster-domain cluster.local \
  --runtime-cgroups=/systemd/system.slice \
  --kubelet-cgroups=/systemd/system.slice \
  --fail-swap-on="false" \
  --client-ca-file ./cert/ca.pem \
  --tls-cert-file ./cert/kubelet.pem \
  --tls-private-key-file ./cert/kubelet-key.pem \
  --hostname-override k8s7-21.host.com \             # set this to the node's hostname
  --image-gc-high-threshold 20 \
  --image-gc-low-threshold 10 \
  --kubeconfig ./conf/kubelet.kubeconfig \
  --log-dir /data/logs/kubernetes/kube-kubelet \
  --pod-infra-container-image harbor.od.com/public/pause:latest \
  --root-dir /data/kubelet
[root@k8s7-21 bin]# mkdir -p /data/logs/kubernetes/kube-kubelet /data/kubelet
[root@k8s7-21 bin]# chmod +x kubelet.sh

Create the supervisor task. As with the other components, we let supervisor manage the kubelet service, so we create a supervision config file:

[root@k8s7-21 bin]# cat /etc/supervisord.d/kube-kubelet.ini 
[program:kube-kubelet-7-21]
command=/opt/kubernetes/server/bin/kubelet.sh     ; the program (relative uses PATH, can take args)
numprocs=1                                        ; number of processes copies to start (def 1)
directory=/opt/kubernetes/server/bin              ; directory to cwd to before exec (def no cwd)
autostart=true                                    ; start at supervisord start (default: true)
autorestart=true                          ; retstart at unexpected quit (default: true)
startsecs=30                                      ; number of secs prog must stay running (def. 1)
startretries=3                                    ; max # of serial start failures (default 3)
exitcodes=0,2                                     ; 'expected' exit codes for process (default 0,2)
stopsignal=QUIT                                   ; signal used to kill process (default TERM)
stopwaitsecs=10                                   ; max num secs to wait b4 SIGKILL (default 10)
user=root                                         ; setuid to this UNIX account to run the program
redirect_stderr=true                              ; redirect proc stderr to stdout (default false)
stdout_logfile=/data/logs/kubernetes/kube-kubelet/kubelet.stdout.log   ; stderr log path, NONE for none; default AUTO
stdout_logfile_maxbytes=64MB                      ; max # logfile bytes b4 rotation (default 50MB)
stdout_logfile_backups=4                          ; # of stdout logfile backups (default 10)
stdout_capture_maxbytes=1MB                       ; number of bytes in 'capturemode' (default 0)
stdout_events_enabled=false                       ; emit events on stdout writes (default false)

Start the supervised task:

[root@k8s7-21 bin]# supervisorctl update
kube-kubelet-7-21: added process group
[root@k8s7-21 bin]# supervisorctl status
etcd-server-7-21                 RUNNING   pid 41034, uptime 1 day, 23:50:50
kube-apiserver-7-21              RUNNING   pid 42543, uptime 1 day, 21:23:22
kube-controller-manager-7-21     RUNNING   pid 54937, uptime 22:36:40
kube-kubelet-7-21                RUNNING   pid 67928, uptime 0:01:06
kube-scheduler-7-21              RUNNING   pid 55104, uptime 22:23:45

At this point the kubelet on 10.4.7.21 is fully deployed. Deploy 10.4.7.22 with the same steps; the parts that change are the bip setting in Docker's daemon.json, the hostname in kubelet.sh, and the program name in the supervisor config file (see the sketch below), so we will not repeat them here.
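
For reference, a rough sketch of the values that differ on 10.4.7.22, assuming the same file layout as above (everything else stays identical):

# /etc/docker/daemon.json on 10.4.7.22
  "bip": "172.16.22.1/24",

# /opt/kubernetes/server/bin/kubelet.sh on 10.4.7.22
  --hostname-override k8s7-22.host.com \

# /etc/supervisord.d/kube-kubelet.ini on 10.4.7.22
[program:kube-kubelet-7-22]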

Once both Node servers are deployed, check the node status:

[root@k8s7-21 bin]# kubectl get nodes
NAME               STATUS   ROLES    AGE     VERSION
k8s7-21.host.com   Ready    <none>   2m10s   v1.15.2
k8s7-22.host.com   Ready    <none>   118s    v1.15.2

As we can see, both Node machines have registered with the Kubernetes cluster.

「II. Deploying the kube-proxy Service」

On 10.4.7.200, issue a certificate for the kube-proxy component. kube-proxy provides service registration and service discovery for us, building the mapping from service IPs to pod IPs. To do this it must talk to the API Server, i.e. kube-proxy is a client of the API Server, so we issue it a client certificate.

Create the certificate signing request file:

[root@k8s7-200 certs]# cat kube-proxy-csr.json 
{
    "CN": "system:kube-proxy",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "beijing",
            "L": "beijing",
            "O": "od",
            "OU": "ops"
        }
    ]
}

Issue the certificate:

[root@k8s7-200 certs]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=client kube-proxy-csr.json | cfssl-json -bare kube-proxy-client
2019/12/18 17:40:51 [INFO] generate received request
2019/12/18 17:40:51 [INFO] received CSR
2019/12/18 17:40:51 [INFO] generating key: rsa-2048
2019/12/18 17:40:52 [INFO] encoded CSR
2019/12/18 17:40:52 [INFO] signed certificate with serial number 566301941237575212073431307767208155082865812029
2019/12/18 17:40:52 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
[root@k8s7-200 certs]# ll -h kube-proxy*
-rw-r--r-- 1 root root 1005 12月 18 17:40 kube-proxy-client.csr
-rw------- 1 root root 1.7K 12月 18 17:40 kube-proxy-client-key.pem
-rw-r--r-- 1 root root 1.4K 12月 18 17:40 kube-proxy-client.pem
-rw-r--r-- 1 root root  268 12月 18 17:38 kube-proxy-csr.json

Copy the certificates onto 10.4.7.21:

[root@k8s7-21 cert]# pwd
/opt/kubernetes/server/bin/cert
[root@k8s7-21 cert]# scp k8s7-200:/opt/certs/kube-proxy-client.pem ./
[root@k8s7-21 cert]# scp k8s7-200:/opt/certs/kube-proxy-client-key.pem ./

Create the kube-proxy configuration file. Just like the kubelet, kube-proxy needs a kube-proxy.kubeconfig file at startup, so we again generate it in four steps.

a. Set the cluster parameters.

[root@k8s7-21 conf]# pwd
/opt/kubernetes/server/bin/conf
[root@k8s7-21 conf]# kubectl config set-cluster myk8s \
> --certificate-authority=/opt/kubernetes/server/bin/cert/ca.pem \
> --embed-certs=true \
> --server=https://10.4.7.10:7443 \
> --kubeconfig=kube-proxy.kubeconfig
Cluster "myk8s" set.

b. Set the client credentials.

[root@k8s7-21 conf]# pwd
/opt/kubernetes/server/bin/conf
[root@k8s7-21 conf]# kubectl config set-credentials kube-proxy \
>  --client-certificate=/opt/kubernetes/server/bin/cert/kube-proxy-client.pem \
>  --client-key=/opt/kubernetes/server/bin/cert/kube-proxy-client-key.pem \
>  --embed-certs=true \
>  --kubeconfig=kube-proxy.kubeconfig
User "kube-proxy" set.

c. Set the context parameters.

[root@k8s7-21 conf]# pwd
/opt/kubernetes/server/bin/conf
[root@k8s7-21 conf]# kubectl config set-context myk8s-context \
>  --cluster=myk8s \
>  --user=kube-proxy \
>  --kubeconfig=kube-proxy.kubeconfig
Context "myk8s-context" created.

d. Use the context.

[root@k8s7-21 conf]# kubectl config use-context myk8s-context --kubeconfig=kube-proxy.kubeconfig
Switched to context "myk8s-context".

Create and run the IPVS script; we use the kernel's IPVS modules as kube-proxy's service-to-pod load-balancing mechanism.

[root@k8s7-21 conf]# cd /opt/
[root@k8s7-21 opt]# cat ipvs.sh 
#!/bin/bash
ipvs_mods_dir="/usr/lib/modules/$(uname -r)/kernel/net/netfilter/ipvs"
for i in $(ls $ipvs_mods_dir|grep -o "^[^.]*")
do
  /sbin/modinfo -F filename $i &>/dev/null
  if [ $? -eq 0 ];then
    /sbin/modprobe $i
  fi
done
[root@k8s7-21 opt]# chmod +x ipvs.sh 
[root@k8s7-21 opt]# ./ipvs.sh 
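
As an optional check (not in the original write-up), verify that the ip_vs kernel modules were actually loaded:

[root@k8s7-21 opt]# lsmod | grep ip_vs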

Create the kube-proxy startup script and the related directory:

[root@k8s7-21 bin]# pwd
/opt/kubernetes/server/bin
[root@k8s7-21 bin]# cat kube-proxy.sh 
#!/bin/sh
./kube-proxy \
  --cluster-cidr 172.16.0.0/16 \
  --hostname-override k8s7-21.host.com \
  --proxy-mode=ipvs \
  --ipvs-scheduler=nq \
  --kubeconfig ./conf/kube-proxy.kubeconfig
[root@k8s7-21 bin]# chmod +x kube-proxy.sh
[root@k8s7-21 bin]# mkdir -p /data/logs/kubernetes/kube-proxy

Hand the kube-proxy service over to supervisor by creating its supervision config file:

[root@k8s7-21 bin]# cat /etc/supervisord.d/kube-proxy.ini 
[program:kube-proxy-7-21]
command=/opt/kubernetes/server/bin/kube-proxy.sh                     ; the program (relative uses PATH, can take args)
numprocs=1                                                           ; number of processes copies to start (def 1)
directory=/opt/kubernetes/server/bin                                 ; directory to cwd to before exec (def no cwd)
autostart=true                                                       ; start at supervisord start (default: true)
autorestart=true                                                     ; retstart at unexpected quit (default: true)
startsecs=30                                                         ; number of secs prog must stay running (def. 1)
startretries=3                                                       ; max # of serial start failures (default 3)
exitcodes=0,2                                                        ; 'expected' exit codes for process (default 0,2)
stopsignal=QUIT                                                      ; signal used to kill process (default TERM)
stopwaitsecs=10                                                      ; max num secs to wait b4 SIGKILL (default 10)
user=root                                                            ; setuid to this UNIX account to run the program
redirect_stderr=true                                                 ; redirect proc stderr to stdout (default false)
stdout_logfile=/data/logs/kubernetes/kube-proxy/proxy.stdout.log     ; stderr log path, NONE for none; default AUTO
stdout_logfile_maxbytes=64MB                                         ; max # logfile bytes b4 rotation (default 50MB)
stdout_logfile_backups=4                                             ; # of stdout logfile backups (default 10)
stdout_capture_maxbytes=1MB                                          ; number of bytes in 'capturemode' (default 0)
stdout_events_enabled=false                                          ; emit events on stdout writes (default false)

Start the supervised task:

[root@k8s7-21 bin]# supervisorctl update
kube-proxy-7-21: added process group
[root@k8s7-21 bin]# supervisorctl status
etcd-server-7-21                 RUNNING   pid 41034, uptime 2 days, 18:53:42
kube-apiserver-7-21              RUNNING   pid 42543, uptime 2 days, 16:26:14
kube-controller-manager-7-21     RUNNING   pid 54937, uptime 1 day, 17:39:32
kube-kubelet-7-21                RUNNING   pid 80586, uptime 17:26:21
kube-proxy-7-21                  RUNNING   pid 129657, uptime 0:01:54
kube-scheduler-7-21              RUNNING   pid 55104, uptime 1 day, 17:26:37

With that, the kube-proxy component is deployed. Deploy 10.4.7.22 following the same steps (a sketch of the differences is shown below); we will not repeat them here.
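
For reference, a rough sketch of what changes on 10.4.7.22, assuming the same layout as on 10.4.7.21:

# /opt/kubernetes/server/bin/kube-proxy.sh on 10.4.7.22
  --hostname-override k8s7-22.host.com \

# /etc/supervisord.d/kube-proxy.ini on 10.4.7.22
[program:kube-proxy-7-22]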

「III. Testing the Cluster」

Once kube-proxy is deployed, all of the core Kubernetes components are actually in place. Next we test the cluster. First, use the ipvsadm tool to look at the mapping between the service network and the pod network. Run the following on 10.4.7.21:

[root@k8s7-21 ~]# yum -y install ipvsadm
[root@k8s7-21 ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.0.1:443 nq
  -> 10.4.7.21:6443               Masq    1      0          0         
  -> 10.4.7.22:6443               Masq    1      0          0    

On 10.4.7.200, pull an nginx image and push it to our Harbor registry:

[root@k8s7-200 ~]# docker pull nginx
[root@k8s7-200 ~]# docker image ls
REPOSITORY                      TAG                        IMAGE ID            CREATED             SIZE
nginx                           latest                     231d40e811cd        3 weeks ago         126MB
[root@k8s7-200 ~]# docker tag 231d40e811cd harbor.od.com/public/nginx:latest
[root@k8s7-200 ~]# docker login harbor.od.com
[root@k8s7-200 ~]# docker push harbor.od.com/public/nginx:latest

Now use this image to deploy an nginx service in the cluster. We first need a pod controller; here we use a DaemonSet, a controller type that creates one pod on every node. Pod controllers will be covered in a later article. Here we create the controller from a resource manifest:

[root@k8s7-21 ~]# cat /root/nginx-ds.yaml 
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: nginx-ds
spec:
  template:
    metadata:
      labels:
        app: nginx-ds
    spec:
      containers:
      - name: my-nginx
        image: harbor.od.com/public/nginx:latest
        ports:
        - containerPort: 80
[root@k8s7-21 ~]# kubectl create -f nginx-ds.yaml
daemonset.extensions/nginx-ds created
[root@k8s7-21 ~]# kubectl get pods
NAME             READY   STATUS    RESTARTS   AGE
nginx-ds-kv6z4   1/1     Running   0          29s
nginx-ds-w8xks   1/1     Running   0          29s
[root@k8s7-21 ~]# kubectl get pods -o wide
NAME             READY   STATUS    RESTARTS   AGE   IP            NODE               NOMINATED NODE   READINESS GATES
nginx-ds-kv6z4   1/1     Running   0          37s   172.16.22.2   k8s7-22.host.com   <none>           <none>
nginx-ds-w8xks   1/1     Running   0          37s   172.16.21.2   k8s7-21.host.com   <none>           <none>

Kubernetes has started an nginx pod on every node for us, which shows the cluster is working properly.

「IV. Deploying the flannel Plugin」

With the cluster deployed and its availability tested, we now have one pod running on each node, with IP addresses 172.16.21.2 and 172.16.22.2; the pod at 172.16.21.2 runs on node 10.4.7.21. From 10.4.7.21, ping both addresses:

[root@k8s7-21 ~]# ping 172.16.21.2
PING 172.16.21.2 (172.16.21.2) 56(84) bytes of data.
64 bytes from 172.16.21.2: icmp_seq=1 ttl=64 time=0.133 ms
^C
--- 172.16.21.2 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms
[root@k8s7-21 ~]# ping 172.16.22.2
PING 172.16.22.2 (172.16.22.2) 56(84) bytes of data.
^C
--- 172.16.22.2 ping statistics ---
2 packets transmitted, 0 received, 100% packet loss, time 1000ms

[root@k8s7-21 ~]# 

As we can see, traffic to the pod network on the other host does not get through. That clearly does not meet our needs, so we have to deploy the CNI network plugin mentioned earlier, which connects the pod networks across hosts.

As introduced in an earlier article, there are many CNI plugins; flannel and calico are among the most widely used, and we choose flannel for this cluster. Next we deploy flannel, which must be installed on every Node in the cluster.

Download and install flannel:

[root@k8s7-21 ~]# cd /opt/src/
[root@k8s7-21 src]# wget https://github.com/coreos/flannel/releases/download/v0.11.0/flannel-v0.11.0-linux-amd64.tar.gz
[root@k8s7-21 src]# mkdir /opt/flannel-v0.11.0
[root@k8s7-21 src]# tar -zxf flannel-v0.11.0-linux-amd64.tar.gz -C /opt/flannel-v0.11.0/
[root@k8s7-21 src]# ln -s /opt/flannel-v0.11.0/ /opt/flannel

flannel has to read cluster information from etcd, so it is a client of the etcd cluster and needs a client certificate to access it; the client certificate generated earlier is sufficient here:

[root@k8s7-21 opt]# cd flannel
[root@k8s7-21 flannel]# mkdir cert
[root@k8s7-21 flannel]# cd cert/
[root@k8s7-21 cert]# scp k8s7-200:/opt/certs/ca.pem ./
[root@k8s7-21 cert]# scp k8s7-200:/opt/certs/client.pem ./  
[root@k8s7-21 cert]# scp k8s7-200:/opt/certs/client-key.pem ./

Prepare the flannel subnet environment file:

[root@k8s7-21 cert]# cd ..
[root@k8s7-21 flannel]# cat subnet.env 
FLANNEL_NETWORK=172.16.0.0/16            # cluster-wide pod network
FLANNEL_SUBNET=172.16.21.1/24            # pod subnet of this node
FLANNEL_MTU=1500
FLANNEL_IPMASQ=false

Prepare the flannel startup script and create the related directory:

[root@k8s7-21 flannel]# cat flanneld.sh 
#!/bin/sh
./flanneld \
  --public-ip=10.4.7.21 \        # IP address of this node
  --etcd-endpoints=https://10.4.7.12:2379,https://10.4.7.21:2379,https://10.4.7.22:2379 \
  --etcd-keyfile=./cert/client-key.pem \
  --etcd-certfile=./cert/client.pem \
  --etcd-cafile=./cert/ca.pem \
  --iface=ens33 \                # set this to the host's physical network interface
  --subnet-file=./subnet.env \
  --healthz-port=2401
[root@k8s7-21 flannel]# chmod +x flanneld.sh 
[root@k8s7-21 flannel]# mkdir -p /data/logs/flanneld

Manually set flannel's network backend in the etcd cluster:

[root@k8s7-21 flannel]# cd /opt/etcd/
[root@k8s7-21 etcd]# ./etcdctl set /coreos.com/network/config '{"Network": "172.16.0.0/16", "Backend": {"Type": "host-gw"}}'
{"Network": "172.16.0.0/16", "Backend": {"Type": "host-gw"}}
[root@k8s7-21 etcd]# ./etcdctl get /coreos.com/network/config
{"Network": "172.16.0.0/16", "Backend": {"Type": "host-gw"}}

Hand the flannel process over to supervisor by creating its supervision config file:

[root@k8s7-21 flannel]# cat /etc/supervisord.d/flannel.ini 
[program:flanneld-7-21]
command=/opt/flannel/flanneld.sh                             ; the program (relative uses PATH, can take args)
numprocs=1                                                   ; number of processes copies to start (def 1)
directory=/opt/flannel                                       ; directory to cwd to before exec (def no cwd)
autostart=true                                               ; start at supervisord start (default: true)
autorestart=true                                             ; retstart at unexpected quit (default: true)
startsecs=30                                                 ; number of secs prog must stay running (def. 1)
startretries=3                                               ; max # of serial start failures (default 3)
exitcodes=0,2                                                ; 'expected' exit codes for process (default 0,2)
stopsignal=QUIT                                              ; signal used to kill process (default TERM)
stopwaitsecs=10                                              ; max num secs to wait b4 SIGKILL (default 10)
user=root                                                    ; setuid to this UNIX account to run the program
redirect_stderr=true                                         ; redirect proc stderr to stdout (default false)
stdout_logfile=/data/logs/flanneld/flanneld.stdout.log       ; stderr log path, NONE for none; default AUTO
stdout_logfile_maxbytes=64MB                                 ; max # logfile bytes b4 rotation (default 50MB)
stdout_logfile_backups=4                                     ; # of stdout logfile backups (default 10)
stdout_capture_maxbytes=1MB                                  ; number of bytes in 'capturemode' (default 0)
stdout_events_enabled=false                                  ; emit events on stdout writes (default false)

Start the supervised task:

[root@k8s7-21 flannel]# supervisorctl update
flanneld-7-21: added process group
[root@k8s7-21 flannel]# supervisorctl status
etcd-server-7-21                 RUNNING   pid 41034, uptime 2 days, 23:47:24
flanneld-7-21                    RUNNING   pid 74017, uptime 0:03:27
kube-apiserver-7-21              RUNNING   pid 42543, uptime 2 days, 21:19:56
kube-controller-manager-7-21     RUNNING   pid 54937, uptime 1 day, 22:33:14
kube-kubelet-7-21                RUNNING   pid 80586, uptime 22:20:03
kube-proxy-7-21                  RUNNING   pid 129657, uptime 4:55:36
kube-scheduler-7-21              RUNNING   pid 55104, uptime 1 day, 22:20:19

flannel on 10.4.7.21 is now deployed. Next, deploy it on 10.4.7.22 as well. Note that on 10.4.7.22 you must change the node's pod subnet in subnet.env and the node IP in flanneld.sh (a sketch of the differences follows), and you must not repeat the etcd network-config step.
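
For reference, a rough sketch of the two files that differ on 10.4.7.22, assuming the same layout as above:

# /opt/flannel/subnet.env on 10.4.7.22
FLANNEL_SUBNET=172.16.22.1/24

# /opt/flannel/flanneld.sh on 10.4.7.22
  --public-ip=10.4.7.22 \

Once flannel is running on all the nodes, ping again to see whether the cross-host network works: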

[root@k8s7-21 ~]# ping 172.16.22.2
PING 172.16.22.2 (172.16.22.2) 56(84) bytes of data.
64 bytes from 172.16.22.2: icmp_seq=1 ttl=63 time=0.532 ms
64 bytes from 172.16.22.2: icmp_seq=2 ttl=63 time=0.492 ms
^C
--- 172.16.22.2 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 0.492/0.512/0.532/0.020 ms
[root@k8s7-21 ~]# 

As the output shows, the pod network now works across hosts.
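
If you are curious how the host-gw backend achieves this, an optional look at the routing table shows that flannel simply installs a static route for the peer node's pod subnet via that node's IP; on this setup the route should look roughly like this:

[root@k8s7-21 ~]# ip route | grep 172.16.22.0
172.16.22.0/24 via 10.4.7.22 dev ens33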

「V. Optimizing flannel」

With the network working, let's run an experiment: access a pod on host 10.4.7.22 from inside a pod on host 10.4.7.21. We already created nginx pods in the cluster, so let's use them:

[root@k8s7-21 ~]# kubectl get pods -o wide
NAME             READY   STATUS    RESTARTS   AGE    IP            NODE               NOMINATED NODE   READINESS GATES
nginx-ds-kv6z4   1/1     Running   0          146m   172.16.22.2   k8s7-22.host.com   <none>           <none>
nginx-ds-w8xks   1/1     Running   0          146m   172.16.21.2   k8s7-21.host.com   <none>           <none>
[root@k8s7-21 ~]# kubectl exec -ti nginx-ds-w8xks /bin/bash
root@nginx-ds-w8xks:/# apt-get update
root@nginx-ds-w8xks:/# apt-get install -y curl
root@nginx-ds-w8xks:/# curl 172.16.22.2
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

Pods can now reach each other. Now look at the access log on 172.16.22.2:

[root@k8s7-22 flannel]# kubectl logs -f nginx-ds-kv6z4
10.4.7.21 - - [19/Dec/2019:08:45:24 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.64.0" "-"

Even though the request clearly came from the source address 172.16.21.2, the log on 172.16.22.2 records 10.4.7.21, the host's IP, as the source. This raises a problem: if a host runs many pods, how do we tell which pod a request actually came from? We need to optimize the SNAT rule that this traffic passes through.

Install iptables-services and save the current iptables rules:

[root@k8s7-21 ~]# yum -y install iptables-services
[root@k8s7-21 ~]# systemctl start iptables
[root@k8s7-21 ~]# systemctl enable iptables
[root@k8s7-21 ~]# service iptables save

After saving, the rules need some adjustment: by default the saved ruleset ends the FORWARD chain of the filter table (and the INPUT chain likewise) with a blanket REJECT rule, so we adjust the saved rules as shown below.

[root@k8s7-21 ~]# cat /etc/sysconfig/iptables
# Generated by iptables-save v1.4.21 on Thu Dec 19 16:51:20 2019
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [2324:336228]
:KUBE-FIREWALL - [0:0]
:KUBE-FORWARD - [0:0]
-A INPUT -j KUBE-FIREWALL
-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
-A INPUT -p icmp -j ACCEPT
-A INPUT -i lo -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 22 -j ACCEPT
# -A INPUT -j REJECT --reject-with icmp-host-prohibited   // comment out this rule to allow all traffic on the INPUT chain
-A FORWARD -m comment --comment "kubernetes forwarding rules" -j KUBE-FORWARD
-A FORWARD -s 172.16.0.0/16 -j ACCEPT
-A FORWARD -d 172.16.0.0/16 -j ACCEPT
-A FORWARD -j REJECT --reject-with icmp-host-prohibited  // comment this out, or move it to the very end of the FORWARD rules
-A OUTPUT -j KUBE-FIREWALL
-A KUBE-FIREWALL -m comment --comment "kubernetes firewall for dropping marked packets" -m mark --mark 0x8000/0x8000 -j DROP
-A KUBE-FORWARD -m comment --comment "kubernetes forwarding rules" -m mark --mark 0x4000/0x4000 -j ACCEPT
...
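
After editing /etc/sysconfig/iptables, reload the saved rules so the changes take effect (a small extra step, assuming the iptables-services unit installed above):

[root@k8s7-21 ~]# systemctl restart iptables
[root@k8s7-21 ~]# iptables -nL FORWARD    # the blanket REJECT rule should now be gone or last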

Now look at the iptables rules again; in the nat table you will find a rule like this:

-A POSTROUTING -s 172.16.21.0/24 ! -o docker0 -j MASQUERADE

This rule source-NATs all traffic whose source address is in 172.16.21.0/24 and whose outgoing interface is not docker0. It is exactly this rule that makes the remote side see our host's IP. So we optimize it: only apply SNAT when the destination address is outside the 172.16.0.0/16 network. The optimized rule looks like this:

-A POSTROUTING -s 172.16.21.0/24 ! -d 172.16.0.0/16 ! -o docker0 -j MASQUERADE
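
One way to apply this on a running node, as a sketch (delete the old MASQUERADE rule, insert the optimized one, then save; on 10.4.7.22 use 172.16.22.0/24 as the source network):

[root@k8s7-21 ~]# iptables -t nat -D POSTROUTING -s 172.16.21.0/24 ! -o docker0 -j MASQUERADE
[root@k8s7-21 ~]# iptables -t nat -I POSTROUTING -s 172.16.21.0/24 ! -d 172.16.0.0/16 ! -o docker0 -j MASQUERADE
[root@k8s7-21 ~]# service iptables save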

If you now repeat the access from inside a pod, the remote side sees the pod address. This optimization has to be performed on every host that runs the flannel plugin. With that, the flannel CNI plugin is fully deployed.

「VI. Deploying the coredns Plugin」

As mentioned when we introduced the cluster, Kubernetes has three networks: the Node network, the Pod network, and the Service network. A Service is a Kubernetes resource that solves the problem of pods not having stable addresses: pod addresses are associated with a Service address, so even when pod IPs change, the Service address stays the same.

A Service address does not change, but accessing services by raw IP is still inconvenient: the addresses are hard to remember and tell you nothing about what the service actually provides. Mapping service names to IP addresses, much like DNS maps domain names to IPs, makes services far easier to use, and that is exactly what coredns provides.

CoreDNS is a plugin-based DNS server written in Go that runs in many environments; since Kubernetes 1.13 it has been the built-in default DNS server. Our cluster is already up, so we can run coredns inside the cluster itself. Let's deploy it.

On 10.4.7.200, pull the coredns image and push it to the local Harbor registry:

[root@k8s7-200 ~]# docker pull docker.io/coredns/coredns:1.6.1
[root@k8s7-200 ~]# docker images | grep coredns
coredns/coredns                 1.6.1                      c0f6e815079e        4 months ago        42.2MB
[root@k8s7-200 ~]# docker tag c0f6e815079e harbor.od.com/public/coredns:v1.6.1
[root@k8s7-200 ~]# docker push harbor.od.com/public/coredns:v1.6.1

We deploy the service to the cluster using resource manifests. To keep the manifests manageable, set up an nginx virtual host on 10.4.7.200 to serve them; any other host in the cluster can then fetch the manifests from it. We give this virtual host the domain name k8s-yaml.od.com.

[root@k8s7-200 ~]# cd /etc/nginx/conf.d/
[root@k8s7-200 conf.d]# ls
habor.od.com.conf
[root@k8s7-200 conf.d]# cat k8s-yaml.od.com.conf 
server {
    listen       80;
    server_name  k8s-yaml.od.com;

    location / {
        autoindex on;
        default_type text/plain;
        root /data/k8s-yaml;
    }
}
[root@k8s7-200 conf.d]# mkdir -p /data/k8s-yaml
[root@k8s7-200 conf.d]# nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
[root@k8s7-200 conf.d]# nginx -s reload

Add a DNS resolution record on 10.4.7.11:

[root@k8s7-11 named]# pwd
/var/named
[root@k8s7-11 named]# vim od.com.zone 
[root@k8s7-11 named]# cat od.com.zone 
$ORIGIN od.com.
$TTL 600 ; 10 minutes
@     IN SOA dns.od.com. dnsadmin.od.com. (
    2019121503 ; serial
    10800      ; refresh (3 hours)
    900        ; retry (15 minutes)
    604800     ; expire (1 week)
    86400      ; minimum (1 day)
    )
    NS   dns.od.com.
$TTL 60 ; 1 minute
dns            A    10.4.7.11
harbor     A 10.4.7.200
k8s-yaml    A 10.4.7.200    # add this record
[root@k8s7-11 named]# systemctl restart named
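
A quick way to confirm the new record resolves (an optional check, assuming dig is installed on the host):

[root@k8s7-11 named]# dig -t A k8s-yaml.od.com @10.4.7.11 +short
10.4.7.200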

On 10.4.7.200, prepare the resource manifests used to deploy coredns:

[root@k8s7-200 ~]# cd /data/k8s-yaml/
[root@k8s7-200 k8s-yaml]# mkdir coredns
[root@k8s7-200 k8s-yaml]# cd coredns/

a. rbac.yaml

[root@k8s7-200 coredns]# cat rbac.yaml 
apiVersion: v1
kind: ServiceAccount
metadata:
  name: coredns
  namespace: kube-system
  labels:
      kubernetes.io/cluster-service: "true"
      addonmanager.kubernetes.io/mode: Reconcile
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: Reconcile
  name: system:coredns
rules:
- apiGroups:
  - ""
  resources:
  - endpoints
  - services
  - pods
  - namespaces
  verbs:
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: EnsureExists
  name: system:coredns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:coredns
subjects:
- kind: ServiceAccount
  name: coredns
  namespace: kube-system

b. cm.yaml

[root@k8s7-200 coredns]# cat cm.yaml 
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        log
        health
        ready
        kubernetes cluster.local 192.168.0.0/16
        forward . 10.4.7.11
        cache 30
        loop
        reload
        loadbalance
       }

c. dp.yaml

[root@k8s7-200 coredns]# cat dp.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: coredns
    kubernetes.io/name: "CoreDNS"
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: coredns
  template:
    metadata:
      labels:
        k8s-app: coredns
    spec:
      priorityClassName: system-cluster-critical
      serviceAccountName: coredns
      containers:
      - name: coredns
        image: harbor.od.com/public/coredns:v1.6.1
        args:
        - -conf
        - /etc/coredns/Corefile
        volumeMounts:
        - name: config-volume
          mountPath: /etc/coredns
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        - containerPort: 9153
          name: metrics
          protocol: TCP
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
      dnsPolicy: Default
      volumes:
        - name: config-volume
          configMap:
            name: coredns
            items:
            - key: Corefile
              path: Corefile

d. svc.yaml

[root@k8s7-200 coredns]# cat svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: coredns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "CoreDNS"
spec:
  selector:
    k8s-app: coredns
  clusterIP: 192.168.0.2
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
  - name: metrics
    port: 9153
    protocol: TCP

Apply these manifests on 10.4.7.21 to create the resources:

[root@k8s7-21 ~]# kubectl apply -f http://k8s-yaml.od.com/coredns/rbac.yaml
serviceaccount/coredns created
clusterrole.rbac.authorization.k8s.io/system:coredns created
clusterrolebinding.rbac.authorization.k8s.io/system:coredns created
[root@k8s7-21 ~]# kubectl apply -f http://k8s-yaml.od.com/coredns/cm.yaml
configmap/coredns created
[root@k8s7-21 ~]# kubectl apply -f http://k8s-yaml.od.com/coredns/dp.yaml
deployment.apps/coredns created
[root@k8s7-21 ~]# kubectl apply -f http://k8s-yaml.od.com/coredns/svc.yaml
service/coredns created

Check the service status:

[root@k8s7-21 ~]# kubectl get svc -n kube-system
NAME      TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)                  AGE
coredns   ClusterIP   192.168.0.2   <none>        53/UDP,53/TCP,9153/TCP   82s
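
As a final sanity check (not part of the original steps, assuming dig is available on the node), resolve a cluster-internal name through the coredns service IP; the kubernetes service should come back as 192.168.0.1, matching the ipvsadm output we saw earlier:

[root@k8s7-21 ~]# dig -t A kubernetes.default.svc.cluster.local @192.168.0.2 +short
192.168.0.1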

coredns is now deployed. At this point all of the Node components and the main plugins of our Kubernetes cluster are fully in place.
