
k8s on eks

Following kubernetes-the-hard-way/k8s-hard-way, this article creates a virtual Kubernetes cluster on EKS (Tencent Cloud's elastic Kubernetes service).

Virtual Kubernetes

Virtual Kubernetes is an approach to running multi-tenant Kubernetes; interested readers can refer to the article on that topic.

Our goal is to run a Kubernetes 1.18 master (apiserver/controller-manager/scheduler) as containers on EKS, and then add a virtual node (virtual-kubelet) to this virtual cluster.

Walkthrough

Create an EKS cluster

  1. On the page https://console.cloud.tencent.com/tke2, create a new elastic cluster
  2. Under [Basic Information], enable public network access and configure kubectl locally for convenience

Prepare certificates and configuration

Install cfssl

# CFSSL is CloudFlare's open-source PKI/TLS toolkit.
# It includes a command-line tool and an HTTP API service for signing, verifying, and bundling TLS certificates. Written in Go.
# Project: https://github.com/cloudflare/cfssl
brew install cfssl

Create the Certificate Authority used to sign the other TLS certificates

  • Create the CA config file
cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "8760h"
    },
    "profiles": {
      "kubernetes": {
        "usages": ["signing", "key encipherment", "server auth", "client auth"],
        "expiry": "8760h"
      }
    }
  }
}
EOF
  • Create the CA certificate signing request (CSR) file
cat > ca-csr.json <<EOF
{
  "CN": "Kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "US",
      "L": "Portland",
      "O": "Kubernetes",
      "OU": "CA",
      "ST": "Oregon"
    }
  ]
}
EOF
  • Generate the CA certificate and private key (the root cert and key):
➜ cfssl gencert -initca ca-csr.json | cfssljson -bare ca
2020/08/25 10:36:02 [INFO] generating a new CA key and certificate from CSR
2020/08/25 10:36:02 [INFO] generate received request
2020/08/25 10:36:02 [INFO] received CSR
2020/08/25 10:36:02 [INFO] generating key: rsa-2048
2020/08/25 10:36:02 [INFO] encoded CSR
2020/08/25 10:36:02 [INFO] signed certificate with serial number 392819875150794091584482897464808879405260412728

Generated files

ca-key.pem
ca.csr
ca.pem

Create client and server certificates for the Kubernetes components

as well as a client certificate for the Kubernetes admin user.

  • Admin client certificate
cat > admin-csr.json <<EOF
{
  "CN": "admin",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "US",
      "L": "Portland",
      "O": "system:masters",
      "OU": "Kubernetes The Hard Way",
      "ST": "Oregon"
    }
  ]
}
EOF
  • Generate the admin client certificate and private key
cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  admin-csr.json | cfssljson -bare admin

Generated files

admin-key.pem
admin.pem
  • Kubelet client certificate (skipped for now)
  • kube-controller-manager client certificate
cat > kube-controller-manager-csr.json <<EOF
{
  "CN": "system:kube-controller-manager",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "US",
      "L": "Portland",
      "O": "system:kube-controller-manager",
      "OU": "Kubernetes The Hard Way",
      "ST": "Oregon"
    }
  ]
}
EOF

cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager

Generated files

kube-controller-manager-key.pem
kube-controller-manager.pem
  • kube-proxy client certificate
cat > kube-proxy-csr.json <<EOF
{
  "CN": "system:kube-proxy",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "US",
      "L": "Portland",
      "O": "system:node-proxier",
      "OU": "Kubernetes The Hard Way",
      "ST": "Oregon"
    }
  ]
}
EOF

cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  kube-proxy-csr.json | cfssljson -bare kube-proxy

Generated files

kube-proxy-key.pem
kube-proxy.pem
  • kube-scheduler certificate
cat > kube-scheduler-csr.json <<EOF
{
  "CN": "system:kube-scheduler",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "US",
      "L": "Portland",
      "O": "system:kube-scheduler",
      "OU": "Kubernetes The Hard Way",
      "ST": "Oregon"
    }
  ]
}
EOF

cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  kube-scheduler-csr.json | cfssljson -bare kube-scheduler

Generated files

kube-scheduler-key.pem
kube-scheduler.pem
  • Kubernetes API Server certificate
    • Pause this step for a moment and go back to EKS: first create a Service for the api server (named my-kubernetes)
    • The created Service gets a public IP in addition to its internal IP: 81.69.155.91 (public) and 172.17.16.29 (internal)
    • Create the certificate; note that both the Service IPs and names are added to the hostname list
cat > kubernetes-csr.json <<EOF
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "US",
      "L": "Portland",
      "O": "Kubernetes",
      "OU": "Kubernetes The Hard Way",
      "ST": "Oregon"
    }
  ]
}
EOF

cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -hostname=81.69.155.91,172.17.16.29,127.0.0.1,kubernetes,kubernetes.default,kubernetes.default.svc,kubernetes.default.svc.cluster.local,my-kubernetes,my-kubernetes.default,my-kubernetes.default.svc,my-kubernetes.default.svc.cluster.local \
  -profile=kubernetes \
  kubernetes-csr.json | cfssljson -bare kubernetes

Generated files

kubernetes-key.pem
kubernetes.pem
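
Since clients validate the apiserver endpoint against this SAN list, it is worth confirming the names actually landed in the certificate. Below is a self-contained sketch with throwaway demo file names and a shortened SAN list (it assumes openssl 1.1.1+ for -addext/-ext); for the real certificate, run the second command against kubernetes.pem:

```shell
# Issue a throwaway self-signed cert carrying a SAN list, then print the SANs back
openssl req -x509 -newkey rsa:2048 -nodes -keyout demo-key.pem -out demo.pem \
  -days 1 -subj "/CN=kubernetes" \
  -addext "subjectAltName=DNS:my-kubernetes,DNS:kubernetes.default.svc,IP:127.0.0.1"
openssl x509 -in demo.pem -noout -ext subjectAltName
```

If a name or IP is missing from the printed list, clients connecting via that address will fail TLS verification.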
  • Service Account certificate
cat > service-account-csr.json <<EOF
{
  "CN": "service-accounts",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "US",
      "L": "Portland",
      "O": "Kubernetes",
      "OU": "Kubernetes The Hard Way",
      "ST": "Oregon"
    }
  ]
}
EOF

cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  service-account-csr.json | cfssljson -bare service-account

Generated files

service-account-key.pem
service-account.pem
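
With all certificates issued, each leaf can be checked against the CA with `openssl verify`. The sketch below demonstrates the check end-to-end on throwaway files (hypothetical demo-* names) so it is self-contained; for the real files, the last line becomes e.g. `openssl verify -CAfile ca.pem admin.pem`, and likewise for each *.pem generated above:

```shell
# Throwaway CA (demo-ca) plus one leaf cert (demo-leaf) signed by it
openssl req -x509 -newkey rsa:2048 -nodes -keyout demo-ca-key.pem \
  -out demo-ca.pem -days 1 -subj "/CN=demo-ca"
openssl req -newkey rsa:2048 -nodes -keyout demo-leaf-key.pem \
  -out demo-leaf.csr -subj "/CN=demo-leaf"
openssl x509 -req -in demo-leaf.csr -CA demo-ca.pem -CAkey demo-ca-key.pem \
  -CAcreateserial -out demo-leaf.pem -days 1
# Verify the leaf chains to the CA; prints "demo-leaf.pem: OK" on success
openssl verify -CAfile demo-ca.pem demo-leaf.pem
```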

Create the kubeconfig for each component

  • Create the kubeconfig used by the controller-manager
kubectl config set-cluster my-kubernetes \
    --certificate-authority=ca.pem \
    --embed-certs=true \
    --server=https://my-kubernetes:443 \
    --kubeconfig=kube-controller-manager.kubeconfig

kubectl config set-credentials system:kube-controller-manager \
    --client-certificate=kube-controller-manager.pem \
    --client-key=kube-controller-manager-key.pem \
    --embed-certs=true \
    --kubeconfig=kube-controller-manager.kubeconfig

kubectl config set-context default \
    --cluster=my-kubernetes \
    --user=system:kube-controller-manager \
    --kubeconfig=kube-controller-manager.kubeconfig

kubectl config use-context default --kubeconfig=kube-controller-manager.kubeconfig
  • Create the kubeconfig used by the scheduler
kubectl config set-cluster my-kubernetes \
    --certificate-authority=ca.pem \
    --embed-certs=true \
    --server=https://my-kubernetes:443  \
    --kubeconfig=kube-scheduler.kubeconfig

kubectl config set-credentials system:kube-scheduler \
    --client-certificate=kube-scheduler.pem \
    --client-key=kube-scheduler-key.pem \
    --embed-certs=true \
    --kubeconfig=kube-scheduler.kubeconfig

kubectl config set-context default \
    --cluster=my-kubernetes \
    --user=system:kube-scheduler \
    --kubeconfig=kube-scheduler.kubeconfig

kubectl config use-context default --kubeconfig=kube-scheduler.kubeconfig
  • Create the admin kubeconfig
kubectl config set-cluster my-kubernetes \
    --certificate-authority=ca.pem \
    --embed-certs=true \
    --server=https://my-kubernetes:443  \
    --kubeconfig=admin.kubeconfig

kubectl config set-credentials admin \
    --client-certificate=admin.pem \
    --client-key=admin-key.pem \
    --embed-certs=true \
    --kubeconfig=admin.kubeconfig

kubectl config set-context default \
    --cluster=my-kubernetes \
    --user=admin \
    --kubeconfig=admin.kubeconfig

kubectl config use-context default --kubeconfig=admin.kubeconfig

Create the components

Create etcd

  • Create etcd. To keep things simple (and to save money), we create a single-node etcd without a PVC
helm repo add bitnami https://charts.bitnami.com/bitnami
# etcd-values.yaml keeps mostly default settings, with the PVC disabled
helm template metcd bitnami/etcd -f etcd-values.yaml > etcd.yaml

For the full etcd configuration, see the code repository.
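For reference, a minimal etcd-values.yaml matching that description might look like this (a sketch only: `persistence.enabled` is the bitnami/etcd chart's switch for the PVC, but double-check field names against your chart version):

```shell
cat > etcd-values.yaml <<EOF
# mostly chart defaults; just disable the PVC (single-node, ephemeral storage)
persistence:
  enabled: false
EOF
```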

Create etcd, then log into the pod to verify:

kubectl create -f yaml/etcd.yaml
# exec into the etcd pod to run etcdctl (command inferred; the prompt below is from inside the pod)
kubectl exec -it metcd-0 -- bash
I have no name!@metcd-0:/opt/bitnami/etcd$ etcdctl member list
6be738648d9cc341, started, metcd-0, http://metcd-0.metcd-headless.default.svc.cluster.local:2380, http://metcd-0.metcd-headless.default.svc.cluster.local:2379, false
  • Generate the configuration configmap and create it
kubectl create configmap config  --from-file=`pwd`/cert 
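
The configmap must bundle every file the control-plane pods later mount at /var/lib/kubernetes, so a quick pre-flight check before creating it can save a debugging round trip. A sketch (it assumes the generated files were collected into cert/, and prints a line for anything missing):

```shell
# Files referenced by the apiserver / controller-manager / scheduler manifests
for f in ca.pem ca-key.pem kubernetes.pem kubernetes-key.pem \
         service-account.pem service-account-key.pem \
         kube-controller-manager.kubeconfig kube-scheduler.kubeconfig; do
  [ -f "cert/$f" ] || echo "missing: cert/$f"
done
```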

Create the api-server

  • Create the api-server. Here we use the hyperkube image, from https://hub.docker.com/r/rancher/hyperkube/tags
  • The full configuration is below. Note that we point --etcd-servers directly at the etcd pod IP: due to some EKS policy (possibly security groups), traffic through the headless service was very slow, so we also added the annotations that seemed relevant. As for how the value of the eks.tke.cloud.tencent.com/security-group-id annotation is obtained: create an nginx deployment in the console and copy it from there.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-kubernetes
  labels:
    app: my-kubernetes
spec:
  replicas: 3
  selector:
    matchLabels:
      app: "my-kubernetes"
  strategy:
    rollingUpdate:
      maxSurge: 100%
      maxUnavailable: 100%
    type: RollingUpdate
  template:
    metadata:
      annotations:
        eks.tke.cloud.tencent.com/cpu-type: amd
        eks.tke.cloud.tencent.com/security-group-id: sg-56mfwq82
      labels:
        app: "my-kubernetes"
    spec:
      containers:
      - name: apiserver
        image: ccr.ccs.tencentyun.com/leiwang/hyperkube:v1.18.6-rancher1
        imagePullPolicy: Always
        args: 
        - kube-apiserver
        - --allow-privileged=true
        - --apiserver-count=3
        - --audit-log-maxage=30
        - --audit-log-maxbackup=3
        - --audit-log-maxsize=100
        - --audit-log-path=/var/log/audit.log
        - --authorization-mode=Node,RBAC
        - --bind-address=0.0.0.0
        - --secure-port=443
        - --insecure-bind-address=0.0.0.0
        - --insecure-port=80
        - --client-ca-file=/var/lib/kubernetes/ca.pem
        - --enable-admission-plugins=NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota
        - --enable-swagger-ui=true
        - --etcd-servers=http://172.17.16.44:2379
        - --event-ttl=1h
        - --kubelet-certificate-authority=/var/lib/kubernetes/ca.pem 
        - --requestheader-client-ca-file=/var/lib/kubernetes/ca.pem 
        - --kubelet-client-certificate=/var/lib/kubernetes/kubernetes.pem 
        - --kubelet-client-key=/var/lib/kubernetes/kubernetes-key.pem 
        - --kubelet-https=true 
        - --service-account-key-file=/var/lib/kubernetes/service-account.pem 
        - --service-cluster-ip-range=10.32.0.0/24 
        - --service-node-port-range=30000-32767 
        - --tls-cert-file=/var/lib/kubernetes/kubernetes.pem 
        - --tls-private-key-file=/var/lib/kubernetes/kubernetes-key.pem 
        - --v=2
        resources:
          requests:
            memory: 250Mi
            cpu: 250m
          limits:
            memory: 250Mi
            cpu: 250m
        volumeMounts:
          - name: config
            mountPath: /var/lib/kubernetes
      volumes:
        - name: config
          configMap:
            name: config

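The Deployment above is fronted by the my-kubernetes Service created earlier in the console. An equivalent manifest would look roughly like this (a sketch: the console-created Service also carries provider-specific load-balancer annotations that are omitted here):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-kubernetes
spec:
  type: LoadBalancer        # provides the public IP used in the certificate SANs
  selector:
    app: my-kubernetes
  ports:
  - name: https
    port: 443
    targetPort: 443
```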
At this point the apiserver is working properly:

// 81.69.155.91 is the public IP of the apiserver Service created above
➜ curl http://81.69.155.91/
{
  "paths": [
    "/api",
    "/api/v1",
    "/apis",
    "/apis/",
    "/apis/admissionregistration.k8s.io",
    "/apis/admissionregistration.k8s.io/v1",
    "/apis/admissionregistration.k8s.io/v1beta1",
    "/apis/apiextensions.k8s.io",
    "/apis/apiextensions.k8s.io/v1",
    "/apis/apiextensions.k8s.io/v1beta1",
    "/apis/apiregistration.k8s.io",
    "/apis/apiregistration.k8s.io/v1",
    "/apis/apiregistration.k8
    ...

Create the controller-manager

  • Create the controller-manager; the full configuration is below. (To save resources it runs a single replica rather than the api-server's 3; bumping it to 3 would also be fine.)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: controller-manager
  labels:
    app: controller-manager
spec:
  replicas: 1
  selector:
    matchLabels:
      app: "controller-manager"
  strategy:
    rollingUpdate:
      maxSurge: 100%
      maxUnavailable: 100%
    type: RollingUpdate
  template:
    metadata:
      annotations:
        eks.tke.cloud.tencent.com/cpu-type: amd
        eks.tke.cloud.tencent.com/security-group-id: sg-56mfwq82
      labels:
        app: "controller-manager"
    spec:
      containers:
      - name: controller-manager
        image: ccr.ccs.tencentyun.com/leiwang/hyperkube:v1.18.6-rancher1
        imagePullPolicy: Always
        args: 
        - kube-controller-manager
        - --address=0.0.0.0 
        - --cluster-cidr=10.200.0.0/16 
        - --cluster-name=kubernetes 
        - --cluster-signing-cert-file=/var/lib/kubernetes/ca.pem 
        - --cluster-signing-key-file=/var/lib/kubernetes/ca-key.pem 
        - --authentication-kubeconfig=/var/lib/kubernetes/kube-controller-manager.kubeconfig
        - --authorization-kubeconfig=/var/lib/kubernetes/kube-controller-manager.kubeconfig
        - --kubeconfig=/var/lib/kubernetes/kube-controller-manager.kubeconfig 
        - --leader-elect=true 
        - --root-ca-file=/var/lib/kubernetes/ca.pem 
        - --service-account-private-key-file=/var/lib/kubernetes/service-account-key.pem 
        - --service-cluster-ip-range=10.32.0.0/24 
        - --use-service-account-credentials=true 
        - --client-ca-file=/var/lib/kubernetes/ca.pem
        - --requestheader-client-ca-file=/var/lib/kubernetes/ca.pem
        - --v=2
        resources:
          requests:
            memory: 250Mi
            cpu: 250m
          limits:
            memory: 250Mi
            cpu: 250m
        volumeMounts:
          - name: config
            mountPath: /var/lib/kubernetes
      volumes:
        - name: config
          configMap:
            name: config

Watch the controller-manager logs; it is running normally.

Create the scheduler

  • Create the scheduler with the following configuration
apiVersion: apps/v1
kind: Deployment
metadata:
  name: scheduler
  labels:
    app: scheduler
spec:
  replicas: 1
  selector:
    matchLabels:
      app: "scheduler"
  strategy:
    rollingUpdate:
      maxSurge: 100%
      maxUnavailable: 100%
    type: RollingUpdate
  template:
    metadata:
      annotations:
        eks.tke.cloud.tencent.com/cpu-type: amd
        eks.tke.cloud.tencent.com/security-group-id: sg-56mfwq82
      labels:
        app: "scheduler"
    spec:
      containers:
      - name: scheduler
        image: ccr.ccs.tencentyun.com/leiwang/hyperkube:v1.18.6-rancher1
        imagePullPolicy: Always
        args: 
        - kube-scheduler
        - --leader-elect=true
        - --kubeconfig=/var/lib/kubernetes/kube-scheduler.kubeconfig 
        - --v=2
        resources:
          requests:
            memory: 250Mi
            cpu: 250m
          limits:
            memory: 250Mi
            cpu: 250m
        volumeMounts:
          - name: config
            mountPath: /var/lib/kubernetes
      volumes:
        - name: config
          configMap:
            name: config

Watch the scheduler; it is running normally.

Add a virtual-kubelet node

First, issue a client certificate for the node:

cat > virtual-kubelet-csr.json <<EOF
{
  "CN": "system:node:virtual-kubelet",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "US",
      "L": "Portland",
      "O": "system:nodes",
      "OU": "Kubernetes The Hard Way",
      "ST": "Oregon"
    }
  ]
}
EOF

cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -hostname=virtual-kubelet \
  -profile=kubernetes \
  virtual-kubelet-csr.json | cfssljson -bare virtual-kubelet

Deploy the virtual-kubelet

The full manifest for the deployment is as follows

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: virtual-kubelet
  labels:
    k8s-app: virtual-kubelet
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: virtual-kubelet
subjects:
  - kind: ServiceAccount
    name: virtual-kubelet
    namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: virtual-kubelet
  labels:
    k8s-app: kubelet
spec:
  strategy:
    rollingUpdate:
      maxSurge: 100%
      maxUnavailable: 100%
    type: RollingUpdate
  replicas: 1
  selector:
    matchLabels:
      k8s-app: virtual-kubelet
  template:
    metadata:
      labels:
        pod-type: virtual-kubelet
        k8s-app: virtual-kubelet
    spec:
      containers:
        - name: virtual-kubelet
          image: ccr.ccs.tencentyun.com/leiwang/virtual-node:v0.1-7-g79ac0394c93acf
          imagePullPolicy: IfNotPresent
          env:
            - name: KUBELET_PORT
              value: "10450"
            - name: APISERVER_CERT_LOCATION
              value: /etc/kubernetes/virtual-kubelet.pem
            - name: APISERVER_KEY_LOCATION
              value: /etc/kubernetes/virtual-kubelet-key.pem
            - name: DEFAULT_NODE_NAME
              value: virtual-kubelet
            - name: VKUBELET_POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
          args:
            - --provider=k8s 
            - --client-kubeconfig="a" 
            - --kubeconfig=/etc/kubernetes/admin.kubeconfig
            - --nodename=virtual-kubelet 
            - --disable-taint=true 
            - --kube-api-qps=1 
            - --kube-api-burst=1 
            - --client-qps=5 
            - --client-burst=1 
            - --klog.v=3 
            - --log-level=debug 
            - --metrics-addr=:10455 
          resources:
            requests:
              memory: 250Mi
              cpu: 150m
            limits:
              memory: 250Mi
              cpu: 150m
          livenessProbe:
            tcpSocket:
              port: 10450
            initialDelaySeconds: 20
            periodSeconds: 20
          volumeMounts:
          - name: config
            mountPath: /etc/kubernetes
      volumes:
        - name: config
          configMap:
            name: config
      serviceAccountName: virtual-kubelet

Testing

  • Check the nodes
➜ kubectl get node --kubeconfig=`pwd`/admin.kubeconfig
NAME              STATUS   ROLES   AGE    VERSION
virtual-kubelet   Ready    agent   5m6s   v1.16.9

An extra virtual-kubelet node has appeared. It is in fact a virtual node: requests to create pods on it are forwarded to another cluster, namely EKS.

  • Now create an example nginx to try it out
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: "nginx"
  template:
    metadata:
      labels:
        app: "nginx"
    spec:
      containers:
      - name: nginx
        image: nginx
        imagePullPolicy: Always
        resources:
          requests:
            memory: 50Mi
            cpu: 50m
          limits:
            memory: 50Mi
            cpu: 50m
➜ kubectl get pod --kubeconfig=`pwd`/admin.kubeconfig
NAME                     READY   STATUS    RESTARTS   AGE
nginx-5fc4bc67cf-h7w9n   1/1     Running   0          15s

It runs! Now check where the nginx pod was actually created: the real pod appears on EKS.

➜ kubectl get pod
NAME                                  READY   STATUS    RESTARTS   AGE
controller-manager-55447c7f98-vc5lr   1/1     Running   0          117m
metcd-0                               1/1     Running   0          4h39m
my-kubernetes-5844846895-5dc85        1/1     Running   0          122m
my-kubernetes-5844846895-7h9kx        1/1     Running   0          122m
my-kubernetes-5844846895-8zmkt        1/1     Running   0          122m
nginx-5fc4bc67cf-h7w9n                1/1     Running   0          5m33s
scheduler-78c98957f6-hzbm9            1/1     Running   0          121m
virtual-kubelet-764545fc94-bhwbm      1/1     Running   0          14m

Conclusion

At this point we have completed the basic steps of creating and testing a virtual Kubernetes: the nginx deployment runs normally. The next step would be integrating networking components so that nginx can actually be exposed.

Virtual Kubernetes is in fact how EKS itself is implemented, so this example can also be read as EKS on EKS.

The full configuration repository: https://github.com/u2takey/k8sOnk8s


This is an original article published by the author, with authorization, on the Tencent Cloud+ Community; reproduction without permission is prohibited.
