This article follows kubernetes-the-hard-way/k8s-hard-way to create a virtual Kubernetes cluster on EKS.
Virtual Kubernetes is a way of running multi-tenant Kubernetes; if you are interested, you can read this article.
Our goal is to run a Kubernetes 1.18 master (apiserver / controller-manager / scheduler) as containers on EKS, and then attach a virtual node (virtual-kubelet) to this virtual cluster.
# CFSSL is CloudFlare's open-source PKI/TLS toolkit. It includes a
# command-line tool and an HTTP API service for signing, verifying,
# and bundling TLS certificates, and is written in Go.
# Project: https://github.com/cloudflare/cfssl
brew install cfssl
Generate the CA first. Create the signing config and the CA CSR:

cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "8760h"
    },
    "profiles": {
      "kubernetes": {
        "usages": ["signing", "key encipherment", "server auth", "client auth"],
        "expiry": "8760h"
      }
    }
  }
}
EOF
cat > ca-csr.json <<EOF
{
  "CN": "Kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "US",
      "L": "Portland",
      "O": "Kubernetes",
      "OU": "CA",
      "ST": "Oregon"
    }
  ]
}
EOF
➜ cfssl gencert -initca ca-csr.json | cfssljson -bare ca
2020/08/25 10:36:02 [INFO] generating a new CA key and certificate from CSR
2020/08/25 10:36:02 [INFO] generate received request
2020/08/25 10:36:02 [INFO] received CSR
2020/08/25 10:36:02 [INFO] generating key: rsa-2048
2020/08/25 10:36:02 [INFO] encoded CSR
2020/08/25 10:36:02 [INFO] signed certificate with serial number 392819875150794091584482897464808879405260412728
Generated files:

ca.csr ca-key.pem ca.pem
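As a quick sanity check (not part of the original walkthrough), you can inspect the CA certificate's subject and validity period with openssl:

# print the subject and validity window of the new CA certificate
openssl x509 -in ca.pem -noout -subject -dates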
Next, generate a client certificate for the Kubernetes admin user.
cat > admin-csr.json <<EOF
{
  "CN": "admin",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "US",
      "L": "Portland",
      "O": "system:masters",
      "OU": "Kubernetes The Hard Way",
      "ST": "Oregon"
    }
  ]
}
EOF
cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  admin-csr.json | cfssljson -bare admin
Generated files:
admin-key.pem admin.pem
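Optionally, confirm that the client certificate chains back to our CA, and that its CN/O fields carry the identity and group the apiserver will see (admin / system:masters):

# should print: admin.pem: OK
openssl verify -CAfile ca.pem admin.pem
# dump the parsed certificate fields as JSON
cfssl certinfo -cert admin.pem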
Generate the kube-controller-manager client certificate:

cat > kube-controller-manager-csr.json <<EOF
{
  "CN": "system:kube-controller-manager",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "US",
      "L": "Portland",
      "O": "system:kube-controller-manager",
      "OU": "Kubernetes The Hard Way",
      "ST": "Oregon"
    }
  ]
}
EOF

cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager
Generated files:
kube-controller-manager-key.pem kube-controller-manager.pem
Generate the kube-proxy client certificate:

cat > kube-proxy-csr.json <<EOF
{
  "CN": "system:kube-proxy",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "US",
      "L": "Portland",
      "O": "system:node-proxier",
      "OU": "Kubernetes The Hard Way",
      "ST": "Oregon"
    }
  ]
}
EOF

cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  kube-proxy-csr.json | cfssljson -bare kube-proxy
Generated files:
kube-proxy-key.pem kube-proxy.pem
Generate the kube-scheduler client certificate:

cat > kube-scheduler-csr.json <<EOF
{
  "CN": "system:kube-scheduler",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "US",
      "L": "Portland",
      "O": "system:kube-scheduler",
      "OU": "Kubernetes The Hard Way",
      "ST": "Oregon"
    }
  ]
}
EOF

cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  kube-scheduler-csr.json | cfssljson -bare kube-scheduler
Generated files:
kube-scheduler-key.pem kube-scheduler.pem
Generate the apiserver serving certificate. The hostname list must cover every address clients will use: the external and internal IPs as well as the in-cluster service names:

cat > kubernetes-csr.json <<EOF
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "US",
      "L": "Portland",
      "O": "Kubernetes",
      "OU": "Kubernetes The Hard Way",
      "ST": "Oregon"
    }
  ]
}
EOF

cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -hostname=81.69.155.91,172.17.16.29,127.0.0.1,kubernetes,kubernetes.default,kubernetes.default.svc,kubernetes.default.svc.local,my-kubernetes,my-kubernetes.default,my-kubernetes.default.svc,my-kubernetes.default.svc.local \
  -profile=kubernetes \
  kubernetes-csr.json | cfssljson -bare kubernetes
Generated files:
kubernetes-key.pem kubernetes.pem
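Because the apiserver certificate must cover every name and IP clients will connect with, it is worth double-checking the SANs baked into it (an optional check):

# list the Subject Alternative Names of the apiserver certificate
openssl x509 -in kubernetes.pem -noout -text | grep -A1 'Subject Alternative Name'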
Generate the service-account signing key pair:

cat > service-account-csr.json <<EOF
{
  "CN": "service-accounts",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "US",
      "L": "Portland",
      "O": "Kubernetes",
      "OU": "Kubernetes The Hard Way",
      "ST": "Oregon"
    }
  ]
}
EOF

cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  service-account-csr.json | cfssljson -bare service-account
Generated files:
service-account-key.pem service-account.pem
Now generate kubeconfig files for the control-plane components. First, kube-controller-manager:

kubectl config set-cluster my-kubernetes \
  --certificate-authority=ca.pem \
  --embed-certs=true \
  --server=https://my-kubernetes:443 \
  --kubeconfig=kube-controller-manager.kubeconfig

kubectl config set-credentials system:kube-controller-manager \
  --client-certificate=kube-controller-manager.pem \
  --client-key=kube-controller-manager-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-controller-manager.kubeconfig

kubectl config set-context default \
  --cluster=my-kubernetes \
  --user=system:kube-controller-manager \
  --kubeconfig=kube-controller-manager.kubeconfig

kubectl config use-context default --kubeconfig=kube-controller-manager.kubeconfig
Then kube-scheduler:

kubectl config set-cluster my-kubernetes \
  --certificate-authority=ca.pem \
  --embed-certs=true \
  --server=https://my-kubernetes:443 \
  --kubeconfig=kube-scheduler.kubeconfig

kubectl config set-credentials system:kube-scheduler \
  --client-certificate=kube-scheduler.pem \
  --client-key=kube-scheduler-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-scheduler.kubeconfig

kubectl config set-context default \
  --cluster=my-kubernetes \
  --user=system:kube-scheduler \
  --kubeconfig=kube-scheduler.kubeconfig

kubectl config use-context default --kubeconfig=kube-scheduler.kubeconfig
And finally the admin user:

kubectl config set-cluster my-kubernetes \
  --certificate-authority=ca.pem \
  --embed-certs=true \
  --server=https://my-kubernetes:443 \
  --kubeconfig=admin.kubeconfig

kubectl config set-credentials admin \
  --client-certificate=admin.pem \
  --client-key=admin-key.pem \
  --embed-certs=true \
  --kubeconfig=admin.kubeconfig

kubectl config set-context default \
  --cluster=my-kubernetes \
  --user=admin \
  --kubeconfig=admin.kubeconfig

kubectl config use-context default --kubeconfig=admin.kubeconfig
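Note that all three kubeconfigs point at https://my-kubernetes:443, so for kubectl to work from outside the cluster the name my-kubernetes must resolve to the apiserver's external IP (for example via an /etc/hosts entry). You can inspect the result; embedded certificate data is elided in the output:

kubectl config view --kubeconfig=admin.kubeconfig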
With certificates and kubeconfigs in place, deploy etcd for the virtual control plane using the Bitnami chart:

helm repo add bitnami https://charts.bitnami.com/bitnami
# etcd-values.yaml keeps the chart defaults, except that the PVC is disabled
helm template metcd bitnami/etcd -f etcd-values.yaml > etcd.yaml
See the code repository for the complete etcd configuration.
Create etcd, then log into the pod to verify it:
kubectl create -f yaml/etcd.yaml
I have no name!@metcd-0:/opt/bitnami/etcd$ etcdctl member list
6be738648d9cc341, started, metcd-0, http://metcd-0.metcd-headless.default.svc.cluster.local:2380, http://metcd-0.metcd-headless.default.svc.cluster.local:2379, false
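A simple read/write smoke test from the same shell (optional, not in the original steps):

# write, read back, and clean up a test key
etcdctl put /test/hello world
etcdctl get /test/hello
etcdctl del /test/hello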
Collect the generated certificates and kubeconfigs into a cert/ directory and pack them into a ConfigMap for the control-plane pods to mount:

kubectl create configmap config --from-file=`pwd`/cert
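To confirm that every certificate and kubeconfig made it into the ConfigMap (the control-plane pods below mount it at /var/lib/kubernetes):

kubectl describe configmap config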
Now deploy the apiserver on EKS:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-kubernetes
  labels:
    app: my-kubernetes
spec:
  replicas: 3
  selector:
    matchLabels:
      app: "my-kubernetes"
  strategy:
    rollingUpdate:
      maxSurge: 100%
      maxUnavailable: 100%
    type: RollingUpdate
  template:
    metadata:
      annotations:
        eks.tke.cloud.tencent.com/cpu-type: amd
        eks.tke.cloud.tencent.com/security-group-id: sg-56mfwq82
      labels:
        app: "my-kubernetes"
    spec:
      containers:
        - name: apiserver
          image: ccr.ccs.tencentyun.com/leiwang/hyperkube:v1.18.6-rancher1
          imagePullPolicy: Always
          args:
            - kube-apiserver
            - --allow-privileged=true
            - --apiserver-count=3
            - --audit-log-maxage=30
            - --audit-log-maxbackup=3
            - --audit-log-maxsize=100
            - --audit-log-path=/var/log/audit.log
            - --authorization-mode=Node,RBAC
            - --bind-address=0.0.0.0
            - --secure-port=443
            - --insecure-bind-address=0.0.0.0
            - --insecure-port=80
            - --client-ca-file=/var/lib/kubernetes/ca.pem
            - --enable-admission-plugins=NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota
            - --enable-swagger-ui=true
            - --etcd-servers=http://172.17.16.44:2379
            - --event-ttl=1h
            - --kubelet-certificate-authority=/var/lib/kubernetes/ca.pem
            - --requestheader-client-ca-file=/var/lib/kubernetes/ca.pem
            - --kubelet-client-certificate=/var/lib/kubernetes/kubernetes.pem
            - --kubelet-client-key=/var/lib/kubernetes/kubernetes-key.pem
            - --kubelet-https=true
            - --service-account-key-file=/var/lib/kubernetes/service-account.pem
            - --service-cluster-ip-range=10.32.0.0/24
            - --service-node-port-range=30000-32767
            - --tls-cert-file=/var/lib/kubernetes/kubernetes.pem
            - --tls-private-key-file=/var/lib/kubernetes/kubernetes-key.pem
            - --v=2
          resources:
            requests:
              memory: 250Mi
              cpu: 250m
            limits:
              memory: 250Mi
              cpu: 250m
          volumeMounts:
            - name: config
              mountPath: /var/lib/kubernetes
      volumes:
        - name: config
          configMap:
            name: config
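The manifest above only creates the Deployment. The curl test below assumes a Service with a public IP in front of the apiserver pods; a minimal sketch of what that Service could look like (the LoadBalancer type and exact port mapping are assumptions here, see the repo for the real manifest):

# Hypothetical Service exposing the apiserver; the repo's actual manifest may differ.
apiVersion: v1
kind: Service
metadata:
  name: my-kubernetes
spec:
  type: LoadBalancer   # requests a public IP on EKS
  selector:
    app: my-kubernetes
  ports:
    - name: https
      port: 443        # matches --secure-port
      targetPort: 443
    - name: http
      port: 80         # matches --insecure-port, used by the curl test below
      targetPort: 80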
At this point the apiserver is already up and serving requests:
# 81.69.155.91 is the public IP of the apiserver Service created above
➜ curl http://81.69.155.91/
{
  "paths": [
    "/api",
    "/api/v1",
    "/apis",
    "/apis/",
    "/apis/admissionregistration.k8s.io",
    "/apis/admissionregistration.k8s.io/v1",
    "/apis/admissionregistration.k8s.io/v1beta1",
    "/apis/apiextensions.k8s.io",
    "/apis/apiextensions.k8s.io/v1",
    "/apis/apiextensions.k8s.io/v1beta1",
    "/apis/apiregistration.k8s.io",
    "/apis/apiregistration.k8s.io/v1",
    "/apis/apiregistration.k8 ...
Next, the kube-controller-manager Deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: controller-manager
  labels:
    app: controller-manager
spec:
  replicas: 1
  selector:
    matchLabels:
      app: "controller-manager"
  strategy:
    rollingUpdate:
      maxSurge: 100%
      maxUnavailable: 100%
    type: RollingUpdate
  template:
    metadata:
      annotations:
        eks.tke.cloud.tencent.com/cpu-type: amd
        eks.tke.cloud.tencent.com/security-group-id: sg-56mfwq82
      labels:
        app: "controller-manager"
    spec:
      containers:
        - name: controller-manager
          image: ccr.ccs.tencentyun.com/leiwang/hyperkube:v1.18.6-rancher1
          imagePullPolicy: Always
          args:
            - kube-controller-manager
            - --address=0.0.0.0
            - --cluster-cidr=10.200.0.0/16
            - --cluster-name=kubernetes
            - --cluster-signing-cert-file=/var/lib/kubernetes/ca.pem
            - --cluster-signing-key-file=/var/lib/kubernetes/ca-key.pem
            - --authentication-kubeconfig=/var/lib/kubernetes/kube-controller-manager.kubeconfig
            - --authorization-kubeconfig=/var/lib/kubernetes/kube-controller-manager.kubeconfig
            - --kubeconfig=/var/lib/kubernetes/kube-controller-manager.kubeconfig
            - --leader-elect=true
            - --root-ca-file=/var/lib/kubernetes/ca.pem
            - --service-account-private-key-file=/var/lib/kubernetes/service-account-key.pem
            - --service-cluster-ip-range=10.32.0.0/24
            - --use-service-account-credentials=true
            - --client-ca-file=/var/lib/kubernetes/ca.pem
            - --requestheader-client-ca-file=/var/lib/kubernetes/ca.pem
            - --v=2
          resources:
            requests:
              memory: 250Mi
              cpu: 250m
            limits:
              memory: 250Mi
              cpu: 250m
          volumeMounts:
            - name: config
              mountPath: /var/lib/kubernetes
      volumes:
        - name: config
          configMap:
            name: config
Watch the kube-controller-manager logs; it is running normally.
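One way to do that from the host cluster, assuming the Deployment name above:

# tail the controller-manager logs
kubectl logs deploy/controller-manager --tail=20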
And the kube-scheduler Deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: scheduler
  labels:
    app: scheduler
spec:
  replicas: 1
  selector:
    matchLabels:
      app: "scheduler"
  strategy:
    rollingUpdate:
      maxSurge: 100%
      maxUnavailable: 100%
    type: RollingUpdate
  template:
    metadata:
      annotations:
        eks.tke.cloud.tencent.com/cpu-type: amd
        eks.tke.cloud.tencent.com/security-group-id: sg-56mfwq82
      labels:
        app: "scheduler"
    spec:
      containers:
        - name: scheduler
          image: ccr.ccs.tencentyun.com/leiwang/hyperkube:v1.18.6-rancher1
          imagePullPolicy: Always
          args:
            - kube-scheduler
            - --leader-elect=true
            - --kubeconfig=/var/lib/kubernetes/kube-scheduler.kubeconfig
            - --v=2
          resources:
            requests:
              memory: 250Mi
              cpu: 250m
            limits:
              memory: 250Mi
              cpu: 250m
          volumeMounts:
            - name: config
              mountPath: /var/lib/kubernetes
      volumes:
        - name: config
          configMap:
            name: config
The scheduler is also running normally.
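With all three components up, the control plane can also be verified against the virtual cluster itself (componentstatuses still works on Kubernetes 1.18; this assumes my-kubernetes resolves to the apiserver as described earlier):

kubectl get componentstatuses --kubeconfig=admin.kubeconfig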
Next, prepare the virtual node. Generate a node certificate for virtual-kubelet:

cat > virtual-kubelet-csr.json <<EOF
{
  "CN": "system:node:virtual-kubelet",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "US",
      "L": "Portland",
      "O": "system:nodes",
      "OU": "Kubernetes The Hard Way",
      "ST": "Oregon"
    }
  ]
}
EOF

cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -hostname=virtual-kubelet \
  -profile=kubernetes \
  virtual-kubelet-csr.json | cfssljson -bare virtual-kubelet
The complete startup configuration is as follows:
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: virtual-kubelet
  labels:
    k8s-app: virtual-kubelet
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: virtual-kubelet
subjects:
  - kind: ServiceAccount
    name: virtual-kubelet
    namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: virtual-kubelet
  labels:
    k8s-app: kubelet
spec:
  strategy:
    rollingUpdate:
      maxSurge: 100%
      maxUnavailable: 100%
    type: RollingUpdate
  replicas: 1
  selector:
    matchLabels:
      k8s-app: virtual-kubelet
  template:
    metadata:
      labels:
        pod-type: virtual-kubelet
        k8s-app: virtual-kubelet
    spec:
      containers:
        - name: virtual-kubelet
          image: ccr.ccs.tencentyun.com/leiwang/virtual-node:v0.1-7-g79ac0394c93acf
          imagePullPolicy: IfNotPresent
          env:
            - name: KUBELET_PORT
              value: "10450"
            - name: APISERVER_CERT_LOCATION
              value: /etc/kubernetes/virtual-kubelet.pem
            - name: APISERVER_KEY_LOCATION
              value: /etc/kubernetes/virtual-kubelet-key.pem
            - name: DEFAULT_NODE_NAME
              value: virtual-kubelet
            - name: VKUBELET_POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
          args:
            - --provider=k8s
            - --client-kubeconfig="a"
            - --kubeconfig=/etc/kubernetes/admin.kubeconfig
            - --nodename=virtual-kubelet
            - --disable-taint=true
            - --kube-api-qps=1
            - --kube-api-burst=1
            - --client-qps=5
            - --client-burst=1
            - --klog.v=3
            - --log-level=debug
            - --metrics-addr=:10455
          resources:
            requests:
              memory: 250Mi
              cpu: 150m
            limits:
              memory: 250Mi
              cpu: 150m
          livenessProbe:
            tcpSocket:
              port: 10450
            initialDelaySeconds: 20
            periodSeconds: 20
          volumeMounts:
            - name: config
              mountPath: /etc/kubernetes
      volumes:
        - name: config
          configMap:
            name: config
      serviceAccountName: virtual-kubelet
➜ kubectl get node --kubeconfig=`pwd`/admin.kubeconfig
NAME              STATUS   ROLES   AGE    VERSION
virtual-kubelet   Ready    agent   5m6s   v1.16.9
An extra virtual-kubelet node has appeared. It is a virtual node: pods scheduled onto it are not run locally but forwarded to another cluster, in this case EKS.
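You can inspect what the virtual node advertises, such as its capacity, labels, and taints (an optional check):

kubectl describe node virtual-kubelet --kubeconfig=admin.kubeconfig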
Finally, create an nginx Deployment in the virtual cluster to test scheduling:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: "nginx"
  template:
    metadata:
      labels:
        app: "nginx"
    spec:
      containers:
        - name: nginx
          image: nginx
          imagePullPolicy: Always
          resources:
            requests:
              memory: 50Mi
              cpu: 50m
            limits:
              memory: 50Mi
              cpu: 50m
➜ kubectl get pod --kubeconfig=`pwd`/admin.kubeconfig
NAME                     READY   STATUS    RESTARTS   AGE
nginx-5fc4bc67cf-h7w9n   1/1     Running   0          15s
It runs! Now check where the nginx pod was actually created: it shows up on EKS.
➜ kubectl get pod
NAME                                  READY   STATUS    RESTARTS   AGE
controller-manager-55447c7f98-vc5lr   1/1     Running   0          117m
metcd-0                               1/1     Running   0          4h39m
my-kubernetes-5844846895-5dc85        1/1     Running   0          122m
my-kubernetes-5844846895-7h9kx        1/1     Running   0          122m
my-kubernetes-5844846895-8zmkt        1/1     Running   0          122m
nginx-5fc4bc67cf-h7w9n                1/1     Running   0          5m33s
scheduler-78c98957f6-hzbm9            1/1     Running   0          121m
virtual-kubelet-764545fc94-bhwbm      1/1     Running   0          14m
At this point we have finished the basic steps of creating and testing a virtual Kubernetes: the nginx deployment runs normally. What remains is to wire up networking components so that nginx can be exposed properly.
Virtual Kubernetes is in fact how EKS itself is implemented, so this example can also be seen as EKS on EKS.
The complete configuration repository is at https://github.com/u2takey/k8sOnk8s.