I have used minikube before: the VM is slow to start, and when I tried to pull the latest version the Aliyun mirror did not yet have the newest image, so the cluster simply would not come up. Very frustrating.
K3D, which is built on K3S, meets these requirements perfectly.
Lightweight Kubernetes: easy to install, half the memory, everything in a binary of less than 200 MB. The complete K3S image is only this big:
REPOSITORY TAG IMAGE ID CREATED SIZE
rancher/k3s v1.18.2-k3s1 e9f6bccce7de 6 months ago 151MB
After I finished installing (and then added traefik, the Kubernetes dashboard and a demo deployment), resource consumption was as follows:
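A quick way to take the same measurement on your own machine (just a sketch; the k3d-edge name filter assumes the default cluster name used later in this post):
# one-shot snapshot of CPU and memory usage of all k3d cluster containers
docker stats --no-stream $(docker ps -qf "name=k3d-edge")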
It is aimed at edge, IoT, CI and development use cases. K3s itself is a fully compliant Kubernetes distribution, packaged as a single small binary, with a number of enhancements over upstream.
k3d creates containerized k3s clusters: using docker, it can start a multi-node k3s cluster on a single machine.
📖 Reference: rancher.cn - 使用 k3d 搭建 k3s 集群 (Setting up a k3s cluster with k3d).
📓 Note: my environment: everything below is run as the root user.
Run:
curl -fL https://octopus-assets.oss-cn-beijing.aliyuncs.com/k3d/cluster-k3s-spinup.sh | bash -
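If you would rather not pipe a remote script straight into bash, a hedged alternative is to download it first, read it, and then run it:
# same script, but downloaded and inspected before execution
curl -fL https://octopus-assets.oss-cn-beijing.aliyuncs.com/k3d/cluster-k3s-spinup.sh -o cluster-k3s-spinup.sh
less cluster-k3s-spinup.sh     # review what the script will do
bash cluster-k3s-spinup.sh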
⚠️ Note: if the installation succeeds, you should see the following line in the log:
please input CTRL+C to stop the local cluster
In other words, to stop the K3S cluster, just press CTRL+C.
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 13549 100 13549 0 0 6784 0 0:00:01 0:00:01 --:--:-- 6781
[INFO] [1107 17:02:03] cleanup proxy config
[INFO] [1107 17:02:03] creating edge cluster with v1.18.2
[INFO] [1107 17:02:03] INGRESS_HTTP_PORT is 54836
[INFO] [1107 17:02:03] INGRESS_HTTPS_PORT is 54837
INFO[0000] Created cluster network with ID ba03de48d65b8e1fbef6ff03cbba0b9e9ad008e7cc81d67d8393c69272a1c4b9
INFO[0000] Add TLS SAN for 0.0.0.0
INFO[0000] Created docker volume k3d-edge-images
INFO[0000] Creating cluster [edge]
INFO[0000] Creating server using docker.io/rancher/k3s:v1.18.2-k3s1...
INFO[0006] SUCCESS: created cluster [edge]
INFO[0006] You can now use the cluster with:
export KUBECONFIG="$(k3d get-kubeconfig --name='edge')"
kubectl cluster-info
[WARN] [1107 17:02:09] default kubeconfig has been backup in /root/.kube/config_k3d_bak
[INFO] [1107 17:02:09] edge cluster's kubeconfig wrote in /root/.kube/config now
[INFO] [1107 17:02:09] waiting node edge-control-plane for ready
INFO[0000] Adding 1 agent-nodes to k3d cluster edge...
INFO[0000] Created agent-node with ID 752aebb8f9bb1af1c5fcf62ff9313163c243835373872595f38de03004257514
[INFO] [1107 17:02:21] waiting node edge-worker for ready
INFO[0000] Adding 1 agent-nodes to k3d cluster edge...
INFO[0000] Created agent-node with ID 7d0aa70e24f387217d3094911a7c0f5fa2f504c1fe3e106b08d00f3a6b11158c
[INFO] [1107 17:02:34] waiting node edge-worker1 for ready
INFO[0000] Adding 1 agent-nodes to k3d cluster edge...
INFO[0000] Created agent-node with ID 7b880c8966f9b8b252c5385ee10167384d9517c87ff60763989b69f5c3f344ab
[INFO] [1107 17:02:47] waiting node edge-worker2 for ready
[WARN] [1107 17:02:59] please input CTRL+C to stop the local cluster
Export KUBECONFIG to access the local k3s cluster:
export KUBECONFIG="$(k3d get-kubeconfig --name='edge')"
kubectl cluster-info
The output is as follows:
Kubernetes master is running at https://0.0.0.0:54835
CoreDNS is running at https://0.0.0.0:54835/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Metrics-server is running at https://0.0.0.0:54835/api/v1/namespaces/kube-system/services/https:metrics-server:/proxy
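📓 Note: as the WARN line in the install log shows, the original kubeconfig was backed up to /root/.kube/config_k3d_bak. A sketch of restoring it once you are done with the local cluster (assuming, as above, that everything runs as root):
# put the original kubeconfig back after the local cluster is stopped
cp /root/.kube/config_k3d_bak /root/.kube/config
kubectl config current-context    # confirm which cluster kubectl now points at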
Run the kubectl get node command to check that the nodes of the local k3s cluster are healthy:
# kubectl get node
NAME STATUS ROLES AGE VERSION
edge-worker Ready <none> 3h17m v1.18.2+k3s1
edge-worker2 Ready <none> 3h17m v1.18.2+k3s1
edge-control-plane Ready master 3h17m v1.18.2+k3s1
edge-worker1 Ready <none> 3h17m v1.18.2+k3s1
Run the kubectl get pod -A command to check that the pods of the local k3s cluster are healthy (traefik is already deployed by default):
kubectl get pod -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system metrics-server-7566d596c8-6h776 1/1 Running 0 3h18m
kube-system local-path-provisioner-6d59f47c7-sz5tp 1/1 Running 0 3h18m
kube-system coredns-8655855d6-lmrkq 1/1 Running 0 3h18m
kube-system svclb-traefik-wxp6k 2/2 Running 0 133m
kube-system svclb-traefik-jls5w 2/2 Running 0 133m
kube-system svclb-traefik-j776k 2/2 Running 0 133m
kube-system svclb-traefik-qbfx4 2/2 Running 0 133m
kube-system helm-install-traefik-jxptl 0/1 Completed 0 120m
kube-system traefik-6cbfb44969-r9fj2 1/1 Running 0 118m
📓 Note: the K3D quick-start script involves the following docker images (only the first one, rancher/k3s, is pulled on the host; the others are pulled inside the running k3s containers):
# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
rancher/k3s v1.18.2-k3s1 e9f6bccce7de 6 months ago 151MB
rancher/klipper-helm v0.2.5 6207e2a3f522 6 months ago 136MB
rancher/library-traefik 1.7.19-amd64 aa764f7db305 12 months ago 85.7MB
rancher/metrics-server v0.3.6 9dd718864ce6 13 months ago 39.9MB
rancher/local-path-provisioner v0.0.11 9d12f9848b99 13 months ago 36.2MB
rancher/coredns-coredns 1.6.3 c4d3d16fe508 14 months ago 44.3MB
The K3D quick-start script starts 4 docker containers, which act as 4 nodes:
sudo docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
7b880c8966f9 rancher/k3s:v1.18.2-k3s1 "/bin/k3s agent --no…" 3 hours ago Up 3 hours k3d-edge-worker-3
7d0aa70e24f3 rancher/k3s:v1.18.2-k3s1 "/bin/k3s agent --no…" 3 hours ago Up 3 hours k3d-edge-worker-2
752aebb8f9bb rancher/k3s:v1.18.2-k3s1 "/bin/k3s agent --no…" 3 hours ago Up 3 hours k3d-edge-worker-1
dca9851cf5d6 rancher/k3s:v1.18.2-k3s1 "/bin/k3s server --h…" 3 hours ago Up 3 hours 0.0.0.0:54835->54835/tcp, 0.0.0.0:54836->80/tcp, 0.0.0.0:54837->443/tcp k3d-edge-server
As shown above, there is 1 k3s server (the control plane) and 3 k3s agents. The k3s server exposes 3 random host ports: 54835 (the Kubernetes API), 54836 (mapped to container port 80, the HTTP ingress) and 54837 (mapped to container port 443, the HTTPS ingress).
So to reach applications deployed in the cluster we use the two ingress ports: http://localhost:54836 or https://localhost:54837. Right after the cluster comes up there is no Ingress at all, so both addresses return a 404.
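A quick check of that 404 from the host (the port numbers are the random ones from this particular run, so substitute your own):
# both ingress ports answer, but no Ingress rule matches yet
curl -i  http://localhost:54836/
curl -ik https://localhost:54837/    # -k because traefik serves a self-signed certificate
# both requests return: 404 page not found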
Also, the default script does not enable Traefik's dashboard, which makes management inconvenient, so let's enable it.
First, get into the k3s server container. The container has no /bin/bash, only /bin/sh, as you can see here:
# docker exec -it <k3s server container id> ls /bin
addgroup cat containerd-shim df expr fstrim i2cdetect ipcs kubectl lsof mkswap openvt ptx runcon sha512sum swapoff tr unxz whoami
adduser charon containerd-shim-runc-v2 diff factor fuser i2cdump iplink last lspci mktemp partprobe pwd runlevel shred swapon traceroute unzip xargs
ar chattr coreutils dir fallocate getopt i2cget ipneigh less lsscsi modprobe passwd rdate sed shuf switch_root true uptime xtables-legacy-multi
arch chcon cp dircolors false getty i2cset iproute link lsusb more paste readlink seq sleep sync truncate users xxd
arp check-config cpio dirname fbset ginstall id iprule linux32 lzcat mountpoint patch readprofile setarch slirp4netns sysctl tsort usleep xz
arping chgrp crictl dmesg fdflush grep ifconfig ipset linux64 lzma mt pathchk realpath setconsole socat syslogd tty uudecode xzcat
ash chmod crond dnsd fdformat groups ifdown iptables linuxrc lzopcat mv pidof reboot setfattr sort tac ubirename uuencode yes
aux chown crontab dnsdomainname fdisk gunzip ifup iptables-restore ln makedevs nameif pigz renice setkeycodes split tail udhcpc vconfig zcat
awk chroot csplit dos2unix fgrep gzip inetd iptables-save loadfont md5sum netstat ping reset setlogcons start-stop-daemon tar uevent vdir
b2sum chrt ctr du find halt init iptunnel loadkmap mdev nice pinky resize setpriv stat tc umount vi
base32 chvt cut dumpkmap flannel hdparm insmod join logger mesg nl pipe_progress resume setserial strings tee uname vlock
base64 cksum date ebtables flock head install k3s login microcom nohup pivot_root rm setsid stty telnet unexpand w
basename clear dc echo fmt hexdump ip k3s-agent logname mkdir nproc portmap rmdir sh su test uniq watch
blkid cmp dd egrep fold hexedit ip6tables k3s-server loopback mkdosfs nsenter poweroff rmmod sha1sum sulogin tftp unix2dos watchdog
bridge cni deallocvt eject free host-local ip6tables-restore kill losetup mke2fs nslookup pr route sha224sum sum time unlink wc
bunzip2 comm delgroup env freeramdisk hostid ip6tables-save killall ls mkfifo nuke printenv run-init sha256sum svc timeout unlzma wget
busybox conntrack deluser ether-wake fsck hostname ipaddr killall5 lsattr mknod numfmt printf run-parts sha384sum svok top unlzop which
bzcat containerd devmem expand fsfreeze hwclock ipcrm klogd lsmod mkpasswd od ps runc sha3sum swanctl touch unpigz who
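The <k3s server container id> placeholder used in these docker exec commands can be looked up by container name, since the docker ps output above shows that the server container is called k3d-edge-server:
# grab the server container id by name
docker ps -qf "name=k3d-edge-server"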
So enter the container via /bin/sh:
# docker exec -it <k3s server container id> /bin/sh
--------------- now inside the container --------------
/ # cd /var/lib/rancher/k3s/server/manifests
/var/lib/rancher/k3s/server/manifests # vi traefik.yaml
The edited traefik.yaml looks like this (adding dashboard.enabled: "true"):
apiVersion: helm.cattle.io/v1
kind: HelmChart
metadata:
  name: traefik
  namespace: kube-system
spec:
  chart: https://%{KUBERNETES_API}%/static/charts/traefik-1.81.0.tgz
  valuesContent: |-
    rbac:
      enabled: true
    ssl:
      enabled: true
    metrics:
      prometheus:
        enabled: true
    kubernetes:
      ingressEndpoint:
        useDefaultPublishedService: true
    dashboard:
      enabled: true
    image: "rancher/library-traefik"
    tolerations:
      - key: "CriticalAddonsOnly"
        operator: "Exists"
      - key: "node-role.kubernetes.io/master"
        operator: "Exists"
        effect: "NoSchedule"
After saving the file, traefik is automatically redeployed, as shown below:
# kubectl get events -n kube-system
LAST SEEN TYPE REASON OBJECT MESSAGE
43s Normal Pulled pod/helm-install-traefik-jxptl Successfully pulled image "rancher/klipper-helm:v0.2.5"
43s Normal Created pod/helm-install-traefik-jxptl Created container helm
43s Normal Started pod/helm-install-traefik-jxptl Started container helm
43s Normal ScalingReplicaSet deployment/traefik Scaled up replica set traefik-6cbfb44969 to 1
43s Normal SuccessfulCreate replicaset/traefik-6cbfb44969 Created pod: traefik-6cbfb44969-r9fj2
<unknown> Normal Scheduled pod/traefik-6cbfb44969-r9fj2 Successfully assigned kube-system/traefik-6cbfb44969-r9fj2 to edge-worker2
42s Normal Pulling pod/traefik-6cbfb44969-r9fj2 Pulling image "rancher/library-traefik:1.7.19"
42s Normal Completed job/helm-install-traefik Job completed
41s Normal SandboxChanged pod/helm-install-traefik-jxptl Pod sandbox changed, it will be killed and re-created.
9s Normal Pulled pod/traefik-6cbfb44969-r9fj2 Successfully pulled image "rancher/library-traefik:1.7.19"
9s Normal Created pod/traefik-6cbfb44969-r9fj2 Created container traefik
9s Normal Started pod/traefik-6cbfb44969-r9fj2 Started container traefik
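Before opening the dashboard, it is worth making sure the redeployed traefik is actually ready; a simple check with plain kubectl (app=traefik is the same label used by the port-forward command further down):
# wait until the new traefik Deployment has fully rolled out
kubectl -n kube-system rollout status deploy/traefik
kubectl -n kube-system get pods -l app=traefik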
Once it is deployed, an ingress for the dashboard is configured automatically:
# kubectl get ingress -A
NAMESPACE NAME CLASS HOSTS ADDRESS PORTS AGE
kube-system traefik-dashboard <none> traefik.example.com 172.18.0.2 80 149m
So add a hosts entry: 127.0.0.1 traefik.example.com. You can then open http://traefik.example.com:54836/dashboard/ and see the Traefik dashboard.
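A sketch of adding the hosts entry and verifying the dashboard responds (or skip /etc/hosts entirely and fake the Host header with curl):
# add the hosts entry (Linux/macOS)
echo "127.0.0.1 traefik.example.com" | sudo tee -a /etc/hosts
# or test without touching /etc/hosts
curl -i -H "Host: traefik.example.com" http://localhost:54836/dashboard/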
📓 Note: there is another way to reach it: kubectl port-forward. For example:
$ kubectl port-forward $(kubectl get pods --selector "app=traefik" --output=name -n kube-system) --address 0.0.0.0 8080:8080 -n kube-system
after which the traefik admin page is available at http://localhost:8080/dashboard/.
Next, deploy the whoami application as a test.
$ kubectl create deploy whoami --image containous/whoami
deployment.apps/whoami created
$ kubectl expose deploy whoami --port 80
service/whoami exposed
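Optionally confirm that both the deployment and the ClusterIP service exist before wiring up the Ingress:
# deployment/whoami and service/whoami should both be listed
kubectl get deploy,svc whoami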
Then we define an Ingress rule to use our new Traefik; Traefik can read both its own IngressRoute CRD and traditional Ingress resources.
vi whoami-ingress.yaml
with the following content:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: whoami
  annotations:
    traefik.ingress.kubernetes.io/router.entrypoints: web,websecure
    traefik.ingress.kubernetes.io/router.tls: "true"
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: whoami
          servicePort: 80
Apply it with kubectl apply:
kubectl apply -f whoami-ingress.yaml -n default
In this example the whoami service is exposed on both the HTTP and HTTPS entrypoints; every URL is routed to the service, and the new Ingress shows up in the Traefik dashboard.
To test the application, simply open http://localhost:54836/ in a browser. This works because the Traefik we installed above automatically created a LoadBalancer Service. The port number is needed because the k3s server runs inside a container and its port 80 is mapped to 54836 on the host.
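A quick check from the command line (54836/54837 are again the random ingress ports of this run):
# whoami echoes back the request details (hostname, IPs, headers)
curl  http://localhost:54836/
curl -k https://localhost:54837/    # the same rule is also bound to the websecure entrypoint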
Next, install the Kubernetes Dashboard:
GITHUB_URL=https://github.com/kubernetes/dashboard/releases
VERSION_KUBE_DASHBOARD=$(curl -w '%{url_effective}' -I -L -s -S ${GITHUB_URL}/latest -o /dev/null | sed -e 's|.*/||')
kubectl create -f https://raw.githubusercontent.com/kubernetes/dashboard/${VERSION_KUBE_DASHBOARD}/aio/deploy/recommended.yaml
The output is as follows:
namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created
Verify that the pods have started correctly:
# kubectl get pod -n kubernetes-dashboard
NAME READY STATUS RESTARTS AGE
dashboard-metrics-scraper-6b4884c9d5-ltk42 1/1 Running 0 14m
kubernetes-dashboard-7d8574ffd9-sptn6 1/1 Running 0 98s
⚠️ Important: the admin-user created in this guide will have administrative privileges in the dashboard.
Create the following resource manifest files:
vi dashboard.admin-user.yml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
vi dashboard.admin-user-role.yml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
Deploy the admin-user configuration:
kubectl create -f dashboard.admin-user.yml -f dashboard.admin-user-role.yml
Then retrieve the admin-user bearer token:
kubectl -n kubernetes-dashboard describe secret admin-user-token | grep ^token
The result is as follows:
token: eyJhbGciOiJSUzI1NiIsImtpZCI6Im9XNENjc0VlSzVBTDJGRWpPT2VuY1pkbzNJblYybFFwY2YxQnBvZVlMVlEifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLXA1Y253Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiI0NmI4NGFkYS02MDQ3LTQzN2EtODk2My1lY2NmZWQ4MjE0ZDQiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZXJuZXRlcy1kYXNoYm9hcmQ6YWRtaW4tdXNlciJ9.N8Zhsf2JU5Hoa8yfhrspJbMGP7AFmfs2JeWXVpDksAEMfWf5mI-MXYcqMkbZ9_Qbwp-h9S7k7oZE41lUp8UXlDWi0Ovm4I4fsuoWqq-aJoyt-c060bWNla1edVZ5BzMTanIYzJHPjS7-cOnsxqg-EtXfdN3JRsiE0QevLvJLhYU37HFc7-cImJ8iH8-r-GHCD8MmuBbTV0EBidLmSo-BdWC5hcZoYghgNtfnMkN0p1e3O23EPRO2XDmaw_lVN4TNgZXPS9hirBD1AZxm1ZE1Iyo2mSOgYjCNQOF8IcaUtjTGqt4RzK4R9AWRbL9z-HMbK_JamcQvDz3fnW3aauCezQ
Run:
kubectl proxy
The dashboard can now be reached at:
http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/
Sign in with the admin-user Bearer Token.
Alternatively, use port-forward:
$ kubectl port-forward $(kubectl get pods --selector "k8s-app=kubernetes-dashboard" --output=name -n kubernetes-dashboard) --address 0.0.0.0 8443:8443 -n kubernetes-dashboard
and open https://localhost:8443/ (the dashboard uses a self-signed certificate).
Finally, as a further test, install Jenkins via Helm (using the Azure China mirror of the old stable chart repository):
# helm repo add stable http://mirror.azure.cn/kubernetes/charts
# helm repo update
# helm install jenkins stable/jenkins
WARNING: This chart is deprecated
NAME: jenkins
LAST DEPLOYED: Sat Nov 7 22:25:02 2020
NAMESPACE: default
STATUS: deployed
REVISION: 1
NOTES:
*******************
****DEPRECATED*****
*******************
* The Jenkins chart is deprecated. Future development has been moved to https://github.com/jenkinsci/helm-charts
1. Get your 'admin' user password by running:
printf $(kubectl get secret --namespace default jenkins -o jsonpath="{.data.jenkins-admin-password}" | base64 --decode);echo
2. Get the Jenkins URL to visit by running these commands in the same shell:
export POD_NAME=$(kubectl get pods --namespace default -l "app.kubernetes.io/component=jenkins-master" -l "app.kubernetes.io/instance=jenkins" -o jsonpath="{.items[0].metadata.name}")
echo http://127.0.0.1:8080
kubectl --namespace default port-forward $POD_NAME 8080:8080
3. Login with the password from step 1 and the username: admin
4. Use Jenkins Configuration as Code by specifying configScripts in your values.yaml file, see documentation: http:///configuration-as-code and examples: https://github.com/jenkinsci/configuration-as-code-plugin/tree/master/demos
For more information on running Jenkins on Kubernetes, visit:
https://cloud.google.com/solutions/jenkins-on-container-engine
For more information about Jenkins Configuration as Code, visit:
https://jenkins.io/projects/jcasc/
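A minimal check that the release actually came up (the label selector is taken from the NOTES above):
# the chart should be listed as deployed and its pod should eventually reach Running
helm list
kubectl get pods -l "app.kubernetes.io/instance=jenkins"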
To sum up, K3S/K3D brings the following advantages: