This is the 62nd article in the "DevOps云学堂" series, growing together with you.
How to get a lightweight Kubernetes up and running on your workstation. K3s is a lightweight, certified Kubernetes distribution designed for resource-constrained environments such as edge devices, IoT hardware, and small-scale deployments. Developed by Rancher Labs, it aims to provide a minimal, easy-to-use Kubernetes distribution that consumes fewer resources while remaining fully compatible with the Kubernetes API.
Overall, K3s offers a lightweight, easy-to-use, and resource-efficient Kubernetes distribution that is particularly useful for edge computing, IoT, development/testing, and small-scale deployments. In this walkthrough we run K3s locally via k3d, a utility that launches each K3s node as a Docker container.
(base) skondla@Sams-MBP:Downloads $ brew search k3d
==> Formulae
k3d ✔ f3d
# k3d is already installed on my macbook
(base) skondla@Sams-MBP:Downloads $ brew update && brew install k3d
Updated 3 taps (weaveworks/tap, homebrew/core and homebrew/cask).
==> New Formulae
bbot erlang@25 trzsz-ssh
==> New Casks
whisky
==> Outdated Formulae
aws-iam-authenticator eksctl libuv
You have 3 outdated formulae installed.
You can upgrade them with brew upgrade
or list them with brew outdated.
Warning: k3d 5.5.1 is already installed and up-to-date.
To reinstall 5.5.1, run:
brew reinstall k3d
(base) skondla@Sams-MBP:Downloads $ which k3d
/usr/local/bin/k3d
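Later commands in this article use `k` as shorthand for `kubectl`. That alias is not set up anywhere in the transcript, so it is presumably defined in the author's shell profile; a minimal sketch of that setup:

```shell
# `k` as used below is assumed to be a shell alias for kubectl.
# Add to ~/.zshrc or ~/.bashrc:
alias k=kubectl
# zsh users can also make tab completion follow the alias:
# compdef k=kubectl
```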
(base) skondla@Sams-MBP:~ $ k3d cluster create devhacluster --servers 3 --agents 1
INFO[0000] Prep: Network
INFO[0000] Created network 'k3d-devhacluster'
INFO[0000] Created image volume k3d-devhacluster-images
INFO[0000] Starting new tools node...
INFO[0000] Creating initializing server node
INFO[0000] Creating node 'k3d-devhacluster-server-0'
INFO[0000] Starting Node 'k3d-devhacluster-tools'
INFO[0001] Creating node 'k3d-devhacluster-server-1'
INFO[0002] Creating node 'k3d-devhacluster-server-2'
INFO[0002] Creating node 'k3d-devhacluster-agent-0'
INFO[0002] Creating LoadBalancer 'k3d-devhacluster-serverlb'
INFO[0002] Using the k3d-tools node to gather environment information
INFO[0002] Starting new tools node...
INFO[0002] Starting Node 'k3d-devhacluster-tools'
INFO[0003] Starting cluster 'devhacluster'
INFO[0003] Starting the initializing server...
INFO[0004] Starting Node 'k3d-devhacluster-server-0'
INFO[0005] Starting servers...
INFO[0005] Starting Node 'k3d-devhacluster-server-1'
INFO[0027] Starting Node 'k3d-devhacluster-server-2'
INFO[0040] Starting agents...
INFO[0040] Starting Node 'k3d-devhacluster-agent-0'
INFO[0042] Starting helpers...
INFO[0042] Starting Node 'k3d-devhacluster-serverlb'
INFO[0049] Injecting records for hostAliases (incl. host.k3d.internal) and for 6 network members into CoreDNS configmap...
INFO[0051] Cluster 'devhacluster' created successfully!
INFO[0051] You can now use it like this:
kubectl cluster-info
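k3d writes the new cluster into your kubeconfig and prefixes the context name with `k3d-`, so the cluster above is reachable via the context `k3d-devhacluster`. A guarded sketch for switching to it (a no-op when kubectl or the context is absent):

```shell
# k3d names kubeconfig contexts k3d-<cluster-name>.
ctx="k3d-devhacluster"
if command -v kubectl >/dev/null 2>&1 \
   && kubectl config get-contexts "$ctx" >/dev/null 2>&1; then
  kubectl config use-context "$ctx"   # point kubectl at the new cluster
else
  echo "context $ctx not found; skipping context switch"
fi
```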
(base) skondla@Sams-MBP:~ $ k get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
k3d-devhacluster-agent-0 Ready <none> 76s v1.26.4+k3s1 172.23.0.6 <none> K3s dev 5.15.49-linuxkit-pr containerd://1.6.19-k3s1
k3d-devhacluster-server-0 Ready control-plane,etcd,master 109s v1.26.4+k3s1 172.23.0.3 <none> K3s dev 5.15.49-linuxkit-pr containerd://1.6.19-k3s1
k3d-devhacluster-server-1 Ready control-plane,etcd,master 92s v1.26.4+k3s1 172.23.0.4 <none> K3s dev 5.15.49-linuxkit-pr containerd://1.6.19-k3s1
k3d-devhacluster-server-2 Ready control-plane,etcd,master 79s v1.26.4+k3s1 172.23.0.5 <none> K3s dev 5.15.49-linuxkit-pr containerd://1.6.19-k3s1
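A quick sanity check that everything registered is to count control-plane versus worker rows. The sketch below runs against the captured listing above; with a live cluster, replace the variable with the output of `kubectl get nodes --no-headers`:

```shell
# Count node roles from a `kubectl get nodes` listing.
# Captured sample from the cluster created above:
nodes='k3d-devhacluster-agent-0   Ready  <none>
k3d-devhacluster-server-0  Ready  control-plane,etcd,master
k3d-devhacluster-server-1  Ready  control-plane,etcd,master
k3d-devhacluster-server-2  Ready  control-plane,etcd,master'

servers=$(printf '%s\n' "$nodes" | grep -c control-plane)
agents=$(printf '%s\n' "$nodes" | grep -c '<none>')
echo "servers=$servers agents=$agents"   # servers=3 agents=1
```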
(base) skondla@Sams-MBP:~ $ k get po -o wide -A
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kube-system coredns-59b4f5bbd5-hkdm6 1/1 Running 0 5m34s 10.42.0.5 k3d-devhacluster-server-0 <none> <none>
kube-system helm-install-traefik-crd-gphwk 0/1 Completed 0 5m34s 10.42.0.2 k3d-devhacluster-server-0 <none> <none>
kube-system helm-install-traefik-r8w4p 0/1 Completed 1 5m34s 10.42.0.3 k3d-devhacluster-server-0 <none> <none>
kube-system local-path-provisioner-76d776f6f9-dlkfm 1/1 Running 0 5m34s 10.42.0.4 k3d-devhacluster-server-0 <none> <none>
kube-system metrics-server-7b67f64457-2mgv8 1/1 Running 0 5m34s 10.42.0.6 k3d-devhacluster-server-0 <none> <none>
kube-system svclb-traefik-cabd407d-jz4v5 2/2 Running 0 5m23s 10.42.1.3 k3d-devhacluster-server-1 <none> <none>
kube-system svclb-traefik-cabd407d-lpn5n 2/2 Running 0 5m23s 10.42.0.7 k3d-devhacluster-server-0 <none> <none>
kube-system svclb-traefik-cabd407d-rzqpb 2/2 Running 0 5m14s 10.42.3.2 k3d-devhacluster-agent-0 <none> <none>
kube-system svclb-traefik-cabd407d-zgs5m 2/2 Running 0 5m16s 10.42.2.2 k3d-devhacluster-server-2 <none> <none>
kube-system traefik-56b8c5fb5c-2mtmf 1/1 Running 0 5m23s 10.42.1.2 k3d-devhacluster-server-1 <none> <none>
rabbitmq-system rabbitmq-cluster-operator-54b4bf5cbf-ghrrr 1/1 Running 0 11s 10.42.3.3 k3d-devhacluster-agent-0 <none> <none>
(base) skondla@Sams-MBP:~ $ curl -sL https://github.com/operator-framework/operator-lifecycle-manager/releases/download/v0.25.0/install.sh | bash -s v0.25.0
customresourcedefinition.apiextensions.k8s.io/catalogsources.operators.coreos.com created
customresourcedefinition.apiextensions.k8s.io/clusterserviceversions.operators.coreos.com created
customresourcedefinition.apiextensions.k8s.io/installplans.operators.coreos.com created
customresourcedefinition.apiextensions.k8s.io/olmconfigs.operators.coreos.com created
customresourcedefinition.apiextensions.k8s.io/operatorconditions.operators.coreos.com created
customresourcedefinition.apiextensions.k8s.io/operatorgroups.operators.coreos.com created
customresourcedefinition.apiextensions.k8s.io/operators.operators.coreos.com created
customresourcedefinition.apiextensions.k8s.io/subscriptions.operators.coreos.com created
customresourcedefinition.apiextensions.k8s.io/catalogsources.operators.coreos.com condition met
customresourcedefinition.apiextensions.k8s.io/clusterserviceversions.operators.coreos.com condition met
customresourcedefinition.apiextensions.k8s.io/installplans.operators.coreos.com condition met
customresourcedefinition.apiextensions.k8s.io/olmconfigs.operators.coreos.com condition met
customresourcedefinition.apiextensions.k8s.io/operatorconditions.operators.coreos.com condition met
customresourcedefinition.apiextensions.k8s.io/operatorgroups.operators.coreos.com condition met
customresourcedefinition.apiextensions.k8s.io/operators.operators.coreos.com condition met
customresourcedefinition.apiextensions.k8s.io/subscriptions.operators.coreos.com condition met
namespace/olm created
namespace/operators created
serviceaccount/olm-operator-serviceaccount created
clusterrole.rbac.authorization.k8s.io/system:controller:operator-lifecycle-manager created
clusterrolebinding.rbac.authorization.k8s.io/olm-operator-binding-olm created
olmconfig.operators.coreos.com/cluster created
deployment.apps/olm-operator created
deployment.apps/catalog-operator created
clusterrole.rbac.authorization.k8s.io/aggregate-olm-edit created
clusterrole.rbac.authorization.k8s.io/aggregate-olm-view created
operatorgroup.operators.coreos.com/global-operators created
operatorgroup.operators.coreos.com/olm-operators created
clusterserviceversion.operators.coreos.com/packageserver created
catalogsource.operators.coreos.com/operatorhubio-catalog created
Waiting for deployment "olm-operator" rollout to finish: 0 of 1 updated replicas are available...
deployment "olm-operator" successfully rolled out
deployment "catalog-operator" successfully rolled out
Package server phase: Succeeded
deployment "packageserver" successfully rolled out
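Once the install script finishes, OLM's own workloads (olm-operator, catalog-operator, packageserver) run in the `olm` namespace alongside the default `operatorhubio-catalog` catalog source. A hedged verification sketch, guarded so it is a no-op without a reachable cluster:

```shell
if kubectl cluster-info >/dev/null 2>&1; then
  kubectl get pods -n olm             # olm-operator, catalog-operator, packageserver
  kubectl get catalogsource -n olm    # operatorhubio-catalog should be listed
  checked=yes
else
  echo "no cluster reachable; skipping OLM check"
  checked=no
fi
```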
Check the namespaces:
(base) skondla@Sams-MBP:~ $ k get ns
NAME STATUS AGE
default Active 21m
flaskapp1-namespace Active 12m
kube-node-lease Active 21m
kube-public Active 21m
kube-system Active 21m
olm Active 36s
operators Active 36s
rabbitmq-system Active 16m
Deploy Prometheus:
(base) skondla@Sams-MBP:~ $ kubectl create -f https://operatorhub.io/install/prometheus.yaml
subscription.operators.coreos.com/my-prometheus created
(base) skondla@Sams-MBP:~ $ kubectl get csv -n operators
NAME DISPLAY VERSION REPLACES PHASE
elastic-cloud-eck.v2.8.0 Elasticsearch (ECK) Operator 2.8.0 elastic-cloud-eck.v2.7.0 Succeeded
A few moments later, the Prometheus operator's CSV appears:
(base) skondla@Sams-MBP:~ $ kubectl get csv -n operators
NAME DISPLAY VERSION REPLACES PHASE
elastic-cloud-eck.v2.8.0 Elasticsearch (ECK) Operator 2.8.0 elastic-cloud-eck.v2.7.0 Succeeded
prometheusoperator.v0.65.1 Prometheus Operator 0.65.1 prometheusoperator.0.47.0 Succeeded
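Installs via OLM are asynchronous, which is why the CSV only shows up on the second check above. Instead of polling manually, you can block until the CSV reaches the Succeeded phase; a hedged sketch using the CSV name from the output above:

```shell
# Wait for the Prometheus operator CSV to report phase Succeeded.
csv="prometheusoperator.v0.65.1"
if kubectl cluster-info >/dev/null 2>&1; then
  kubectl wait "csv/$csv" -n operators \
    --for=jsonpath='{.status.phase}'=Succeeded --timeout=300s
else
  echo "no cluster reachable; skipping wait for $csv"
fi
```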
(base) skondla@Sams-MBP:~ $ curl -sL https://github.com/operator-framework/operator-lifecycle-manager/releases/download/v0.25.0/install.sh | bash -s v0.25.0
OLM is already installed in olm namespace. Exiting...
(base) skondla@Sams-MBP:~ $ kubectl create -f https://operatorhub.io/install/grafana-operator.yaml
subscription.operators.coreos.com/my-grafana-operator created
(base) skondla@Sams-MBP:~ $ kubectl get csv -n operators
NAME DISPLAY VERSION REPLACES PHASE
elastic-cloud-eck.v2.8.0 Elasticsearch (ECK) Operator 2.8.0 elastic-cloud-eck.v2.7.0 Succeeded
prometheusoperator.v0.65.1 Prometheus Operator 0.65.1 prometheusoperator.0.47.0 Succeeded
Deploy Grafana. Save the following manifest (e.g. as grafana.yaml):
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: grafana-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: grafana
  name: grafana
spec:
  selector:
    matchLabels:
      app: grafana
  template:
    metadata:
      labels:
        app: grafana
    spec:
      securityContext:
        fsGroup: 472
        supplementalGroups:
          - 0
      containers:
        - name: grafana
          image: grafana/grafana:9.1.0
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 3000
              name: http-grafana
              protocol: TCP
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /robots.txt
              port: 3000
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 30
            successThreshold: 1
            timeoutSeconds: 2
          livenessProbe:
            failureThreshold: 3
            initialDelaySeconds: 30
            periodSeconds: 10
            successThreshold: 1
            tcpSocket:
              port: 3000
            timeoutSeconds: 1
          resources:
            requests:
              cpu: 250m
              memory: 750Mi
          volumeMounts:
            - mountPath: /var/lib/grafana
              name: grafana-pv
      volumes:
        - name: grafana-pv
          persistentVolumeClaim:
            claimName: grafana-pvc
---
apiVersion: v1
kind: Service
metadata:
  name: grafana
spec:
  ports:
    - port: 3000
      protocol: TCP
      targetPort: http-grafana
  selector:
    app: grafana
  sessionAffinity: None
  type: LoadBalancer
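Assuming the manifest above is saved as grafana.yaml (a hypothetical filename), it can be applied and the rollout watched like this; the snippet is guarded so it skips when no cluster or file is present:

```shell
manifest="grafana.yaml"
if kubectl cluster-info >/dev/null 2>&1 && [ -f "$manifest" ]; then
  kubectl apply -f "$manifest"
  kubectl rollout status deployment/grafana --timeout=180s
else
  echo "cluster or $manifest not available; skipping apply"
fi
```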
Start a local proxy to the Grafana service:
(base) skondla@Sams-MBP:grafana $ kubectl port-forward service/grafana 3000:3000
Forwarding from 127.0.0.1:3000 -> 3000
Forwarding from [::1]:3000 -> 3000
Handling connection for 3000
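With the port-forward running, Grafana is reachable at http://localhost:3000 (the default login is admin/admin, and you are prompted to change it on first sign-in). A quick check against Grafana's unauthenticated health endpoint:

```shell
# /api/health returns Grafana's build and database status as JSON.
health=$(curl -fsS http://localhost:3000/api/health 2>/dev/null) \
  || health="port-forward not running"
echo "$health"
```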
Translated from: https://kondlawork.medium.com/lightweight-kubernetes-k3s-on-local-machine-with-grafana-docker-5f5f8b514dfa