[toc]
Q: What is a resource controller? A: Kubernetes ships with many built-in controllers. Each works like a state machine that drives the concrete state and behavior of Pods toward a desired state.
Q: Why use controllers?
A: As noted earlier, the Pod is the smallest deployable unit in k8s. To create Pod replicas in batches, scale them out and in, and roll Pods back, you must use a Controller.
Q: What types of controllers are there?
A: The main ones covered below: ReplicationController (RC), ReplicaSet (RS), Deployment, DaemonSet, Job/CronJob, StatefulSet, plus the HPA autoscaler that builds on them.
Q: What is a Replication Controller (RC)?
A: ReplicationController is a basic controller type, commonly abbreviated RC (replica controller). It ensures that the specified number of Pod "replicas" is running at any given time.
Q: What does an RC do?
A: It keeps the number of replicas of a containerized application at the user-defined count: if a container exits abnormally, a new Pod is automatically created to replace it, and any surplus Pods are automatically reclaimed.
Q: How does an RC work?
A: After an RC is created, the Controller Manager component on the K8s Master receives the creation notification and creates enough Pods to satisfy the replica count. It then periodically scans the live target Pods in the system, matching them by label, and ensures the number of target Pod instances exactly equals the replica count declared in the RC; target Pods exceeding the expected count are destroyed.
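The scan-and-match loop described above can be sketched as a reconciliation function. This is an illustrative model of one control-loop pass, not the actual controller-manager code:

```python
def reconcile(desired_replicas, pods, selector):
    """Compare the desired replica count against label-matched live Pods
    and return the corrective actions, mimicking one RC control-loop pass."""
    # Keep only Pods whose labels satisfy the RC's selector.
    matched = [p for p in pods
               if all(p["labels"].get(k) == v for k, v in selector.items())]
    diff = desired_replicas - len(matched)
    if diff > 0:
        return {"create": diff, "delete": []}   # scale up to the expected count
    # Surplus Pods beyond the expected count are destroyed.
    return {"create": 0,
            "delete": [p["name"] for p in matched[desired_replicas:]]}
```

For example, with `replicas: 3`, one matching Pod means two must be created; four matching Pods means one is deleted.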
Q: What are the parts of an RC?
A: An RC manifest has three parts: the expected replica count (replicas), a label selector used to find target Pods, and the Pod template used to create new replicas.
Q: RC usage example

```yaml
# rc-demo.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: rc-demo              # RC name
spec:
  replicas: 3                # expected number of Pod replicas
  selector:
    app: nginx
  template:
    metadata:
      name: nginx            # Pod name
      labels:                # Pod template labels
        app: nginx           # key/value pair
    spec:
      containers:
      - name: nginx          # container name
        image: nginx:latest
        ports:
        - containerPort: 80
```
Hands-on:

```bash
# Create
kubectl create -f rc-demo.yaml
# Get the RC and its details
kubectl get rc/rc-demo
kubectl describe rc/rc-demo
# Get the Pods' information
kubectl get pod -l app=nginx
# Delete a Pod created by the RC (the RC recreates it to satisfy the replica count)
kubectl delete pod nginx-*
# Delete the RC via its manifest file
kubectl delete -f rc-demo.yaml
```
Tips: To delete an RC without deleting the Pods it built (e.g. to update or replace the RC), pass the delete sub-flag --cascade=false (no cascading). After the old RC is gone you can create a new RC to replace it; as long as the old and new .spec.selector match, the new RC adopts the old Pods.
Tips: In newer Kubernetes versions the official recommendation is to abandon ReplicationController in favor of the more capable ReplicaSet (RS), which is the subject of the next section.
Q: What is a ReplicaSet?
A: ReplicaSet (RS), introduced in k8s v1.2, can be seen as an upgraded RC. There is no essential difference between ReplicaSet and ReplicationController apart from the name, except that RS additionally supports set-based selectors on labels, which makes resource selection more flexible.
Q: What does a ReplicaSet do?
A: Like an RC, an RS ensures that the expected number of Pod replicas is running. Although an RS can be used standalone, it mainly serves Deployment as the mechanism that coordinates Pod creation, deletion, and updates; when you use a Deployment you need not worry about the RS, because the Deployment manages it directly.
Tips: quickly look up a controller's apiVersion

```bash
$ kubectl api-versions | grep $(kubectl api-resources | grep "ReplicaSet" | awk -F " " '{print $3}')
# apps/v1
```
Pod resource file example:

```bash
cat > pod-demo.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-demo
  labels:
    app: pod-demo
spec:
  containers:
  - name: nginx-pod-demo
    image: harbor.weiyigeek.top/test/nginx:v1.0
    ports:
    - containerPort: 80
EOF
```
ReplicaSet resource file example:

```bash
cat > replicaset-demo.yaml <<'EOF'
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: replicaset-demo          # ReplicaSet name
  labels:                        # ReplicaSet labels
    app: replicaset-demo
spec:
  replicas: 3                    # three replicas
  selector:                      # label selector
    matchLabels:                 # matchLabels is a map of {key,value} pairs
      app: replicaset-demo       # note: must match the label in the Pod template below
    # matchExpressions:          # set-based selector, shown for illustration; if enabled,
    #                            # the Pod template labels must also satisfy it (e.g. tier: frontend)
    # - {key: tier, operator: In, values: [frontend]}
  template:                      # Pod template
    metadata:
      labels:
        app: replicaset-demo
    spec:                        # template details
      containers:
      - name: nginx-replicaset-demo
        image: harbor.weiyigeek.top/test/nginx:v1.0
        command: ["sh","-c","java -jar nginx-app-${RELEASE_VER}.jar"]
        env:                     # environment variables (a pattern worth learning for real deployments)
        - name: GET_HOSTS_FROM
          value: dns
        - name: RELEASE_VER
          value: "1.3.5"
        ports:                   # exposed ports
        - containerPort: 80
EOF
```
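How matchLabels and set-based matchExpressions combine (every term of both must hold) can be modeled in a few lines. This is an illustrative sketch of the selector semantics, not client-library code:

```python
def selector_matches(selector, labels):
    """Return True when a Pod's labels satisfy every term of the selector.
    matchLabels entries are exact key=value tests; matchExpressions support
    the set-based operators In, NotIn, Exists, DoesNotExist."""
    for key, value in selector.get("matchLabels", {}).items():
        if labels.get(key) != value:
            return False
    for expr in selector.get("matchExpressions", []):
        key, op = expr["key"], expr["operator"]
        if op == "In" and labels.get(key) not in expr["values"]:
            return False
        if op == "NotIn" and labels.get(key) in expr["values"]:
            return False
        if op == "Exists" and key not in labels:
            return False
        if op == "DoesNotExist" and key in labels:
            return False
    return True
```

With a selector requiring `app: replicaset-demo` plus `tier In [frontend]`, a Pod labeled only `app: replicaset-demo` does not match; both terms must be satisfied by the template labels.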
Workflow:

```bash
# (1) Deploy a standalone (unmanaged) Pod and a ReplicaSet-managed set of Pods
~/K8s/Day5/demo1$ kubectl create -f pod-demo.yaml
# pod/pod-demo created
~/K8s/Day5/demo1$ kubectl create -f replicaset-demo.yaml
# replicaset.apps/replicaset-demo created

# (2) View the Pods' information
~/K8s/Day5/demo1$ kubectl get pod --show-labels -o wide
# NAME READY STATUS RESTARTS AGE IP NODE LABELS
# pod-demo 1/1 Running 0 28m 10.244.1.16 k8s-node-4 app=pod-demo
# replicaset-demo-9j457 1/1 Running 0 17m 10.244.1.19 k8s-node-4 tier=replicaset-demo
# replicaset-demo-9sm9d 1/1 Running 0 17m 10.244.1.18 k8s-node-4 tier=replicaset-demo
# replicaset-demo-zkl7t 1/1 Running 0 17m 10.244.1.17 k8s-node-4 tier=replicaset-demo

# PS: Pods created by the ReplicaSet controller can also be viewed with
$ kubectl get rs -o wide   # Pod names must be unique within a namespace
# NAME DESIRED CURRENT READY AGE CONTAINERS IMAGES SELECTOR
# replicaset-demo 3 3 3 7m39s nginx-replicaset-demo harbor.weiyigeek.top/test/nginx:v1.0 tier=replicaset-demo

# PS: to open a shell inside a Pod: kubectl exec replicaset-demo-9j457 -it -- /bin/sh
$ kubectl exec replicaset-demo-9j457 -it -- env
# PATH=/usr/local/nginx/sbin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
# HOSTNAME=replicaset-demo-9j457
# TERM=xterm
# GET_HOSTS_FROM=dns # the GET_HOSTS_FROM environment variable we defined
# KUBERNETES_SERVICE_PORT=443
# KUBERNETES_SERVICE_PORT_HTTPS=443
# KUBERNETES_PORT=tcp://10.96.0.1:443
# KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443
# KUBERNETES_PORT_443_TCP_PROTO=tcp
# KUBERNETES_PORT_443_TCP_PORT=443
# KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1
# KUBERNETES_SERVICE_HOST=10.96.0.1
# NGINX_VERSION=1.19.4
# NJS_VERSION=0.4.4
# PKG_RELEASE=1~buster
# IMAGE_VERSION=1.0
# HOME=/root

# (3) Access the containers in the Pods
~/K8s/Day5/demo1$ curl http://10.244.1.{16..19}/host.html
# Hostname: pod-demo <br>
# Image Version: <u> 1.0 </u>
# Nginx Version: 1.19.4
# Hostname: replicaset-demo-zkl7t <br>
# Image Version: <u> 1.0 </u>
# Nginx Version: 1.19.4
# Hostname: replicaset-demo-9sm9d <br>
# Image Version: <u> 1.0 </u>
# Nginx Version: 1.19.4
# Hostname: replicaset-demo-9j457 <br>
# Image Version: <u> 1.0 </u>
# Nginx Version: 1.19.4

# (4) [Key point] verify the difference (via labels) between unmanaged Pods and controller-managed Pods
~/K8s/Day5/demo1 $ kubectl delete pod --all   # delete all Pods in the default namespace
# pod "pod-demo" deleted
# pod "replicaset-demo-9j457" deleted
# pod "replicaset-demo-9sm9d" deleted
# pod "replicaset-demo-zkl7t" deleted
~/K8s/Day5/demo1 $ kubectl get pod --show-labels -o wide  # checking again, the Pod names and IP addresses have changed
# NAME READY STATUS RESTARTS AGE IP NODE LABELS
# replicaset-demo-frfsw 1/1 Running 0 43s 10.244.1.22 k8s-node-4 tier=replicaset-demo
# replicaset-demo-hh7fd 1/1 Running 0 43s 10.244.1.20 k8s-node-4 tier=replicaset-demo
# replicaset-demo-lqjq8 1/1 Running 0 43s 10.244.1.21 k8s-node-4 tier=replicaset-demo
# PS: conclusion — Pods created by a controller are recreated after deletion to satisfy the replicas value in the manifest

# (5) Tear down the Pods created by the ReplicaSet (when the manager is removed, the resources it manages are released too)
~/K8s/Day5/demo2$ kubectl delete rs replicaset-demo
# replicaset.apps "replicaset-demo" deleted
```
Tips: Unless you need custom update orchestration or no updates at all, prefer creating Pods with a Deployment rather than using ReplicaSets directly.
Q: What is a Deployment?
A: Deployment was also introduced in k8s v1.2. Internally it uses an RS to create the expected number of Pod replicas, i.e. it creates and manages Pods through an RS and alternates between different RSs to perform rolling updates.
Q: Why use a Deployment?
A: Mainly because it supports updates and rollbacks, which greatly reduces deployment time and simplifies the process; everything happens inside the controller and is invisible to the user (for the Pod creation process, see the lifecycle section in the previous chapter).
Q: What are the parts of a Deployment?
A: A Deployment wraps a ReplicaSet, which in turn manages the Pods; the relationship between Pod, ReplicationController, ReplicaSet, and Deployment is illustrated below.
(figure: WeiyiGeek.deployment-scale-replicas)
Deployment resource file example:

```bash
cat > nginx-deployment-demo.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment                  # resource type
metadata:                         # metadata
  name: nginx-deployment-demo     # Deployment name
spec:
  replicas: 3                     # replica count
  selector:
    matchLabels:                  # matchLabels is a map of {key,value} pairs, equivalent to key=value
      app: nginx-deployment       # key/value pair
  template:
    metadata:
      labels:
        app: nginx-deployment     # Pod label
    spec:
      containers:                 # container definition
      - name: nginx-deployment
        image: harbor.weiyigeek.top/test/nginx:v1.0
        ports:                    # container port exposure
        - containerPort: 80
EOF
```
Demo flow:

```bash
# (1) docker build the Dockerfile, then push the images to Harbor
~/K8s/Day5/demo2$ docker push harbor.weiyigeek.top/test/nginx:v1.0
~/K8s/Day5/demo2$ docker push harbor.weiyigeek.top/test/nginx:v2.0

# (2) Deploy
~/K8s/Day5/demo2$ kubectl create -f nginx-deployment-demo.yaml --record
## --record records the command, so we can conveniently see what changed in each revision and which command performed each update step
# deployment.apps/nginx-deployment-demo created

# (3) View
~/K8s/Day5/demo2$ kubectl get deploy -o wide   # the created Pods are visible via the Deployment as well as via its RS
# NAME READY(Pod status) UP-TO-DATE(replicas successfully updated during a rolling upgrade) AVAILABLE AGE CONTAINERS IMAGES SELECTOR
# nginx-deployment-demo 3/3 3 3 2m33s nginx-deployment harbor.weiyigeek.top/test/nginx:v1.0 app=nginx-deployment
~/K8s/Day5/demo2$ kubectl get rs   # [Key point] this confirms that a Deployment does not create and manage Pods directly, but through a ReplicaSet
# NAME DESIRED CURRENT READY AGE
# nginx-deployment-demo-65f7d977f8 3 3 3 55s
~/K8s/Day5/demo2$ kubectl get pod -o wide --show-labels
# NAME READY STATUS RESTARTS AGE IP NODE LABELS
# nginx-deployment-demo-65f7d977f8-285dc 1/1 Running 0 46s 10.244.1.25 k8s-node-4 app=nginx-deployment,pod-template-hash=65f7d977f8
# nginx-deployment-demo-65f7d977f8-4xxf7 1/1 Running 0 46s 10.244.1.23 k8s-node-4 app=nginx-deployment,pod-template-hash=65f7d977f8
# nginx-deployment-demo-65f7d977f8-dpj4p 1/1 Running 0 46s 10.244.1.24 k8s-node-4 app=nginx-deployment,pod-template-hash=65f7d977f8

# (4) nginx access
~/K8s/Day5/demo2$ curl http://10.244.1.{23..25}/host.html
# Hostname: nginx-deployment-demo-65f7d977f8-4xxf7 <br>
# Image Version: <u> 1.0 </u>
# Nginx Version: 1.19.4
# Hostname: nginx-deployment-demo-65f7d977f8-dpj4p <br>
# Image Version: <u> 1.0 </u>
# Nginx Version: 1.19.4
# Hostname: nginx-deployment-demo-65f7d977f8-285dc <br>
# Image Version: <u> 1.0 </u>
# Nginx Version: 1.19.4

# (5) Scale out
~/K8s/Day5/demo2$ kubectl scale deployment nginx-deployment-demo --replicas 5
# deployment.apps/nginx-deployment-demo scaled
# If the cluster supports horizontal pod autoscaling, autoscaling can also be set up for the Deployment
kubectl autoscale deployment nginx-deployment-demo --min=10 --max=15 --cpu-percent=80

# (6) Update the container image (note: the container name was set in the resource manifest)
kubectl set image deployment/nginx-deployment-demo nginx-deployment=harbor.weiyigeek.top/test/nginx:v2.0
# deployment.apps/nginx-deployment-demo image updated
kubectl get rs -o wide   # the RS replacement history shows the new image has taken over
# NAME DESIRED CURRENT READY AGE CONTAINERS IMAGES SELECTOR
# nginx-deployment-demo-59f9c8458c 5 5 5 88s nginx-deployment harbor.weiyigeek.top/test/nginx:v2.0 app=nginx-deployment,pod-template-hash=59f9c8458c
# nginx-deployment-demo-65f7d977f8 0 0 0 29m nginx-deployment harbor.weiyigeek.top/test/nginx:v1.0 app=nginx-deployment,pod-template-hash=65f7d977f8

# (7) Check the result of scaling and updating
~$ curl http://10.244.1.{38..42}/host.html
# Hostname: nginx-deployment-demo-59f9c8458c-x8hsz <br>
# Image Version: <u> 2.0 </u>   # note the version change
# Nginx Version: 1.19.4
# Hostname: nginx-deployment-demo-59f9c8458c-kjrjl <br>
# Image Version: <u> 2.0 </u>
# Nginx Version: 1.19.4
# Hostname: nginx-deployment-demo-59f9c8458c-rb5nv <br>
# Image Version: <u> 2.0 </u>
# Nginx Version: 1.19.4
# Hostname: nginx-deployment-demo-59f9c8458c-qw6h7 <br>
# Image Version: <u> 2.0 </u>
# Nginx Version: 1.19.4
# Hostname: nginx-deployment-demo-59f9c8458c-5zgdk <br>
# Image Version: <u> 2.0 </u>
# Nginx Version: 1.19.4

# (8) Rollback & revision history
$ kubectl rollout undo deployment/nginx-deployment-demo
# deployment.apps/nginx-deployment-demo rolled back
# Check the rollout status
$ kubectl rollout status deployment/nginx-deployment-demo   # the state after success
# deployment "nginx-deployment-demo" successfully rolled out
$ kubectl rollout history deployment/nginx-deployment-demo
# deployment.apps/nginx-deployment-demo
# REVISION CHANGE-CAUSE
# 3 kubectl create --filename=nginx-deployment-demo.yaml --record=true
# 4 kubectl create --filename=nginx-deployment-demo.yaml --record=true

# (9) Edit the Deployment with the edit command
~/K8s/Day5/demo2$ kubectl edit deployment/nginx-deployment-demo

# (10) Scale in
~/K8s/Day5/demo2$ kubectl scale deployment nginx-deployment-demo --replicas 0
# deployment.apps/nginx-deployment-demo scaled
~/K8s/Day5/demo2$ kubectl get pod
# NAME READY STATUS RESTARTS AGE
# nginx-deployment-demo-59f9c8458c-5zgdk 1/1 Terminating 0 15m
# nginx-deployment-demo-59f9c8458c-kjrjl 1/1 Terminating 0 15m
# nginx-deployment-demo-59f9c8458c-qw6h7 1/1 Terminating 0 15m
# nginx-deployment-demo-59f9c8458c-rb5nv 1/1 Terminating 0 15m
# nginx-deployment-demo-59f9c8458c-x8hsz 1/1 Terminating 0 15m
~/K8s/Day5/demo2$ kubectl get pod
# No resources found in default namespace.

# (11) Delete the Pods created by the Deployment
~/K8s/Day5/demo2$ kubectl delete deploy nginx-deployment-demo
# deployment.apps "nginx-deployment-demo" deleted
~/K8s/Day5/demo2$ kubectl get pod   # stopping and deleting the Deployment's Pods without first scaling to 0
# NAME READY STATUS(Terminating) RESTARTS AGE
# nginx-deployment-demo-65f7d977f8-285dc 1/1 Terminating 0 8m43s
# nginx-deployment-demo-65f7d977f8-4xxf7 1/1 Terminating 0 8m43s
# nginx-deployment-demo-65f7d977f8-dpj4p 1/1 Terminating 0 8m43s
~/K8s/Day5/demo2$ kubectl get pod   # checking again, all the Pod resources have been cleaned up
# No resources found in default namespace.
```
Tips: naming relationship between Deployment, ReplicaSet, and Pod
Deployment Name: [name]
ReplicaSet Name: [deployment-name]-[random string]
Pod Name: [replicaset-name]-[random string]
Tips: Deployment update strategy. During an update two RSs exist side by side: the old RS scales down by up to 25% of the Pods at a time while the new RS scales up by up to 25% at a time.
During an upgrade a Deployment guarantees that only a bounded number of Pods are down: by default it ensures at least one fewer than the desired Pod count stays up (at most one unavailable).
It likewise guarantees that only a bounded number of Pods beyond the desired count are created: by default at most one more than the desired Pod count is up (at most 1 surge).
In future Kubernetes versions this will change from 1-1 to 25%-25%.
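The maxSurge/maxUnavailable arithmetic behind those bounds can be sketched as follows; per the upstream convention, a percentage maxSurge rounds up and a percentage maxUnavailable rounds down. A minimal sketch, not the deployment controller's actual code:

```python
import math

def rolling_update_bounds(replicas, max_surge="25%", max_unavailable="25%"):
    """Return (upper, lower): the most Pods that may exist during the
    rollout and the fewest that must remain available."""
    def resolve(value, round_up):
        if isinstance(value, str) and value.endswith("%"):
            fraction = replicas * int(value[:-1]) / 100
            return math.ceil(fraction) if round_up else math.floor(fraction)
        return int(value)                                   # absolute count
    surge = resolve(max_surge, round_up=True)               # maxSurge rounds up
    unavailable = resolve(max_unavailable, round_up=False)  # maxUnavailable rounds down
    return replicas + surge, replicas - unavailable
```

For the 5-replica Deployment above, 25%/25% resolves to surge 2 and unavailable 1, so the rollout keeps between 4 and 7 Pods alive at any moment.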
```bash
# Detailed view of deployments (shows update events)
~$ kubectl describe deployments
# Name: nginx-deployment-demo
# Namespace: default
# CreationTimestamp: Tue, 10 Nov 2020 21:20:57 +0800
# Labels: <none>
# Annotations: deployment.kubernetes.io/revision: 4
# kubernetes.io/change-cause: kubectl create --filename=nginx-deployment-demo.yaml --record=true
# Selector: app=nginx-deployment
# Replicas: 0 desired | 0 updated | 0 total | 0 available | 0 unavailable   # replica count state
# StrategyType: RollingUpdate   # strategy type [key point]
# MinReadySeconds: 0
# RollingUpdateStrategy: 25% max unavailable, 25% max surge   # rolling update strategy [key point]
# Pod Template:
#   Labels: app=nginx-deployment
#   Containers:
#     nginx-deployment:
#       Image: harbor.weiyigeek.top/test/nginx:v2.0
#       Port: 80/TCP
#       Host Port: 0/TCP
#       Environment: <none>
#       Mounts: <none>
#   Volumes: <none>
# Conditions:
#   Type Status Reason
#   ---- ------ ------
#   Available True MinimumReplicasAvailable
#   Progressing True NewReplicaSetAvailable
# OldReplicaSets: <none>
# NewReplicaSet: nginx-deployment-demo-59f9c8458c (0/0 replicas created)
# Events: shows the Pod replica changes from scaling this Deployment out and in
# Type Reason Age From Message
# ---- ------ ---- ---- -------
# Normal ScalingReplicaSet 50m deployment-controller Scaled up replica set nginx-deployment-demo-65f7d977f8 to 3
# Normal ScalingReplicaSet 27m deployment-controller Scaled up replica set nginx-deployment-demo-849b98d6f7 to 2
# Normal ScalingReplicaSet 24m (x2 over 27m) deployment-controller Scaled up replica set nginx-deployment-demo-849b98d6f7 to 3
# Normal ScalingReplicaSet 23m (x2 over 29m) deployment-controller Scaled up replica set nginx-deployment-demo-65f7d977f8 to 5
# Normal ScalingReplicaSet 23m deployment-controller Scaled down replica set nginx-deployment-demo-849b98d6f7 to 0
# Normal ScalingReplicaSet 22m (x2 over 27m) deployment-controller Scaled down replica set nginx-deployment-demo-65f7d977f8 to 4
# Normal ScalingReplicaSet 22m deployment-controller Scaled up replica set nginx-deployment-demo-59f9c8458c to 2
# Normal ScalingReplicaSet 22m deployment-controller Scaled up replica set nginx-deployment-demo-59f9c8458c to 3
# Normal ScalingReplicaSet 22m deployment-controller Scaled down replica set nginx-deployment-demo-65f7d977f8 to 3
# Normal ScalingReplicaSet 22m deployment-controller Scaled up replica set nginx-deployment-demo-59f9c8458c to 4
# Normal ScalingReplicaSet 22m (x4 over 22m) deployment-controller (combined from similar events): Scaled down replica set nginx-deployment-demo-65f7d977f8 to 0
# Normal ScalingReplicaSet 7m14s deployment-controller Scaled down replica set nginx-deployment-demo-59f9c8458c to 0
```
Tips: Deployment rollover (multiple rollouts in flight)
Whenever a Deployment's rollout is triggered, a revision is created. That is, a new revision is created if and only if the Deployment's Pod template (.spec.template) is changed, e.g. by updating the template's labels or container image. Other updates, such as scaling the Deployment, do not create a revision, so manual and automatic scaling stay convenient. This means that when you roll back to an earlier revision, only the Pod template part of the Deployment is rolled back.
```bash
# Set the image
kubectl set image deployment/<deployment-name> <container-name>=<registry>/<image>:<tag>
# Check the current update status
kubectl rollout status deployments nginx-deployment
kubectl get pods
# List the revisions available for rollback
kubectl rollout history deployment/nginx-deployment
kubectl rollout undo deployment/nginx-deployment
## --to-revision rolls back to a specific historical revision
kubectl rollout undo deployment/nginx-deployment --to-revision=2
## Pause the deployment's updates
kubectl rollout pause deployment/nginx-deployment
```
You can use `kubectl rollout status` to check whether the Deployment has completed. If the rollout finished successfully, `kubectl rollout status` returns an Exit Code of 0 — very handy in automated operations scripting:

```bash
$ kubectl rollout status deploy/nginx
$ echo $?   # in scripts, this exit code tells you whether the rollout succeeded
0
```
Tips: Deployment revision history policy — the number of revisions kept is set by .spec.revisionHistoryLimit (default 10); setting it to 0 means the Deployment can no longer roll back.
Tips: imperative vs. declarative usage — for an RS, create is preferred over apply; for a Deployment, apply is preferred over create.

### 4.DaemonSet
Description: If we need to run one container application on every Node, we can use the DaemonSet controller provided by k8s. A DaemonSet behaves like a daemon process: it ensures that all (or selected) Nodes run one copy of the same Pod.
Q: What are DaemonSet's typical use cases?
A: Per-node system agents. For example, kube-proxy and calico-node in the kube-system namespace each run one Pod on every node:

```bash
~$ kubectl get daemonsets.apps --all-namespaces
# NAMESPACE NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
# kube-system calico-node 7 7 7 7 7 kubernetes.io/os=linux 22d
# kube-system kube-proxy 7 7 7 7 7 kubernetes.io/os=linux 24d
```
Creating and inspecting a DaemonSet

```bash
# DaemonSet controller resource manifest
cat > daemonset-example.yaml <<'EOF'
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemonset-example        # name of the object created by the DaemonSet controller
  labels:
    app: daemonset
spec:
  selector:                      # selector matching the template labels
    matchLabels:
      name: daemonset-example    # note: must be identical to the name label in the Pod template metadata
  template:
    metadata:
      labels:
        name: daemonset-example
    spec:
      # tolerations:             # toleration declaration
      # - key: node-role.kubernetes.io/master
      #   effect: NoSchedule
      containers:
      - name: daemonset-example
        image: harbor.weiyigeek.top/test/nginx:v2.0
        imagePullPolicy: IfNotPresent   # don't pull if the image exists locally
        resources:               # resource constraints
          limits:
            cpu: 200m
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 100Mi
      terminationGracePeriodSeconds: 30   # termination grace period (seconds)
EOF
```
Workflow:

```bash
# (1) Look up the DaemonSet api resource name
$ kubectl api-resources | grep "DaemonSet"
# daemonsets ds apps true DaemonSet

# (2) Deploy
$ kubectl create -f daemonset-example.yaml
# daemonset.apps/daemonset-example created

# (3) View (there is only one schedulable node here)
~/K8s/Day5/demo3$ kubectl get ds -o wide   # daemonset controller info
# NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE CONTAINERS IMAGES SELECTOR
# daemonset-example 1 1 1 1 1 <none> 2m26s daemonset-example harbor.weiyigeek.top/test/nginx:v2.0 name=daemonset-example
~/K8s/Day5/demo3$ kubectl get pod -o wide --show-labels   # the Pod it created
# NAME READY STATUS RESTARTS AGE IP NODE LABELS
# daemonset-example-858ks 1/1 Running 0 49s 10.244.1.43 k8s-node-4 controller-revision-hash=5bcf6b6794,name=daemonset-example,pod-template-generation=1

# (4) To verify the behavior above, remove the master node's taint, then compare with the earlier output
$ kubectl describe node k8s-master-1 | grep "Taints"
# Taints: node-role.kubernetes.io/master:NoSchedule
$ kubectl taint nodes k8s-master-1 node-role.kubernetes.io/master=:NoSchedule-
# node/k8s-master-1 untainted
~/K8s/Day5/demo3$ kubectl describe node k8s-master-1 | grep "Taints"
# Taints: <none>

# (5) Result
~/K8s/Day5/demo3$ kubectl get ds -o wide   # the daemonset controller has now produced two available pods
# NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE CONTAINERS IMAGES SELECTOR
# daemonset-example 2 2 2 2 2 <none> 20m daemonset-example harbor.weiyigeek.top/test/nginx:v2.0 name=daemonset-example
~/K8s/Day5/demo3$ kubectl get pod -o wide --show-labels   # the Pods now run on the Master node and the worker node respectively
# NAME READY STATUS RESTARTS AGE IP NODE LABELS
# daemonset-example-4rvkf 1/1 Running 0 173m 10.244.0.6 k8s-master-1 controller-revision-hash=5bcf6b6794,name=daemonset-example,pod-template-generation=1
# daemonset-example-858ks 1/1 Running 0 3h9m 10.244.1.43 k8s-node-4 controller-revision-hash=5bcf6b6794,name=daemonset-example,pod-template-generation=1

# (6) Pod access
~/K8s/Day5/demo3$ curl http://10.244.0.6/host.html
# Hostname: daemonset-example-4rvkf <br> Image Version: <u> 2.0 </u> Nginx Version: 1.19.4
~/K8s/Day5/demo3$ curl http://10.244.1.43/host.html
# Hostname: daemonset-example-858ks <br> Image Version: <u> 2.0 </u> Nginx Version: 1.19.4

# (7) Re-apply the master node's taint
~/K8s/Day5/demo3$ kubectl taint node k8s-master-1 node-role.kubernetes.io/master=:NoSchedule
# node/k8s-master-1 tainted
~/K8s/Day5/demo3$ kubectl get ds   # key point: the daemonset now manages only one Pod
# NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
# daemonset-example 1 1 1 1 1 <none> 3h19m
~$ kubectl get pod -o wide --show-labels   # the Pod on the Master node (daemonset-example-4rvkf) is now an unmanaged Pod
# NAME READY STATUS RESTARTS AGE IP NODE LABELS
# daemonset-example-4rvkf 1/1 Running 0 3h5m 10.244.0.6 k8s-master-1 controller-revision-hash=5bcf6b6794,name=daemonset-example,pod-template-generation=1
# daemonset-example-858ks 1/1 Running 0 3h22m 10.244.1.43 k8s-node-4 controller-revision-hash=5bcf6b6794,name=daemonset-example,pod-template-generation=1
~$ kubectl delete pod daemonset-example-4rvkf   # (key point) once deleted it is no longer controller-managed, so it is not recreated to meet the replica count
# pod "daemonset-example-4rvkf" deleted

# (8) Delete the resources created by the daemonset controller
#     (--cascade=false keeps the Pods the DaemonSet created; --cascade=true, the default, deletes them too)
$ kubectl delete ds daemonset-example --cascade=false
$ kubectl delete ds daemonset-example --cascade=true   # default
# daemonset.apps "daemonset-example" deleted
```
PS: Because of the taint, none of the Pods run on the Master node, but we can remove the taint manually, or declare a toleration in the manifest by adding the following under the Pod template's `spec`:

```yaml
# tolerations:               # toleration declaration
# - key: node-role.kubernetes.io/master
#   effect: NoSchedule
```
**Q: How do static Pods differ from a DaemonSet?**
> A: As noted earlier, a Static Pod is not managed by kubectl or other k8s API clients and does not depend on the API Server, which makes it useful while bootstrapping the cluster; in real environments, avoid this approach unless you have a special need.
Tips: DaemonSets can be deployed to specific nodes or topologies via `.spec.template.spec.nodeSelector` or advanced scheduling policies such as `Affinity`;
<br/>
### 5.Job
Description: The Job controller has built-in retry behavior: if the script a Job runs does not exit with status code 0, the Job re-runs it until the batch work completes successfully. A Job is responsible for batch tasks, i.e. tasks executed once, and guarantees that one or more Pods of the batch task terminate successfully.
<br/>
**Q: What is a Job?**
> A: A Job handles batch processing — a run-once task — guaranteeing that one or more Pods of the batch task end successfully.
Special notes on the resource object:
- `.spec.template` has the same format as a Pod
- `RestartPolicy` only supports `Never` or `OnFailure`; with a single Pod, the Job completes once the Pod finishes successfully by default
- `.spec.completions` is the number of Pods that must finish successfully for the Job to complete; default 1
- `.spec.parallelism` is the number of Pods run in parallel; default 1
- `.spec.activeDeadlineSeconds` caps the retry time of failing Pods; past this deadline no further retries occur

Job & CronJob controller resource types and versions:

```bash
$ kubectl api-resources | grep "Job"
# jobs batch true Job
# cronjobs cj batch true CronJob
$ kubectl api-versions | grep $(kubectl api-resources | grep " Job" | awk -F " " '{print $2}')
# batch/v1      # jobs
# batch/v1beta1 # cronjobs
```
Job resource file example:

```bash
cat > job-demo.yaml <<'EOF'
apiVersion: batch/v1
kind: Job
metadata:
  name: job-demo-pi
spec:
  template:
    metadata:
      name: job-demo-pi
    spec:
      containers:
      - name: busybox-pi
        image: busybox
        command: ["/bin/sh","-c","date;echo Job Controller Demo!"]   # print the date and a demo message
        imagePullPolicy: IfNotPresent   # don't pull if the image exists locally
      restartPolicy: Never
EOF
```
Hands-on:

```bash
# (1) Create a Pod with the Job controller
~/K8s/Day5/demo3$ kubectl create -f job-demo.yaml
# job.batch/job-demo-pi created

# (2) View
~/K8s/Day5/demo3$ kubectl get jobs -o wide --show-labels
# NAME COMPLETIONS DURATION AGE CONTAINERS IMAGES SELECTOR LABELS
# job-demo-pi 1/1 1s 41s busybox-pi busybox controller-uid=3988087c-fa65-4ca0-90e4-c23bbf01884c controller-uid=3988087c-fa65-4ca0-90e4-c23bbf01884c,job-name=job-demo-pi
~/K8s/Day5/demo3$ kubectl get pod -o wide --show-labels
# NAME READY STATUS RESTARTS AGE IP NODE LABELS
# job-demo-pi-p7j59 0/1 Completed 0 60s 10.244.1.57 k8s-node-4 controller-uid=3988087c-fa65-4ca0-90e4-c23bbf01884c,job-name=job-demo-pi

# (3) Verify the result
~/K8s/Day5/demo3$ kubectl logs job-demo-pi-p7j59
# Wed Nov 11 07:49:00 UTC 2020
# Job Controller Demo!
~/K8s/Day5/demo3$ kubectl describe pod job-demo-pi-p7j59
# Events:
# Type Reason Age From Message
# ---- ------ ---- ---- -------
# Normal Scheduled 3m54s default-scheduler Successfully assigned default/job-demo-pi-p7j59 to k8s-node-4
# Normal Pulled 3m53s kubelet Container image "busybox" already present on machine
# Normal Created 3m53s kubelet Created container busybox-pi
# Normal Started 3m53s kubelet Started container busybox-pi

# (4) Delete the resources created by the Jobs controller
~/K8s/Day5/demo3$ kubectl delete -f job-demo.yaml   # by manifest
# job.batch "job-demo-pi" deleted
~/K8s/Day5/demo3$ kubectl delete jobs --all         # delete all
# job.batch "job-demo-pi" deleted
~/K8s/Day5/demo3$ kubectl get jobs
# No resources found in default namespace.
~/K8s/Day5/demo3$ kubectl get pod
# No resources found in default namespace.   # all Pod resources have been deleted
```
### 6.CronJob
Description: a CronJob is essentially implemented by creating Jobs in a loop at specific times, making it well suited to batch scripts. A CronJob manages time-based Jobs, i.e.:
- run once at a given point in time (minute hour day month weekday)
- run periodically at given points in time
Prerequisite: the Kubernetes cluster is version >= 1.8 (for CronJob). For earlier clusters (< 1.8), pass --runtime-config=batch/v2alpha1=true when starting the API Server to enable the batch/v2alpha1 API.
Typical uses:
- schedule a Job to run at a given time
- create a periodically running Job, e.g. database backups or sending email

CronJob Spec:
- `.spec.template` has the same format as a Pod
- `RestartPolicy` only supports `Never` or `OnFailure`; with a single Pod, the Job completes once the Pod finishes successfully by default
- `.spec.completions` is the number of Pods that must finish successfully for the Job to complete; default 1
- `.spec.parallelism` is the number of Pods run in parallel; default 1
- `.spec.activeDeadlineSeconds` caps the retry time of failing Pods; past this deadline no further retries occur
- `.spec.schedule`: required; the task's schedule, in Cron format
- `.spec.jobTemplate`: required; the Job template to run, same format as a Job
- `.spec.startingDeadlineSeconds`: optional; the deadline (in seconds) for starting a Job. If a scheduled run is missed for any reason past this deadline, that run counts as failed. If unset, there is no deadline.
- `.spec.concurrencyPolicy`: optional; how concurrent executions of Jobs created by this CronJob are handled. Exactly one of: Allow (default) — allow Jobs to run concurrently; Forbid — forbid concurrency, skip the next run if the previous one has not finished; Replace — cancel the currently running Job and replace it with a new one. Note the policy only applies to Jobs created by the same CronJob; Jobs created by different CronJobs are always allowed to run concurrently.
- `.spec.suspend`: optional; if true, all subsequent executions are suspended. It has no effect on Jobs that have already started. Defaults to false.
- `.spec.successfulJobsHistoryLimit` and `.spec.failedJobsHistoryLimit`: optional; how many completed and failed Jobs to keep. They default to 3 and 1 respectively; setting a limit to 0 keeps no Jobs of that type after completion.
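The three concurrencyPolicy choices can be modeled in a few lines. An illustrative sketch of the decision, not the CronJob controller's actual code:

```python
def on_schedule_fire(policy, active_jobs):
    """Decide what happens when the schedule fires while earlier Jobs may
    still be running. Returns (start_new_job, jobs_to_cancel)."""
    if not active_jobs or policy == "Allow":   # Allow (default): run concurrently
        return True, []
    if policy == "Forbid":                     # Forbid: skip this run entirely
        return False, []
    if policy == "Replace":                    # Replace: cancel the running Job, start anew
        return True, list(active_jobs)
    raise ValueError(f"unknown concurrencyPolicy: {policy}")
```

For example, with one Job still active, Allow starts a second one, Forbid skips the run, and Replace cancels the active Job before starting the new one.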
CronJob resource file example:

```bash
cat > cronjob-demo.yaml <<'EOF'
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: cronjob-demo
spec:
  schedule: "*/1 * * * *"        # run once every minute
  jobTemplate:
    spec:
      template:
        spec:                    # template specification
          containers:
          - name: cronjob-demo
            image: busybox
            args:
            - /bin/sh
            - -c
            - date;echo Hello from the Kubernetes cluster, This is cronjob-demo;
            imagePullPolicy: IfNotPresent   # don't pull if the image exists locally
          restartPolicy: OnFailure
EOF
```
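How a schedule string such as `*/1 * * * *` is interpreted can be sketched with a tiny matcher supporting `*`, `*/step`, and comma lists (real cron fields also allow ranges, omitted here). An illustrative sketch only:

```python
def field_matches(field, value, low):
    """Match one cron field against a value; low is the field's minimum."""
    if field == "*":
        return True
    if field.startswith("*/"):                 # step values, e.g. */5
        return (value - low) % int(field[2:]) == 0
    return value in {int(part) for part in field.split(",")}   # comma list

def schedule_fires(schedule, minute, hour, dom, month, dow):
    """True when the five-field schedule fires at the given time."""
    fields = schedule.split()
    values = [(minute, 0), (hour, 0), (dom, 1), (month, 1), (dow, 0)]
    return all(field_matches(f, v, low) for f, (v, low) in zip(fields, values))
```

With this model, `*/1 * * * *` fires at every minute, while `30 2 * * *` fires only at 02:30.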
Workflow:

```bash
# (1) If you forget how to write a CronJob manifest, look up help like this:
~$ kubectl explain CronJob.spec.jobTemplate.spec.template.spec.containers

# (2) Create
~/K8s/Day5/demo3$ kubectl create -f cronjob-demo.yaml
# cronjob.batch/cronjob-demo created

# (3) View
$ kubectl get cj -o wide --show-labels   # created by the cronjob controller
# NAME SCHEDULE SUSPEND(whether suspended) ACTIVE LAST SCHEDULE(last time it fired) AGE CONTAINERS IMAGES SELECTOR LABELS
# cronjob-demo */1 * * * * False 0 41s 3m3s cronjob-demo busybox <none> <none>
$ kubectl get jobs -o wide   # the cronjob actually creates pods by way of Jobs; three completed jobs are visible (by default 3 successful and 1 failed Job are retained)
# NAME COMPLETIONS DURATION AGE CONTAINERS IMAGES SELECTOR
# cronjob-demo-1605082620 1/1 2s 2m46s cronjob-demo busybox controller-uid=0e44795c-d799-4a5a-8f83-4025fd966f18
# cronjob-demo-1605082680 1/1 2s 106s cronjob-demo busybox controller-uid=e9ccc654-ed2b-4b49-8456-e4dd2fd21950
# cronjob-demo-1605082740 1/1 2s 46s cronjob-demo busybox controller-uid=7398827f-14f2-4fc9-b410-3089f9982bbe
~/K8s/Day5/demo3$ kubectl get pod -o wide --show-labels
# NAME READY(each pod exits right after finishing) STATUS RESTARTS AGE IP NODE LABELS
# cronjob-demo-1605082740-pfw87 0/1 Completed 0 2m32s 10.244.1.62 k8s-node-4 controller-uid=7398827f-14f2-4fc9-b410-3089f9982bbe,job-name=cronjob-demo-1605082740
# cronjob-demo-1605082800-6jslj 0/1 Completed 0 92s 10.244.1.63 k8s-node-4 controller-uid=42835ca2-c28e-4c3d-b7ea-107a0acf59f3,job-name=cronjob-demo-1605082800
# cronjob-demo-1605082860-2747h 0/1 Completed 0 32s 10.244.1.65 k8s-node-4 controller-uid=d4b1e587-b881-4721-8330-7a3ef78ddc69,job-name=cronjob-demo-1605082860

# (4) Check the execution results
~$ kubectl get pod | grep "cronjob" | cut -d " " -f1
# cronjob-demo-1605083580-twsb8
# cronjob-demo-1605083640-dtscf
# cronjob-demo-1605083700-64g88
~$ kubectl logs cronjob-demo-1605083580-twsb8
# Wed Nov 11 08:33:03 UTC 2020
# Hello from the Kubernetes cluster, This is cronjob-demo
~$ kubectl logs cronjob-demo-1605083640-dtscf
# Wed Nov 11 08:34:03 UTC 2020
# Hello from the Kubernetes cluster, This is cronjob-demo
~$ kubectl logs cronjob-demo-1605083700-64g88
# Wed Nov 11 08:35:03 UTC 2020
# Hello from the Kubernetes cluster, This is cronjob-demo

# (5) Two ways to delete the Jobs created by the CronJob controller along with their Pods
~/K8s/Day5/demo3$ kubectl delete cronjob --all
~/K8s/Day5/demo3$ kubectl delete -f cronjob-demo.yaml
# cronjob.batch "cronjob-demo" deleted
```
PS: CronJob has inherent limitations: Job creation should be idempotent, and a CronJob cannot easily determine whether a task succeeded. It fulfills tasks by creating Jobs; a Job's success can be determined, but the CronJob does not link back to the Job to learn the outcome — Cron merely creates Jobs periodically, nothing more.
### 7.StatefulSet
Description: RC, RS, Deployment, DaemonSet, and Job, studied above, are all oriented toward stateless services; the StatefulSet controller covered in this section targets stateful services, i.e. it is mainly used to deploy stateful applications.
In K8s the StatefulSet controller deploys stateful services: a StatefulSet gives each Pod a unique, stable identity and guarantees the order of deployment and of scaling out and in.
Q: What is a stateful service? What is a stateless service?
A: The information a service keeps about its interactions with clients is called state. A Stateless Service saves no data or state while the application runs — e.g. Nginx serving static content. A Stateful Service saves data or state while the application runs — e.g. MySQL, which must store the new data it produces.
StatefulSet characteristics:
- Stable, unique network identifiers, implemented through a Headless Service (a Service with no Cluster IP): e.g. the first Pod is named mysql-0, the second mysql-1, and so on in order up to the expected count n, the last being mysql-(n-1).
- Ordered start and stop: before the next Pod runs, all Pods before it must be Running and Ready.
Tips: besides persisting Pod state with PV/PVC, a StatefulSet also needs the Headless Service, which creates a DNS domain name for each Pod the StatefulSet controls, in the format PodName.HeadlessServiceName
# e.g. with Headless Service name = database and StatefulSet name = mysql-app, the DNS names are
mysql-app-0.database
mysql-app-1.database
mysql-app-2.database
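The stable naming scheme can be sketched as follows — an illustrative model of the naming convention, not API client code:

```python
def statefulset_identities(set_name, replicas, headless_service):
    """Generate the ordered, stable Pod names and per-Pod DNS names
    (PodName.HeadlessServiceName) that a StatefulSet assigns."""
    return [
        (f"{set_name}-{ordinal}", f"{set_name}-{ordinal}.{headless_service}")
        for ordinal in range(replicas)   # ordinals run 0 .. replicas-1, started in order
    ]
```

For the example above, `statefulset_identities("mysql-app", 3, "database")` yields mysql-app-0 through mysql-app-2 with their matching DNS names.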
This is just a brief first look at the StatefulSet controller; a full demonstration follows later when we cover PVC persistent volumes.
```bash
# statefulset resource controller
~$ kubectl api-resources | grep "stateful"   # resource
# statefulsets sts apps true StatefulSet
~$ kubectl api-versions | grep "apps"        # api group and version
# apps/v1
~$ kubectl get storageclass
# NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
# managed-nfs-storage (default) fuseim.pri/ifs Delete Immediate false 57d
```
Resource manifest example:

```bash
cat > statefulset-demo.yaml <<'END'
apiVersion: v1
kind: Service
metadata:
  name: nginx-statefulset-service
  labels:
    app: nginx
spec:
  clusterIP: None
  selector:
    app: nginx-sfs
  ports:
  - name: web
    port: 80
---
apiVersion: apps/v1
kind: StatefulSet                # resource controller
metadata:
  name: web
spec:
  selector:
    matchLabels:
      app: nginx-sfs             # matches the label in .spec.template.metadata.labels
  serviceName: "nginx-statefulset-service"   # the bound SVC, matching the Service name above
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx-sfs           # Pod template label (lets the StatefulSet know whether created Pods count toward its expectation, and helps the SVC)
    spec:
      containers:
      - name: nginx
        image: nginx:1.19.6
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:          # volume template
  - metadata:
      name: www
    spec:
      accessModes: [ "ReadWriteOnce" ]        # access mode
      storageClassName: "managed-nfs-storage" # PVCs created dynamically via a storageClass (covered later)
      resources:
        requests:
          storage: 200Mi
END
```
Build and deploy:

```bash
# (1) Create the Pods
~/K8s/Day5/demo4$ kubectl apply -f statefulset-demo.yaml
# service/nginx-statefulset-service created
# statefulset.apps/web created

# (2) View the StatefulSet controller, SVC, and PVCs
~/K8s/Day5/demo4$ kubectl get sts,pvc -o wide | egrep "web"
# statefulset.apps/web 3/3 6m29s nginx nginx:1.19.6
# persistentvolumeclaim/www-web-0 Bound pvc-c1352011-6bc8-450f-941c-d6369ca55ca6 200Mi RWO managed-nfs-storage 6m29s Filesystem
# persistentvolumeclaim/www-web-1 Bound pvc-07127f78-f282-4fac-970d-e6c991242b03 200Mi RWO managed-nfs-storage 6m19s Filesystem
# persistentvolumeclaim/www-web-2 Bound pvc-205e0dfc-6a9c-450a-be20-b3440d94a453 200Mi RWO managed-nfs-storage 6m Filesystem
~/K8s/Day5/demo4$ kubectl get svc | grep "nginx-statefulset-service"
# nginx-statefulset-service ClusterIP None <none> 80/TCP 7m4s
~/K8s/Day5/demo4$ kubectl get ep -o wide | grep "nginx-statefulset-service"   # view the Endpoints bound to the SVC
# nginx-statefulset-service 10.244.0.247:80,10.244.1.230:80,10.244.2.133:80 8m23s

# (3) Access the Nginx we built
~/K8s/Day5/demo4$ echo "Web-0" > /nfs/data/default-www-web-0-pvc-c1352011-6bc8-450f-941c-d6369ca55ca6/index.html
~/K8s/Day5/demo4$ curl http://10.244.0.247
# Web-0
```
### 8.HPA (Horizontal Pod Autoscaling)
Description: We saw earlier that a Kubernetes cluster can scale a service out or in through the Replication Controller's scale mechanism, giving services elasticity — but that requires manual intervention.
Q: Is there a mechanism that scales out and in automatically?
A: That is what this part covers: HPA (Horizontal Pod Autoscaling), which autoscales services.
Description: Horizontal Pod Autoscaling (HPA) automatically scales the replica count up and down based on current Pod resource usage (such as CPU, disk, or memory) to relieve pressure on individual Pods. When Pod load crosses a threshold, more Pods are created according to the scaling policy to share the load; when the Pods have been comparatively idle for a stable period, the replica count is automatically reduced again.
HPA's role is to scale Pods automatically according to a policy — for example, adjusting the number of Pod replicas managed by a Replication Controller, Deployment, or ReplicaSet based on CPU utilization. You can think of it not as a controller in itself but as an attachment to controllers: it acts on other controllers and thereby gives them autoscaling behavior.
Tips: autoscaling also needs a metrics collection service such as heapster to gather and aggregate resource utilization (backing the kubectl top command); heapster-style collection is integrated with the prometheus Metrics Server stack, so for convenience the HPA (autoscaling) service here is deployed on top of an existing prometheus environment.
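The scaling decision follows the standard HPA rule, desiredReplicas = ceil(currentReplicas x currentMetric / targetMetric), clamped to the configured min/max. A sketch of that arithmetic:

```python
import math

def hpa_desired_replicas(current_replicas, current_utilization, target_utilization,
                         min_replicas=1, max_replicas=10):
    """Desired replica count per the HPA scaling rule, clamped to bounds."""
    desired = math.ceil(current_replicas * current_utilization / target_utilization)
    return max(min_replicas, min(max_replicas, desired))
```

With a 50% CPU target, 2 replicas averaging 100% utilization scale to 4; 4 replicas averaging 10% scale back down toward the minimum.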
Ways a cluster scales:
Kubernetes AutoScale mainly comes in two flavors: horizontal scaling (more or fewer Pod replicas, as HPA does) and vertical scaling (larger or smaller resource allocations per Pod).
A simple example:
1) Build the test workload (run a Deployment for the hpa demo named php-apache, set the container's CPU request/limit, and expose port 80)

```bash
~/K8s/Day11$ cat > hpa-demo.yaml <<'EOF'
kind: Deployment                 # the resource type of this manifest
apiVersion: apps/v1              # depends on cluster version; kubectl api-versions lists supported versions
metadata:                        # metadata: basic attributes of the Deployment
  name: php-apache-deployment    # Deployment controller name
  labels:                        # labels flexibly locate one or more resources; keys and values are user-defined, and several may be set
    app: php-apache-deploy       # sets a label with key app and value php-apache-deploy on this Deployment
spec:                            # description of the desired state of this Deployment in k8s
  # replicas: 1                  # create one application instance from this Deployment
  selector:                      # label selector, working together with the template labels below
    matchLabels:                 # select resources carrying the label app: php-apache
      app: php-apache
    matchExpressions:
    - key: app
      operator: In
      values: [php-apache]
  template:                      # the template of the Pods to select or create
    metadata:                    # Pod metadata
      labels:                    # Pod labels; the selector above picks Pods carrying app: php-apache
        app: php-apache
    spec:                        # desired behavior of the Pod (what is deployed inside it)
      containers:                # containers, in the same sense as docker containers
      - name: nginx              # container name
        image: mirrorgooglecontainers/hpa-example   # CPU-burning demo image; its container serves on port 80
        ports:
        - containerPort: 80
        resources:               # resource constraints
          requests:              # requested values
            cpu: 128m
            memory: 128Mi
          limits:                # limit values
            cpu: 200m
            memory: 200Mi
        imagePullPolicy: IfNotPresent
---
kind: Service
apiVersion: v1
metadata:
  name: php-apache
spec:
  type: ClusterIP                # Service type
  selector:                      # backend binding
    app: php-apache
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
EOF
```
2) Create the Deployment resource controller from the manifest above and inspect the resulting resources;
~/K8s/Day11$ kubectl create -f hpa-demo.yaml
# deployment.apps/php-apache-deployment created
# service/php-apache created
~/K8s/Day11$ kubectl get deployments -o wide
# NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR
# php-apache-deployment 1/1 1 1 2m nginx mirrorgooglecontainers/hpa-example app=php-apache
~/K8s/Day11$ kubectl get svc
# NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
# kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 33d
# php-apache ClusterIP 10.96.176.18 <none> 80/TCP 5m8s
~/K8s/Day11$ kubectl get replicasets.apps -o wide
# NAME DESIRED CURRENT READY AGE CONTAINERS IMAGES SELECTOR
# php-apache-deployment-75695fbcd8 1 1 1 6m59s nginx mirrorgooglecontainers/hpa-example app=php-apache,pod-template-hash=75695fbcd8
(3) Create the HPA controller (when CPU utilization of the Deployment's Pods reaches 50%, scale out, up to a maximum of 10 replicas)
# Method 1: the autoscale command
~/K8s/Day11$ kubectl autoscale deployment php-apache-deployment --cpu-percent=50 --min=1 --max=10
# horizontalpodautoscaler.autoscaling/php-apache-deployment autoscaled
# Method 2: a resource manifest
~/K8s/Day11$ tee HorizontalPodAutoscaler.yaml <<'EOF'
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: php-apache-deployment
  namespace: default
spec:
  minReplicas: 1
  maxReplicas: 10
  scaleTargetRef:
    kind: Deployment
    name: php-apache-deployment
    apiVersion: apps/v1
  targetCPUUtilizationPercentage: 50
EOF
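For comparison, here is a sketch of the same policy written against the newer autoscaling/v2beta2 API (assuming your cluster serves that API version, which v1.19 does); the CPU target becomes an entry in a metrics list, which also allows memory and custom metrics:

```yaml
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: php-apache-deployment
  namespace: default
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: php-apache-deployment
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 50
```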
~/K8s/Day11$ kubectl get hpa
# NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
# php-apache-deployment Deployment/php-apache-deployment <unknown>/50% 1 10 1 19s
(4) Simulate load against php-apache and verify that the Pods scale out and back in automatically
# Open several extra terminals (node shells also work) and hit the php-apache Pod in an endless request loop. If your system has spare resources, open multiple terminals and run the loop in each; here two node terminals request the php-apache Pod simultaneously:
$ kubectl run -i --tty load-generator --image=busybox -- /bin/sh
# Method 1: simulate many concurrent user requests against the php-apache Pod
$ while true; do wget -q -O- http://php-apache.default.svc.cluster.local; done
# OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK
# Method 2: stress test with ab (point it at the php-apache Service)
$ ab -c 5000 -n 2000000 http://php-apache.default.svc.cluster.local/
(5) Watch the HPA's CPU readings and the resulting Pod scale-out;
~/K8s/Day11$ watch -n 2 'kubectl get deployments | grep "php-apache-deployment" && kubectl get hpa && kubectl top pod -l app=php-apache'
php-apache-deployment 4/4 4 4 6m46s # four replicas have been created
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
php-apache-deployment Deployment/php-apache-deployment 93%/50% 1 10 4 4m46s # CPU usage has exceeded our 50% target
NAME CPU(cores) MEMORY(bytes) # the Pods that were created
php-apache-deployment-5c6b89b758-55m2m 122m 12Mi
php-apache-deployment-5c6b89b758-gm8rz 0m 2Mi
php-apache-deployment-5c6b89b758-s8ldx 118m 12Mi
php-apache-deployment-5c6b89b758-wdt9p 0m 0Mi
# PS: at most 10 Pods can be created, since we capped maxReplicas at 10
(6) After the request loop stops, the Pod count does not drop immediately; the HPA waits out a stabilization period (roughly 5 to 10 minutes) before scaling in, to guard against the traffic spiking again.
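That delay comes from downscale stabilization: when scaling in, the HPA acts on the highest replica recommendation recorded within the stabilization window (default 300s, configurable via the controller-manager flag --horizontal-pod-autoscaler-downscale-stabilization). The shell function below is a deliberately simplified sketch of that idea, not the controller's real code:

```shell
#!/bin/sh
# Sketch (simplified) of HPA downscale stabilization: scale-in follows the
# HIGHEST recommendation seen inside the trailing window, so a short traffic
# dip cannot trigger an immediate scale-in.
stabilized_replicas() {
  window=$1 now=$2; shift 2
  max=0
  for rec in "$@"; do                 # records are "timestamp:replicas"
    t=${rec%%:*} r=${rec##*:}
    if [ $((now - t)) -le "$window" ] && [ "$r" -gt "$max" ]; then max=$r; fi
  done
  echo "$max"
}

# Load stops at t=60: the raw recommendation drops from 4 to 1, but the HPA
# keeps serving the 5-minute peak until it ages out of the window.
stabilized_replicas 300 120 0:4 60:1 120:1   # -> 4 (peak still in window)
stabilized_replicas 300 360 0:4 60:1 360:1   # -> 1 (peak has aged out)
```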
~/K8s/Day11$ watch -c -t -n 2 'kubectl get deployments | grep "php-apache-deployment" && kubectl get hpa && kubectl get pod -l app=php-apache'
# php-apache-deployment 1/1 1 1 17m
# NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
# php-apache-deployment Deployment/php-apache-deployment 0%/50% 1 10 1 15m
# NAME READY STATUS RESTARTS AGE
# php-apache-deployment-5c6b89b758-s8ldx 1/1 Running 0 17m
(7) That completes automatic scale-out and scale-in of the Pod replica count via HPA; finally, delete the HPA to remove the autoscaling policy
~/K8s/Day11$ kubectl delete horizontalpodautoscalers.autoscaling php-apache-deployment
# horizontalpodautoscaler.autoscaling "php-apache-deployment" deleted
Tips: the HPA controller in K8s 1.19.x can only scale Pods created by scalable resource controllers such as a Deployment, ReplicaSet, StatefulSet, or ReplicationController.
# Note: many CSDN / cnblogs posts demonstrate HPA with the following command (it no longer works on current versions)
kubectl run php-apache --image=mirrorgooglecontainers/hpa-example --requests=cpu=200m --expose --port=80
# service/php-apache created
# pod/php-apache created # note that this creates a bare Pod
kubectl get pod
# NAME READY STATUS RESTARTS AGE
# php-apache 1/1 Running 0 48s