Using nginx as an example, create a simple Pod:
# cat pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: nginx
# kubectl create -f pod.yaml
pod/nginx-pod created
# kubectl get pods
NAME        READY   STATUS    RESTARTS   AGE
nginx-pod   1/1     Running   0          79s
# kubectl describe pod nginx-pod
Name:               nginx-pod
Namespace:          kubernetes-plugin
Priority:           0
PriorityClassName:  <none>
Node:               node2/192.168.152.145
Start Time:         Tue, 13 Aug 2019 10:19:28 +0800
Labels:             app=nginx
Annotations:        <none>
Status:             Running
IP:                 10.244.2.65
Containers:
  nginx:
    Container ID:   docker://1519ea97f735345a59eb36068ecc8063c1065a42ee308dd391164d1aa1915f0a
    Image:          nginx
    Image ID:       docker-pullable://nginx@sha256:eb3320e2f9ca409b7c0aa71aea3cf7ce7d018f03a372564dbdb023646958770b
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Tue, 13 Aug 2019 10:20:00 +0800
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-rfz89 (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  default-token-rfz89:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-rfz89
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  119s  default-scheduler  Successfully assigned kubernetes-plugin/nginx-pod to node2
  Normal  Pulling    117s  kubelet, node2     pulling image "nginx"
  Normal  Pulled     87s   kubelet, node2     Successfully pulled image "nginx"
  Normal  Created    87s   kubelet, node2     Created container
  Normal  Started    87s   kubelet, node2     Started container
There are two ways to update a running Pod. One is to delete the Pod first, then modify the YAML file and recreate it:
kubectl delete -f pod.yaml
kubectl create -f pod.yaml
The other is to use the kubectl apply command:
# kubectl apply -f pod.yaml
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
pod/nginx-pod configured
Here we modified the YAML file, changing the nginx image version to 1.13.
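The modified file itself is not shown at this point in the walkthrough; as a minimal sketch, only the image line changes:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: nginx:1.13   # pinned version; previously just "nginx" (i.e. latest)
```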
To delete the Pod defined in the file:
kubectl delete -f pod.yaml
Other deletion scenarios:
# kubectl get pod
NAME                        READY   STATUS              RESTARTS   AGE
my-nginx-756fb87568-5z944   0/1     ContainerCreating   0          24s
my-nginx-756fb87568-7pfvq   0/1     ContainerCreating   0          24s
These Pods were started as two replicas of a Deployment, so deleting the Pods directly does not get rid of them:
kubectl delete pod my-nginx-756fb87568-5z944 my-nginx-756fb87568-7pfvq
Deleting them this way triggers the replica-reconciliation mechanism (the ReplicaSet immediately recreates the Pods), so you need to delete the Deployment instead:
kubectl delete deployment my-nginx
Next, add resource requests and limits to the container:
# cat pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: nginx:1.13
    resources:
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "128Mi"
        cpu: "500m"
Kubernetes enforces resource limits through Docker (cgroups). First check which node the Pod is running on, then inspect the container's details.
Check which node:
# kubectl get pods -o wide
NAME        READY   STATUS    RESTARTS   AGE     IP            NODE    NOMINATED NODE   READINESS GATES
nginx-pod   1/1     Running   0          4m36s   10.244.2.66   node2   <none>           <none>
On node2, inspect the container (here "1a" is the leading characters of the container ID):
docker inspect 1a
Find the corresponding fields:
"CpuShares": 256,
"Memory": 134217728,
Forcing a Pod to be scheduled onto a specific node.
The first way is to specify the node name with nodeName:
# cat pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  labels:
    app: nginx
spec:
  nodeName: node1  # specify the node name
  containers:
  - name: nginx
    image: nginx:1.13
    resources:
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "128Mi"
        cpu: "500m"
The second way is via a node label selector. First give each node a label:
kubectl label nodes node1 env=dev
kubectl label nodes node2 env=test
Modify the YAML file:
# cat pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  labels:
    app: nginx
spec:
  nodeSelector:
    env: test
  containers:
  - name: nginx
    image: nginx:1.13
    resources:
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "128Mi"
        cpu: "500m"
After creating the Pod, you can see it was scheduled onto node2 (the node labeled env=test).
A Pod's restartPolicy supports three values:
Always: restart the container whenever it stops (the default policy)
OnFailure: restart the container only when it exits abnormally (non-zero exit code)
Never: never restart the container after it terminates
# cat pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  labels:
    app: nginx
spec:
  nodeSelector:
    env: test
  containers:
  - name: nginx
    image: nginx:1.13
    resources:
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "128Mi"
        cpu: "500m"
  restartPolicy: OnFailure  # restart the container on abnormal exit
Kubernetes supports two kinds of health checks:
livenessProbe: if the check fails, the container is killed and then handled according to the Pod's restartPolicy
readinessProbe: if the check fails, Kubernetes removes the Pod from the Service's endpoints
Probes support three check methods:
httpGet: send an HTTP request; a status code in the 200-399 range counts as success
exec: run a command inside the container; an exit code of 0 means success
tcpSocket: attempt a TCP three-way handshake; typically used to check whether a port is listening
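For comparison, here are hedged sketches of the other two probe types; the command and port are illustrative, not taken from the example that follows:

```yaml
# exec probe: succeeds when the command exits with status 0
livenessProbe:
  exec:
    command: ["cat", "/usr/share/nginx/html/index.html"]
# tcpSocket probe: succeeds when a TCP connection to the port can be established
readinessProbe:
  tcpSocket:
    port: 80
```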
# cat pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  labels:
    app: nginx
spec:
  nodeSelector:
    env: test
  containers:
  - name: nginx
    image: nginx:1.13
    resources:
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "128Mi"
        cpu: "500m"
    livenessProbe:  # issue an HTTP GET
      httpGet:
        path: /index.html
        port: 80
  restartPolicy: OnFailure
Probe results can be viewed with kubectl describe and kubectl logs.
To trigger a failure, we can exec into the container, delete index.html, and then observe what happens.
Enter the container:
kubectl exec -it nginx-pod bash
Inside the container delete index.html, exit, and check the nginx logs:
# kubectl logs nginx-pod
10.244.2.1 - - [13/Aug/2019:03:11:38 +0000] "GET /index.html HTTP/1.1" 200 612 "-" "kube-probe/1.13" "-"
2019/08/13 03:11:48 [error] 6#6: *21 open() "/usr/share/nginx/html/index.html" failed (2: No such file or directory), client: 10.244.2.1, server: localhost, request: "GET /index.html HTTP/1.1", host: "10.244.2.69:80"
10.244.2.1 - - [13/Aug/2019:03:11:48 +0000] "GET /index.html HTTP/1.1" 404 170 "-" "kube-probe/1.13" "-"
2019/08/13 03:11:58 [error] 6#6: *22 open() "/usr/share/nginx/html/index.html" failed (2: No such file or directory), client: 10.244.2.1, server: localhost, request: "GET /index.html HTTP/1.1", host: "10.244.2.69:80"
10.244.2.1 - - [13/Aug/2019:03:11:58 +0000] "GET /index.html HTTP/1.1" 404 170 "-" "kube-probe/1.13" "-"
2019/08/13 03:12:08 [error] 6#6: *23 open() "/usr/share/nginx/html/index.html" failed (2: No such file or directory), client: 10.244.2.1, server: localhost, request: "GET /index.html HTTP/1.1", host: "10.244.2.69:80"
10.244.2.1 - - [13/Aug/2019:03:12:08 +0000] "GET /index.html HTTP/1.1" 404 170 "-" "kube-probe/1.13" "-"
View the Pod's details:
kubectl describe pod nginx-pod
Find the Events section:
Events:
  Type     Reason     Age                 From               Message
  ----     ------     ----                ----               -------
  Normal   Scheduled  5m3s                default-scheduler  Successfully assigned kubernetes-plugin/nginx-pod to node2
  Normal   Pulled     75s (x2 over 5m3s)  kubelet, node2     Container image "nginx:1.13" already present on machine
  Normal   Created    75s (x2 over 5m3s)  kubelet, node2     Created container
  Warning  Unhealthy  75s (x3 over 95s)   kubelet, node2     Liveness probe failed: HTTP probe failed with statuscode: 404
  Normal   Killing    75s                 kubelet, node2     Killing container with id docker://nginx:Container failed liveness probe.. Container will be killed and recreated.
  Normal   Started    74s (x2 over 5m2s)  kubelet, node2     Started container
Check the nginx logs again:
# kubectl logs nginx-pod
10.244.2.1 - - [13/Aug/2019:03:12:18 +0000] "GET /index.html HTTP/1.1" 200 612 "-" "kube-probe/1.13" "-"
10.244.2.1 - - [13/Aug/2019:03:12:28 +0000] "GET /index.html HTTP/1.1" 200 612 "-" "kube-probe/1.13" "-"
10.244.2.1 - - [13/Aug/2019:03:12:38 +0000] "GET /index.html HTTP/1.1" 200 612 "-" "kube-probe/1.13" "-"
Back to normal: the liveness probe failures caused the container to be killed and recreated, and the fresh container serves index.html again.