Kubernetes basic cluster troubleshooting

Troubleshooting guide

During troubleshooting, kubectl is the most important tool and is usually the starting point for locating a problem. This section lists some commonly used commands that will come up again and again in the troubleshooting scenarios that follow.

Check Pod status and the node it runs on

[root@vm_0_10_centos sysctl.d]# kubectl get pods -o wide
NAME                     READY     STATUS    RESTARTS   AGE       IP           NODE
nginx-2217866662-nhkqz   1/1       Running   0          15s       172.16.0.5   10.0.0.10

[root@vm_0_10_centos sysctl.d]# kubectl -n kube-system get pods -o wide
NAME                                READY     STATUS    RESTARTS   AGE       IP           NODE
kube-dns-3162619857-mmspm           3/3       Running   0          40m       172.16.0.2   10.0.0.10
l7-lb-controller-2881622555-0v0p4   1/1       Running   0          40m       172.16.0.3   10.0.0.10
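The wide listing above is the quickest way to spot a Pod in trouble. As a small sketch (the helper function name is ours), the table output can be filtered down to Pods that are not Running; on newer clusters `--field-selector=status.phase!=Running` does the same server-side:

```shell
# Filter `kubectl get pods` table output down to Pods that are not
# Running (and drop the header row). A client-side fallback for
# clusters too old for --field-selector=status.phase!=Running.
not_running() { grep -vE '^NAME| Running '; }

# Usage on a live cluster:
#   kubectl get pods --all-namespaces -o wide | not_running
```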

Check Pod events

kubectl describe pod <pod-name>

[root@vm_0_10_centos sysctl.d]# kubectl describe pod nginx-2217866662-nhkqz
Name:           nginx-2217866662-nhkqz
Namespace:      default
Node:           10.0.0.10/10.0.0.10
Start Time:     Fri, 18 May 2018 15:11:57 +0800
Labels:         pod-template-hash=2217866662
                qcloud-app=nginx
Annotations:    kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicaSet","namespace":"default","name":"nginx-2217866662","uid":"c08076ec-5a6a-11e8-8f38-5254005edd62","...
Status:         Running
IP:             172.16.0.5
Created By:     ReplicaSet/nginx-2217866662
Controlled By:  ReplicaSet/nginx-2217866662
Containers:
  nginx:
    Container ID:       docker://4fb98d7d4241f908695181b124096025d1bc6ba4f74065519c82b86ea8bd635d
    Image:              nginx:latest
    Image ID:           docker-pullable://nginx@sha256:0fb320e2a1b1620b4905facb3447e3d84ad36da0b2c8aa8fe3a5a81d1187b884
    Port:               <none>
    State:              Running
      Started:          Fri, 18 May 2018 15:12:10 +0800
    Ready:              True
    Restart Count:      0
    Limits:
      cpu:      500m
      memory:   1Gi
    Requests:
      cpu:              250m
      memory:           256Mi
    Environment:        <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-1vnmn (ro)
Conditions:
  Type          Status
  Initialized   True
  Ready         True
  PodScheduled  True
Volumes:
  default-token-1vnmn:
    Type:       Secret (a volume populated by a Secret)
    SecretName: default-token-1vnmn
    Optional:   false
QoS Class:      Burstable
Node-Selectors: <none>
Tolerations:    <none>
Events:
  FirstSeen     LastSeen        Count   From                    SubObjectPath           Type            Reason                  Message
  ---------     --------        -----   ----                    -------------           --------        ------                  -------
  1m            1m              1       default-scheduler                               Normal          Scheduled               Successfully assigned nginx-2217866662-nhkqz to 10.0.0.10
  1m            1m              1       kubelet, 10.0.0.10                              Normal          SuccessfulMountVolume   MountVolume.SetUp succeeded for volume "default-token-1vnmn"
  1m            1m              1       kubelet, 10.0.0.10      spec.containers{nginx}  Normal          Pulling                 pulling image "nginx:latest"
  1m            1m              1       kubelet, 10.0.0.10      spec.containers{nginx}  Normal          Pulled                  Successfully pulled image "nginx:latest"
  1m            1m              1       kubelet, 10.0.0.10      spec.containers{nginx}  Normal          Created                 Created container
  1m            1m              1       kubelet, 10.0.0.10      spec.containers{nginx}  Normal          Started                 Started container
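`kubectl describe` merges the Pod's events into its output, but events can also be queried on their own, which still works for a short while after a crashing Pod has been deleted. A sketch, reusing the Pod name from the example above:

```shell
# Events outlive their Pod (default TTL: 1 hour), so this still
# works after a crashing Pod has been deleted and recreated.
POD=nginx-2217866662-nhkqz   # Pod name from the listing above

# On a live cluster:
#   kubectl get events --sort-by=.metadata.creationTimestamp
#   kubectl get events --field-selector involvedObject.name="$POD"
```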

Check Node status

[root@VM_0_10_centos ~]# kubectl get nodes
NAME        STATUS    AGE       VERSION
10.0.0.10   Ready     1h        v1.7.8-qcloud
[root@VM_0_10_centos ~]# kubectl describe node 10.0.0.10
Name:                   10.0.0.10
Role:
Labels:                 beta.kubernetes.io/arch=amd64
                        beta.kubernetes.io/instance-type=QCLOUD
                        beta.kubernetes.io/os=linux
                        failure-domain.beta.kubernetes.io/region=gz
                        failure-domain.beta.kubernetes.io/zone=100002
                        kubernetes.io/hostname=10.0.0.10
Annotations:            node.alpha.kubernetes.io/ttl=0
                        volumes.kubernetes.io/controller-managed-attach-detach=true
Taints:                 <none>
CreationTimestamp:      Fri, 18 May 2018 14:35:14 +0800
Conditions:
  Type                  Status  LastHeartbeatTime                       LastTransitionTime                      Reason                          Message
  ----                  ------  -----------------                       ------------------                      ------                          -------
  NetworkUnavailable    False   Fri, 18 May 2018 14:35:23 +0800         Fri, 18 May 2018 14:35:23 +0800         RouteCreated                    RouteController created a route
  OutOfDisk             False   Fri, 18 May 2018 15:38:31 +0800         Fri, 18 May 2018 14:35:16 +0800         KubeletHasSufficientDisk        kubelet has sufficient disk space available
  MemoryPressure        False   Fri, 18 May 2018 15:38:31 +0800         Fri, 18 May 2018 14:35:16 +0800         KubeletHasSufficientMemory      kubelet has sufficient memory available
  DiskPressure          False   Fri, 18 May 2018 15:38:31 +0800         Fri, 18 May 2018 14:35:16 +0800         KubeletHasNoDiskPressure        kubelet has no disk pressure
  Ready                 True    Fri, 18 May 2018 15:38:31 +0800         Fri, 18 May 2018 14:35:36 +0800         KubeletReady                    kubelet is posting ready status
Addresses:
  InternalIP:   10.0.0.10
  ExternalIP:   119.29.5.204
  Hostname:     10.0.0.10
Capacity:
 cpu:           2
 memory:        1883712Ki
 pods:          110
Allocatable:
 cpu:           1930m
 memory:        1453632Ki
 pods:          110
System Info:
 Machine ID:                    f9d400c5e1e8c3a8209e990d887d4ac1
 System UUID:                   A84ED043-ECAE-46CE-BB9D-7BCF16C5C59E
 Boot ID:                       f8d75ea2-2f4e-4fa5-91a5-13aceefa94dd
 Kernel Version:                3.10.0-514.26.2.el7.x86_64
 OS Image:                      CentOS Linux 7 (Core)
 Operating System:              linux
 Architecture:                  amd64
 Container Runtime Version:     docker://1.12.6
 Kubelet Version:               v1.7.8-qcloud
 Kube-Proxy Version:            v1.7.8-qcloud
PodCIDR:                        172.16.0.0/24
ExternalID:                     ins-1fctj7ds
Non-terminated Pods:            (4 in total)
  Namespace                     Name                                            CPU Requests    CPU Limits      Memory Requests Memory Limits
  ---------                     ----                                            ------------    ----------      --------------- -------------
  default                       curl                                            0 (0%)          0 (0%)          0 (0%)          0 (0%)
  default                       nginx-2217866662-nhkqz                          250m (12%)      500m (25%)      256Mi (18%)     1Gi (72%)
  kube-system                   kube-dns-3162619857-mmspm                       260m (13%)      0 (0%)          110Mi (7%)      170Mi (11%)
  kube-system                   l7-lb-controller-2881622555-0v0p4               0 (0%)          0 (0%)          0 (0%)          0 (0%)
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  CPU Requests  CPU Limits      Memory Requests Memory Limits
  ------------  ----------      --------------- -------------
  510m (26%)    500m (25%)      366Mi (25%)     1194Mi (84%)
Events:         <none>
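When a node misbehaves, the Conditions table above is the first thing to read. With many nodes, reading one `describe` at a time gets slow; a sketch (the JSONPath expression and helper name are ours) that prints each node's Ready condition and flags the unhealthy ones:

```shell
# Print "<node>\t<Ready status>" for every node, then flag nodes
# whose Ready condition is anything but "True".
READY_JP='{range .items[*]}{.metadata.name}{"\t"}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'
not_ready() { awk -F'\t' '$2 != "True" { print $1 }'; }

# On a live cluster:
#   kubectl get nodes -o jsonpath="$READY_JP" | not_ready
```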

Check kube-dns status

(On user-purchased nodes, the only key components are kubelet, kube-proxy, and kube-dns.)

[root@VM_0_10_centos ~]# PODNAME=$(kubectl -n kube-system get pod -l k8s-app=kube-dns -o jsonpath='{.items[0].metadata.name}')
[root@VM_0_10_centos ~]# kubectl -n kube-system logs $PODNAME -c kubedns
I0518 06:35:55.598577       1 dns.go:48] version: 1.14.3-4-gee838f6
I0518 06:35:55.599872       1 server.go:70] Using configuration read from directory: /kube-dns-config with period 10s
I0518 06:35:55.599911       1 server.go:113] FLAG: --alsologtostderr="false"
I0518 06:35:55.599918       1 server.go:113] FLAG: --config-dir="/kube-dns-config"
I0518 06:35:55.599922       1 server.go:113] FLAG: --config-map=""
I0518 06:35:55.599925       1 server.go:113] FLAG: --config-map-namespace="kube-system"
I0518 06:35:55.599928       1 server.go:113] FLAG: --config-period="10s"
I0518 06:35:55.599931       1 server.go:113] FLAG: --dns-bind-address="0.0.0.0"
I0518 06:35:55.599934       1 server.go:113] FLAG: --dns-port="10053"
I0518 06:35:55.599938       1 server.go:113] FLAG: --domain="cluster.local."
I0518 06:35:55.599943       1 server.go:113] FLAG: --federations=""
I0518 06:35:55.599946       1 server.go:113] FLAG: --healthz-port="8081"
I0518 06:35:55.599949       1 server.go:113] FLAG: --initial-sync-timeout="1m0s"
I0518 06:35:55.599952       1 server.go:113] FLAG: --kube-master-url=""
I0518 06:35:55.599955       1 server.go:113] FLAG: --kubecfg-file=""
I0518 06:35:55.599958       1 server.go:113] FLAG: --log-backtrace-at=":0"
I0518 06:35:55.599963       1 server.go:113] FLAG: --log-dir=""
I0518 06:35:55.599966       1 server.go:113] FLAG: --log-flush-frequency="5s"
I0518 06:35:55.599969       1 server.go:113] FLAG: --logtostderr="true"
I0518 06:35:55.599971       1 server.go:113] FLAG: --nameservers=""
I0518 06:35:55.599973       1 server.go:113] FLAG: --stderrthreshold="2"
I0518 06:35:55.599976       1 server.go:113] FLAG: --v="2"
I0518 06:35:55.599979       1 server.go:113] FLAG: --version="false"
I0518 06:35:55.599983       1 server.go:113] FLAG: --vmodule=""
I0518 06:35:55.600088       1 server.go:176] Starting SkyDNS server (0.0.0.0:10053)
I0518 06:35:55.600261       1 server.go:198] Skydns metrics enabled (/metrics:10055)
I0518 06:35:55.600272       1 dns.go:147] Starting endpointsController
I0518 06:35:55.600275       1 dns.go:150] Starting serviceController
I0518 06:35:55.600371       1 logs.go:41] skydns: ready for queries on cluster.local. for tcp://0.0.0.0:10053 [rcache 0]
I0518 06:35:55.600381       1 logs.go:41] skydns: ready for queries on cluster.local. for udp://0.0.0.0:10053 [rcache 0]
I0518 06:35:56.100587       1 dns.go:171] Initialized services and endpoints from apiserver
I0518 06:35:56.100603       1 server.go:129] Setting up Healthz Handler (/readiness)
I0518 06:35:56.100610       1 server.go:134] Setting up cache handler (/cache)
I0518 06:35:56.100615       1 server.go:120] Status HTTP port 8081
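Once the kube-dns logs look healthy, it is worth verifying resolution end to end from inside a Pod. A sketch (`busybox:1.28` is an assumption, often chosen because later busybox tags ship a broken nslookup):

```shell
# One-off Pod that resolves a Service name through cluster DNS.
# busybox:1.28 is an assumption: later busybox tags ship a broken
# nslookup, so this older tag is the usual choice for DNS checks.
DNS_CHECK='kubectl run dns-test --rm -it --image=busybox:1.28 --restart=Never -- nslookup kubernetes.default'

# On a live cluster run the command above; a healthy cluster answers
# with the kubernetes Service ClusterIP (172.16.255.1 in this setup).
```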

Kubelet logs

[root@VM_0_10_centos ~]# journalctl -l -u kubelet
-- Logs begin at Fri 2018-05-18 14:34:03 CST, end at Fri 2018-05-18 15:45:01 CST. --
May 18 14:35:15 VM_0_10_centos systemd[1]: Starting kubelet...
May 18 14:35:15 VM_0_10_centos sh[10677]: iptables: Bad rule (does a matching rule exist in that chain?).
May 18 14:35:15 VM_0_10_centos sh[10677]: iptables: Bad rule (does a matching rule exist in that chain?).
May 18 14:35:15 VM_0_10_centos sh[10677]: iptables: Bad rule (does a matching rule exist in that chain?).
May 18 14:35:15 VM_0_10_centos sh[10677]: iptables: Bad rule (does a matching rule exist in that chain?).
May 18 14:35:15 VM_0_10_centos sh[10677]: iptables: Bad rule (does a matching rule exist in that chain?).
May 18 14:35:15 VM_0_10_centos sh[10677]: iptables: Bad rule (does a matching rule exist in that chain?).
May 18 14:35:15 VM_0_10_centos systemd[1]: Started kubelet.
May 18 14:35:15 VM_0_10_centos kubelet[10676]: Flag --network-plugin-dir has been deprecated, Use --cni-bin-dir instead. This flag will be removed in a future version.
May 18 14:35:15 VM_0_10_centos kubelet[10676]: Flag --register-schedulable has been deprecated, will be removed in a future version
May 18 14:35:15 VM_0_10_centos kubelet[10676]: I0518 14:35:15.642678   10676 feature_gate.go:144] feature gates: map[]
May 18 14:35:15 VM_0_10_centos kubelet[10676]: I0518 14:35:15.642996   10676 qcloud.go:89] config:%v{gz 100002 vpc-mnjsqwgx   }
May 18 14:35:15 VM_0_10_centos kubelet[10676]: I0518 14:35:15.643065   10676 server.go:439] Successfully initialized cloud provider: "qcloud" from the config file: "/etc/kubernetes/qcloud.conf"
May 18 14:35:15 VM_0_10_centos kubelet[10676]: I0518 14:35:15.675757   10676 server.go:740] cloud provider determined current node name to be 10.0.0.10
May 18 14:35:16 VM_0_10_centos kubelet[10676]: I0518 14:35:16.011547   10676 client.go:72] Connecting to docker on unix:///var/run/docker.sock
May 18 14:35:16 VM_0_10_centos kubelet[10676]: I0518 14:35:16.011589   10676 client.go:92] Start docker client with request timeout=2m0s
May 18 14:35:16 VM_0_10_centos kubelet[10676]: W0518 14:35:16.015658   10676 cni.go:189] Unable to update cni config: No networks found in /usr/bin
May 18 14:35:16 VM_0_10_centos kubelet[10676]: I0518 14:35:16.024427   10676 server.go:740] cloud provider determined current node name to be 10.0.0.10
May 18 14:35:16 VM_0_10_centos kubelet[10676]: I0518 14:35:16.024499   10676 manager.go:143] cAdvisor running in container: "/system.slice/kubelet.service"
May 18 14:35:16 VM_0_10_centos kubelet[10676]: W0518 14:35:16.027122   10676 manager.go:151] unable to connect to Rkt api service: rkt: cannot tcp Dial rkt api service: dial tcp 127.0.0.1:15441: getsockopt: connection refused
May 18 14:35:16 VM_0_10_centos kubelet[10676]: I0518 14:35:16.028669   10676 fs.go:117] Filesystem partitions: map[/dev/vda1:{mountpoint:/var/lib/docker/overlay2 major:253 minor:1 fsType:ext3 blockSize:0}]
May 18 14:35:16 VM_0_10_centos kubelet[10676]: I0518 14:35:16.029405   10676 manager.go:198] Machine: {NumCores:2 CpuFrequency:2394454 MemoryCapacity:1928921088 MachineID:f9d400c5e1e8c3a8209e990d887d4ac1 SystemUUID:A84ED043-ECAE-46CE-BB9D-7BCF16C5C59E BootID:f8d7
May 18 14:35:16 VM_0_10_centos kubelet[10676]: I0518 14:35:16.029932   10676 manager.go:204] Version: {KernelVersion:3.10.0-514.26.2.el7.x86_64 ContainerOsVersion:CentOS Linux 7 (Core) DockerVersion:1.12.6 DockerAPIVersion:1.24 CadvisorVersion: CadvisorRevision:}
May 18 14:35:16 VM_0_10_centos kubelet[10676]: I0518 14:35:16.030332   10676 server.go:550] --cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /
May 18 14:35:16 VM_0_10_centos kubelet[10676]: I0518 14:35:16.031749   10676 container_manager_linux.go:246] container manager verified user specified cgroup-root exists: /
May 18 14:35:16 VM_0_10_centos kubelet[10676]: I0518 14:35:16.031766   10676 container_manager_linux.go:251] Creating Container Manager object based on Node Config: {RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:docker CgroupsPerQOS:
May 18 14:35:16 VM_0_10_centos kubelet[10676]: I0518 14:35:16.031940   10676 server.go:740] cloud provider determined current node name to be 10.0.0.10
May 18 14:35:16 VM_0_10_centos kubelet[10676]: I0518 14:35:16.031958   10676 server.go:926] Using root directory: /var/lib/kubelet
May 18 14:35:16 VM_0_10_centos kubelet[10676]: I0518 14:35:16.032007   10676 kubelet.go:332] cloud provider determined current node name to be 10.0.0.10
May 18 14:35:16 VM_0_10_centos kubelet[10676]: I0518 14:35:16.032025   10676 kubelet.go:275] Watching apiserver
May 18 14:35:16 VM_0_10_centos kubelet[10676]: I0518 14:35:16.036656   10676 kubelet.go:511] Hairpin mode set to "promiscuous-bridge"
May 18 14:35:16 VM_0_10_centos kubelet[10676]: I0518 14:35:16.080957   10676 plugins.go:194] Loaded network plugin "kubenet"
May 18 14:35:16 VM_0_10_centos kubelet[10676]: W0518 14:35:16.082413   10676 cni.go:189] Unable to update cni config: No networks found in /etc/cni/net.d
May 18 14:35:16 VM_0_10_centos kubelet[10676]: I0518 14:35:16.100643   10676 plugins.go:194] Loaded network plugin "kubenet"
May 18 14:35:16 VM_0_10_centos kubelet[10676]: I0518 14:35:16.100668   10676 docker_service.go:208] Docker cri networking managed by kubenet
May 18 14:35:16 VM_0_10_centos kubelet[10676]: I0518 14:35:16.102031   10676 docker_service.go:225] Setting cgroupDriver to cgroupfs
May 18 14:35:16 VM_0_10_centos kubelet[10676]: I0518 14:35:16.102679   10676 docker_legacy.go:151] No legacy containers found, stop performing legacy cleanup.
May 18 14:35:16 VM_0_10_centos kubelet[10676]: I0518 14:35:16.102726   10676 kubelet.go:598] Starting the GRPC server for the docker CRI shim.
May 18 14:35:16 VM_0_10_centos kubelet[10676]: I0518 14:35:16.102742   10676 docker_server.go:51] Start dockershim grpc server
May 18 14:35:16 VM_0_10_centos kubelet[10676]: I0518 14:35:16.104074   10676 remote_runtime.go:42] Connecting to runtime service unix:///var/run/dockershim.sock
May 18 14:35:16 VM_0_10_centos kubelet[10676]: I0518 14:35:16.105393   10676 kuberuntime_manager.go:166] Container runtime docker initialized, version: 1.12.6, apiVersion: 1.24.0
May 18 14:35:16 VM_0_10_centos kubelet[10676]: I0518 14:35:16.105671   10676 plugins.go:370] Loaded volume plugin "kubernetes.io/aws-ebs"
May 18 14:35:16 VM_0_10_centos kubelet[10676]: I0518 14:35:16.105687   10676 plugins.go:370] Loaded volume plugin "kubernetes.io/empty-dir"
May 18 14:35:16 VM_0_10_centos kubelet[10676]: I0518 14:35:16.105695   10676 plugins.go:370] Loaded volume plugin "kubernetes.io/gce-pd"
May 18 14:35:16 VM_0_10_centos kubelet[10676]: I0518 14:35:16.105702   10676 plugins.go:370] Loaded volume plugin "kubernetes.io/git-repo"
May 18 14:35:16 VM_0_10_centos kubelet[10676]: I0518 14:35:16.105710   10676 plugins.go:370] Loaded volume plugin "kubernetes.io/host-path"
May 18 14:35:16 VM_0_10_centos kubelet[10676]: I0518 14:35:16.105718   10676 plugins.go:370] Loaded volume plugin "kubernetes.io/nfs"
May 18 14:35:16 VM_0_10_centos kubelet[10676]: I0518 14:35:16.105726   10676 plugins.go:370] Loaded volume plugin "kubernetes.io/secret"
May 18 14:35:16 VM_0_10_centos kubelet[10676]: I0518 14:35:16.105735   10676 plugins.go:370] Loaded volume plugin "kubernetes.io/iscsi"
May 18 14:35:16 VM_0_10_centos kubelet[10676]: I0518 14:35:16.105745   10676 plugins.go:370] Loaded volume plugin "kubernetes.io/glusterfs"
May 18 14:35:16 VM_0_10_centos kubelet[10676]: I0518 14:35:16.105752   10676 plugins.go:370] Loaded volume plugin "kubernetes.io/rbd"
May 18 14:35:16 VM_0_10_centos kubelet[10676]: I0518 14:35:16.105761   10676 plugins.go:370] Loaded volume plugin "kubernetes.io/cinder"
May 18 14:35:16 VM_0_10_centos kubelet[10676]: I0518 14:35:16.105767   10676 plugins.go:370] Loaded volume plugin "kubernetes.io/quobyte"
May 18 14:35:16 VM_0_10_centos kubelet[10676]: I0518 14:35:16.105774   10676 plugins.go:370] Loaded volume plugin "kubernetes.io/cephfs"
May 18 14:35:16 VM_0_10_centos kubelet[10676]: I0518 14:35:16.105785   10676 plugins.go:370] Loaded volume plugin "kubernetes.io/downward-api"
May 18 14:35:16 VM_0_10_centos kubelet[10676]: I0518 14:35:16.105792   10676 plugins.go:370] Loaded volume plugin "kubernetes.io/fc"
May 18 14:35:16 VM_0_10_centos kubelet[10676]: I0518 14:35:16.105800   10676 plugins.go:370] Loaded volume plugin "kubernetes.io/flocker"
May 18 14:35:16 VM_0_10_centos kubelet[10676]: I0518 14:35:16.105808   10676 plugins.go:370] Loaded volume plugin "kubernetes.io/azure-file"
May 18 14:35:16 VM_0_10_centos kubelet[10676]: I0518 14:35:16.105815   10676 plugins.go:370] Loaded volume plugin "kubernetes.io/configmap"
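kubelet writes glog-formatted lines whose first character encodes severity (I, W, E, F). A small sketch (helper name is ours) for skimming only the warnings and errors out of the journal:

```shell
# kubelet uses glog: lines start with a severity letter (I/W/E/F)
# plus an MMDD date. This filter keeps only warnings and worse.
kubelet_problems() { grep -E '^[WEF][0-9]{4} '; }

# On a node (systemd-managed kubelet); -o cat strips the journald prefix:
#   journalctl -u kubelet -o cat | kubelet_problems
```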

Kube-proxy logs


[root@VM_0_10_centos ~]# journalctl -l -u kube-proxy
-- Logs begin at Fri 2018-05-18 14:34:03 CST, end at Fri 2018-05-18 15:50:50 CST. --
May 18 14:35:15 VM_0_10_centos systemd[1]: [/usr/lib/systemd/system/kube-proxy.service:8] Empty path in command line, ignoring: -
May 18 14:35:15 VM_0_10_centos systemd[1]: [/usr/lib/systemd/system/kube-proxy.service:8] Empty path in command line, ignoring: -
May 18 14:35:15 VM_0_10_centos systemd[1]: Started kube-proxy.
May 18 14:35:15 VM_0_10_centos systemd[1]: Starting kube-proxy...
May 18 14:35:15 VM_0_10_centos kube-proxy[10715]: W0518 14:35:15.745190   10715 server.go:190] WARNING: all flags other than --config, --write-config-to, and --cleanup-iptables are deprecated. Please begin using a config file ASAP.
May 18 14:35:15 VM_0_10_centos kube-proxy[10715]: I0518 14:35:15.752245   10715 server.go:478] Using iptables Proxier.
May 18 14:35:15 VM_0_10_centos kube-proxy[10715]: W0518 14:35:15.765324   10715 server.go:787] Failed to retrieve node info: nodes "vm_0_10_centos" not found
May 18 14:35:15 VM_0_10_centos kube-proxy[10715]: W0518 14:35:15.765402   10715 proxier.go:483] invalid nodeIP, initializing kube-proxy with 127.0.0.1 as nodeIP
May 18 14:35:15 VM_0_10_centos kube-proxy[10715]: W0518 14:35:15.765410   10715 proxier.go:488] clusterCIDR not specified, unable to distinguish between internal and external traffic
May 18 14:35:15 VM_0_10_centos kube-proxy[10715]: I0518 14:35:15.765487   10715 server.go:513] Tearing down userspace rules.
May 18 14:35:15 VM_0_10_centos kube-proxy[10715]: I0518 14:35:15.777026   10715 server.go:621] setting OOM scores is unsupported in this build
May 18 14:35:15 VM_0_10_centos kube-proxy[10715]: I0518 14:35:15.779733   10715 server.go:630] Running in resource-only container "/kube-proxy"
May 18 14:35:15 VM_0_10_centos kube-proxy[10715]: I0518 14:35:15.779914   10715 conntrack.go:98] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072
May 18 14:35:15 VM_0_10_centos kube-proxy[10715]: I0518 14:35:15.779937   10715 conntrack.go:52] Setting nf_conntrack_max to 131072
May 18 14:35:15 VM_0_10_centos kube-proxy[10715]: I0518 14:35:15.780164   10715 conntrack.go:83] Setting conntrack hashsize to 32768
May 18 14:35:15 VM_0_10_centos kube-proxy[10715]: I0518 14:35:15.780304   10715 conntrack.go:98] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
May 18 14:35:15 VM_0_10_centos kube-proxy[10715]: I0518 14:35:15.780318   10715 conntrack.go:98] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
May 18 14:35:15 VM_0_10_centos kube-proxy[10715]: I0518 14:35:15.783627   10715 config.go:202] Starting service config controller
May 18 14:35:15 VM_0_10_centos kube-proxy[10715]: I0518 14:35:15.783653   10715 controller_utils.go:994] Waiting for caches to sync for service config controller
May 18 14:35:15 VM_0_10_centos kube-proxy[10715]: I0518 14:35:15.783789   10715 config.go:102] Starting endpoints config controller
May 18 14:35:15 VM_0_10_centos kube-proxy[10715]: I0518 14:35:15.783795   10715 controller_utils.go:994] Waiting for caches to sync for endpoints config controller
May 18 14:35:15 VM_0_10_centos kube-proxy[10715]: I0518 14:35:15.883800   10715 controller_utils.go:1001] Caches are synced for service config controller
May 18 14:35:15 VM_0_10_centos kube-proxy[10715]: I0518 14:35:15.883944   10715 proxier.go:997] Not syncing iptables until Services and Endpoints have been received from master
May 18 14:35:15 VM_0_10_centos kube-proxy[10715]: I0518 14:35:15.883986   10715 controller_utils.go:1001] Caches are synced for endpoints config controller
May 18 14:35:15 VM_0_10_centos kube-proxy[10715]: I0518 14:35:15.884066   10715 proxier.go:320] Adding new service port "default/kubernetes:https" at 172.16.255.1:443/TCP
May 18 14:35:15 VM_0_10_centos kube-proxy[10715]: I0518 14:35:15.884105   10715 proxier.go:320] Adding new service port "kube-system/kube-dns:dns" at 172.16.255.226:53/UDP
May 18 14:35:15 VM_0_10_centos kube-proxy[10715]: I0518 14:35:15.884116   10715 proxier.go:320] Adding new service port "kube-system/kube-dns:dns-tcp" at 172.16.255.226:53/TCP
May 18 14:35:15 VM_0_10_centos kube-proxy[10715]: I0518 14:35:15.884123   10715 proxier.go:320] Adding new service port "kube-system/hpa-metrics-service:" at 172.16.255.24:443/TCP
May 18 14:36:10 VM_0_10_centos kube-proxy[10715]: I0518 14:36:10.802251   10715 proxier.go:1013] Stale udp service kube-system/kube-dns:dns -> 172.16.255.226
May 18 14:36:10 VM_0_10_centos kube-proxy[10715]: I0518 14:36:10.815722   10715 conntrack.go:36] Deleting connection tracking state for service IP 172.16.255.226
May 18 15:11:57 VM_0_10_centos kube-proxy[10715]: I0518 15:11:57.733635   10715 proxier.go:320] Adding new service port "default/nginx:tcp-80-80-fneqk" at 172.16.255.64:80/TCP
May 18 15:11:57 VM_0_10_centos kube-proxy[10715]: I0518 15:11:57.745443   10715 proxier.go:1718] Opened local port "nodePort for default/nginx:tcp-80-80-fneqk" (:32148/tcp)
May 18 15:12:18 VM_0_10_centos kube-proxy[10715]: I0518 15:12:18.635145   10715 proxier.go:322] Updating existing service port "default/nginx:tcp-80-80-fneqk" at 172.16.2
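The "Opened local port" lines above are useful for confirming that a NodePort Service actually landed on the node. A sketch (helper name is ours) that extracts the opened TCP ports from kube-proxy's logs:

```shell
# kube-proxy logs every NodePort it opens ("Opened local port ...").
# This helper pulls the TCP port numbers out of those lines so they
# can be cross-checked against `ss -lntp` on the node.
node_ports() { grep 'Opened local port' | grep -oE '[0-9]+/tcp'; }

# On a node:
#   journalctl -u kube-proxy -o cat | node_ports
```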

Original article published to the Tencent Cloud+ Community with the author's authorization; republication without permission is prohibited.
