
ContainerCreating: Error from server (BadRequest): container "kubedns"

Stack Overflow user
Asked on 2017-04-25 05:47:24
Answers: 2 · Views: 9.9K · Followers: 0 · Votes: 3

I have set up this 3-node cluster (http://docs.projectcalico.org/v2.1/getting-started/kubernetes/installation/vagrant/).

After restarting the nodes, the KubeDNS service does not start. The logs do not show much information.

I get the message below:

$  kubectl logs --namespace=kube-system kube-dns-v19-sqx9q -c kubedns
Error from server (BadRequest): container "kubedns" in pod "kube-dns-v19-sqx9q" is waiting to start: ContainerCreating

The nodes are running:

$ kubectl get nodes
NAME            STATUS                     AGE       VERSION
172.18.18.101   Ready,SchedulingDisabled   2d        v1.6.0
172.18.18.102   Ready                      2d        v1.6.0
172.18.18.103   Ready                      2d        v1.6.0


$ kubectl get pods --namespace=kube-system
NAME                                        READY     STATUS              RESTARTS   AGE
calico-node-6rhb9                           2/2       Running             4          2d
calico-node-mbhk7                           2/2       Running             93         2d
calico-node-w9sjq                           2/2       Running             6          2d
calico-policy-controller-2425378810-rd9h7   1/1       Running             0          25m
kube-dns-v19-sqx9q                          0/3       ContainerCreating   0          25m
kubernetes-dashboard-2457468166-rs0tn       0/1       ContainerCreating   0          25m

How do I find out what is wrong with the DNS service?

Thanks, SR
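The details below are the typical output of the standard diagnostics for a pod stuck in ContainerCreating; a sketch of those commands, using the pod and namespace names from above:

$ # Full spec, container states, and recent events for the stuck pod
$ kubectl describe pod kube-dns-v19-sqx9q --namespace=kube-system

$ # All recent events in the namespace, in case the pod's own events age out
$ kubectl get events --namespace=kube-system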

More details:

Events:
  FirstSeen LastSeen    Count   From            SubObjectPath   Type        Reason      Message
  --------- --------    -----   ----            -------------   --------    ------      -------
  31m       31m     1   kubelet, 172.18.18.102          Warning     FailedSync  Error syncing pod, skipping: failed to "KillPodSandbox" for "d7c71007-2933-11e7-9bbd-08002774bad8" with KillPodSandboxError: "rpc error: code = 2 desc = NetworkPlugin cni failed to teardown pod \"_\" network: CNI failed to retrieve network namespace path: Error: No such container: 87bd5c4bc5b9d81468170cc840ba9203988bb259aa0c025372ee02303d9e8d4b"

  31m   31m 1   kubelet, 172.18.18.102      Warning FailedSync  Error syncing pod, skipping: failed to "KillPodSandbox" for "d7c71007-2933-11e7-9bbd-08002774bad8" with KillPodSandboxError: "rpc error: code = 2 desc = NetworkPlugin cni failed to teardown pod \"_\" network: CNI failed to retrieve network namespace path: Error: No such container: d091593b55eb9e16e09c5bc47f4701015839d83d23546c4c6adc070bc37ad60d"

  30m   30m 1   kubelet, 172.18.18.102      Warning FailedSync  Error syncing pod, skipping: rpc error: code = 2 desc = Error: No such container: 69a1fa33f26b851664b2ad10def1eb37b5e5391ca33dad2551a2f98c52e05d0d
  30m   30m 1   kubelet, 172.18.18.102      Warning FailedSync  Error syncing pod, skipping: failed to "KillPodSandbox" for "d7c71007-2933-11e7-9bbd-08002774bad8" with KillPodSandboxError: "rpc error: code = 2 desc = NetworkPlugin cni failed to teardown pod \"_\" network: CNI failed to retrieve network namespace path: Error: No such container: c3b7c06df3bea90e4d12c0b7f1a03077edf5836407206038223967488b279d3d"

  28m   28m 1   kubelet, 172.18.18.102      Warning FailedSync  Error syncing pod, skipping: failed to "KillPodSandbox" for "d7c71007-2933-11e7-9bbd-08002774bad8" with KillPodSandboxError: "rpc error: code = 2 desc = NetworkPlugin cni failed to teardown pod \"_\" network: CNI failed to retrieve network namespace path: Error: No such container: 467d54496eb5665c5c7c20b1adb0cc0f01987a83901e4b54c1dc9ccb4860f16d"

  28m   28m 1   kubelet, 172.18.18.102      Warning FailedSync  Error syncing pod, skipping: rpc error: code = 2 desc = Error: No such container: 1cd8022c9309205e61d7e593bc7ff3248af17d731e2a4d55e74b488cbc115162
  27m   27m 1   kubelet, 172.18.18.102      Warning FailedSync  Error syncing pod, skipping: rpc error: code = 2 desc = Error: No such container: 1ed4174aba86124055981b7888c9d048d784e98cef5f2763fd1352532a0ba85d
  26m   26m 1   kubelet, 172.18.18.102      Warning FailedSync  Error syncing pod, skipping: failed to "KillPodSandbox" for "d7c71007-2933-11e7-9bbd-08002774bad8" with KillPodSandboxError: "rpc error: code = 2 desc = NetworkPlugin cni failed to teardown pod \"_\" network: CNI failed to retrieve network namespace path: Error: No such container: 444693b4ce06eb25f3dbd00aebef922b72b291598fec11083cb233a0f9d5e92d"

  25m   25m 1   kubelet, 172.18.18.102      Warning FailedSync  Error syncing pod, skipping: rpc error: code = 2 desc = Error: No such container: 736df24a9a6640300d62d542e5098e03a5a9fde4f361926e2672880b43384516
  8m    8m  1   kubelet, 172.18.18.102      Warning FailedSync  Error syncing pod, skipping: rpc error: code = 2 desc = Error: No such container: 8424dbdf92b16602c7d5a4f61d21cd602c5da449c6ec3449dafbff80ff5e72c4
  2h    1m  49  kubelet, 172.18.18.102      Warning FailedSync  (events with common reason combined)
  2h    2s  361 kubelet, 172.18.18.102      Warning FailedSync  Error syncing pod, skipping: failed to "CreatePodSandbox" for "kube-dns-v19-sqx9q_kube-system(d7c71007-2933-11e7-9bbd-08002774bad8)" with CreatePodSandboxError: "CreatePodSandbox for pod \"kube-dns-v19-sqx9q_kube-system(d7c71007-2933-11e7-9bbd-08002774bad8)\" failed: rpc error: code = 2 desc = NetworkPlugin cni failed to set up pod \"kube-dns-v19-sqx9q_kube-system\" network: the server has asked for the client to provide credentials (get pods kube-dns-v19-sqx9q)"

  2h    1s  406 kubelet, 172.18.18.102      Normal  SandboxChanged  Pod sandbox changed, it will be killed and re-created.

Describe output of the pod:

Name:       kube-dns-v19-sqx9q
Namespace:  kube-system
Node:       172.18.18.102/172.18.18.102
Start Time: Mon, 24 Apr 2017 17:34:22 -0400
Labels:     k8s-app=kube-dns
        kubernetes.io/cluster-service=true
        version=v19
Annotations:    kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"kube-system","name":"kube-dns-v19","uid":"dac3d892-278c-11e7-b2b5-0800...
        scheduler.alpha.kubernetes.io/critical-pod=
        scheduler.alpha.kubernetes.io/tolerations=[{"key":"CriticalAddonsOnly", "operator":"Exists"}]
Status:     Pending
IP:     
Controllers:    ReplicationController/kube-dns-v19
Containers:
  kubedns:
    Container ID:   
    Image:      gcr.io/google_containers/kubedns-amd64:1.7
    Image ID:       
    Ports:      10053/UDP, 10053/TCP
    Args:
      --domain=cluster.local
      --dns-port=10053
    State:      Waiting
      Reason:       ContainerCreating
    Ready:      False
    Restart Count:  0
    Limits:
      cpu:  100m
      memory:   170Mi
    Requests:
      cpu:      100m
      memory:       70Mi
    Liveness:       http-get http://:8080/healthz delay=60s timeout=5s period=10s #success=1 #failure=5
    Readiness:      http-get http://:8081/readiness delay=30s timeout=5s period=10s #success=1 #failure=3
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-r5xws (ro)
  dnsmasq:
    Container ID:   
    Image:      gcr.io/google_containers/kube-dnsmasq-amd64:1.3
    Image ID:       
    Ports:      53/UDP, 53/TCP
    Args:
      --cache-size=1000
      --no-resolv
      --server=127.0.0.1#10053
    State:      Waiting
      Reason:       ContainerCreating
    Ready:      False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-r5xws (ro)
  healthz:
    Container ID:   
    Image:      gcr.io/google_containers/exechealthz-amd64:1.1
    Image ID:       
    Port:       8080/TCP
    Args:
      -cmd=nslookup kubernetes.default.svc.cluster.local 127.0.0.1 >/dev/null && nslookup kubernetes.default.svc.cluster.local 127.0.0.1:10053 >/dev/null
      -port=8080
      -quiet
    State:      Waiting
      Reason:       ContainerCreating
    Ready:      False
    Restart Count:  0
    Limits:
      cpu:  10m
      memory:   50Mi
    Requests:
      cpu:      10m
      memory:       50Mi
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-r5xws (ro)
Conditions:
  Type      Status
  Initialized   True 
  Ready     False 
  PodScheduled  True 
Volumes:
  default-token-r5xws:
    Type:   Secret (a volume populated by a Secret)
    SecretName: default-token-r5xws
    Optional:   false
QoS Class:  Burstable
Node-Selectors: <none>
Tolerations:    <none>

2 Answers

Stack Overflow user

Answered on 2017-05-03 15:02:15

The mount of the service account volume /var/run/secrets/kubernetes.io/serviceaccount from the secret default-token-r5xws failed. Check the logs for why the creation of this secret failed.
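A minimal sketch of how one could verify that secret and the service account behind it; the names are taken from the question, and the commands are standard kubectl rather than part of the original answer:

$ # Does the token secret exist, and does it actually contain a token?
$ kubectl get secret default-token-r5xws --namespace=kube-system -o yaml

$ # Is the default service account present, and does it reference that secret?
$ kubectl get serviceaccount default --namespace=kube-system -o yaml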

Votes: 1

Stack Overflow user

Answered on 2019-05-22 06:03:35

I resolved this by logging in to the Docker Desktop application running on my machine.

(I am running Kubernetes on my PC via minikube.)
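A minimal sketch of that fix, assuming the underlying problem was stale registry credentials on the Docker side; the original answer does not give exact steps:

$ # Re-authenticate Docker against the registry
$ docker login

$ # Restart the local cluster so the pods are recreated
$ minikube stop
$ minikube start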

Votes: 0
Original content from Stack Overflow:

https://stackoverflow.com/questions/43598259
