
Kubernetes not using DNS - pod cannot communicate with the outside world

Stack Overflow user
Asked on 2018-04-19 01:16:56
2 answers · 4.5K views · 0 following · Votes: 2

I may have a problem with Kubernetes DNS, because my services cannot communicate with the outside world (bitbucket.org). I found this page: https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/

and verified it on my cluster (without minikube):

zordon@megazord:~$ kubectl exec busybox cat /etc/resolv.conf
nameserver 10.96.0.10
search default.svc.cluster.local svc.cluster.local cluster.local
options ndots:5

And:

zordon@megazord:~$ kubectl exec -ti busybox -- nslookup kubernetes.default
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

nslookup: can't resolve 'kubernetes.default'
command terminated with exit code 1
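
For reference, two further cross-checks (a sketch, reusing the busybox pod and the 10.96.0.10 service IP from the output above) can separate search-path expansion problems from upstream forwarding problems: the first queries the fully qualified name with a trailing dot, bypassing the search list; the second asks the cluster DNS for an external name directly:

$ kubectl exec -ti busybox -- nslookup kubernetes.default.svc.cluster.local.
$ kubectl exec -ti busybox -- nslookup bitbucket.org 10.96.0.10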

Do you know how to fix the problem of connecting from inside a pod to the outside world?

This is probably related to Flannel, because connectivity works only for images run directly with docker. It is worth mentioning that I set up my cluster following this guide: https://blog.alexellis.io/kubernetes-in-10-minutes/

I also modified https://github.com/coreos/flannel/blob/master/Documentation/kube-flannel.yml to pass the --iface argument with my wifi card, which has internet access, but kube-flannel-ds was unable to start after changing from:

args:
        - --ip-masq
        - --kube-subnet-mgr

to:

args:
        - --ip-masq
        - --kube-subnet-mgr
        - --iface=wlan0ec5
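
If the DaemonSet fails to come up after this edit, the flannel pod logs usually say why; a sketch, assuming the stock manifest's app=flannel pod label:

$ kubectl -n kube-system logs -l app=flannel --tail=50
$ kubectl -n kube-system describe daemonset kube-flannel-ds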


zordon@megazord:~$ kubectl get pods  -n kube-system
NAME                               READY     STATUS    RESTARTS   AGE
etcd-megazord                      1/1       Running   1          21m
kube-apiserver-megazord            1/1       Running   1          21m
kube-controller-manager-megazord   1/1       Running   1          22m
kube-dns-86f4d74b45-8gh6q          3/3       Running   5          22m
kube-flannel-ds-2wqqr              1/1       Running   1          17m
kube-flannel-ds-59txb              1/1       Running   1          15m
kube-proxy-bdxb4                   1/1       Running   1          15m
kube-proxy-mg44x                   1/1       Running   1          22m
kube-scheduler-megazord            1/1       Running   1          22m


zordon@megazord:~$ kubectl get svc  -n kube-system
NAME       TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)         AGE
kube-dns   ClusterIP   10.96.0.10   <none>        53/UDP,53/TCP   23m

zordon@megazord:~$ kubectl describe service kube-dns -n kube-system
Name:              kube-dns
Namespace:         kube-system
Labels:            k8s-app=kube-dns
                   kubernetes.io/cluster-service=true
                   kubernetes.io/name=KubeDNS
Annotations:       <none>
Selector:          k8s-app=kube-dns
Type:              ClusterIP
IP:                10.96.0.10
Port:              dns  53/UDP
TargetPort:        53/UDP
Endpoints:         10.244.0.27:53
Port:              dns-tcp  53/TCP
TargetPort:        53/TCP
Endpoints:         10.244.0.27:53
Session Affinity:  None
Events:            <none>

zordon@megazord:~$ kubectl logs --namespace=kube-system $(kubectl get pods --namespace=kube-system -l k8s-app=kube-dns -o name) -c kubedns
I0419 17:40:11.473047       1 dns.go:48] version: 1.14.8
I0419 17:40:11.473975       1 server.go:71] Using configuration read from directory: /kube-dns-config with period 10s
I0419 17:40:11.474024       1 server.go:119] FLAG: --alsologtostderr="false"
I0419 17:40:11.474032       1 server.go:119] FLAG: --config-dir="/kube-dns-config"
I0419 17:40:11.474037       1 server.go:119] FLAG: --config-map=""
I0419 17:40:11.474041       1 server.go:119] FLAG: --config-map-namespace="kube-system"
I0419 17:40:11.474044       1 server.go:119] FLAG: --config-period="10s"
I0419 17:40:11.474049       1 server.go:119] FLAG: --dns-bind-address="0.0.0.0"
I0419 17:40:11.474053       1 server.go:119] FLAG: --dns-port="10053"
I0419 17:40:11.474058       1 server.go:119] FLAG: --domain="cluster.local."
I0419 17:40:11.474063       1 server.go:119] FLAG: --federations=""
I0419 17:40:11.474067       1 server.go:119] FLAG: --healthz-port="8081"
I0419 17:40:11.474071       1 server.go:119] FLAG: --initial-sync-timeout="1m0s"
I0419 17:40:11.474074       1 server.go:119] FLAG: --kube-master-url=""
I0419 17:40:11.474079       1 server.go:119] FLAG: --kubecfg-file=""
I0419 17:40:11.474082       1 server.go:119] FLAG: --log-backtrace-at=":0"
I0419 17:40:11.474087       1 server.go:119] FLAG: --log-dir=""
I0419 17:40:11.474091       1 server.go:119] FLAG: --log-flush-frequency="5s"
I0419 17:40:11.474094       1 server.go:119] FLAG: --logtostderr="true"
I0419 17:40:11.474098       1 server.go:119] FLAG: --nameservers=""
I0419 17:40:11.474101       1 server.go:119] FLAG: --stderrthreshold="2"
I0419 17:40:11.474104       1 server.go:119] FLAG: --v="2"
I0419 17:40:11.474107       1 server.go:119] FLAG: --version="false"
I0419 17:40:11.474113       1 server.go:119] FLAG: --vmodule=""
I0419 17:40:11.474190       1 server.go:201] Starting SkyDNS server (0.0.0.0:10053)
I0419 17:40:11.488125       1 server.go:220] Skydns metrics enabled (/metrics:10055)
I0419 17:40:11.488170       1 dns.go:146] Starting endpointsController
I0419 17:40:11.488180       1 dns.go:149] Starting serviceController
I0419 17:40:11.488348       1 logs.go:41] skydns: ready for queries on cluster.local. for tcp://0.0.0.0:10053 [rcache 0]
I0419 17:40:11.488407       1 logs.go:41] skydns: ready for queries on cluster.local. for udp://0.0.0.0:10053 [rcache 0]
I0419 17:40:11.988549       1 dns.go:170] Initialized services and endpoints from apiserver
I0419 17:40:11.988609       1 server.go:135] Setting up Healthz Handler (/readiness)
I0419 17:40:11.988641       1 server.go:140] Setting up cache handler (/cache)
I0419 17:40:11.988649       1 server.go:126] Status HTTP port 8081


zordon@megazord:~$ kubectl logs --namespace=kube-system $(kubectl get pods --namespace=kube-system -l k8s-app=kube-dns -o name) -c dnsmasq
I0419 17:44:35.785171       1 main.go:76] opts: {{/usr/sbin/dnsmasq [-k --cache-size=1000 --no-negcache --log-facility=- --server=/cluster.local/127.0.0.1#10053 --server=/in-addr.arpa/127.0.0.1#10053 --server=/ip6.arpa/127.0.0.1#10053] true} /etc/k8s/dns/dnsmasq-nanny 10000000000}
I0419 17:44:35.785336       1 nanny.go:94] Starting dnsmasq [-k --cache-size=1000 --no-negcache --log-facility=- --server=/cluster.local/127.0.0.1#10053 --server=/in-addr.arpa/127.0.0.1#10053 --server=/ip6.arpa/127.0.0.1#10053]
I0419 17:44:35.876534       1 nanny.go:119]
W0419 17:44:35.876572       1 nanny.go:120] Got EOF from stdout
I0419 17:44:35.876578       1 nanny.go:116] dnsmasq[26]: started, version 2.78 cachesize 1000
I0419 17:44:35.876615       1 nanny.go:116] dnsmasq[26]: compile time options: IPv6 GNU-getopt no-DBus no-i18n no-IDN DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth no-DNSSEC loop-detect inotify
I0419 17:44:35.876632       1 nanny.go:116] dnsmasq[26]: using nameserver 127.0.0.1#10053 for domain ip6.arpa
I0419 17:44:35.876642       1 nanny.go:116] dnsmasq[26]: using nameserver 127.0.0.1#10053 for domain in-addr.arpa
I0419 17:44:35.876653       1 nanny.go:116] dnsmasq[26]: using nameserver 127.0.0.1#10053 for domain cluster.local
I0419 17:44:35.876666       1 nanny.go:116] dnsmasq[26]: reading /etc/resolv.conf
I0419 17:44:35.876677       1 nanny.go:116] dnsmasq[26]: using nameserver 127.0.0.1#10053 for domain ip6.arpa
I0419 17:44:35.876691       1 nanny.go:116] dnsmasq[26]: using nameserver 127.0.0.1#10053 for domain in-addr.arpa
I0419 17:44:35.876701       1 nanny.go:116] dnsmasq[26]: using nameserver 127.0.0.1#10053 for domain cluster.local
I0419 17:44:35.876709       1 nanny.go:116] dnsmasq[26]: using nameserver 127.0.0.53#53
I0419 17:44:35.876717       1 nanny.go:116] dnsmasq[26]: read /etc/hosts - 7 addresses

zordon@megazord:~$ kubectl logs --namespace=kube-system $(kubectl get pods --namespace=kube-system -l k8s-app=kube-dns -o name) -c sidecar
I0419 17:45:06.726670       1 main.go:51] Version v1.14.8
I0419 17:45:06.726781       1 server.go:45] Starting server (options {DnsMasqPort:53 DnsMasqAddr:127.0.0.1 DnsMasqPollIntervalMs:5000 Probes:[{Label:kubedns Server:127.0.0.1:10053 Name:kubernetes.default.svc.cluster.local. Interval:5s Type:33} {Label:dnsmasq Server:127.0.0.1:53 Name:kubernetes.default.svc.cluster.local. Interval:5s Type:33}] PrometheusAddr:0.0.0.0 PrometheusPort:10054 PrometheusPath:/metrics PrometheusNamespace:kubedns})
I0419 17:45:06.726842       1 dnsprobe.go:75] Starting dnsProbe {Label:kubedns Server:127.0.0.1:10053 Name:kubernetes.default.svc.cluster.local. Interval:5s Type:33}
I0419 17:45:06.726927       1 dnsprobe.go:75] Starting dnsProbe {Label:dnsmasq Server:127.0.0.1:53 Name:kubernetes.default.svc.cluster.local. Interval:5s Type:33}

Master node:

zordon@megazord:~$ ip -d route
unicast default via 192.168.1.1 dev wlp32s0 proto static scope global metric 600
unicast 10.244.0.0/24 dev cni0 proto kernel scope link src 10.244.0.1
unicast 10.244.1.0/24 via 10.244.1.0 dev flannel.1 proto boot scope global onlink
unicast 169.254.0.0/16 dev wlp32s0 proto boot scope link metric 1000
unicast 172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1
unicast 192.168.1.0/24 dev wlp32s0 proto kernel scope link src 192.168.1.110 metric 600
zordon@megazord:~$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp30s0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc pfifo_fast state DOWN group default qlen 1000
    link/ether 4c:cc:6a:f8:7e:4b brd ff:ff:ff:ff:ff:ff
3: wlp32s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether ec:08:6b:0c:9c:27 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.110/24 brd 192.168.1.255 scope global wlp32s0
       valid_lft forever preferred_lft forever
    inet6 fe80::f632:2f08:9caa:2c82/64 scope link
       valid_lft forever preferred_lft forever
4: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:32:19:f7:5a brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:32ff:fe19:f75a/64 scope link
       valid_lft forever preferred_lft forever
6: vethf9de74d@if5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default
    link/ether ba:af:58:a0:4a:74 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::b8af:58ff:fea0:4a74/64 scope link
       valid_lft forever preferred_lft forever
7: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN group default
    link/ether a6:d1:45:73:c3:31 brd ff:ff:ff:ff:ff:ff
    inet 10.244.0.0/32 scope global flannel.1
       valid_lft forever preferred_lft forever
    inet6 fe80::a4d1:45ff:fe73:c331/64 scope link
       valid_lft forever preferred_lft forever
8: cni0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default qlen 1000
    link/ether 0a:58:0a:f4:00:01 brd ff:ff:ff:ff:ff:ff
    inet 10.244.0.1/24 scope global cni0
       valid_lft forever preferred_lft forever
    inet6 fe80::24f5:4cff:fee9:a32d/64 scope link
       valid_lft forever preferred_lft forever
9: veth58367f89@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cni0 state UP group default
    link/ether 7a:29:e9:c8:bf:3f brd ff:ff:ff:ff:ff:ff link-netnsid 1
    inet6 fe80::7829:e9ff:fec8:bf3f/64 scope link
       valid_lft forever preferred_lft forever

Worker node:

zordon@k8s-minion-one:~$ ip -d route
unicast default via 192.168.1.1 dev enp0s25 proto dhcp scope global src 192.168.1.111 metric 100
unicast 10.244.0.0/24 via 10.244.0.0 dev flannel.1 proto boot scope global onlink
unicast 10.244.1.0/24 dev cni0 proto kernel scope link src 10.244.1.1 linkdown
unicast 172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown
unicast 192.168.1.0/24 dev enp0s25 proto kernel scope link src 192.168.1.111
unicast 192.168.1.1 dev enp0s25 proto dhcp scope link src 192.168.1.111 metric 100
zordon@k8s-minion-one:~$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp0s25: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 18:03:73:45:75:71 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.111/24 brd 192.168.1.255 scope global enp0s25
       valid_lft forever preferred_lft forever
    inet6 fe80::1a03:73ff:fe45:7571/64 scope link
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:38:3e:a3:94 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:38ff:fe3e:a394/64 scope link
       valid_lft forever preferred_lft forever
4: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN group default
    link/ether 7a:d0:2a:b4:73:43 brd ff:ff:ff:ff:ff:ff
    inet 10.244.1.0/32 scope global flannel.1
       valid_lft forever preferred_lft forever
    inet6 fe80::78d0:2aff:feb4:7343/64 scope link
       valid_lft forever preferred_lft forever
5: cni0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 0a:58:0a:f4:01:01 brd ff:ff:ff:ff:ff:ff
    inet 10.244.1.1/24 scope global cni0
       valid_lft forever preferred_lft forever
    inet6 fe80::7440:12ff:fefa:f55/64 scope link
       valid_lft forever preferred_lft forever
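
Given those addresses, one quick overlay sanity check (a sketch) is to probe the master's cni0 bridge from the worker and then query the DNS service IP directly:

zordon@k8s-minion-one:~$ ping -c 3 10.244.0.1
zordon@k8s-minion-one:~$ nslookup kubernetes.default.svc.cluster.local 10.96.0.10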

2 Answers

Stack Overflow user

Accepted answer

Posted on 2018-04-24 02:41:36

I found the problem. It appeared when I deployed busybox and tried to ping an external server by name; with an IP address there was no problem, so the issue was DNS name resolution. While pinging, I watched the DNS logs and spotted it. The ConfigMap that helped me configure the DNS upstreams:

apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    addonmanager.kubernetes.io/mode: EnsureExists
data:
  upstreamNameservers: |-
    ["8.8.8.8", "8.8.4.4"]
Votes: 2

Stack Overflow user

Posted on 2018-04-20 00:24:08

This is most likely related to the flannel subsystem, but before debugging flannel it is useful to understand the state of the kube-dns pod.

Try checking the kube-dns pod and service state with the following commands.

Make sure every pod is READY (1/1, or 3/3 for the kube-dns pod):

$ kubectl get pods  -n kube-system
NAME                                   READY     STATUS    RESTARTS   AGE
etcd-kube-flannel                      1/1       Running   0          41m
kube-apiserver-kube-flannel            1/1       Running   0          41m
kube-controller-manager-kube-flannel   1/1       Running   0          41m
kube-dns-86f4d74b45-569vs              3/3       Running   0          42m
kube-flannel-ds-j482l                  1/1       Running   0          38m
kube-proxy-4jjjz                       1/1       Running   0          42m
kube-scheduler-kube-flannel            1/1       Running   0          41m

Check the service state:

$ kubectl get svc  -n kube-system
NAME       TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)         AGE
kube-dns   ClusterIP   10.96.0.10   <none>        53/UDP,53/TCP   42m

Inspect the service details:

$ kubectl describe service kube-dns -n kube-system
Name:              kube-dns
Namespace:         kube-system
Labels:            k8s-app=kube-dns
                   kubernetes.io/cluster-service=true
                   kubernetes.io/name=KubeDNS
Annotations:       <none>
Selector:          k8s-app=kube-dns
Type:              ClusterIP
IP:                10.96.0.10
Port:              dns  53/UDP
TargetPort:        53/UDP
Endpoints:         10.244.0.2:53
Port:              dns-tcp  53/TCP
TargetPort:        53/TCP
Endpoints:         10.244.0.2:53
Session Affinity:  None
Events:            <none>
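
A related check from the same debugging guide: confirm the service endpoints line up with the IP of a running kube-dns pod:

$ kubectl get endpoints kube-dns -n kube-system
$ kubectl get pods -n kube-system -l k8s-app=kube-dns -o wide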

Check the kube-dns logs as described in Debugging DNS Resolution:

$ kubectl logs --namespace=kube-system $(kubectl get pods --namespace=kube-system -l k8s-app=kube-dns -o name) -c kubedns
$ kubectl logs --namespace=kube-system $(kubectl get pods --namespace=kube-system -l k8s-app=kube-dns -o name) -c dnsmasq
$ kubectl logs --namespace=kube-system $(kubectl get pods --namespace=kube-system -l k8s-app=kube-dns -o name) -c sidecar

The last command shows the health-check state of the kube-dns pod.

This should be enough to understand what is broken.

Votes: 1
Original content from Stack Overflow: https://stackoverflow.com/questions/49905482