I installed a k8s cluster on two bare-metal servers (1 master and 1 worker) with kubespray, using the default settings (kube_proxy_mode: iptables and dns_mode: coredns), and I want to run a BIND DNS server inside it to manage several domain names.
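For reference, those defaults come straight from the kubespray inventory group_vars; a minimal sketch of the relevant file (the exact path and defaults may differ between kubespray releases):

# inventory/mycluster/group_vars/k8s-cluster/k8s-cluster.yml (path may vary)
kube_proxy_mode: iptables     # default proxy mode
dns_mode: coredns             # cluster DNS provided by CoreDNS
# nodelocaldns is, I believe, enabled by default as well; it is the node-cache
# process on 169.254.25.10 that shows up further down
enable_nodelocaldns: true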
I deployed a helloworld web application with Helm 3 to test the setup. Everything works like a charm (HTTP, HTTPS with Let's Encrypt certificates via cert-manager).
kubectl version
Client Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.4", GitCommit:"8d8aa39598534325ad77120c120a22b3a990b5ea", GitTreeState:"clean", BuildDate:"2020-03-12T21:03:42Z", GoVersion:"go1.13.8", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.7", GitCommit:"be3d344ed06bff7a4fc60656200a93c74f31f9a4", GitTreeState:"clean", BuildDate:"2020-02-11T19:24:46Z", GoVersion:"go1.13.6", Compiler:"gc", Platform:"linux/amd64"}
kubectl get nodes
NAME        STATUS   ROLES    AGE   VERSION
k8smaster   Ready    master   22d   v1.16.7
k8sslave    Ready    <none>   21d   v1.16.7
I deployed an image of my BIND DNS server (named) in the default namespace with a Helm 3 chart; the service exposes port 53 of the bind application container.
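The chart is my own, so the manifest details don't matter much, but the rendered Service is roughly the following sketch (the labels match the selector shown below; both TCP and UDP on port 53):

apiVersion: v1
kind: Service
metadata:
  name: bind
  namespace: default
spec:
  type: ClusterIP
  selector:
    app: bind
    release: bind
  ports:
    - name: dns-tcp
      port: 53
      protocol: TCP
      targetPort: 53
    - name: dns-udp
      port: 53
      protocol: UDP
      targetPort: 53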
I tested DNS resolution with a pod and with the bind service; it works fine. Here is a test of the bind k8s service from the master node:
kubectl -n default get svc bind -o wide
NAME   TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE    SELECTOR
bind   ClusterIP   10.233.31.255   <none>        53/TCP,53/UDP   4m5s   app=bind,release=bind
kubectl get endpoints bind
NAME   ENDPOINTS                                                         AGE
bind   10.233.75.239:53,10.233.93.245:53,10.233.75.239:53 + 1 more...    4m12s
export SERVICE_IP=`kubectl get services bind -o go-template='{{.spec.clusterIP}}{{"\n"}}'`
nslookup www.example.com ${SERVICE_IP}
Server: 10.233.31.255
Address: 10.233.31.255#53
Name: www.example.com
Address: 176.31.XXX.XXX
So the BIND DNS application is deployed and works fine through the bind k8s service.
Next, following the documentation at https://kubernetes.github.io/ingress-nginx/user-guide/exposing-tcp-udp-services/, I set up ingress-nginx (ConfigMaps and Service) to handle TCP/UDP requests on port 53 and forward them to the BIND DNS application.
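Concretely, this is roughly what I applied following that guide (a sketch; the ConfigMap names are the ones the ingress-nginx controller is started with via --tcp-services-configmap / --udp-services-configmap, and the externalIP is the public IP of the master):

apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  "53": "default/bind:53"
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: udp-services
  namespace: ingress-nginx
data:
  "53": "default/bind:53"
---
# fragment of the ingress-nginx Service: the extra ports plus the externalIP
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
spec:
  externalIPs:
    - 91.121.XXX.XXX        # public IP of the k8s master
  ports:
    - name: bind-tcp
      port: 53
      protocol: TCP
      targetPort: 53
    - name: bind-udp
      port: 53
      protocol: UDP
      targetPort: 53
    # ...http/https ports and the selector omitted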
When I test name resolution from an external machine, it does not work:
nslookup www.example.com <IP of the k8s master>
;; connection timed out; no servers could be reached
I dug into the k8s configuration, logs, etc. and found a warning message in the kube-proxy logs:
ps auxw | grep kube-proxy
root 19984 0.0 0.2 141160 41848 ? Ssl Mar26 19:39 /usr/local/bin/kube-proxy --config=/var/lib/kube-proxy/config.conf --hostname-override=k8smaster
journalctl --since "2 days ago" | grep kube-proxy
<NOTHING RETURNED>
KUBEPROXY_FIRST_POD=`kubectl get pods -n kube-system -l k8s-app=kube-proxy -o go-template='{{range .items}}{{.metadata.name}}{{"\n"}}{{end}}' | head -n 1`
kubectl logs -n kube-system ${KUBEPROXY_FIRST_POD}
I0326 22:26:03.491900 1 node.go:135] Successfully retrieved node IP: 91.121.XXX.XXX
I0326 22:26:03.491957 1 server_others.go:150] Using iptables Proxier.
I0326 22:26:03.492453 1 server.go:529] Version: v1.16.7
I0326 22:26:03.493179 1 conntrack.go:52] Setting nf_conntrack_max to 262144
I0326 22:26:03.493647 1 config.go:131] Starting endpoints config controller
I0326 22:26:03.493663 1 config.go:313] Starting service config controller
I0326 22:26:03.493669 1 shared_informer.go:197] Waiting for caches to sync for endpoints config
I0326 22:26:03.493679 1 shared_informer.go:197] Waiting for caches to sync for service config
I0326 22:26:03.593986 1 shared_informer.go:204] Caches are synced for endpoints config
I0326 22:26:03.593992 1 shared_informer.go:204] Caches are synced for service config
E0411 17:02:48.113935 1 proxier.go:927] can't open "externalIP for ingress-nginx/ingress-nginx:bind-udp" (91.121.XXX.XXX:53/udp), skipping this externalIP: listen udp 91.121.XXX.XXX:53: bind: address already in use
E0411 17:02:48.119378 1 proxier.go:927] can't open "externalIP for ingress-nginx/ingress-nginx:bind-tcp" (91.121.XXX.XXX:53/tcp), skipping this externalIP: listen tcp 91.121.XXX.XXX:53: bind: address already in use
Then I looked for what was already using port 53...
netstat -lpnt | grep 53
tcp 0 0 0.0.0.0:5355 0.0.0.0:* LISTEN 1682/systemd-resolv
tcp 0 0 87.98.XXX.XXX:53 0.0.0.0:* LISTEN 19984/kube-proxy
tcp 0 0 169.254.25.10:53 0.0.0.0:* LISTEN 14448/node-cache
tcp6 0 0 :::9253 :::* LISTEN 14448/node-cache
tcp6 0 0 :::9353 :::* LISTEN 14448/node-cache
Looking at process 14448 (node-cache):
cat /proc/14448/cmdline
/node-cache-localip169.254.25.10-conf/etc/coredns/Corefile-upstreamsvccoredns
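As an aside, /proc/<pid>/cmdline is NUL-separated, so the arguments run together; replacing the NULs makes it readable, and should print roughly the second line below:

tr '\0' ' ' < /proc/14448/cmdline; echo
/node-cache -localip 169.254.25.10 -conf /etc/coredns/Corefile -upstreamsvc coredns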
So coredns (here the nodelocaldns node-cache) is already handling port 53, which is normal since it is the k8s internal DNS service.
In the coredns documentation (https://github.com/coredns/coredns/blob/master/README.md) there is a -dns.port option to use a different port. But when I look at kubespray (it has 3 jinja templates, https://github.com/kubernetes-sigs/kubespray/tree/release-2.12/roles/kubernetes-apps/ansible/templates, used to create the coredns ConfigMap, Service, etc., similar to https://kubernetes.io/docs/tasks/administer-cluster/dns-custom-nameservers/#coredns), everything is hard-coded with port 53.
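For example, the generated coredns ConfigMap contains a Corefile of roughly this shape (trimmed sketch, essentially the upstream example linked above), with the listening port fixed at 53:

apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        health
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        forward . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
    }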
So my question is: is there a k8s cluster configuration/workaround that would let me run my own DNS server and expose it on port 53?
Any ideas?
Best regards,
Chris
Posted on 2020-04-12 16:38:47
I think your helm chart is not working as expected here. You need a load balancer provider such as MetalLB installed in your bare-metal k8s cluster in order to receive external connections on port 53. You don't need nginx-ingress in front of bind: just change the bind Service type from ClusterIP to LoadBalancer and make sure you get an external IP on that Service. Your Helm chart documentation may help with switching to LoadBalancer.
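For example, assuming MetalLB is installed with a layer-2 address pool, the bind Service could look roughly like this sketch (the allow-shared-ip annotation value is arbitrary; as far as I know, on this Kubernetes version a single LoadBalancer Service cannot mix TCP and UDP, so TCP 53 would need a second Service with the same annotation and selector):

apiVersion: v1
kind: Service
metadata:
  name: bind-udp
  namespace: default
  annotations:
    # lets a separate bind-tcp Service share the same external IP (MetalLB feature)
    metallb.universe.tf/allow-shared-ip: "bind"
spec:
  type: LoadBalancer          # was ClusterIP
  selector:
    app: bind
    release: bind
  ports:
    - name: dns-udp
      port: 53
      protocol: UDP
      targetPort: 53

If the chart exposes the service type as a value, something like helm upgrade bind <your-chart> --set service.type=LoadBalancer (the value name depends on the chart) is enough to make the switch.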
https://stackoverflow.com/questions/61172266