I first built a Kubernetes cluster on CentOS 8, following the guide here: https://www.tecmint.com/install-a-kubernetes-cluster-on-centos-8/
I then built an Ubuntu 18.04 VM and installed Rancher on it. I can reach the Rancher web UI without any problems, and everything on the Rancher side appears to work, except that I cannot add the Kubernetes cluster to it.
When I use the "Add Cluster" feature, I select the "Other Cluster" option, give it a name, and click Create. I then copy the insecure "cluster registration command" to the master node, which appears to accept the command without complaint.
While troubleshooting, I issued the following command: kubectl -n cattle-system logs -l app=cattle-cluster-agent
The output I got was:
INFO: Environment: CATTLE_ADDRESS=10.42.0.1 CATTLE_CA_CHECKSUM=94ad10e756d390cdf8b25465f938c04344a396b16b4ff6c0922b9cd6b9fc454c CATTLE_CLUSTER=true CATTLE_CLUSTER_REGISTRY= CATTLE_FEATURES= CATTLE_INTERNAL_ADDRESS= CATTLE_IS_RKE=false CATTLE_K8S_MANAGED=true CATTLE_NODE_NAME=cattle-cluster-agent-7b9df685cf-9kr4p CATTLE_SERVER=https://192.168.188.189:8443
INFO: Using resolv.conf: nameserver 10.96.0.10 search cattle-system.svc.cluster.local svc.cluster.local cluster.local options ndots:5
ERROR: https://192.168.188.189:8443/ping is not accessible (Failed to connect to 192.168.188.189 port 8443: No route to host)
INFO: Environment: CATTLE_ADDRESS=10.40.0.0 CATTLE_CA_CHECKSUM=94ad10e756d390cdf8b25465f938c04344a396b16b4ff6c0922b9cd6b9fc454c CATTLE_CLUSTER=true CATTLE_CLUSTER_REGISTRY= CATTLE_FEATURES= CATTLE_INTERNAL_ADDRESS= CATTLE_IS_RKE=false CATTLE_K8S_MANAGED=true CATTLE_NODE_NAME=cattle-cluster-agent-7bc7687557-tkvzt CATTLE_SERVER=https://192.168.188.189:8443
INFO: Using resolv.conf: nameserver 10.96.0.10 search cattle-system.svc.cluster.local svc.cluster.local cluster.local options ndots:5
ERROR: https://192.168.188.189:8443/ping is not accessible (Failed to connect to 192.168.188.189 port 8443: No route to host)
[root@k8s-master ~]# ping 192.168.188.189
PING 192.168.188.189 (192.168.188.189) 56(84) bytes of data.
64 bytes from 192.168.188.189: icmp_seq=1 ttl=64 time=0.432 ms
64 bytes from 192.168.188.189: icmp_seq=2 ttl=64 time=0.400 ms
^C
--- 192.168.188.189 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 0.400/0.416/0.432/0.016 ms
As you can see, I get a "No route to host" error message, yet I can ping the Rancher VM by its IP address.
It seems to be using the in-cluster resolv.conf and wants to use 10.96.0.10 to resolve 192.168.188.189 (my Rancher VM), but it appears to fail.
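Since the error is raised from inside the cattle-cluster-agent pods, one way to confirm whether the Rancher endpoint is reachable from the pod network at all (as opposed to from the host) is to hit the same /ping URL from a throwaway pod. This is only a sketch; the curlimages/curl image and the pod name are my own choices, not part of the original setup:

# Run a temporary pod and call the same /ping endpoint the agent uses.
# -k skips TLS verification, matching the "insecure" registration command.
kubectl run rancher-ping-test --rm -it --restart=Never \
  --image=curlimages/curl --command -- \
  curl -kv https://192.168.188.189:8443/ping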
I think I have some kind of DNS problem that keeps me from using hostnames. However, I have already edited the /etc/hosts file on the master and worker nodes to include an entry for each machine. I can ping each machine by its hostname, but I cannot reach the pods via hostname:nodeport; when I try that, I also get a "No route to host" error. See here:
[root@k8s-master ~]# ping k8s-worker1
PING k8s-worker1 (192.168.188.191) 56(84) bytes of data.
64 bytes from k8s-worker1 (192.168.188.191): icmp_seq=1 ttl=64 time=0.478 ms
64 bytes from k8s-worker1 (192.168.188.191): icmp_seq=2 ttl=64 time=0.449 ms
^C
--- k8s-worker1 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 0.449/0.463/0.478/0.025 ms
[root@k8s-master ~]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
hello-world NodePort 10.103.5.49 <none> 8080:30370/TCP 45m
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 26h
nginx NodePort 10.97.172.245 <none> 80:30205/TCP 3h43m
[root@k8s-master ~]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
hello-world-7884c6997d-2dc9z 1/1 Running 0 28m 10.40.0.4 k8s-worker3 <none> <none>
hello-world-7884c6997d-562lh 1/1 Running 0 28m 10.35.0.8 k8s-worker2 <none> <none>
hello-world-7884c6997d-78dmm 1/1 Running 0 28m 10.36.0.3 k8s-worker1 <none> <none>
hello-world-7884c6997d-7vt4f 1/1 Running 0 28m 10.40.0.6 k8s-worker3 <none> <none>
hello-world-7884c6997d-bpq5g 1/1 Running 0 49m 10.36.0.2 k8s-worker1 <none> <none>
hello-world-7884c6997d-c529d 1/1 Running 0 28m 10.35.0.6 k8s-worker2 <none> <none>
hello-world-7884c6997d-ddk7k 1/1 Running 0 28m 10.36.0.5 k8s-worker1 <none> <none>
hello-world-7884c6997d-fq8hx 1/1 Running 0 28m 10.35.0.7 k8s-worker2 <none> <none>
hello-world-7884c6997d-g5lxs 1/1 Running 0 28m 10.40.0.3 k8s-worker3 <none> <none>
hello-world-7884c6997d-kjb7f 1/1 Running 0 49m 10.35.0.3 k8s-worker2 <none> <none>
hello-world-7884c6997d-nfdpc 1/1 Running 0 28m 10.40.0.5 k8s-worker3 <none> <none>
hello-world-7884c6997d-nnd6q 1/1 Running 0 28m 10.36.0.7 k8s-worker1 <none> <none>
hello-world-7884c6997d-p6gxh 1/1 Running 0 49m 10.40.0.1 k8s-worker3 <none> <none>
hello-world-7884c6997d-p7v4b 1/1 Running 0 28m 10.35.0.4 k8s-worker2 <none> <none>
hello-world-7884c6997d-pwpxr 1/1 Running 0 28m 10.36.0.4 k8s-worker1 <none> <none>
hello-world-7884c6997d-qlg9h 1/1 Running 0 28m 10.40.0.2 k8s-worker3 <none> <none>
hello-world-7884c6997d-s89c5 1/1 Running 0 28m 10.35.0.5 k8s-worker2 <none> <none>
hello-world-7884c6997d-vd8ch 1/1 Running 0 28m 10.40.0.7 k8s-worker3 <none> <none>
hello-world-7884c6997d-wvnh7 1/1 Running 0 28m 10.36.0.6 k8s-worker1 <none> <none>
hello-world-7884c6997d-z57kx 1/1 Running 0 49m 10.36.0.1 k8s-worker1 <none> <none>
nginx-6799fc88d8-gm5ls 1/1 Running 0 4h11m 10.35.0.1 k8s-worker2 <none> <none>
nginx-6799fc88d8-k2jtw 1/1 Running 0 4h11m 10.44.0.1 k8s-worker1 <none> <none>
nginx-6799fc88d8-mc5mz 1/1 Running 0 4h12m 10.36.0.0 k8s-worker1 <none> <none>
nginx-6799fc88d8-qn6mh 1/1 Running 0 4h11m 10.35.0.2 k8s-worker2 <none> <none>
[root@k8s-master ~]# curl k8s-worker1:30205
curl: (7) Failed to connect to k8s-worker1 port 30205: No route to host
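Worth noting: on CentOS, firewalld's default REJECT rule typically answers with icmp-host-prohibited, which curl reports as "No route to host" even though the host responds to ping. A quick, temporary way to check whether the node firewalls are the culprit (purely for debugging, not a fix) would be:

# On the master and each worker, temporarily:
systemctl stop firewalld
# ...retry the NodePort and the Rancher registration...
curl k8s-worker1:30205
# Re-enable the firewall afterwards:
systemctl start firewalld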
I suspect this is the root cause of why I cannot add the cluster to Rancher.
EDIT: I want to add some more detail to this question. Each of my nodes (master and workers) has the following ports opened in the firewall:
firewall-cmd --list-ports
6443/tcp 2379-2380/tcp 10250/tcp 10251/tcp 10252/tcp 10255/tcp 6783/tcp 6783/udp 6784/udp
The Kubernetes cluster uses Weave Net as its CNI.
In their network configuration, every node (master & worker) is configured to use my primary DNS server (which is also an Active Directory domain controller). I have created A records for each node on that DNS server. The nodes are not joined to the domain. However, I have also edited each node's /etc/hosts file to contain the following entries:
# more /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.188.190 k8s-master
192.168.188.191 k8s-worker1
192.168.188.192 k8s-worker2
192.168.188.193 k8s-worker3
I discovered that "curl k8s-worker1.mydomain.com:30370" works with roughly a 33% success rate. I would have expected the /etc/hosts file to take precedence over my local DNS server, though.
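For what it's worth, lookup order on these hosts is governed by /etc/nsswitch.conf (normally "hosts: files dns", so /etc/hosts does win), and the /etc/hosts entries above only contain the short names, so a lookup of the fully qualified name would presumably still go to the DNS server. A quick check could look like this:

# Confirm that "files" (/etc/hosts) is consulted before DNS:
grep '^hosts' /etc/nsswitch.conf
# See what each name resolves to via that order:
getent hosts k8s-worker1
getent hosts k8s-worker1.mydomain.com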
Finally, I noticed one more anomaly: the cluster does not load-balance across the three worker nodes. As shown above, I am running a deployment called "hello-world" based on the bashofmann/rancher-demo image with 20 replicas. I also created a NodePort service for hello-world that maps NodePort 30370 to port 8080 on each pod.
If I open my web browser and go to http://192.168.188.191:30370, the site loads, but it is only ever served by the pods on k8s-worker1. It is never served by pods on any of the other worker nodes. That explains the roughly 33% success rate: a request only succeeds when it is served by the same worker node I specified in the URL.
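A NodePort that only ever answers from pods local to the node named in the URL usually suggests that cross-node pod traffic is being dropped, rather than a problem with the service itself. One way to probe that from the master is to curl a few pod IPs from the list above directly on port 8080, comparing pods on different workers. A diagnostic sketch, using the pod IPs shown earlier:

# Pods on k8s-worker1, k8s-worker2 and k8s-worker3 respectively
# (IPs taken from "kubectl get pods -o wide" above):
curl http://10.36.0.3:8080
curl http://10.35.0.8:8080
curl http://10.40.0.4:8080
# If only some of these respond, inter-node pod traffic (Weave Net,
# TCP 6783 and UDP 6783-6784) is being blocked somewhere.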
Posted on 2021-07-28 01:19:27
The OP confirmed that the problem was caused by firewall rules. This was debugged by disabling the firewall, after which the desired operation (adding the cluster) succeeded.
For NodePort services to work correctly, the port range 30000-32767 should be reachable on all nodes of the cluster.
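On CentOS with firewalld, a less drastic fix than leaving the firewall disabled would presumably be to open the NodePort range on every node, for example:

# On each node (master and workers):
firewall-cmd --permanent --add-port=30000-32767/tcp
firewall-cmd --reload
firewall-cmd --list-ports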
https://stackoverflow.com/questions/68535678