I am using Calico as my Kubernetes CNI plugin, but when I ping a Service from a Kubernetes pod, it fails. First, I looked up the Service IP:
[root@localhost ~]# kubectl get svc -o wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
prometheus-1594471894-kube-state-metrics ClusterIP 10.20.39.193 <none> 8080/TCP 3h16m app.kubernetes.io/instance=prometheus-1594471894,app.kubernetes.io/name=kube-state-metrics
Then I pinged this IP from a pod (already logged into the pod):
root@k8sslave1:/# ping 10.20.39.193
PING 10.20.39.193 (10.20.39.193) 56(84) bytes of data.
There was no response. Then I used traceroute to check the path:
root@k8sslave1:/# traceroute 10.20.39.193
traceroute to 10.20.39.193 (10.20.39.193), 64 hops max
1 192.168.31.1 0.522ms 0.539ms 0.570ms
2 192.168.1.1 1.171ms 0.877ms 0.920ms
3 100.81.0.1 3.918ms 3.917ms 3.602ms
4 117.135.40.145 4.768ms 4.337ms 4.232ms
5 * * *
6 * * *
The packet is routed towards the internet instead of being forwarded to the Kubernetes Service. Why does this happen, and what can I do to fix it? The pod can reach the internet and can also ping other pod IPs successfully.
root@k8sslave1:/# ping 10.11.157.67
PING 10.11.157.67 (10.11.157.67) 56(84) bytes of data.
64 bytes from 10.11.157.67: icmp_seq=1 ttl=64 time=0.163 ms
64 bytes from 10.11.157.67: icmp_seq=2 ttl=64 time=0.048 ms
64 bytes from 10.11.157.67: icmp_seq=3 ttl=64 time=0.036 ms
64 bytes from 10.11.157.67: icmp_seq=4 ttl=64 time=0.102 ms
This is the IP configuration I used when installing the Kubernetes cluster:
kubeadm init \
--apiserver-advertise-address 0.0.0.0 \
--apiserver-bind-port 6443 \
--cert-dir /etc/kubernetes/pki \
--control-plane-endpoint 192.168.31.29 \
--image-repository registry.cn-hangzhou.aliyuncs.com/google_containers \
--kubernetes-version 1.18.2 \
--pod-network-cidr 10.11.0.0/16 \
--service-cidr 10.20.0.0/16 \
--service-dns-domain cluster.local \
--upload-certs \
--v=6
This is the DNS resolv.conf:
cat /etc/resolv.conf
nameserver 10.20.0.10
search default.svc.cluster.local svc.cluster.local cluster.local
options ndots:5
This is the kernel routing table of the pod:
[root@localhost ~]# kubectl exec -it shell-demo /bin/bash
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl kubectl exec [POD] -- [COMMAND] instead.
root@k8sslave1:/# route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 192.168.31.1 0.0.0.0 UG 100 0 0 enp1s0
10.11.102.128 192.168.31.29 255.255.255.192 UG 0 0 0 tunl0
10.11.125.128 192.168.31.31 255.255.255.192 UG 0 0 0 tunl0
10.11.157.64 0.0.0.0 255.255.255.192 U 0 0 0 *
10.11.157.66 0.0.0.0 255.255.255.255 UH 0 0 0 cali4ac004513e1
10.11.157.67 0.0.0.0 255.255.255.255 UH 0 0 0 cali801b80f5d85
10.11.157.68 0.0.0.0 255.255.255.255 UH 0 0 0 caliaa7c2766183
10.11.157.69 0.0.0.0 255.255.255.255 UH 0 0 0 cali83957ce33d2
10.11.157.71 0.0.0.0 255.255.255.255 UH 0 0 0 calia012ca8e3b0
10.11.157.72 0.0.0.0 255.255.255.255 UH 0 0 0 cali3e6b175ded9
10.11.157.73 0.0.0.0 255.255.255.255 UH 0 0 0 calif042b3edac7
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 docker0
192.168.31.0 0.0.0.0 255.255.255.0 U 100 0 0 enp1s0
192.168.122.0 0.0.0.0 255.255.255.0 U 0 0 0 virbr0
Posted on 2020-07-11 18:47:19
This is a very common problem; in my case it required a full migration of the CIDR IPs.
Most likely, the issue is an overlap between the Pods CIDR (the IP pool used to assign IPs to services and pods) and the CIDR of your network.
If that is the case, the routing table of each node (VM) will reflect it; check it with:
sudo route -n
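As a minimal sketch (assuming a kubeadm cluster like yours), you can also read back the pod and service CIDRs that kubeadm recorded and compare them against the node's own routes; none of the node's local prefixes should overlap with those CIDRs:

# print the podSubnet / serviceSubnet kubeadm stored in its ConfigMap
kubectl -n kube-system get cm kubeadm-config -o yaml | grep -E 'podSubnet|serviceSubnet'
# run on each node: the local prefixes listed here must not overlap the CIDRs above
ip route show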
Since you did not provide enough logs, I will walk you through fixing the issue here. If you have the same problem I am guessing at, you will need to change the Pods CIDR range, which is explained starting from Step 3.
Step 1: Install calicoctl as a Kubernetes pod
kubectl apply -f https://docs.projectcalico.org/manifests/calicoctl.yaml
alias calicoctl="kubectl exec -i -n kube-system calicoctl -- /calicoctl"
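As an optional sanity check (assuming the stock calicoctl.yaml manifest, which deploys the pod into kube-system), verify that the pod is running and the alias works before continuing:

kubectl get pod calicoctl -n kube-system
calicoctl version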
Step 2: Check the status of the Calico instances.
calicoctl node status
# Sample of output ###################
Calico process is running.
IPv4 BGP status
+--------------+-------------------+-------+----------+-------------+
| PEER ADDRESS | PEER TYPE | STATE | SINCE | INFO |
+--------------+-------------------+-------+----------+-------------+
| 172.17.8.102 | node-to-node mesh | up | 23:30:04 | Established |
+--------------+-------------------+-------+----------+-------------+
If you have a problem at this step, stop here and fix it.
Otherwise, you can continue.
Step 3: List the existing pools
calicoctl get ippool -o wide
Step 4: Create a new pool
Make sure it does not overlap with the CIDR of your network.
calicoctl create -f - <<EOF
apiVersion: projectcalico.org/v3
kind: IPPool
metadata:
  name: pool-c
spec:
  cidr: 10.244.0.0/16
  ipipMode: Always
  natOutgoing: true
EOF
The new pool is named pool-c.
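To double-check, list the pools again and confirm that pool-c shows up alongside the old one before deleting anything:

calicoctl get ippool -o wide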
Step 5: Delete the current pool:
# get all pools
calicoctl get ippool -o yaml > pools.yaml
# edit the file pools.yaml and remove the current pool.
# file editing ... save & quit
# then apply changes
calicoctl apply -f -<<EOF
# paste here the edited content of pools.yaml (with the old pool removed)
EOF
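Alternatively, if the pool you want to remove is the default one created at install time (usually named default-ipv4-ippool; confirm the actual name from Step 3), a more direct variant is to delete it by name:

calicoctl delete ippool default-ipv4-ippool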
Step 6: Check the IPs assigned to each workload (pod):
calicoctl get wep --all-namespaces
Keep restarting the old pods and recreating the old services until you are sure that all resources have IPs assigned from the new pool.
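As an illustrative way to do that (adjust the namespaces to your own), restart the Deployments and then look for pods still holding an address from the old 10.11.0.0/16 range:

kubectl rollout restart deployment -n default
kubectl get pods -A -o wide | grep '10\.11\.'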
https://stackoverflow.com/questions/62851739