I have set up a Kubernetes HA cluster (stacked etcd) with kubeadm. When I deliberately shut down one master node, the whole cluster goes down, and I get the following error:
[vagrant@k8s-master01 ~]$ kubectl get nodes
Error from server: etcdserver: request timed out
I am using Nginx as the LB to load-balance the kube-apiservers.
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
k8s-master01 Ready master 27d v1.19.2 192.168.30.5 <none> CentOS Linux 7 (Core) 3.10.0-1127.19.1.el7.x86_64 docker://19.3.11
k8s-master02 Ready master 27d v1.19.2 192.168.30.6 <none> CentOS Linux 7 (Core) 3.10.0-1127.19.1.el7.x86_64 docker://19.3.11
k8s-worker01 Ready <none> 27d v1.19.2 192.168.30.10 <none> CentOS Linux 7 (Core) 3.10.0-1127.19.1.el7.x86_64 docker://19.3.11
k8s-worker02 Ready <none> 27d v1.19.2 192.168.30.11 <none> CentOS Linux 7 (Core) 3.10.0-1127.19.1.el7.x86_64 docker://19.3.11
[vagrant@k8s-master01 ~]$ kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-f9fd979d6-wkknl 0/1 Running 9 27d
coredns-f9fd979d6-wp854 1/1 Running 8 27d
etcd-k8s-master01 1/1 Running 46 27d
etcd-k8s-master02 1/1 Running 10 27d
kube-apiserver-k8s-master01 1/1 Running 60 27d
kube-apiserver-k8s-master02 1/1 Running 13 27d
kube-controller-manager-k8s-master01 1/1 Running 20 27d
kube-controller-manager-k8s-master02 1/1 Running 15 27d
kube-proxy-7vn9l 1/1 Running 7 26d
kube-proxy-9kjrj 1/1 Running 7 26d
kube-proxy-lbmkz 1/1 Running 8 27d
kube-proxy-ndbp5 1/1 Running 9 27d
kube-scheduler-k8s-master01 1/1 Running 20 27d
kube-scheduler-k8s-master02 1/1 Running 15 27d
weave-net-77ck8 2/2 Running 21 26d
weave-net-bmpsf 2/2 Running 24 27d
weave-net-frchk 2/2 Running 27 27d
weave-net-zqjzf 2/2 Running 22 26d
[vagrant@k8s-master01 ~]$
Nginx config:
stream {
    upstream apiserver_read {
        server 192.168.30.5:6443;
        server 192.168.30.6:6443;
    }
    server {
        listen 6443;
        proxy_pass apiserver_read;
    }
}
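As an aside (it does not fix the root cause discussed in the answers below), nginx's stream module supports passive health checks via the max_fails and fail_timeout parameters on each upstream server, so a dead apiserver is taken out of rotation for a while instead of being retried on every connection. The values here are illustrative assumptions, not recommendations:

stream {
    upstream apiserver_read {
        # After 3 failed connect attempts within 10s, mark the server
        # down for 10s before trying it again.
        server 192.168.30.5:6443 max_fails=3 fail_timeout=10s;
        server 192.168.30.6:6443 max_fails=3 fail_timeout=10s;
    }
    server {
        listen 6443;
        proxy_pass apiserver_read;
    }
}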
Nginx logs:
2020/10/19 09:12:01 [error] 1215#0: *12460 no live upstreams while connecting to upstream, client: 192.168.30.11, server: 0.0.0.0:6443, upstream: "apiserver_read", bytes from/to client:0/0, bytes from/to upstream:0/0
2020/10/19 09:12:01 [error] 1215#0: *12465 no live upstreams while connecting to upstream, client: 192.168.30.5, server: 0.0.0.0:6443, upstream: "apiserver_read", bytes from/to client:0/0, bytes from/to upstream:0/0
2020/10/19 09:12:02 [error] 1215#0: *12466 no live upstreams while connecting to upstream, client: 192.168.30.10, server: 0.0.0.0:6443, upstream: "apiserver_read", bytes from/to client:0/0, bytes from/to upstream:0/0
2020/10/19 09:12:02 [error] 1215#0: *12467 no live upstreams while connecting to upstream, client: 192.168.30.11, server: 0.0.0.0:6443, upstream: "apiserver_read", bytes from/to client:0/0, bytes from/to upstream:0/0
2020/10/19 09:12:02 [error] 1215#0: *12468 no live upstreams while connecting to upstream, client: 192.168.30.5, server: 0.0.0.0:6443, upstream: "apiserver_read", bytes from/to client:0/0, bytes from/to upstream:0/0
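To tell whether nginx itself or the apiservers are the problem, each apiserver can be probed directly, bypassing the LB. This is a diagnostic sketch; /readyz is the standard kube-apiserver health endpoint, and -k skips TLS verification, which is acceptable for a quick probe:

# Probe both apiservers directly instead of going through nginx.
curl -k "https://192.168.30.5:6443/readyz?verbose"
curl -k "https://192.168.30.6:6443/readyz?verbose"

If both probes fail even though the backends are reachable on the network, the "no live upstreams" entries simply reflect that nginx has (correctly) marked every backend as down.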
Posted on 2021-09-13 19:09:23
etcd times out because it is a distributed key-value store that needs a quorum to operate. Essentially, all members of the etcd cluster vote on every decision, and a majority has to agree before anything is committed. Quorum is floor(n/2) + 1, so with 3 nodes you can always lose 1: the remaining 2 still form a majority.
The problem with 2 nodes is that quorum is also 2, so when 1 node goes down, the surviving etcd member waits for a majority that can never form.
That is why you should always run an odd number of master nodes in a Kubernetes cluster.
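A quick way to see this from the surviving master is to ask etcd for its endpoint health directly. This is a sketch assuming the kubeadm default certificate paths and that etcdctl is available on the host (otherwise run it inside one of the etcd pods):

# Check both etcd members; kubeadm keeps the etcd client certs here.
ETCDCTL_API=3 etcdctl \
  --endpoints=https://192.168.30.5:2379,https://192.168.30.6:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  endpoint health

With one of two members down, this reports the dead endpoint as unhealthy, and writes through the surviving member fail because quorum (2 of 2) can no longer be reached.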
Posted on 2021-09-09 11:48:19
I have the same setup (stacked etcd, but with keepalived and HAProxy instead of nginx), and I ran into the same problem.
You need at least 3 (!) control-plane nodes. Only then can you take one of the three control-plane nodes down without losing functionality (a sketch for joining an extra control-plane node follows the demo below):
3 of 3 control-plane nodes up:
$ kubectl get pods -n kube-system
[...list of pods...]
2 of 3 control-plane nodes up:
$ kubectl get pods -n kube-system
[...list of pods...]
1 of 3 control-plane nodes up:
$ kubectl get pods -n kube-system
Error from server: etcdserver: request timed out
Back up to 2 of 3:
$ kubectl get pods -n kube-system
[...list of pods...]
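For reference, adding a third control-plane node to an existing kubeadm cluster follows the standard join flow. This is a sketch: the LB address, token, hash, and certificate key are placeholders that the commands on the existing master print out:

# On an existing master: re-upload the control-plane certs and print
# a fresh join command (the certificate key appears in the output).
sudo kubeadm init phase upload-certs --upload-certs
sudo kubeadm token create --print-join-command

# On the new node: join as a control-plane member, pointing at the LB.
sudo kubeadm join <LB-ADDRESS>:6443 \
  --token <token> \
  --discovery-token-ca-cert-hash sha256:<hash> \
  --control-plane \
  --certificate-key <key>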
https://stackoverflow.com/questions/64424416