kubeadm join fails because the connection to http://localhost:10248/healthz is refused

Stack Overflow user
Asked on 2018-09-02 20:37:06
1 answer · 14.7K views · 4 votes

I am trying to install Kubernetes on three VMs (following a CentOS 7 tutorial); unfortunately, joining the worker fails. I hope someone has already run into this problem (I found it mentioned twice on the web, without an answer) or can guess what is going wrong.

Here is what I get from kubeadm join:

[preflight] running pre-flight checks
        [WARNING RequiredIPVSKernelModulesAvailable]: the IPVS proxier will not be used, because the following required kernel modules are not loaded: [ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh] or no builtin kernel ipvs support: map[ip_vs:{} ip_vs_rr:{} ip_vs_wrr:{} ip_vs_sh:{} nf_conntrack_ipv4:{}]
you can solve this problem with following methods:
 1. Run 'modprobe -- ' to load missing kernel modules;
2. Provide the missing builtin kernel ipvs support

I0902 20:31:15.401693    2032 kernel_validator.go:81] Validating kernel version
I0902 20:31:15.401768    2032 kernel_validator.go:96] Validating kernel config
        [WARNING SystemVerification]: docker version is greater than the most recently validated version. Docker version: 18.06.1-ce. Max validated version: 17.03
[discovery] Trying to connect to API Server "192.168.1.30:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.1.30:6443"
[discovery] Requesting info from "https://192.168.1.30:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "192.168.1.30:6443"
[discovery] Successfully established connection with API Server "192.168.1.30:6443"
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.11" ConfigMap in the kube-system namespace
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[preflight] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.

even though the kubelet is running:

[root@k8s-worker1 nodesetup]# systemctl status kubelet -l
● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/etc/systemd/system/kubelet.service; enabled; vendor preset: disabled)
  Drop-In: /etc/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: active (running) since So 2018-09-02 20:31:15 CEST; 19min ago
     Docs: https://kubernetes.io/docs/
 Main PID: 2093 (kubelet)
    Tasks: 7
   Memory: 12.1M
   CGroup: /system.slice/kubelet.service
           └─2093 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --cgroup-driver=cgroupfs --cni-bin-dir=/opt/cni/bin --cni-conf-dir=/etc/cni/net.d --network-plugin=cni

Sep 02 20:31:15 k8s-worker1 systemd[1]: Started kubelet: The Kubernetes Node Agent.
Sep 02 20:31:15 k8s-worker1 systemd[1]: Starting kubelet: The Kubernetes Node Agent...
Sep 02 20:31:15 k8s-worker1 kubelet[2093]: Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 02 20:31:15 k8s-worker1 kubelet[2093]: Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 02 20:31:16 k8s-worker1 kubelet[2093]: I0902 20:31:16.440010    2093 server.go:408] Version: v1.11.2
Sep 02 20:31:16 k8s-worker1 kubelet[2093]: I0902 20:31:16.440314    2093 plugins.go:97] No cloud provider specified.
[root@k8s-worker1 nodesetup]# 

As far as I can see, the worker can reach the master, but kubeadm then runs a health check against a local kubelet endpoint that is not up yet. Any ideas?
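
(For reference, the same endpoint that kubeadm polls can be probed by hand; a minimal sketch:)

# Probe the kubelet health endpoint that kubeadm checks during join
curl -sSL http://localhost:10248/healthz; echo

# Look at the latest kubelet log entries to see why it is not serving yet
journalctl -u kubelet --no-pager -n 30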

Here is what I did to configure my worker:

exec bash
setenforce 0
sed -i --follow-symlinks 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/sysconfig/selinux


echo "Setting Firewallrules"
firewall-cmd --permanent --add-port=10250/tcp
firewall-cmd --permanent --add-port=10255/tcp
firewall-cmd --permanent --add-port=30000-32767/tcp
firewall-cmd --permanent --add-port=6783/tcp
firewall-cmd --reload


echo "And enable br filtering"
modprobe br_netfilter
echo '1' > /proc/sys/net/bridge/bridge-nf-call-iptables


echo "disable swap"
swapoff -a
echo "### You need to edit /etc/fstab and comment the swapline!! ###"


echo "Adding kubernetes repo for download"
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
        https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF


echo "install the Docker-ce dependencies"
yum install -y yum-utils device-mapper-persistent-data lvm2

echo "add docker-ce repository"
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo

echo "install docker ce"
yum install -y docker-ce

echo "Install kubeadm kubelet kubectl"
yum install kubelet kubeadm kubectl -y

echo "start and enable kubectl"
systemctl restart docker && systemctl enable docker
systemctl restart kubelet && systemctl enable kubelet

echo "Now we need to ensure that both Docker-ce and Kubernetes belong to the same control group (cgroup)"

echo "We assume that docker is using cgroupfs ... assuming kubelet does so too"
docker info | grep -i cgroup
grep -i cgroup /var/lib/kubelet/kubeadm-flags.env
#  old style
# sed -i 's/cgroup-driver=systemd/cgroup-driver=cgroupfs/g' /etc/systemd/system/kubelet.service.d/10-kubeadm.conf

systemctl daemon-reload
systemctl restart kubelet

# There has been an issue reported that traffic in iptable is been routed incorrectly.
# Below settings will make sure IPTable is configured correctly.
#
sudo bash -c 'cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF'

# Make changes effective
sudo sysctl --system
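
(Not part of the script above, just a note on the cgroup check: if Docker and the kubelet ever disagree, one way to pin Docker to an explicit cgroup driver is a sketch like the following, assuming the default /etc/docker/daemon.json location; the value must match what the kubelet expects:)

# Force Docker to use the cgroupfs driver (must match the kubelet's cgroup driver setting)
cat <<EOF > /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=cgroupfs"]
}
EOF
systemctl restart docker
docker info | grep -i cgroup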

Thanks in advance for your help.

Update I

Log output from the worker:

[root@k8s-worker1 ~]# journalctl -xeu kubelet
Sep 02 21:19:56 k8s-worker1 systemd[1]: Started kubelet: The Kubernetes Node Agent.
-- Subject: Unit kubelet.service has finished start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
-- 
-- Unit kubelet.service has finished starting up.
-- 
-- The start-up result is done.
Sep 02 21:19:56 k8s-worker1 systemd[1]: Starting kubelet: The Kubernetes Node Agent...
-- Subject: Unit kubelet.service has begun start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
-- 
-- Unit kubelet.service has begun starting up.
Sep 02 21:19:56 k8s-worker1 kubelet[3082]: Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --confi
Sep 02 21:19:56 k8s-worker1 kubelet[3082]: Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --confi
Sep 02 21:19:56 k8s-worker1 kubelet[3082]: I0902 21:19:56.788059    3082 server.go:408] Version: v1.11.2
Sep 02 21:19:56 k8s-worker1 kubelet[3082]: I0902 21:19:56.788214    3082 plugins.go:97] No cloud provider specified.
Sep 02 21:19:56 k8s-worker1 kubelet[3082]: F0902 21:19:56.814469    3082 server.go:262] failed to run Kubelet: cannot create certificate signing request: Unauthorized
Sep 02 21:19:56 k8s-worker1 systemd[1]: kubelet.service: main process exited, code=exited, status=255/n/a
Sep 02 21:19:56 k8s-worker1 systemd[1]: Unit kubelet.service entered failed state.
Sep 02 21:19:56 k8s-worker1 systemd[1]: kubelet.service failed.

Getting the pods on the master side gives:

[root@k8s-master ~]# kubectl get pods --all-namespaces=true
NAMESPACE     NAME                                 READY     STATUS    RESTARTS   AGE
kube-system   coredns-78fcdf6894-79n2m             0/1       Pending   0          1d
kube-system   coredns-78fcdf6894-tlngr             0/1       Pending   0          1d
kube-system   etcd-k8s-master                      1/1       Running   3          1d
kube-system   kube-apiserver-k8s-master            1/1       Running   0          1d
kube-system   kube-controller-manager-k8s-master   0/1       Evicted   0          1d
kube-system   kube-proxy-2x8cx                     1/1       Running   3          1d
kube-system   kube-scheduler-k8s-master            1/1       Running   0          1d
[root@k8s-master ~]# 

Update II: As a next step I generated a new token on the master and used that token in the join command. Although the token list on the master shows the token as valid, the worker node insists that the master does not know the token or that it has expired.... Stop! Time to start over, beginning with the master setup.
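
(For reference, a sketch of the standard kubeadm commands for checking and re-issuing the token on the master:)

# On the master: list the bootstrap tokens and their expiry
kubeadm token list

# Issue a fresh token and print the matching kubeadm join command for the workers
kubeadm token create --print-join-command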

This is how I did it:

1) Set up the master VM again, meaning a fresh CentOS 7 install (CentOS7-x86_64-Minimum-1804.iso) in VirtualBox. Configuration: adapter1 as NAT to the host system (used for installing the components), adapter2 as an internal network (same name on the master and worker nodes, used for the Kubernetes network).

2) With the fresh image installed, the primary interface enp0s3 was not configured to come up at boot (so: ifup enp0s3, and reconfigured it in /etc/sysconfig/network to start on boot).
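
(A sketch of what that reconfiguration boils down to, assuming the per-interface file lives under /etc/sysconfig/network-scripts:)

# Bring the interface up now and make it start on boot
ifup enp0s3
sed -i 's/^ONBOOT=no/ONBOOT=yes/' /etc/sysconfig/network-scripts/ifcfg-enp0s3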

3) Configure the second interface for the internal Kubernetes network:

/etc/hosts:

#!/bin/sh
echo '192.168.1.30 k8s-master' >> /etc/hosts
echo '192.168.1.40 k8s-worker1' >> /etc/hosts
echo '192.168.1.50 k8s-worker2' >> /etc/hosts

通过"ip -color -human addr“标识了我的第二个接口,在我的例子中向我展示了enp0S8,因此:

#!/bin/sh
echo "Setting up internal Interface"
cat <<EOF > /etc/sysconfig/network-scripts/ifcfg-enp0s8
DEVICE=enp0s8
IPADDR=192.168.1.30
NETMASK=255.255.255.0
NETWORK=192.168.1.0
BROADCAST=192.168.1.255
ONBOOT=yes
NAME=enp0s8
EOF

echo "Activate interface"
ifup enp0s8

4) Hostname, swap, disable SELinux

#!/bin/sh
echo "Setting hostname und deactivate SELinux"
hostnamectl set-hostname 'k8s-master'
exec bash
setenforce 0
sed -i --follow-symlinks 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/sysconfig/selinux

echo "disable swap"
swapoff -a

echo "### You need to edit /etc/fstab and comment the swapline!! ###"

A few remarks here: I reboot because I saw that the later pre-flight checks seem to parse /etc/fstab to make sure swap is gone. Also, it seems CentOS re-enables SELinux (I need to check that later); as a workaround, I disable it again after every reboot.
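
(A sketch of a one-liner that comments out the swap entry instead of editing /etc/fstab by hand:)

# Comment out the swap entry so the pre-flight check that parses /etc/fstab stays happy after a reboot
sed -i '/\sswap\s/ s/^/#/' /etc/fstab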

5) Set up the required firewall rules

#!/bin/sh
echo "Setting Firewallrules"
firewall-cmd --permanent --add-port=6443/tcp
firewall-cmd --permanent --add-port=2379-2380/tcp
firewall-cmd --permanent --add-port=10250/tcp
firewall-cmd --permanent --add-port=10251/tcp
firewall-cmd --permanent --add-port=10252/tcp
firewall-cmd --permanent --add-port=10255/tcp
firewall-cmd --reload

echo "And enable br filtering"
modprobe br_netfilter
echo '1' > /proc/sys/net/bridge/bridge-nf-call-iptables

6) Add the Kubernetes repository

#!/bin/sh
echo "Adding kubernetes repo for download"
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
        https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF

7) Install the required packages and configure the services

#!/bin/sh

echo "install the Docker-ce dependencies"
yum install -y yum-utils device-mapper-persistent-data lvm2

echo "add docker-ce repository"
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo

echo "install docker ce"
yum install -y docker-ce

echo "Install kubeadm kubelet kubectl"
yum install kubelet kubeadm kubectl -y

echo "start and enable kubectl"
systemctl restart docker && systemctl enable docker
systemctl restart kubelet && systemctl enable kubelet

echo "Now we need to ensure that both Docker-ce and Kubernetes belong to the same control group (cgroup)"
echo "We assume that docker is using cgroupfs ... assuming kubelet does so too"
docker info | grep -i cgroup
grep -i cgroup /var/lib/kubelet/kubeadm-flags.env
#  old style
# sed -i 's/cgroup-driver=systemd/cgroup-driver=cgroupfs/g' /etc/systemd/system/kubelet.service.d/10-kubeadm.conf

systemctl daemon-reload
systemctl restart kubelet

# There has been an issue reported that traffic in iptable is been routed incorrectly.
# Below settings will make sure IPTable is configured correctly.
#
sudo bash -c 'cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF'

# Make changes effective
sudo sysctl --system

8) kubeadm init

#!/bin/sh
echo "Init kubernetes. Check join cmd in initProtocol.txt"
kubeadm init --apiserver-advertise-address=192.168.1.30 --pod-network-cidr=192.168.1.0/16 | tee initProtocol.txt

The thing to verify here is the output of this command:

Init kubernetes. Check join cmd in initProtocol.txt
[init] using Kubernetes version: v1.11.2
[preflight] running pre-flight checks
        [WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly
I0904 21:53:15.271999    1526 kernel_validator.go:81] Validating kernel version
I0904 21:53:15.272165    1526 kernel_validator.go:96] Validating kernel config
        [WARNING SystemVerification]: docker version is greater than the most recently validated version. Docker version: 18.06.1-ce. Max validated version: 17.03
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] Activating the kubelet service
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.1.30]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [127.0.0.1 ::1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.1.30 127.0.0.1 ::1]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests" 
[init] this might take a minute or longer if the control plane images have to be pulled
[apiclient] All control plane components are healthy after 43.504792 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.11" in namespace kube-system with the configuration for the kubelets in the cluster
[markmaster] Marking the node k8s-master as master by adding the label "node-role.kubernetes.io/master=''"
[markmaster] Marking the node k8s-master as master by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "k8s-master" as an annotation
[bootstraptoken] using token: n4yt3r.3c8tuj11nwszts2d
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join 192.168.1.30:6443 --token n4yt3r.3c8tuj11nwszts2d --discovery-token-ca-cert-hash sha256:466e7972a4b6997651ac1197fdde68d325a7bc41f2fccc2b1efc17515af61172

Remark: so far this looks fine to me, although I am a bit worried that the very latest docker-ce version could cause trouble here.
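
(If the Docker version does turn out to be the problem, pinning an older docker-ce release would be an option; a sketch of the yum side of it:)

# List the docker-ce versions the repository offers, newest first
yum list docker-ce --showduplicates | sort -r

# Then install a specific, validated release instead of the latest one
# (the version string below is only a placeholder):
# yum install -y docker-ce-<VERSION_STRING>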

9) Deploy the pod network

#!/bin/bash

echo "Configure demo cluster usage as root"
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config

# Deploy-Network using flanel
# Taken from first matching two tutorials on the web
# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/k8s-manifests/kube-flannel-rbac.yml

# taken from https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/#pod-network
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/v0.10.0/Documentation/kube-flannel.yml
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/c5d10c8/Documentation/kube-flannel.yml

echo "Try to run kubectl get pods --all-namespaces"
echo "After joining nodes: try to run kubectl get nodes to verify the status"

Here is the output of this command:

Configure demo cluster usage as root
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds created
clusterrole.rbac.authorization.k8s.io/flannel configured
clusterrolebinding.rbac.authorization.k8s.io/flannel configured
serviceaccount/flannel unchanged
configmap/kube-flannel-cfg unchanged
daemonset.extensions/kube-flannel-ds-amd64 created
daemonset.extensions/kube-flannel-ds-arm64 created
daemonset.extensions/kube-flannel-ds-arm created
daemonset.extensions/kube-flannel-ds-ppc64le created
daemonset.extensions/kube-flannel-ds-s390x created
Try to run kubectl get pods --all-namespaces
After joining nodes: try to run kubectl get nodes to verify the status

So I tried kubectl get pods --all-namespaces and got:

[root@k8s-master nodesetup]# kubectl get pods --all-namespaces
NAMESPACE     NAME                                 READY     STATUS    RESTARTS   AGE
kube-system   coredns-78fcdf6894-pflhc             0/1       Pending   0          33m
kube-system   coredns-78fcdf6894-w7dxg             0/1       Pending   0          33m
kube-system   etcd-k8s-master                      1/1       Running   0          27m
kube-system   kube-apiserver-k8s-master            1/1       Running   0          27m
kube-system   kube-controller-manager-k8s-master   0/1       Evicted   0          27m
kube-system   kube-proxy-stfxm                     1/1       Running   0          28m
kube-system   kube-scheduler-k8s-master            1/1       Running   0          27m

[root@k8s-master nodesetup]# kubectl get nodes
NAME         STATUS     ROLES     AGE       VERSION
k8s-master   NotReady   master    35m       v1.11.2

Hmm... what is wrong with my master?

Some observations:

Sometimes, right at the beginning, kubectl got connection refused; I found that it takes a few minutes for the services to come up. Because of that, I looked into /var/log/firewalld and found a lot of entries like these:

2018-09-04 21:52:09 WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t nat -D PREROUTING' failed: iptables: Bad rule (does a matching rule exist in that chain?).

2018-09-04 21:52:09 WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t nat -D OUTPUT' failed: iptables: Bad rule (does a matching rule exist in that chain?).

2018-09-04 21:52:09 WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t nat -F DOCKER' failed: iptables: No chain/target/match by that name.

2018-09-04 21:52:09 WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t nat -X DOCKER' failed: iptables: No chain/target/match by that name.

2018-09-04 21:52:09 WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t filter -F DOCKER' failed: iptables: No chain/target/match by that name.

2018-09-04 21:52:09 WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t filter -X DOCKER' failed: iptables: No chain/target/match by that name.

2018-09-04 21:52:09 WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t filter -F DOCKER-ISOLATION-STAGE-1' failed: iptables: No chain/target/match by that name.

2018-09-04 21:52:09 WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t filter -X DOCKER-ISOLATION-STAGE-1' failed: iptables: No chain/target/match by that name.

2018-09-04 21:52:09 WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t filter -F DOCKER-ISOLATION-STAGE-2' failed: iptables: No chain/target/match by that name.

2018-09-04 21:52:09 WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t filter -X DOCKER-ISOLATION-STAGE-2' failed: iptables: No chain/target/match by that name.

2018-09-04 21:52:09 WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t filter -F DOCKER-ISOLATION' failed: iptables: No chain/target/match by that name.

2018-09-04 21:52:09 WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t filter -X DOCKER-ISOLATION' failed: iptables: No chain/target/match by that name.

2018-09-04 21:52:09 WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t nat -n -L DOCKER' failed: iptables: No chain/target/match by that name.

2018-09-04 21:52:09 WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t filter -n -L DOCKER' failed: iptables: No chain/target/match by that name.

2018-09-04 21:52:09 WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t filter -n -L DOCKER-ISOLATION-STAGE-1' failed: iptables: No chain/target/match by that name.

2018-09-04 21:52:09 WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t filter -n -L DOCKER-ISOLATION-STAGE-2' failed: iptables: No chain/target/match by that name.

2018-09-04 21:52:09 WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t filter -C DOCKER-ISOLATION-STAGE-1 -j RETURN' failed: iptables: Bad rule (does a matching rule exist in that chain?).

Is the Docker version wrong? The Docker setup seems to be broken.
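
(Those COMMAND_FAILED lines look like the usual symptom of firewalld reloading and dropping Docker's iptables chains; a common remedy, as a sketch, is to restart Docker after any firewall reload so it recreates its chains:)

firewall-cmd --reload
systemctl restart docker
# Docker should recreate its DOCKER / DOCKER-ISOLATION chains on startup
iptables -t nat -n -L DOCKER | head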

Is there anything else I could check on the master side...? Late tomorrow I will try to join my worker again (within the 24-hour lifetime of the initial token).

Update III (after fixing the Docker issue)

[root@k8s-master ~]# kubectl get pods --all-namespaces=true
NAMESPACE     NAME                                 READY     STATUS    RESTARTS   AGE
kube-system   coredns-78fcdf6894-pflhc             0/1       Pending   0          10h
kube-system   coredns-78fcdf6894-w7dxg             0/1       Pending   0          10h
kube-system   etcd-k8s-master                      1/1       Running   0          10h
kube-system   kube-apiserver-k8s-master            1/1       Running   0          10h
kube-system   kube-controller-manager-k8s-master   1/1       Running   0          10h
kube-system   kube-flannel-ds-amd64-crljm          0/1       Pending   0          1s
kube-system   kube-flannel-ds-v6gcx                0/1       Pending   0          0s
kube-system   kube-proxy-l2dck                     0/1       Pending   0          0s
kube-system   kube-scheduler-k8s-master            1/1       Running   0          10h
[root@k8s-master ~]# 

The master now looks happy:

[root@k8s-master ~]# kubectl get nodes
NAME         STATUS    ROLES     AGE       VERSION
k8s-master   Ready     master    10h       v1.11.2
[root@k8s-master ~]# 

Stay tuned... after work, I will also fix the ports/firewall on the worker and will try to join the cluster again (now knowing how to issue a new token if needed). So Update IV will follow in about 10 hours.


1 Answer

Stack Overflow user

Accepted answer

Posted on 2018-09-04 11:51:34

Judging from the attached kubelet log, it looks like your kubeadm token has expired.

Sep 02 21:19:56 k8s-worker1 kubelet[3082]: F0902 21:19:56.814469    3082 server.go:262] failed to run Kubelet: cannot create certificate signing request: Unauthorized

The token's TTL is 24 hours from when the kubeadm init command is issued; see this link for more information.

The system runtime components on the master node do not look healthy, so I am not sure the cluster is working properly. As the CoreDNS pods are stuck in Pending, have a look at the kubeadm troubleshooting documentation and check whether any pod network provider is installed in the cluster.
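
(A quick way to verify that from the master, as a sketch:)

# A CNI provider such as flannel shows up as a daemonset and pods in kube-system
kubectl get daemonsets -n kube-system
kubectl get pods -n kube-system -o wide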

I would suggest rebuilding the cluster in order to refresh the kubeadm token and bootstrap the cluster system components from scratch.

1 vote
Original content provided by Stack Overflow.
Source: https://stackoverflow.com/questions/52140852
