Kubernetes originated from Google's internal Borg system and provides an application-oriented system for deploying and managing container clusters. Kubernetes aims to remove the burden of orchestrating physical/virtual compute, network, and storage infrastructure, so that application operators and developers can focus entirely on container-centric primitives and self-service operation. Kubernetes also provides a stable, compatible foundation (platform) for building customized workflows and higher-level automation. Kubernetes offers comprehensive cluster management capabilities, including multi-layered security and admission control, multi-tenant application support, transparent service registration and discovery, a built-in load balancer, failure detection and self-healing, rolling upgrades and online scaling of services, an extensible automatic resource scheduler, and fine-grained resource quota management. Kubernetes also ships with a complete set of management tools covering development, deployment, testing, and operations monitoring.
Borg is Google's internal large-scale cluster management system, responsible for scheduling and managing many of Google's core services. Borg's goal is to free users from worrying about resource management so they can focus on their core business, while maximizing resource utilization across multiple data centers.
Borg consists mainly of BorgMaster, Borglet, borgcfg, and the Scheduler, as shown in the figure below.
Kubernetes borrows Borg's design ideas, such as Pods, Services, Labels, and one IP per Pod. Kubernetes' overall architecture is also very similar to Borg's, as shown in the figure below.
Kubernetes consists of the following core components:

* etcd stores the state of the entire cluster;
* kube-apiserver is the single entry point for resource operations and provides authentication, authorization, access control, and API registration and discovery;
* kube-controller-manager maintains cluster state: failure detection, auto scaling, rolling updates, and so on;
* kube-scheduler schedules resources, placing Pods onto machines according to the configured scheduling policies;
* kubelet maintains container lifecycles and manages volumes (CVI) and networking (CNI);
* the container runtime manages images and actually runs Pods and containers (CRI);
* kube-proxy provides in-cluster service discovery and load balancing for Services.
Besides the core components, there are some recommended add-ons:

* kube-dns / CoreDNS provides DNS services for the cluster;
* Ingress Controller provides external access to Services;
* Heapster provides resource monitoring;
* Dashboard provides a GUI;
* Federation provides clusters spanning availability zones;
* Fluentd-elasticsearch provides cluster log collection, storage, and querying.
Kubernetes' design philosophy and functionality in fact form a layered architecture similar to Linux's (a core layer, application layer, management layer, and interface layer, with the ecosystem above and around them), as shown in the figure below.
For more on the layered architecture, see the Kubernetes architectural roadmap being driven by the Kubernetes community; a Chinese-language reference is https://feisky.gitbooks.io/kubernetes/.
Recently I ran into some problems while deploying a Kubernetes 1.9 cluster. I'm writing them up here in the hope that they help others.

Because kubeadm is simple and convenient, the cluster is deployed with it. The current beta does not support HA deployment; the GitHub roadmap says an HA release is expected in 2018, but we couldn't wait that long...
Environment | Version |
---|---|
CentOS | CentOS Linux release 7.3.1611 (Core) |
Kernel | Linux etcd-host1 3.10.0-514.el7.x86_64 |
yum base repo | http://mirrors.aliyun.com/repo/Centos-7.repo |
yum epel repo | http://mirrors.aliyun.com/repo/epel-7.repo |
kubectl | v1.9.0 |
kubeadm | v1.9.0 |
docker | 1.12.6 |
docker local registry | devhub.beisencorp.com |
Hostname | IP address | Notes |
---|---|---|
etcd-host1 | 10.129.6.211 | master and etcd |
etcd-host2 | 10.129.6.212 | master and etcd |
etcd-host3 | 10.129.6.213 | master and etcd |
Vip-keepalive | 10.129.6.220 | VIP for high availability |
hostnamectl set-hostname etcd-host1
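If the hostnames are not resolvable via DNS, adding them to /etc/hosts on every node avoids resolution problems later. This mapping is an assumption taken from the host table above; adjust it to your network:

# assumed mapping, taken from the host table above
cat >> /etc/hosts <<EOF
10.129.6.211 etcd-host1
10.129.6.212 etcd-host2
10.129.6.213 etcd-host3
EOF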
systemctl stop firewalld
systemctl disable firewalld
swapoff -a
sed -i 's/.*swap.*/#&/' /etc/fstab
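A quick check that swap really is off (all values should be 0):

free -m | grep -i swap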
setenforce 0
sed -i "s/^SELINUX=enforcing/SELINUX=disabled/g" /etc/sysconfig/selinux
sed -i "s/^SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config
sed -i "s/^SELINUX=permissive/SELINUX=disabled/g" /etc/sysconfig/selinux
sed -i "s/^SELINUX=permissive/SELINUX=disabled/g" /etc/selinux/config
getenforce
echo "nameserver 114.114.114.114" >> /etc/resolv.conf
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl -p /etc/sysctl.d/k8s.conf
# Possible problem
Running sysctl -p may fail with:
sysctl -p
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-ip6tables: No such file or directory
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
Fix: load the br_netfilter module:
modprobe br_netfilter
ls /proc/sys/net/bridge
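The modprobe only lasts until reboot. On systemd-based distros such as CentOS 7, /etc/modules-load.d is read at boot, so the module can be made persistent and the sysctl file re-applied:

# persist the module across reboots, then re-apply the k8s sysctl file
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf
sysctl -p /etc/sysctl.d/k8s.conf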
yum install -y keepalived
cat >/etc/keepalived/keepalived.conf <<EOL
global_defs {
    router_id LVS_k8s
}

vrrp_script CheckK8sMaster {
    script "curl -k https://10.129.6.220:6443"
    interval 3
    timeout 9
    fall 2
    rise 2
}

vrrp_instance VI_1 {
    state MASTER
    interface ens32
    virtual_router_id 61
    # highest priority on the primary node, lower on each subsequent node
    priority 120
    advert_int 1
    # change to this node's own IP
    mcast_src_ip 10.129.6.211
    nopreempt
    authentication {
        auth_type PASS
        auth_pass sqP05dQgMSlzrxHj
    }
    unicast_peer {
        # comment out this node's own IP
        #10.129.6.211
        10.129.6.212
        10.129.6.213
    }
    virtual_ipaddress {
        10.129.6.220/24
    }
    track_script {
        CheckK8sMaster
    }
}
EOL
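The file above is for etcd-host1. On the other two masters only a few values change; a sketch of the assumed per-node layout (the keepalived log further down confirms etcd-host2 runs with priority 110, the rest is an assumption):

# etcd-host1: state MASTER, priority 120, mcast_src_ip 10.129.6.211, peers 10.129.6.212 / 10.129.6.213
# etcd-host2: state BACKUP, priority 110, mcast_src_ip 10.129.6.212, peers 10.129.6.211 / 10.129.6.213
# etcd-host3: state BACKUP, priority 100, mcast_src_ip 10.129.6.213, peers 10.129.6.211 / 10.129.6.212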
systemctl enable keepalived && systemctl restart keepalived
[root@etcd-host1 k8s]# systemctl status keepalived
● keepalived.service - LVS and VRRP High Availability Monitor
Loaded: loaded (/usr/lib/systemd/system/keepalived.service; enabled; vendor preset: disabled)
Active: active (running) since Fri 2018-01-19 10:27:58 CST; 8h ago
Main PID: 1158 (keepalived)
CGroup: /system.slice/keepalived.service
├─1158 /usr/sbin/keepalived -D
├─1159 /usr/sbin/keepalived -D
└─1161 /usr/sbin/keepalived -D
Jan 19 10:28:00 etcd-host1 Keepalived_vrrp[1161]: Sending gratuitous ARP on ens32 for 10.129.6.220
Jan 19 10:28:05 etcd-host1 Keepalived_vrrp[1161]: VRRP_Instance(VI_1) Sending/queueing gratuitous ARPs on ens32 for 10.129.6.220
# Hostnames
etcd-host1:10.129.6.211
etcd-host2:10.129.6.212
etcd-host3:10.129.6.213
# Deployment environment variables
export NODE_NAME=etcd-host3 # name of the node being deployed (any value, as long as it distinguishes the nodes)
export NODE_IP=10.129.6.213 # IP of the node being deployed
export NODE_IPS="10.129.6.211 10.129.6.212 10.129.6.213" # IPs of all etcd cluster members
# IPs and ports used for etcd cluster-internal communication
export ETCD_NODES=etcd-host1=https://10.129.6.211:2380,etcd-host2=https://10.129.6.212:2380,etcd-host3=https://10.129.6.213:2380
Create the CA certificate and key
wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
chmod +x cfssl_linux-amd64
mv cfssl_linux-amd64 /usr/local/bin/cfssl
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
chmod +x cfssljson_linux-amd64
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
chmod +x cfssl-certinfo_linux-amd64
mv cfssl-certinfo_linux-amd64 /usr/local/bin/cfssl-certinfo
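The ca-csr.json consumed by the next command (and the ca-config.json referenced later via -profile=kubernetes) are not shown in the original write-up. A typical pair consistent with those commands, modeled on the common cfssl etcd setup, would be:

# assumed contents; adjust expiry and profile names to your needs
cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "8760h"
    },
    "profiles": {
      "kubernetes": {
        "usages": ["signing", "key encipherment", "server auth", "client auth"],
        "expiry": "8760h"
      }
    }
  }
}
EOF

cat > ca-csr.json <<EOF
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF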
cfssl gencert -initca ca-csr.json | cfssljson -bare ca
ls ca*
==Create the etcd certificate signing request:==
cat > etcd-csr.json <<EOF
{
  "CN": "etcd",
  "hosts": [
    "127.0.0.1",
    "10.129.6.211",
    "10.129.6.212",
    "10.129.6.213"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF
Generate the etcd certificate and private key:
cfssl gencert -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes etcd-csr.json | cfssljson -bare etcd
ls etcd*
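The cfssl-certinfo binary installed earlier can confirm that the generated certificate carries the expected SANs (the three node IPs plus 127.0.0.1):

cfssl-certinfo -cert etcd.pem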
mkdir -p /etc/etcd/ssl
cp etcd.pem etcd-key.pem ca.pem /etc/etcd/ssl/
# On the other nodes
rm -rf /etc/etcd/ssl/*
scp -r /etc/etcd/ssl root@10.129.6.211:/etc/etcd/
scp -r root@10.129.6.211:/root/k8s/etcd/etcd-v3.3.0-rc.1-linux-amd64.tar.gz /root
Copy the generated etcd.pem, etcd-key.pem, and ca.pem to the /etc/etcd/ssl directory on each target host.
Download the binary release from the https://github.com/coreos/etcd/releases page:
wget http://github.com/coreos/etcd/releases/download/v3.1.10/etcd-v3.1.10-linux-amd64.tar.gz
tar -xvf etcd-v3.1.10-linux-amd64.tar.gz
mv etcd-v3.1.10-linux-amd64/etcd* /usr/local/bin
mkdir -p /var/lib/etcd # the working directory must exist before etcd starts
cat > etcd.service <<EOF
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
Documentation=https://github.com/coreos

[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
ExecStart=/usr/local/bin/etcd \\
  --name=${NODE_NAME} \\
  --cert-file=/etc/etcd/ssl/etcd.pem \\
  --key-file=/etc/etcd/ssl/etcd-key.pem \\
  --peer-cert-file=/etc/etcd/ssl/etcd.pem \\
  --peer-key-file=/etc/etcd/ssl/etcd-key.pem \\
  --trusted-ca-file=/etc/etcd/ssl/ca.pem \\
  --peer-trusted-ca-file=/etc/etcd/ssl/ca.pem \\
  --initial-advertise-peer-urls=https://${NODE_IP}:2380 \\
  --listen-peer-urls=https://${NODE_IP}:2380 \\
  --listen-client-urls=https://${NODE_IP}:2379,http://127.0.0.1:2379 \\
  --advertise-client-urls=https://${NODE_IP}:2379 \\
  --initial-cluster-token=etcd-cluster-0 \\
  --initial-cluster=${ETCD_NODES} \\
  --initial-cluster-state=new \\
  --data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF
mv etcd.service /etc/systemd/system/
systemctl daemon-reload
systemctl enable etcd
systemctl start etcd
systemctl status etcd
etcdctl \
  --endpoints=https://${NODE_IP}:2379 \
  --ca-file=/etc/etcd/ssl/ca.pem \
  --cert-file=/etc/etcd/ssl/etcd.pem \
  --key-file=/etc/etcd/ssl/etcd-key.pem \
  cluster-health
Expected output:
[root@node02 ~]# etcdctl --endpoints=https://${NODE_IP}:2379 --ca-file=/etc/etcd/ssl/ca.pem --cert-file=/etc/etcd/ssl/etcd.pem --key-file=/etc/etcd/ssl/etcd-key.pem cluster-health
member 18699a64c36a7e7b is healthy: got healthy result from https://10.129.6.213:2379
member 5dbd6a0b2678c36d is healthy: got healthy result from https://10.129.6.211:2379
member 6b1bf02f85a9e68f is healthy: got healthy result from https://10.129.6.212:2379
cluster is healthy
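The command above uses the etcd v2 API. For the v3 equivalent (note the different flag names: --cacert/--cert/--key), something like this should work:

# v3 API equivalent of the health check above
ETCDCTL_API=3 etcdctl \
  --endpoints=https://${NODE_IP}:2379 \
  --cacert=/etc/etcd/ssl/ca.pem \
  --cert=/etc/etcd/ssl/etcd.pem \
  --key=/etc/etcd/ssl/etcd-key.pem \
  endpoint health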
# To reset a failed etcd member (this wipes all etcd data):
systemctl stop etcd
rm -Rf /var/lib/etcd
rm -Rf /var/lib/etcd-cluster
mkdir -p /var/lib/etcd
systemctl start etcd
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://yum.kubernetes.io/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=0
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
       https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
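Before downloading, it's worth confirming that the repo resolves and that 1.9.0 packages are visible:

yum makecache fast
yum list --showduplicates kubelet kubeadm kubectl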
mkdir -p /root/k8s/rpm
cd /root/k8s/rpm
# install the download helper
yum install -y yum-utils
# download the RPMs locally for offline installation
yumdownloader kubelet kubeadm kubectl kubernetes-cni docker
scp root@10.129.6.224:/root/k8s/rpm/* /root/k8s/rpm
mkdir -p /root/k8s/rpm
scp root@10.129.6.211:/root/k8s/rpm/* /root/k8s/rpm
yum install /root/k8s/rpm/*.rpm -y
# enable and restart docker and kubelet
systemctl enable docker && systemctl restart docker
systemctl enable kubelet && systemctl restart kubelet
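A quick sanity check that the v1.9.0 binaries landed as expected:

kubelet --version
kubeadm version
kubectl version --client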
# In mainland China you can pull the images through the DaoCloud accelerator, then move them onto the cluster machines with docker save / docker load; accelerator docs:
https://www.daocloud.io/mirror#accelerator-doc
# pull
docker pull gcr.io/google_containers/kube-proxy-amd64:v1.9.0
# export
mkdir -p docker-images
docker save -o docker-images/kube-proxy-amd64 gcr.io/google_containers/kube-proxy-amd64:v1.9.0
# import
docker load -i /root/kubeadm-ha/docker-images/kube-proxy-amd64
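kube-proxy is only one of the images kubeadm needs; the same save/load pattern applies to the rest. A sketch (the exact image list for v1.9 is an assumption, check what kubeadm actually pulls in your environment):

# assumed image list for v1.9; verify against your kubeadm version
for img in kube-apiserver-amd64 kube-controller-manager-amd64 kube-scheduler-amd64 kube-proxy-amd64; do
  docker save -o docker-images/$img gcr.io/google_containers/$img:v1.9.0
done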
With no access to the official registry we fend for ourselves: configure the kubelet to use a locally hosted pause image. Replace devhub.beisencorp.com/google_containers/pause-amd64:3.0 with the corresponding image in your own environment.
cat > /etc/systemd/system/kubelet.service.d/20-pod-infra-image.conf <<EOF
[Service]
Environment="KUBELET_EXTRA_ARGS=--pod-infra-container-image=devhub.beisencorp.com/google_containers/pause-amd64:3.0"
EOF
systemctl daemon-reload
systemctl restart kubelet
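Confirm the drop-in was picked up by systemd:

systemctl cat kubelet | grep pod-infra-container-image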
cat <<EOF > config.yaml
apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
etcd:
  endpoints:
  - https://10.129.6.211:2379
  - https://10.129.6.212:2379
  - https://10.129.6.213:2379
  caFile: /etc/etcd/ssl/ca.pem
  certFile: /etc/etcd/ssl/etcd.pem
  keyFile: /etc/etcd/ssl/etcd-key.pem
  dataDir: /var/lib/etcd
networking:
  podSubnet: 10.244.0.0/16
kubernetesVersion: 1.9.0
api:
  advertiseAddress: "10.129.6.220"
token: "b99a00.a144ef80536d4344"
tokenTTL: "0s"
apiServerCertSANs:
- etcd-host1
- etcd-host2
- etcd-host3
- 10.129.6.211
- 10.129.6.212
- 10.129.6.213
- 10.129.6.220
featureGates:
  CoreDNS: true
imageRepository: "devhub.beisencorp.com/google_containers"
EOF
kubeadm init --config config.yaml
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You can now join any number of machines by running the following on each node as root:
kubeadm join --token b99a00.a144ef80536d4344 10.129.6.220:6443 --discovery-token-ca-cert-hash sha256:ebc2f64e9bcb14639f26db90288b988c90efc43828829c557b6b66bbe6d68dfa
[root@etcd-host1 k8s]# kubectl get node
NAME STATUS ROLES AGE VERSION
etcd-host1 NotReady master 5h v1.9.0
[root@etcd-host1 k8s]# kubectl get cs
NAME STATUS MESSAGE ERROR
scheduler Healthy ok
controller-manager Healthy ok
etcd-1 Healthy {"health": "true"}
etcd-2 Healthy {"health": "true"}
etcd-0 Healthy {"health": "true"}
If the cluster is initialized with kubeadm and startup hangs at the line below, the likely cause is a cgroup-driver setting that does not match Docker's:
[apiclient] Created API client, waiting for the control plane to become ready
Checking the logs with journalctl -t kubelet -S '2017-06-08' reveals the following error:
error: failed to run Kubelet: failed to create kubelet: misconfiguration: kubelet cgroup driver: "systemd"
Change KUBELET_CGROUP_ARGS=--cgroup-driver=systemd to KUBELET_CGROUP_ARGS=--cgroup-driver=cgroupfs:
vi /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
#Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=systemd"
Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=cgroupfs"
systemctl daemon-reload && systemctl restart kubelet
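To see which driver Docker itself is using (the kubelet must match it):

docker info 2>/dev/null | grep -i 'cgroup driver'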
wget https://raw.githubusercontent.com/cloudnativelabs/kube-router/master/daemonset/kubeadm-kuberouter.yaml
kubectl apply -f kubeadm-kuberouter.yaml
[root@etcd-host1 k8s]# kubectl get po --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-546545bc84-zc5dx 1/1 Running 0 6h
kube-system kube-apiserver-etcd-host1 1/1 Running 0 6h
kube-system kube-controller-manager-etcd-host1 1/1 Running 0 6h
kube-system kube-proxy-pfj7x 1/1 Running 0 6h
kube-system kube-router-858b7 1/1 Running 0 37m
kube-system kube-scheduler-etcd-host1 1/1 Running 0 6h
[root@etcd-host1 k8s]#
# copy the PKI certificates
mkdir -p /etc/kubernetes/pki
scp -r root@10.129.6.211:/etc/kubernetes/pki /etc/kubernetes
# copy the init configuration
scp -r root@10.129.6.211:/root/k8s/config.yaml /etc/kubernetes/config.yaml
# initialize
kubeadm init --config /etc/kubernetes/config.yaml
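As on the first master, repeat the kubectl setup from the init output so kubectl works for your regular user on these nodes too:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config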
By default, to keep the masters secure, application Pods are not scheduled onto them. You can lift this restriction by running:
kubectl taint nodes --all node-role.kubernetes.io/master-
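To check whether the taint is gone, and to restore it later if you change your mind (the restore command below is standard kubectl syntax, not from the original write-up):

kubectl describe node etcd-host1 | grep -i taint
# restore the taint on a single master:
kubectl taint nodes etcd-host1 node-role.kubernetes.io/master=:NoSchedule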
Verification (recorded terminal session):
[zeming@etcd-host1 k8s]$ kubectl get node
NAME STATUS ROLES AGE VERSION
etcd-host1 Ready master 6h v1.9.0
etcd-host2 Ready master 5m v1.9.0
etcd-host3 Ready master 49s v1.9.0
[zeming@etcd-host1 k8s]$ kubectl get po --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
default nginx01-d87b4fd74-2445l 1/1 Running 0 1h
default nginx01-d87b4fd74-7966r 1/1 Running 0 1h
default nginx01-d87b4fd74-rcbhw 1/1 Running 0 1h
kube-system coredns-546545bc84-zc5dx 1/1 Running 0 3d
kube-system kube-apiserver-etcd-host1 1/1 Running 0 3d
kube-system kube-apiserver-etcd-host2 1/1 Running 0 3d
kube-system kube-apiserver-etcd-host3 1/1 Running 0 3d
kube-system kube-controller-manager-etcd-host1 1/1 Running 0 3d
kube-system kube-controller-manager-etcd-host2 1/1 Running 0 3d
kube-system kube-controller-manager-etcd-host3 1/1 Running 0 3d
kube-system kube-proxy-gk95d 1/1 Running 0 3d
kube-system kube-proxy-mrzbq 1/1 Running 0 3d
kube-system kube-proxy-pfj7x 1/1 Running 0 3d
kube-system kube-router-bbgpq 1/1 Running 0 3h
kube-system kube-router-v2jbh 1/1 Running 0 3h
kube-system kube-router-w4cbb 1/1 Running 0 3h
kube-system kube-scheduler-etcd-host1 1/1 Running 0 3d
kube-system kube-scheduler-etcd-host2 1/1 Running 0 3d
kube-system kube-scheduler-etcd-host3 1/1 Running 0 3d
[zeming@etcd-host1 k8s]$
while true; do sleep 1; kubectl get node;date; done
# Watch the backup node's VIP state switch from BACKUP to MASTER after the master01 node is shut down
[root@etcd-host2 net.d]# systemctl status keepalived
● keepalived.service - LVS and VRRP High Availability Monitor
Loaded: loaded (/usr/lib/systemd/system/keepalived.service; enabled; vendor preset: disabled)
Active: active (running) since Tue 2018-01-22 13:54:17 CST; 21s ago
Jan 22 13:54:17 etcd-host2 Keepalived_vrrp[15908]: VRRP_Instance(VI_1) Transition to MASTER STATE
Jan 22 13:54:17 etcd-host2 Keepalived_vrrp[15908]: VRRP_Instance(VI_1) Received advert with higher priority 120, ours 110
Jan 22 13:54:17 etcd-host2 Keepalived_vrrp[15908]: VRRP_Instance(VI_1) Entering BACKUP STATE
# switchover to MASTER
[root@etcd-host2 net.d]# systemctl status keepalived
● keepalived.service - LVS and VRRP High Availability Monitor
Loaded: loaded (/usr/lib/systemd/system/keepalived.service; enabled; vendor preset: disabled)
Active: active (running) since Tue 2018-01-22 13:54:17 CST; 4min 6s ago
Jan 22 14:03:02 etcd-host2 Keepalived_vrrp[15908]: VRRP_Instance(VI_1) Transition to MASTER STATE
Jan 22 14:03:03 etcd-host2 Keepalived_vrrp[15908]: VRRP_Instance(VI_1) Entering MASTER STATE
Jan 22 14:03:03 etcd-host2 Keepalived_vrrp[15908]: VRRP_Instance(VI_1) setting protocol VIPs.
Jan 22 14:03:03 etcd-host2 Keepalived_vrrp[15908]: Sending gratuitous ARP on ens32 for 10.129.6.220
# After master01 is powered off, its status changes to NotReady
[root@etcd-host3 ~]# while true; do sleep 1; kubectl get node;date; done
Tue Jan 22 14:03:16 CST 2018
NAME STATUS ROLES AGE VERSION
etcd-host1 Ready master 19m v1.9.0
etcd-host2 Ready master 3d v1.9.0
etcd-host3 Ready master 3d v1.9.0
Tue Jan 22 14:03:17 CST 2018
NAME STATUS ROLES AGE VERSION
etcd-host1 NotReady master 19m v1.9.0
etcd-host2 Ready master 3d v1.9.0
etcd-host3 Ready master 3d v1.9.0
# After the master node is brought back, the VIP fails back and the API recovers
The connection to the server 10.129.6.220:6443 was refused - did you specify the right host or port?
Tue Jan 23 14:14:05 CST 2018
The connection to the server 10.129.6.220:6443 was refused - did you specify the right host or port?
Tue Jan 23 14:14:07 CST 2018
Tue Jan 23 14:14:18 CST 2018
NAME STATUS ROLES AGE VERSION
etcd-host1 NotReady master 29m v1.9.0
etcd-host2 Ready master 3d v1.9.0
etcd-host3 Ready master 3d v1.9.0
Tue Jan 23 14:14:20 CST 2018
NAME STATUS ROLES AGE VERSION
etcd-host1 Ready master 29m v1.9.0
etcd-host2 Ready master 3d v1.9.0
etcd-host3 Ready master 3d v1.9.0
# Kubernetes official documentation
https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-init/
# kubeadm HA project docs
https://github.com/indiketa/kubeadm-ha
https://github.com/cookeem/kubeadm-ha/blob/master/README_CN.md
https://medium.com/@bambash/ha-kubernetes-cluster-via-kubeadm-b2133360b198
# kubespray (formerly the kargo ansible project)
https://github.com/kubernetes-incubator/kubespray/blob/master/docs/ha-mode.md
# For questions, or when republishing, please credit the source. By Zeming