---
layout: "post"
title: "2020-07-22 Tencent Cloud SLB + kubeadm HA cluster setup"
date: "2020-07-22 16:00:00"
category: "kubernetes"
tags: "kubernetes1.18.6 kubeadm high-availability ha"
author: duiniwukenaihe
---
Cluster configuration: CentOS 7.7, 64-bit

| IP | Hostname |
| --- | --- |
| 10.0.4.20 | vip |
| 10.0.4.27 | sh-master-01 |
| 10.0.4.46 | sh-master-02 |
| 10.0.4.47 | sh-master-03 |
| 10.0.4.14 | sh-node-01 |
| 10.0.4.2 | sh-node-02 |
| 10.0.4.6 | sh-node-03 |
| 10.0.4.4 | sh-node-04 |
| 10.0.4.13 | sh-node-05 |
CentOS 7 ships with a 3.10 kernel by default; upgrading the kernel is generally recommended:

```bash
# import the elrepo GPG key
rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
# add the elrepo repository
rpm -ivh https://www.elrepo.org/elrepo-release-7.el7.elrepo.noarch.rpm
# list the kernel versions available for upgrade
yum --disablerepo="*" --enablerepo="elrepo-kernel" list available
# kernel flavours: ml (mainline, latest) vs lt (long-term support);
# see https://www.cnblogs.com/clsn/p/10925653.html
yum --enablerepo=elrepo-kernel -y install kernel-lt
# list the grub menu entries
sudo awk -F\' '$1=="menuentry " {print i++ " : " $2}' /etc/grub2.cfg
# make the new kernel the default and regenerate the grub config
grub2-set-default 0
grub2-mkconfig -o /boot/grub2/grub.cfg
reboot
# verify the running kernel after the reboot
uname -a
# remove the old kernels
package-cleanup --oldkernels
```
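Before moving on, it is worth sanity-checking that the machine actually booted into the new kernel. A minimal sketch; the `4` threshold is an assumption matching the elrepo `kernel-lt` series (4.4.x at the time of writing):

```bash
# warn if we are still on the stock 3.10 kernel after the reboot
running=$(uname -r)
major=${running%%.*}
if [ "$major" -ge 4 ]; then
  echo "kernel OK: $running"
else
  echo "still on an old kernel ($running); re-check grub2-set-default" >&2
fi
```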
Disable swap and SELinux:

```bash
# disable swap now and comment out any swap entries in fstab
swapoff -a
sed -i 's/^[^#].*swap.*/#&/' /etc/fstab
# switch SELinux to permissive now and disable it permanently
setenforce 0
sed -i "s/^SELINUX=enforcing/SELINUX=disabled/g" /etc/sysconfig/selinux
sed -i "s/^SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config
sed -i "s/^SELINUX=permissive/SELINUX=disabled/g" /etc/sysconfig/selinux
sed -i "s/^SELINUX=permissive/SELINUX=disabled/g" /etc/selinux/config
```
Raise the resource limits:

```bash
echo "* soft nofile 65536" >> /etc/security/limits.conf
echo "* hard nofile 65536" >> /etc/security/limits.conf
echo "* soft nproc 65536" >> /etc/security/limits.conf
echo "* hard nproc 65536" >> /etc/security/limits.conf
echo "* soft memlock unlimited" >> /etc/security/limits.conf
echo "* hard memlock unlimited" >> /etc/security/limits.conf
```
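The plain `echo >>` lines above append duplicate entries every time the script is re-run. A small idempotent variant; the `add_limit` helper name is made up for this sketch:

```bash
# append a limits line only if it is not already present verbatim
LIMITS_FILE=${LIMITS_FILE:-/etc/security/limits.conf}
mkdir -p "$(dirname "$LIMITS_FILE")" && touch "$LIMITS_FILE"
add_limit() { grep -qxF "$1" "$LIMITS_FILE" || echo "$1" >> "$LIMITS_FILE"; }
for line in "* soft nofile 65536" "* hard nofile 65536" \
            "* soft nproc 65536"  "* hard nproc 65536" \
            "* soft memlock unlimited" "* hard memlock unlimited"; do
  add_limit "$line"
done
```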
```bash
cat <<EOF > /etc/sysctl.d/k8s.conf
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.lo.disable_ipv6 = 1
net.ipv4.neigh.default.gc_stale_time = 120
net.ipv4.conf.all.rp_filter = 0
net.ipv4.conf.default.rp_filter = 0
net.ipv4.conf.default.arp_announce = 2
net.ipv4.conf.lo.arp_announce = 2
net.ipv4.conf.all.arp_announce = 2
net.ipv4.ip_forward = 1
net.ipv4.tcp_max_tw_buckets = 5000
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 1024
net.ipv4.tcp_synack_retries = 2
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-arptables = 1
net.netfilter.nf_conntrack_max = 2310720
fs.inotify.max_user_watches=89100
fs.may_detach_mounts = 1
fs.file-max = 52706963
fs.nr_open = 52706963
vm.overcommit_memory=1
vm.panic_on_oom=0
vm.swappiness = 0
EOF
# load br_netfilter first so the net.bridge.* keys exist, then apply
modprobe br_netfilter
sysctl -p /etc/sysctl.d/k8s.conf
```
Note: when kube-proxy runs in ipvs mode, tune the TCP keepalive parameters to avoid connection timeouts:

```bash
cat <<EOF >> /etc/sysctl.d/k8s.conf
net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_intvl = 30
net.ipv4.tcp_keepalive_probes = 10
EOF
sysctl --system
```
```bash
# write the modules required for ipvs to a modules-load.d file,
# keeping only the ones that exist for the running kernel
:> /etc/modules-load.d/ipvs.conf
module=(
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
br_netfilter
)
for kernel_module in "${module[@]}"; do
  /sbin/modinfo -F filename "$kernel_module" |& grep -qv ERROR && echo "$kernel_module" >> /etc/modules-load.d/ipvs.conf || :
done
```
Start the module-loading service and check the result:

```bash
systemctl daemon-reload
systemctl enable --now systemd-modules-load.service
lsmod | grep ip_vs
```
```bash
# stop rsyslog from double-reading the journal
sed -ri 's/^\$ModLoad imjournal/#&/' /etc/rsyslog.conf
sed -ri 's/^\$IMJournalStateFile/#&/' /etc/rsyslog.conf
# raise the systemd default limits
sed -ri 's/^#(DefaultLimitCORE)=/\1=100000/' /etc/systemd/system.conf
sed -ri 's/^#(DefaultLimitNOFILE)=/\1=100000/' /etc/systemd/system.conf
# disable reverse DNS lookups on ssh logins
sed -ri 's/^#(UseDNS )yes/\1no/' /etc/ssh/sshd_config
# trim the journal
journalctl --vacuum-size=20M
```
Add the docker-ce and kubernetes yum repositories:

```bash
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
```
Install the dependencies:

```bash
yum install -y epel-release
yum install -y yum-utils device-mapper-persistent-data lvm2 net-tools conntrack-tools wget vim ntpdate libseccomp libtool-ltdl
```

Install bash command completion:

```bash
yum install -y bash-argsparse bash-completion bash-completion-extras
```
Install docker and kubeadm:

```bash
yum install docker-ce -y
# configure the registry mirror, log rotation and storage driver
sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://lrpol8ec.mirror.aliyuncs.com"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m",
    "max-file": "3"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker
```
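Since a malformed daemon.json stops the docker daemon from starting at all, it is worth validating the JSON before the restart. A quick sketch, assuming `python3` is installed:

```bash
# refuse to restart docker until daemon.json parses as valid JSON
DAEMON_JSON=${DAEMON_JSON:-/etc/docker/daemon.json}
if python3 -m json.tool "$DAEMON_JSON" >/dev/null 2>&1; then
  daemon_json_status="valid"
else
  daemon_json_status="invalid or missing"
fi
echo "daemon.json: $daemon_json_status"
```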
Do set the log-rotation options: without them the json-file logs grow without bound, which I have been bitten by before. Should docker be enabled on boot? After I installed rook-ceph, reboots kept producing errors because the nodes had not been cordoned first, so out of laziness I did not enable docker on boot; I start docker manually after the machine comes up instead.
```bash
# list the versions available in the yum repo
yum list --showduplicates kubeadm --disableexcludes=kubernetes
yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
# or pin a specific version:
# yum install -y kubelet-1.18.6 kubeadm-1.18.6 kubectl-1.18.6 --disableexcludes=kubernetes
systemctl enable kubelet
```
Note: an internal classic SLB is used in front of the API server. Two approaches were tried:
![slb](https://ask.qcloudimg.com/http-save/1006587/t4ar7atmw7.png)
![slb1](https://ask.qcloudimg.com/http-save/1006587/ahh78gvbre.png)
#### 2. kubeadm master installation
On the master-01 node:
```bash
cat <<EOF > kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
networking:
  serviceSubnet: "172.251.0.0/16"       # service subnet
  podSubnet: "172.252.0.0/16"           # pod subnet
  dnsDomain: "layabox.sh"
kubernetesVersion: "v1.18.6"            # version to install
controlPlaneEndpoint: "10.0.4.20:6443"  # the API server VIP
dns:
  type: CoreDNS
apiServer:
  certSANs:
  - sh-master-01
  - sh-master-02
  - sh-master-03
  - sh-master.k8s.io
  - 127.0.0.1
  - 10.0.4.27
  - 10.0.4.46
  - 10.0.4.47
  - 10.0.4.20
  timeoutForControlPlane: 4m0s
certificatesDir: "/etc/kubernetes/pki"
# there seems to be no complete mirror of the upstream images in China, so I
# synced them to my own registry; the namespace was not public at first and
# could not be opened up later, sorry
imageRepository: "ccr.ccs.tencentyun.com/k8s_containers"
etcd:
  local:
    dataDir: /var/lib/etcd
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
featureGates:
  SupportIPVSProxyMode: true
mode: ipvs                              # run kube-proxy in ipvs mode
EOF
kubeadm init --config kubeadm-config.yaml
```
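Ahead of `kubeadm init`, the control-plane images can also be pre-pulled with the stock `kubeadm config images pull` subcommand, so registry or mirror problems surface early rather than mid-init:

```bash
# pre-pull the images referenced by the config; skipped here when kubeadm
# is not on PATH (e.g. when trying this snippet outside the cluster hosts)
if command -v kubeadm >/dev/null 2>&1; then
  kubeadm config images pull --config kubeadm-config.yaml || true
  prepull_status="attempted"
else
  prepull_status="kubeadm not installed"
fi
echo "image pre-pull: $prepull_status"
```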
```bash
mkdir -p $HOME/.kube
sudo \cp /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```

Following the output of `kubeadm init`, join master02 and master03 as control-plane nodes. First pack up `ca.*`, `sa.*`, `front-proxy-ca.*` and `etcd/ca.*` from master01's /etc/kubernetes/pki directory and distribute them into /etc/kubernetes/pki on master02 and master03.
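That packing-and-copying step can be sketched roughly as follows. Assumptions: root SSH access to the other masters, and the `MASTERS` host list below matching your environment:

```bash
# pack the shared CA material on master01 and push it to the other masters
PKI_DIR=${PKI_DIR:-/etc/kubernetes/pki}
MASTERS=${MASTERS:-"sh-master-02 sh-master-03"}
if [ -d "$PKI_DIR" ]; then
  ( cd "$PKI_DIR" && tar czf /tmp/k8s-certs.tgz ca.* sa.* front-proxy-ca.* etcd/ca.* )
  for host in $MASTERS; do
    scp /tmp/k8s-certs.tgz "$host":/tmp/
    ssh "$host" 'mkdir -p /etc/kubernetes/pki && tar xzf /tmp/k8s-certs.tgz -C /etc/kubernetes/pki'
  done
  cert_copy_status="done"
else
  cert_copy_status="skipped ($PKI_DIR not found)"
fi
echo "cert copy: $cert_copy_status"
```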
```bash
kubeadm join 10.0.4.20:6443 --token jiprvz.0rkovt1gx3d658j --discovery-token-ca-cert-hash sha256:5d631bb4bdce033163037ef21f663c88e058e70c6c362c9c5ccb1a92095 --control-plane --certificate-key
```
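Bootstrap tokens expire after 24 hours by default, and the uploaded control-plane certificates after two hours. If the join command no longer works, fresh values can be regenerated on master01 with standard kubeadm subcommands:

```bash
# print a fresh worker join command and a fresh certificate key for
# control-plane joins; skipped here when kubeadm is not on PATH
if command -v kubeadm >/dev/null 2>&1; then
  kubeadm token create --print-join-command
  kubeadm init phase upload-certs --upload-certs
  rejoin_status="printed"
else
  rejoin_status="kubeadm not installed"
fi
echo "rejoin helpers: $rejoin_status"
```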
Then run the same commands as on master01:

```bash
mkdir -p $HOME/.kube
sudo \cp /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```
![master1](https://ask.qcloudimg.com/http-save/1006587/bfhrfwikf9.png)
Note: the tokens and keys shown here are made up; they are not my real ones. At this point, `kubectl get nodes` on any master should show every node as NotReady.
#### 3. Deploy the flannel plugin

```bash
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
```

In the downloaded file, change `Network` to the pod subnet configured earlier, 172.252.0.0/16 in my case.
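The piece to edit in kube-flannel.yml is the `net-conf.json` entry of the flannel ConfigMap; with the `podSubnet` from kubeadm-config.yaml it should end up looking like:

```json
{
  "Network": "172.252.0.0/16",
  "Backend": {
    "Type": "vxlan"
  }
}
```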
```bash
kubectl apply -f kube-flannel.yml
```

Shortly afterwards the master nodes should all report Ready.
### II. Joining worker nodes to the cluster

```bash
kubeadm join 192.168.3.9:6443 --token 3o6dy0.9gbbfuf55xiloe9d --discovery-token-ca-cert-hash sha256:5d631bb4bdce01dcad51163037ef21f663c88e058e70c6c362c9c5ccb1a92095
```

With that, the initial cluster build is done. It came up and ran fine for me.
> Confirm that the cluster nodes are Ready. A common pitfall: I had enabled ipvs for the cluster but had not turned off iptables, and nodes kept failing to join; a quick look showed the firewall was still running with its rules in place. Since these are all cloud hosts protected by security-group policies, I simply disabled the firewall. In any other environment, be sure to check the firewall rules.
## Screenshots of the successfully built cluster:
![status](https://ask.qcloudimg.com/http-save/1006587/ppfvt477eo.png)
![status1](https://duiniwukenaihe.github.io/assets/images/2020/07/kubernetes1.18.6/status1.png)
## Afterword
#### 1. If you forgot to set ipvs in kubeadm-config.yaml, it can be enabled after the fact:
```bash
kubectl edit cm kube-proxy -n kube-system
# configmap/kube-proxy edited
```

In the editor, change `mode` in the `config.conf` section as follows:

```yaml
ipvs:
  excludeCIDRs: null
  minSyncPeriod: 0s
  scheduler: ""
  syncPeriod: 30s
kind: KubeProxyConfiguration
metricsBindAddress: 127.0.0.1:10249
mode: "ipvs"   # changed from ""
```

Then delete the kube-proxy pods so they restart with the new mode:

```bash
kubectl get pod -n kube-system | grep kube-proxy | awk '{system("kubectl delete pod "$1" -n kube-system")}'
```
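Once the kube-proxy pods have restarted, the switch can be confirmed: with ipvs in use, the virtual server table should be populated. A quick check, assuming `ipvsadm` has been installed (e.g. `yum install -y ipvsadm`):

```bash
# list the ipvs virtual server table; skipped where ipvsadm is absent
if command -v ipvsadm >/dev/null 2>&1; then
  ipvsadm -Ln || true
  ipvs_check="listed"
else
  ipvs_check="ipvsadm not installed"
fi
echo "ipvs check: $ipvs_check"
```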
That should bring everything back up. There are still follow-up tasks to do afterwards:
This article is a repost. If it infringes any rights, please contact cloudcommunity@tencent.com for removal.