
基于K3S构建高可用Rancher集群

"If you are a boat, drifting is your destiny; do not come ashore." — Bei Dao

About K3s:

K3s (lightweight Kubernetes) is, like RKE, a certified Kubernetes distribution. It is newer than RKE, easier to use, and more lightweight: all components ship in a single binary of less than 100 MB. Starting with Rancher v2.4, Rancher can be installed on a K3s cluster.

Details: https://rancher2.docs.rancher.cn/docs/installation/_index

About Rancher:

Rancher is a container management platform built for organizations that run containers. Rancher simplifies the use of Kubernetes: developers can run Kubernetes anywhere ("Run Kubernetes Everywhere"), meet IT requirements, and empower DevOps teams.

Details: https://rancher2.docs.rancher.cn/docs/overview/_index

Environment:

Operating system              Hostname       IP address       Role                   Specs
CentOS 7 1810                 nginx-master   192.168.111.21   Nginx primary server   2C4G
CentOS 7 1810                 nginx-backup   192.168.111.22   Nginx backup server    2C4G
ubuntu-18.04.3-live-server    k3s-node1      192.168.111.50   k3s node 1             4C8G
ubuntu-18.04.3-live-server    k3s-node2      192.168.111.51   k3s node 2             4C8G
CentOS 7 1810                 k3s-mysql      192.168.111.52   MySQL                  4C8G

Pre-deployment system preparation:

Disable the firewall and SELinux

To prevent cluster bootstrap failures caused by blocked ports, disable the firewall and SELinux up front.

  • CentOS:
    systemctl stop firewalld
    systemctl disable firewalld
    setenforce 0
    sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
  • Ubuntu:
    sudo ufw disable
  • OS and Docker tuning for the nodes: https://rancher2.docs.rancher.cn/docs/best-practices/optimize/os/_index
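The tuning guide linked above largely comes down to kernel parameters. A minimal sketch of what such a sysctl drop-in might look like (the keys and values here are illustrative assumptions, not a copy of Rancher's guide; check the link before applying):

```
# /etc/sysctl.d/90-kubernetes.conf -- example tuning, apply with `sysctl --system`
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
vm.swappiness = 0
fs.file-max = 1000000
```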

Configure the hosts file:

192.168.111.21 nginx-master
192.168.111.22 nginx-backup
192.168.111.50 k3s-node1
192.168.111.51 k3s-node2
192.168.111.52 k3s-mysql
  • Add these entries to /etc/hosts and make sure every machine can reach every other machine by hostname.

Required tools:

The installation needs the following CLI tools. Make sure they are installed and available in $PATH.

Install the CLI tools on the k3s nodes.

  • kubectl - the Kubernetes command-line tool.
  • helm - the package manager for Kubernetes. See the Helm version requirements to pick a Helm version compatible with your Rancher release.

Deployment:

Install kubectl:

  • Installation follows the Kubernetes documentation; for network reasons we install via snap here.
    sudo apt-get install snapd
    sudo snap install kubectl --classic # this can be slow, please be patient
    # verify the installation
    kubectl help

Install Helm:

  • Installation follows the Helm documentation. Helm is the package manager for Kubernetes; the version must be v3 or later.
    # download the release tarball
    wget https://get.helm.sh/helm-v3.2.1-linux-amd64.tar.gz
    # unpack it
    tar zxvf helm-v3.2.1-linux-amd64.tar.gz
    # move the binary into /usr/local/bin/
    sudo mv linux-amd64/helm /usr/local/bin/helm
    # verify the installation
    helm help
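Release tarballs are worth checksum-verifying before the binary lands in /usr/local/bin. Helm publishes a checksum file alongside each release; the helper below is a generic sketch of the comparison step (the function name is our own, not part of any tool):

```shell
# verify_sha256 FILE EXPECTED_DIGEST
# Prints "OK" and returns 0 when the file's SHA-256 digest matches
# the expected value; prints "MISMATCH" and returns 1 otherwise.
verify_sha256() {
    local actual
    actual=$(sha256sum "$1" | awk '{print $1}')
    if [ "$actual" = "$2" ]; then
        echo "OK: $1"
    else
        echo "MISMATCH: $1 (got $actual)" >&2
        return 1
    fi
}
```

For example, after downloading both the tarball and its checksum file: `verify_sha256 helm-v3.2.1-linux-amd64.tar.gz "$(cut -d' ' -f1 helm-v3.2.1-linux-amd64.tar.gz.sha256sum)"`.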

Build the Nginx + Keepalived cluster:

Run this on the CentOS nodes.

  • Install Nginx
    # download the Nginx source tarball
    wget http://nginx.org/download/nginx-1.17.10.tar.gz
    # unpack it
    tar zxvf nginx-1.17.10.tar.gz
    # install the build dependencies
    yum install -y gcc gcc-c++ pcre pcre-devel zlib zlib-devel openssl openssl-devel libnl3-devel
    # enter the nginx directory; we proxy HTTPS at layer 4 (TCP passthrough), so build with the --with-stream module
    cd nginx-1.17.10
    mkdir -p /usr/local/nginx
    ./configure --prefix=/usr/local/nginx --with-stream
    # build and install nginx
    make && make install
    # create a symlink for the nginx command
    ln -s /usr/local/nginx/sbin/nginx /usr/local/bin/nginx
    # verify the installation
    nginx -V
    # start nginx
    nginx
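A source build installs no service unit, so nginx started this way will not survive a reboot. A minimal sketch of a unit file (paths follow the --prefix used above; the PID-file location is nginx's default for this prefix and worth confirming with `nginx -V`):

```
# /etc/systemd/system/nginx.service -- minimal sketch for the source build above
[Unit]
Description=nginx (layer-4 load balancer)
After=network.target

[Service]
Type=forking
PIDFile=/usr/local/nginx/logs/nginx.pid
ExecStartPre=/usr/local/nginx/sbin/nginx -t
ExecStart=/usr/local/nginx/sbin/nginx
ExecReload=/usr/local/nginx/sbin/nginx -s reload
ExecStop=/usr/local/nginx/sbin/nginx -s quit

[Install]
WantedBy=multi-user.target
```

Enable it with `systemctl daemon-reload && systemctl enable --now nginx` instead of running the binary directly. The Keepalived health-check script used later starts nginx by name, which still works via the /usr/local/bin symlink.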

Install Keepalived

# download the source tarball
wget https://www.keepalived.org/software/keepalived-2.0.20.tar.gz
# unpack it
tar zxvf keepalived-2.0.20.tar.gz
# build and install keepalived
cd keepalived-2.0.20
mkdir /usr/local/keepalived
./configure --prefix=/usr/local/keepalived/
make && make install
# register keepalived as a system service
cp /usr/local/keepalived/sbin/keepalived /usr/sbin/keepalived
cp /usr/local/keepalived/etc/sysconfig/keepalived /etc/sysconfig/keepalived
touch /etc/init.d/keepalived
chmod +x /etc/init.d/keepalived # the contents of this file are shown below
vim /etc/init.d/keepalived
# configure keepalived
mkdir /etc/keepalived/
cp /usr/local/keepalived/etc/keepalived/keepalived.conf /etc/keepalived/
vim /etc/keepalived/keepalived.conf # the contents of keepalived.conf are shown below
# start keepalived
systemctl start keepalived
systemctl enable keepalived
# verify
systemctl status keepalived
# keepalived should now be running on both nodes, one as MASTER and one as BACKUP; `ip addr` on the master should show the virtual IP, while the backup should not
# browse to https://192.168.111.20 to verify the setup
# contents of /etc/init.d/keepalived
#!/bin/sh
#
# Startup script for the Keepalived daemon
#
# processname: keepalived
# pidfile: /var/run/keepalived.pid
# config: /etc/keepalived/keepalived.conf
# chkconfig: - 21 79
# description: Start and stop Keepalived

# Source function library
. /etc/rc.d/init.d/functions

# Source configuration file (we set KEEPALIVED_OPTIONS there)
. /etc/sysconfig/keepalived

RETVAL=0

prog="keepalived"

start() {
    echo -n $"Starting $prog: "
    daemon keepalived ${KEEPALIVED_OPTIONS}
    RETVAL=$?
    echo
    [ $RETVAL -eq 0 ] && touch /var/lock/subsys/$prog
}

stop() {
    echo -n $"Stopping $prog: "
    killproc keepalived
    RETVAL=$?
    echo
    [ $RETVAL -eq 0 ] && rm -f /var/lock/subsys/$prog
}

reload() {
    echo -n $"Reloading $prog: "
    killproc keepalived -1
    RETVAL=$?
    echo
}

# See how we were called.
case "$1" in
    start)
        start
        ;;
    stop)
        stop
        ;;
    reload)
        reload
        ;;
    restart)
        stop
        start
        ;;
    condrestart)
        if [ -f /var/lock/subsys/$prog ]; then
            stop
            start
        fi
        ;;
    status)
        status keepalived
        RETVAL=$?
        ;;
    *)
        echo "Usage: $0 {start|stop|reload|restart|condrestart|status}"
        RETVAL=1
esac

exit $RETVAL
# contents of /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
   router_id 192.168.111.21 # must be unique on the network; no duplicate ids
}

vrrp_script chk_nginx {     # health-check script that tracks the nginx service
    script "/usr/local/keepalived/check_ng.sh"
    interval 3
}

vrrp_instance VI_1 {
    state MASTER    # this node is the master; set BACKUP on the standby
    interface ens33    # network interface to bind to
    virtual_router_id 51    # VRRP group; must match on master and backup
    priority 120    # the node with the higher priority becomes master
    advert_int 1    # advertisement interval
    authentication { # authentication
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress { # virtual IP
        192.168.111.20
    }
    track_script {    # run the health-check script
        chk_nginx
    }
}
# contents of /usr/local/keepalived/check_ng.sh
#!/bin/bash
# If nginx has died, try to restart it; if the restart also fails,
# stop keepalived so the VIP fails over to the backup node.
d=$(date --date today +%Y%m%d_%H:%M:%S)
n=$(ps -C nginx --no-heading | wc -l)
if [ "$n" -eq 0 ]; then
        nginx
        n2=$(ps -C nginx --no-heading | wc -l)
        if [ "$n2" -eq 0 ]; then
                echo "$d nginx down, keepalived will stop" >> /var/log/check_ng.log
                systemctl stop keepalived
        fi
fi

Install docker-ce:

Run this on the k3s nodes.

# remove old Docker versions
sudo apt-get remove docker docker-engine docker.io containerd runc
# install prerequisite packages
sudo apt-get install -y \
    apt-transport-https \
    ca-certificates \
    curl \
    gnupg-agent \
    software-properties-common
# add Docker's official GPG key
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
# add the stable apt repository
sudo add-apt-repository \
   "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
   $(lsb_release -cs) \
   stable"
# install docker-ce
sudo apt-get update
sudo apt-get install -y docker-ce docker-ce-cli containerd.io
# verify the installation
docker info
# add the current user to the "docker" group; the SSH user used to access the nodes later must be a member of the docker group on each node
sudo usermod -aG docker $USER
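Unbounded container logs can fill the disk on long-running nodes. A sketch of /etc/docker/daemon.json with log rotation (the size and file-count values are examples, not prescribed numbers; restart Docker after writing the file):

```json
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m",
    "max-file": "3"
  }
}
```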

Configure layer-4 load balancing

Run this on the Nginx cluster.

# update the nginx configuration file
# vim /usr/local/nginx/conf/nginx.conf

#user  nobody;
worker_processes  4;
worker_rlimit_nofile 40000;

events {
    worker_connections  8192;
}

stream {
    upstream rancher_servers_http {
        least_conn;
        server 192.168.111.50:80 max_fails=3 fail_timeout=5s;
        server 192.168.111.51:80 max_fails=3 fail_timeout=5s;
    }
    server {
        listen     80;
        proxy_pass rancher_servers_http;
    }

    upstream rancher_servers_https {
        least_conn;
        server 192.168.111.50:443 max_fails=3 fail_timeout=5s;
        server 192.168.111.51:443 max_fails=3 fail_timeout=5s;
    }

    server {
        listen     443;
        proxy_pass rancher_servers_https;
    }
}

Deploy MySQL 5.7

# download: https://dev.mysql.com/downloads/mysql/5.7.html#downloads
# create the user and group that will run MySQL
groupadd -r mysql
useradd -r -g mysql mysql
# unpack the tarball, fix ownership, create the data directory
tar zxvf mysql-5.7.30-linux-glibc2.12-x86_64.tar.gz
mkdir -p /app/mysql/data
mv mysql-5.7.30-linux-glibc2.12-x86_64/* /app/mysql/
chown -R mysql:mysql /app/mysql
# initialize the database
cd /app/mysql
./bin/mysqld --initialize \
--user=mysql --basedir=/app/mysql/ \
--datadir=/app/mysql/data/
# !! note the temporary root password printed on the last line, e.g.:
7Jlhi:gg?rE0
# create the RSA private key
./bin/mysql_ssl_rsa_setup --datadir=/app/mysql/data/
# enable MySQL at boot, then set basedir and datadir in /etc/init.d/mysqld
cp support-files/mysql.server /etc/init.d/mysqld

basedir=/app/mysql
datadir=/app/mysql/data

chkconfig mysqld on
# update the environment variables
vim /etc/profile
# add the line
export PATH=/app/mysql/bin:$PATH
# apply the change
source /etc/profile

# back up the system /etc/my.cnf, create a new my.cnf under /app/mysql/, and chown it to mysql:mysql
mv /etc/my.cnf /etc/my.cnf.bak
touch /app/mysql/my.cnf    # contents shown below

# start mysql
/etc/init.d/mysqld start
# create a symlink for mysql.sock
ln -s /app/mysql/mysql.sock /tmp/mysql.sock
# log in with the temporary password
mysql -uroot -p

# change the password after logging in
alter user 'root'@'localhost' identified by "12345678";
flush privileges;

# allow remote logins
GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' IDENTIFIED BY '12345678' WITH GRANT OPTION;
flush privileges;
# verification omitted
# my.cnf
[mysqld]
character-set-server=utf8
datadir=/app/mysql/data
socket=/app/mysql/mysql.sock
# Disabling symbolic-links is recommended to prevent assorted security risks
symbolic-links=0
# Settings user and group are ignored when systemd is used.
# If you need to run mysqld under a different user or group,
# customize your systemd unit file for mariadb according to the
# instructions in http://fedoraproject.org/wiki/Systemd

#
# include all files from the config directory
#
!includedir /etc/my.cnf.d

[client]
default-character-set=utf8
socket=/app/mysql/mysql.sock

[mysql]
default-character-set=utf8
socket=/app/mysql/mysql.sock
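Granting remote access to root, as above, is convenient for a lab but broad. An optional hardening sketch in the same SQL style, using a dedicated account for k3s (account name and password are our own placeholders; if you use this, the datastore credentials in the next section change accordingly):

```
-- optional: a dedicated k3s account instead of remote root
CREATE DATABASE IF NOT EXISTS k3s;
CREATE USER 'k3s'@'%' IDENTIFIED BY 'a-strong-password';
GRANT ALL PRIVILEGES ON k3s.* TO 'k3s'@'%';
FLUSH PRIVILEGES;
```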

Deploy k3s:

# start the k3s server
# ! note: run this command on every k3s node
curl -sfL https://docs.rancher.cn/k3s/k3s-install.sh | INSTALL_K3S_MIRROR=cn sh -s - server \
--datastore-endpoint="mysql://root:12345678@tcp(192.168.111.52:3306)/k3s"
# verify
sudo k3s kubectl get nodes
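The --datastore-endpoint value follows K3s's MySQL DSN pattern, which in general terms is:

```
mysql://username:password@tcp(hostname:3306)/database-name
```

If the named database (here, k3s) does not already exist, K3s will attempt to create it, which is why no CREATE DATABASE step was needed on the MySQL server.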

# installing K3s on each Rancher Server node creates a kubeconfig file at /etc/rancher/k3s/k3s.yaml; it contains credentials with full access to the cluster
# copy k3s.yaml to ~/.kube/config
sudo cp /etc/rancher/k3s/k3s.yaml ~/.kube/config
# verify kubectl
sudo kubectl get pods --all-namespaces

kube-system   coredns-8655855d6-c26h8                  1/1     Running     0          11m
kube-system   metrics-server-7566d596c8-v65fd          1/1     Running     0          11m
kube-system   helm-install-traefik-ttrfg               0/1     Completed   0          11m
kube-system   svclb-traefik-hxmzw                      2/2     Running     0          8m16s
kube-system   svclb-traefik-zxmg2                      2/2     Running     0          8m16s
kube-system   traefik-758cd5fc85-xsxbm                 1/1     Running     0          8m16s
kube-system   local-path-provisioner-6d59f47c7-497rl   1/1     Running     0          11m

Deploy Rancher:

  • Add the Helm chart repository
    helm repo add rancher-stable https://releases.rancher.com/server-charts/stable
  • Create a namespace for Rancher
    sudo kubectl create namespace cattle-system
  • Generate the certificates
    mkdir certs
    cd certs
    touch ~/.rnd
    cp /usr/lib/ssl/openssl.cnf ./
    # openssl.cnf needs some changes, shown below
    vim openssl.cnf
    openssl genrsa -out cakey.pem 2048
    openssl req -x509 -new -nodes -key cakey.pem \
    -days 36500 \
    -out cacerts.pem \
    -extensions v3_ca \
    -subj "/CN=rancher.local.com" \
    -config ./openssl.cnf
    openssl genrsa -out server.key 2048
    openssl req -new -key server.key \
    -out server.csr \
    -subj "/CN=rancher.local.com" \
    -config ./openssl.cnf
    openssl x509 -req -in server.csr \
    -CA cacerts.pem \
    -CAkey cakey.pem \
    -CAcreateserial -out server.crt \
    -days 36500 -extensions v3_req \
    -extfile ./openssl.cnf
    openssl x509 -noout -in server.crt -text | grep DNS
    cp server.crt tls.crt
    cp server.key tls.key
  • Changes to openssl.cnf
    [req]
    distinguished_name = req_distinguished_name
    req_extensions = v3_req
    x509_extensions = v3_ca
    [ v3_req ]
    basicConstraints = CA:FALSE
    keyUsage = nonRepudiation, digitalSignature, keyEncipherment
    extendedKeyUsage = clientAuth, serverAuth
    subjectAltName = @alt_names
    [alt_names]
    DNS.1 = rancher.local.com
    [ v3_ca ]
    subjectKeyIdentifier=hash
    authorityKeyIdentifier=keyid:always,issuer:always
    basicConstraints = critical,CA:true
    subjectAltName = @alt_names
  • Create the certificate secrets
    sudo kubectl -n cattle-system create secret tls tls-rancher-ingress \
    --cert=./tls.crt --key=./tls.key
    sudo kubectl -n cattle-system create secret generic tls-ca \
    --from-file=cacerts.pem
  • Deploy Rancher
    sudo helm install rancher rancher-stable/rancher \
    --namespace cattle-system \
    --set hostname=rancher.local.com \
    --set ingress.tls.source=secret \
    --set privateCA=true
  • Wait for the Rancher deployment to roll out
    sudo kubectl -n cattle-system rollout status deploy/rancher
    Waiting for deployment "rancher" rollout to finish: 0 of 3 updated replicas are available...
    deployment "rancher" successfully rolled out
  • If you see the error: deployment "rancher" exceeded its progress deadline, check the deployment status with
    sudo kubectl -n cattle-system get deploy rancher
  • Done. Add a hosts entry that resolves the domain to the load balancer, then visit https://rancher.local.com
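For example, on the client machine the hosts entry should point the Rancher hostname at the Keepalived virtual IP, not at an individual node, so that either Nginx server can fail without breaking access:

```
# append to /etc/hosts on the client machine
192.168.111.20 rancher.local.com
```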

This article was first shared on the WeChat public account 运维开发故事 (mygsdcsf); author: 刘大仙.


Originally published: 2020-05-25
