I've been busy running around for a project lately and haven't had much time for blog posts. Over the past few weeks, though, I handled a lot of Kubernetes cluster deployment and operations work on-site at a customer, so here is a summary.
Optimizing large-scale Kubernetes clusters
Increase several kernel limits by adding the following snippet to /etc/sysctl.conf:
fs.file-max=1000000
# fs.file-max is the system-wide limit on open file handles. When it is hit, you typically see
# "Too many open files" or Socket/File: Can't open so many files errors.
# Configure the ARP cache size
net.ipv4.neigh.default.gc_thresh1=1024
# Minimum number of entries to keep in the ARP cache; the garbage collector will not run if there are fewer than this. Default: 128.
net.ipv4.neigh.default.gc_thresh2=4096
# Soft limit on the number of entries kept in the ARP cache; the garbage collector allows the count to exceed this for 5 seconds before collecting. Default: 512.
net.ipv4.neigh.default.gc_thresh3=8192
# Hard limit on the number of entries in the ARP cache; once the count exceeds this, the garbage collector runs immediately. Default: 1024.
# Consider tuning these three parameters when the kernel's ARP table grows too large.
net.netfilter.nf_conntrack_max=10485760
# Maximum number of tracked connections, i.e. connection-tracking entries netfilter can handle simultaneously in kernel memory.
net.netfilter.nf_conntrack_tcp_timeout_established=300
# Idle timeout, in seconds, for established connections in the conntrack table; the kernel default is 432000 (5 days), so lowering it expires stale entries much sooner.
net.netfilter.nf_conntrack_buckets=655360
# Hash table size (read-only; the default is 65536 on a 64-bit system with 8 GB of RAM, doubled at 16 GB, and so on).
net.core.netdev_max_backlog=10000
# Maximum number of packets allowed to queue when a network interface receives packets faster than the kernel can process them.
fs.inotify.max_user_instances=524288
# Default: 128. Upper limit on the number of inotify instances each real user ID may create.
fs.inotify.max_user_watches=524288
# Default: 8192. Upper limit on the number of watches associated with each inotify instance.
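After editing, the settings can be applied without a reboot. A minimal sketch, assuming the conntrack module is named nf_conntrack on your kernel (the net.netfilter.* keys only exist once it is loaded, and on many kernels nf_conntrack_buckets is read-only via sysctl and must be set through the module's hashsize parameter instead):
modprobe nf_conntrack
sysctl -p /etc/sysctl.conf
# if the nf_conntrack_buckets sysctl is read-only, set the hash size via the module parameter:
echo 655360 > /sys/module/nf_conntrack/parameters/hashsize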
fio
Measure the disk's actual sequential-write IOPS:
fio -filename=/dev/sda1 -direct=1 -iodepth 1 -thread -rw=write -ioengine=psync -bs=4k -size=60G -numjobs=64 -runtime=10 -group_reporting -name=file
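Note that -filename=/dev/sda1 writes to the raw partition and destroys any data on it, so run this only against a scratch disk. For comparison, a random-read variant of the same job (the job name is arbitrary) could look like:
fio -filename=/dev/sda1 -direct=1 -iodepth 1 -thread -rw=randread -ioengine=psync -bs=4k -size=60G -numjobs=64 -runtime=10 -group_reporting -name=file-randread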
etcd's --quota-backend-bytes flag increases the storage quota; the maximum supported is 8 GB:
--quota-backend-bytes 8589934592
Add the following to /etc/docker/daemon.json to raise image-pull concurrency:
"max-concurrent-downloads": 10
Also add the following to /etc/docker/daemon.json to move Docker's data onto an SSD mount:
"data-root": "/ssd_mount_dir"
On the kubelet, --serialize-image-pulls=false controls serialized image pulling. The default is true; setting it to false increases pull concurrency. Do not change it, however, if the Docker daemon version is below 1.9 and the aufs storage driver is in use.
--image-pull-progress-deadline=30 configures the image pull timeout. The default is 1 minute; pulling large images needs a suitably larger deadline.
--max-pods=110 (the default is 110; set it according to actual needs).
On kube-apiserver, --apiserver-count and --endpoint-reconciler-type allow multiple kube-apiserver instances to be added to the endpoints of the Kubernetes Service, providing high availability.
--max-requests-inflight and --max-mutating-requests-inflight default to 200 and 400. With 1000 - 3000 nodes, the recommended settings are:
--max-requests-inflight=1500 --max-mutating-requests-inflight=500
With more than 3000 nodes, the recommended settings are:
--max-requests-inflight=3000 --max-mutating-requests-inflight=1000
--target-ram-mb sets kube-apiserver's memory target; a reasonable value follows from this formula:
--target-ram-mb=node_nums * 60
Some best practices should also be followed when running Pods.
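For example, a 1000-node cluster works out to --target-ram-mb=60000, i.e. roughly 60 GB for the kube-apiserver.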
Under node resource pressure, Pods are evicted in QoS-class order: BestEffort > Burstable > Guaranteed.
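A Pod's QoS class can be checked directly; the namespace and Pod name below are placeholders:
kubectl -n test get pod demo-pod -o jsonpath='{.status.qosClass}'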
Before backing up data, first find the current leader of the etcd cluster:
ETCDCTL_API=3 etcdctl --endpoints=127.0.0.1:2379 --cert=/etc/kubernetes/pki/etcd/server.crt --key=/etc/kubernetes/pki/etcd/server.key --cacert=/etc/kubernetes/pki/etcd/ca.crt endpoint --cluster status | grep -v 'false' | awk -F '[/ :]' '{print $4}'
Then log in to the leader node and back up the snapshot db file:
rsync -avp /var/lib/etcd/member/snap/db /tmp/etcd_db.bak
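Alternatively, an online snapshot can be taken with etcdctl itself, without touching the data directory (endpoint and certificate paths mirror the ones above):
ETCDCTL_API=3 etcdctl --endpoints=127.0.0.1:2379 --cert=/etc/kubernetes/pki/etcd/server.crt --key=/etc/kubernetes/pki/etcd/server.key --cacert=/etc/kubernetes/pki/etcd/ca.crt snapshot save /tmp/etcd_db.bak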
Upload the backed-up snapshot db file to every etcd node and restore the data with the following command:
ETCDCTL_API=3 etcdctl snapshot restore \
/tmp/etcd_db.bak \
--endpoints=192.168.0.11:2379 \
--name=192.168.0.11 \
--cert=/etc/kubernetes/pki/etcd/server.crt \
--key=/etc/kubernetes/pki/etcd/server.key \
--cacert=/etc/kubernetes/pki/etcd/ca.crt \
--initial-advertise-peer-urls=https://192.168.0.11:2380 \
--initial-cluster-token=etcd-cluster-0 \
--initial-cluster=192.168.0.11=https://192.168.0.11:2380,192.168.0.12=https://192.168.0.12:2380,192.168.0.13=https://192.168.0.13:2380 \
--data-dir=/var/lib/etcd/ \
--skip-hash-check=true
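Run the restore on each member with that member's own --name and --initial-advertise-peer-urls values. Once etcd has been started again on all nodes, cluster health can be verified (endpoints as in the example above):
ETCDCTL_API=3 etcdctl --endpoints=192.168.0.11:2379,192.168.0.12:2379,192.168.0.13:2379 --cert=/etc/kubernetes/pki/etcd/server.crt --key=/etc/kubernetes/pki/etcd/server.key --cacert=/etc/kubernetes/pki/etcd/ca.crt endpoint health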
If Harbor serves as the image and chart registry, the scripts below export and re-import every image and chart it holds. The first script pulls everything into a local dist/ directory:
#!/bin/bash
harborUsername='admin'
harborPassword='Harbor12345'
harborRegistry='registry.test.com'
harborBasicAuthToken=$(echo -n "${harborUsername}:${harborPassword}" | base64)
docker login --username ${harborUsername} --password ${harborPassword} ${harborRegistry}
rm -f dist/images.list
rm -f dist/charts.list
# list projects
projs=`curl -s -k -H "Authorization: Basic ${harborBasicAuthToken}" "https://${harborRegistry}"'/api/projects?page=1&page_size=1000' | jq -r '.[] | "\(.project_id)=\(.name)"'`
for proj in ${projs[*]}; do
  projId=`echo $proj|cut -d '=' -f 1`
  projName=`echo $proj|cut -d '=' -f 2`
  # list repos in one project
  repos=`curl -s -k -H "Authorization: Basic ${harborBasicAuthToken}" "https://${harborRegistry}"'/api/repositories?page=1&page_size=1000&project_id='"${projId}" | jq -r '.[] | "\(.id)=\(.name)"'`
  for repo in ${repos[*]}; do
    repoId=`echo $repo|cut -d '=' -f 1`
    repoName=`echo $repo|cut -d '=' -f 2`
    # list tags in one repo
    tags=`curl -s -k -H "Authorization: Basic ${harborBasicAuthToken}" "https://${harborRegistry}"'/api/repositories/'"${repoName}"'/tags?detail=1' | jq -r '.[].name'`
    for tag in ${tags[*]}; do
      # pull image
      docker pull ${harborRegistry}/${repoName}:${tag}
      # tag image
      docker tag ${harborRegistry}/${repoName}:${tag} ${repoName}:${tag}
      # save image
      mkdir -p $(dirname dist/${repoName})
      docker save -o dist/${repoName}:${tag}.tar ${repoName}:${tag}
      # record image to list file
      echo "${repoName}:${tag}" >> dist/images.list
    done
  done
  # list charts in one project
  charts=`curl -s -k -H "Authorization: Basic ${harborBasicAuthToken}" "https://${harborRegistry}"'/api/chartrepo/'"${projName}"'/charts' | jq -r '.[].name'`
  for chart in ${charts[*]}; do
    # list download urls in one chart
    durls=`curl -s -k -H "Authorization: Basic ${harborBasicAuthToken}" "https://${harborRegistry}"'/api/chartrepo/'"${projName}"'/charts/'"${chart}" | jq -r '.[].urls[0]'`
    for durl in ${durls[*]}; do
      # download chart
      mkdir -p $(dirname dist/${projName}/${durl})
      curl -s -k -H "Authorization: Basic ${harborBasicAuthToken}" -o dist/${projName}/${durl} "https://${harborRegistry}/chartrepo/${projName}/${durl}"
      # record chart to list file
      echo "${projName}/${durl}" >> dist/charts.list
    done
  done
done
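The second script, pointed at the target Harbor, recreates each project and pushes the exported images and charts back; it expects the dist/ directory produced by the export script in the current working directory: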
#!/bin/bash
harborUsername='admin'
harborPassword='Harbor12345'
harborRegistry='registry.test.com'
harborBasicAuthToken=$(echo -n "${harborUsername}:${harborPassword}" | base64)
docker login --username ${harborUsername} --password ${harborPassword} ${harborRegistry}
while IFS="" read -r image || [ -n "$image" ]
do
  projName=${image%%/*}
  # create harbor project
  curl -k -X POST -H "Authorization: Basic ${harborBasicAuthToken}" "https://${harborRegistry}/api/projects" -H "accept: application/json" -H "Content-Type: application/json" -d '{ "project_name": "'"$projName"'", "metadata": { "public": "true" }}'
  # load image
  docker load -i dist/${image}.tar
  # tag image
  docker tag ${image} ${harborRegistry}/${image}
  # push image
  docker push ${harborRegistry}/${image}
done < dist/images.list
while IFS="" read -r chart || [ -n "$chart" ]
do
  projName=${chart%%/*}
  # create harbor project
  curl -k -X POST -H "Authorization: Basic ${harborBasicAuthToken}" "https://${harborRegistry}/api/projects" -H "accept: application/json" -H "Content-Type: application/json" -d '{ "project_name": "'"$projName"'", "metadata": { "public": "true" }}'
  # upload chart
  curl -s -k -H "Authorization: Basic ${harborBasicAuthToken}" -X POST "https://${harborRegistry}/api/chartrepo/${projName}/charts" -H "accept: application/json" -H "Content-Type: multipart/form-data" -F "chart=@dist/${chart};type=application/gzip"
done < dist/charts.list
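Both scripts assume docker and jq are installed and use the Harbor 1.x API paths (/api/projects, /api/chartrepo); on Harbor 2.x the API moved under /api/v2.0, so the URLs would need adjusting.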
Other application data in the cluster generally lives in the storage volumes behind PVCs.
To back up, first find the PV bound to the PVC:
kubectl -n test get pvc demo-pvc -o jsonpath='{.spec.volumeName}'
Find the PV's mount directory on the node:
mount | grep pvc-xxxxxxxxxxxxxxxxxxx
Back up the data with rsync:
rsync -avp --delete /var/lib/kubelet/pods/xxxxxx/volumes/xxxxxxx/ /tmp/pvc-data-bak/test/demo-pvc/
To restore, again find the PV bound to the PVC:
kubectl -n test get pvc demo-pvc -o jsonpath='{.spec.volumeName}'
Find the PV's mount directory on the node:
mount | grep pvc-xxxxxxxxxxxxxxxxxxx
Restore the data with rsync:
rsync -avp --delete /tmp/pvc-data-bak/test/demo-pvc/ /var/lib/kubelet/pods/xxxxxx/volumes/xxxxxxx/
All of the backed-up data can be kept under a single directory and saved to multiple backend storage systems with the restic tool, for example:
# initialize the backup repository
restic --repo sftp:user@host:/srv/restic-repo init
# back up the directory into the repository
restic --repo sftp:user@host:/srv/restic-repo backup /data/k8s-all-data
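To pull data back out of the repository, a snapshot can be restored into a target directory (a sketch; latest selects the most recent snapshot, and the target path is an example):
restic --repo sftp:user@host:/srv/restic-repo restore latest --target /data/k8s-all-data-restore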
DONE.