This article demonstrates an offline deployment of k8s 1.32.7 + ks 4.1.3 on ARM-architecture machines. For other requirements you can add me on WeChat: sd_zdhr.
The free license of ks4 differs from the ks3 series; check the ks4 license before any commercial use.
kt is a derivative of kk and retains all of kk's functionality. The fork mainly adds adaptation for Xinchuang (domestic) environments, a simplified ARM deployment process, and offline deployment in those environments. It supports domestic operating systems on both arm64 and amd64 architectures; the adapted chip + OS combinations are listed above.
New features in kt
./kt init-os
One command installs the operating-system dependencies and performs the initialization.
./kt firewall
One command automatically collects the node information and opens the firewall: ports 30000-32767 and the other k8s ports are added to each node's whitelist (a rough manual equivalent is sketched below).
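For reference, a minimal manual sketch of what that firewall step amounts to on a firewalld-based system such as openEuler; the exact port list kt opens is derived from the nodes, so the set below is an assumption covering the standard Kubernetes ports:
# hypothetical manual equivalent of ./kt firewall, run on each node
firewall-cmd --permanent --add-port=6443/tcp        # kube-apiserver
firewall-cmd --permanent --add-port=2379-2380/tcp   # etcd client/peer
firewall-cmd --permanent --add-port=10250/tcp       # kubelet
firewall-cmd --permanent --add-port=10257/tcp       # kube-controller-manager
firewall-cmd --permanent --add-port=10259/tcp       # kube-scheduler
firewall-cmd --permanent --add-port=30000-32767/tcp # NodePort range
firewall-cmd --reload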
Base component versions
Offline artifact address
Basic server information
Hostname | Architecture | OS | Spec | IP |
---|---|---|---|---|
node1 | arm64 | openEuler 22.03 | 8 cores, 16 GB | 192.168.0.121 |
Upload [kt_arm.tar.gz] to every node.
The operating system does not need docker or anything else pre-installed; a completely fresh OS is fine.
After extracting the kt package, run ./kt init-os. The adapted operating systems and architectures are listed in the notes of section 1. With this command, kt automatically detects the operating system and architecture, installs the required dependencies, and applies the necessary initialization settings.
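For example, on each node (assuming kt_arm.tar.gz was uploaded to the current directory and unpacks the kt binary there):
tar -zxvf kt_arm.tar.gz   # extract the kt package
./kt init-os              # install OS dependencies and initialize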
Upload [kt_x86.tar.gz](https://pan.quark.cn/s/43079afd65de "kt_x86 version") to a node with Internet access and perform the following steps there.
export KKZONE=cn
./kt create manifest --with-kubernetes v1.32.7 --arch arm64 --with-registry
vi manifest-sample.yaml
apiVersion: kubekey.kubesphere.io/v1alpha2
kind: Manifest
metadata:
  name: sample
spec:
  arches:
  - arm64
  operatingSystems: []
  kubernetesDistributions:
  - type: kubernetes
    version: v1.32.7
  components:
    helm:
      version: v3.14.3
    cni:
      version: v1.7.1
    etcd:
      version: v3.5.22
    containerRuntimes:
    - type: docker
      version: 28.2.2
    - type: containerd
      version: 2.0.6
    calicoctl:
      version: v3.30.2
    crictl:
      version: v1.33.0
    docker-registry:
      version: "2"
    harbor:
      version: v2.13.2
    docker-compose:
      version: v2.29.1
  images:
  - registry.cn-beijing.aliyuncs.com/kubesphereio/pause:3.9
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-apiserver:v1.32.7
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controller-manager:v1.32.7
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-scheduler:v1.32.7
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-proxy:v1.32.7
  - registry.cn-beijing.aliyuncs.com/kubesphereio/coredns:1.9.3
  - registry.cn-beijing.aliyuncs.com/kubesphereio/k8s-dns-node-cache:1.22.20
  - dockerhub.kubekey.local/kubesphereio/kube-controllers:v3.30.2
  - dockerhub.kubekey.local/kubesphereio/cni:v3.30.2
  - dockerhub.kubekey.local/kubesphereio/node:v3.30.2
  - dockerhub.kubekey.local/kubesphereio/pod2daemon-flexvol:v3.30.2
  - dockerhub.kubekey.local/kubesphereio/typha:v3.30.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/flannel:v0.21.3
  - registry.cn-beijing.aliyuncs.com/kubesphereio/flannel-cni-plugin:v1.1.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/cilium:v1.15.3
  - registry.cn-beijing.aliyuncs.com/kubesphereio/operator-generic:v1.15.3
  - dockerhub.kubekey.local/kubesphereio/hybridnet:v0.8.6
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-ovn:v1.10.10
  - registry.cn-beijing.aliyuncs.com/kubesphereio/multus-cni:v3.8
  - registry.cn-beijing.aliyuncs.com/kubesphereio/provisioner-localpv:3.3.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/linux-utils:3.3.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/haproxy:2.9.6-alpine
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-vip:v0.7.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kata-deploy:stable
  - registry.cn-beijing.aliyuncs.com/kubesphereio/node-feature-discovery:v0.10.0
  ## ks-core
  - swr.cn-south-1.myhuaweicloud.com/gjing1st/kubesphere/ks-extensions-museum:v1.1.6
  - swr.cn-south-1.myhuaweicloud.com/gjing1st/kubesphere/ks-controller-manager:v4.1.3
  - swr.cn-south-1.myhuaweicloud.com/gjing1st/kubesphere/ks-apiserver:v4.1.3
  - swr.cn-south-1.myhuaweicloud.com/gjing1st/kubesphere/ks-console:v4.1.3
  - swr.cn-south-1.myhuaweicloud.com/gjing1st/kubesphere/kubectl:v1.27.16
  - swr.cn-south-1.myhuaweicloud.com/gjing1st/kubesphere/redis:7.2.4-alpine
  - swr.cn-south-1.myhuaweicloud.com/gjing1st/kubesphere/haproxy:2.9.6-alpine
  registry:
    auths: {}
./kt artifact export -m manifest-sample.yaml -o artifact-arm-k8s1327-ks413.tar.gz
You can see that an arm64 build of harbor is downloaded. Upstream harbor does not provide ARM builds, so kk does not support an ARM harbor either; this harbor build and this kk fork were built by ourselves. Because harbor no longer supports the helm chart extension after v2.8.0 and our company needs it to manage applications with helm, v2.7.1 is used here.
Install helm
curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
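You can confirm the installation with:
helm version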
Download the KubeSphere Core Helm Chart
VERSION=1.1.5 # chart version
helm fetch https://charts.kubesphere.io/main/ks-core-${VERSION}.tgz
Upload kt and the offline artifact to the server.
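For example, from the Internet-connected node (the /root/ target directory is illustrative; node1's IP comes from the table above, and the ks-core chart downloaded earlier is copied along because it is installed on the server later):
scp kt_arm.tar.gz artifact-arm-k8s1327-ks413.tar.gz ks-core-1.1.5.tgz root@192.168.0.121:/root/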
Mainly the node and harbor information needs to be modified.
./kt create config --with-kubernetes v1.32.7
Fill in the generated config-sample.yaml according to your actual server information.
apiVersion: kubekey.kubesphere.io/v1alpha2
kind: Cluster
metadata:
  name: sample
spec:
  hosts:
  - {name: node1, address: 192.168.0.121, internalAddress: 192.168.0.121, user: root, password: "123456", arch: "arm64"}
  roleGroups:
    etcd:
    - node1
    control-plane:
    - node1
    worker:
    - node1
    # Set this host group if you want kk to deploy the image registry automatically
    # (deploying the registry separately from the cluster is recommended to reduce mutual impact).
    # If harbor is deployed and containerManager is containerd, deploy harbor on a dedicated node,
    # because the harbor deployment depends on docker.
    registry:
    - node1
  controlPlaneEndpoint:
    ## Internal loadbalancer for apiservers
    # internalLoadbalancer: haproxy
    domain: lb.kubesphere.local
    address: ""
    port: 6443
  kubernetes:
    version: v1.32.7
    clusterName: cluster.local
    autoRenewCerts: true
    containerManager: docker
  etcd:
    type: kubekey
  network:
    plugin: flannel
    kubePodsCIDR: 10.233.64.0/18
    kubeServiceCIDR: 10.233.0.0/18
    ## multus support. https://github.com/k8snetworkplumbingwg/multus-cni
    multusCNI:
      enabled: false
  registry:
    type: harbor
    registryMirrors: []
    insecureRegistries: []
    privateRegistry: "dockerhub.kubekey.local"
    namespaceOverride: "kubesphereio"
    auths: # if docker, add via `docker login`; if containerd, append to `/etc/containerd/config.toml`
      "dockerhub.kubekey.local":
        username: "admin"
        password: Harbor@123 # can be customized here; a new feature since kk 3.1.8
        skipTLSVerify: true # Allow contacting registries over HTTPS with failed TLS verification.
        plainHTTP: false # Allow contacting registries over HTTP.
        certsPath: "/etc/docker/certs.d/dockerhub.kubekey.local"
  addons: []
Note: a few points to keep in mind.
For the spec.hosts parameters of the config-sample.yaml file, see the official documentation.
Note: if the servers are in the same region and their private IPs can reach each other, use the private IP for both address and internalAddress. We have seen people test on cloud servers with the public IP as address; because file copying during deployment goes through address and the public bandwidth was low, creating the cluster became extremely slow.
./kt init registry -f config-sample.yaml -a artifact-arm-k8s1327-ks413.tar.gz
This command automatically installs docker and docker-compose.
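You can check the installed versions with, for example:
docker version
docker-compose version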
Verify the login:
docker login -u admin -p Harbor@123 dockerhub.kubekey.local
Note:
The Harbor administrator account is admin with password Harbor@123; the password is kept in sync with the corresponding password in the configuration file.
The harbor installation files live in the /opt/harbor directory, where you can operate and maintain harbor.
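For example, a few routine operations from that directory, assuming the docker-compose based layout that kt/kk installs:
cd /opt/harbor
docker-compose ps      # check the status of the Harbor components
docker-compose stop    # stop Harbor
docker-compose up -d   # start Harbor again in the background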
vi create_project_harbor.sh
#!/usr/bin/env bash
url="https://dockerhub.kubekey.local"  # or change to your actual registry address
user="admin"
passwd="Harbor@123"
harbor_projects=(
  ks
  kubesphere
  kubesphereio
  tx1st
  gjing1st
)
for project in "${harbor_projects[@]}"; do
  echo "creating $project"
  curl -u "${user}:${passwd}" -X POST -H "Content-Type: application/json" "${url}/api/v2.0/projects" -d "{ \"project_name\": \"${project}\", \"public\": true}" -k  # note: add -k at the end of the curl command
done
Create the Harbor projects:
chmod +x create_project_harbor.sh
./create_project_harbor.sh
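Optionally, confirm that the projects were created via the Harbor API (same credentials as above; -k skips TLS verification for the self-signed certificate):
curl -k -u admin:Harbor@123 "https://dockerhub.kubekey.local/api/v2.0/projects?page_size=20"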
Run the following command to create the Kubernetes cluster:
./kt create cluster -f config-sample.yaml -a artifact-arm-k8s1327-ks413.tar.gz --with-local-storage
With this command, kt automatically pushes the images from the offline artifact to the private harbor registry.
After it starts you will see a prompt like the one below; enter y to continue.
After a while you will see the success message.
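Before installing ks-core you can quickly verify the cluster, for example:
kubectl get nodes -o wide   # the node should be Ready
kubectl get pods -A         # the system pods should be Running
Then install KubeSphere Core with the chart downloaded earlier: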
helm upgrade --install -n kubesphere-system --create-namespace ks-core ks-core-1.1.5.tgz \
--set global.imageRegistry=dockerhub.kubekey.local/gjing1st \
--set extension.imageRegistry=dockerhub.kubekey.local/gjing1st \
--set ksExtensionRepository.image.tag=v1.1.6 \
--set ha.enabled=true \
--set redisHA.enabled=true \
--debug \
--wait
After roughly one minute you will see the success message.
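To double-check, for example:
kubectl get pods -n kubesphere-system
# By default the console is exposed as a NodePort service at http://<node-IP>:30880,
# with the initial account admin / P@88w0rd (you will be asked to change the password on first login).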
Login page
Home page
Cluster management
Overview
Node information
Reference links
[1] Offline artifact: https://pan.quark.cn/s/019d887f2b25
[2] kt archive: https://pan.xunlei.com/s/VOX82u9Jqv4RZV5P1gfpb3EWA1?pwd=wjcu#