Kubernetes (k8s) - Installing k8s (containerd edition)


In the previous chapter we covered the basics of Docker. Today, larger container clusters almost all run on Kubernetes. Kubernetes involves a huge number of components and concepts, and new features keep arriving with every release; there are some features I have honestly never used myself, so I can only explain things according to my own understanding.

In the previous section we installed a Docker-based cluster on Kubernetes 1.23.12. As of December 2024 the latest Kubernetes release is 1.32.0, and ideally we would install 1.32.0 here. However, when I wrote the original draft of this article a year ago, upstream had already reached 1.28.6 while the Aliyun yum repository only carried 1.28.2, and a year later that repository is still at 1.28.2, so the version actually installed below remains 1.28. Since many of the steps are very similar to the Docker-based install, some parts are abbreviated.

IP Address     | Role   | Hostname
192.168.31.213 | master | master01
192.168.31.214 | node   | node01

Preparation

1. Operating system

Reference: Kubernetes (k8s) - Installing k8s (Docker edition)

2. Initialization

Reference: Kubernetes (k8s) - Installing k8s (Docker edition)

Deployment

1. Install containerd

Although we are not installing Docker this time, containerd was open-sourced by Docker and donated to the CNCF, so containerd and Docker are actually distributed from the same yum repository.

wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
yum install -y containerd.io
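
If you want to double-check what the Aliyun mirror actually installed, a quick sanity check like the following (optional, not part of the install itself) works:

# Confirm the installed package and the daemon version
rpm -q containerd.io
containerd --version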

2. Configure containerd

There is of course much more that could be tuned here. Frankly, I have never deployed or operated containerd at scale in production, so my experience in this area is limited; feel free to dig deeper on your own.

# Generate a full configuration file; the default one contains almost no settings
containerd config default > /etc/containerd/config.toml

# Change the sandbox image address
# The default is registry.k8s.io/pause:3.6; it was 3.8 when I wrote the original draft, and it is 3.10 now.
sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.10"
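
If you would rather make that change non-interactively instead of opening config.toml in an editor, a one-line sed like the sketch below should do it. The tag is only an example: note that the kubeadm init log further down actually recommends pause:3.9 for this 1.28.2 cluster, so pick the tag your kubeadm version expects.

# Replace the sandbox_image line in the generated config (tag is an example)
sed -i 's#sandbox_image = ".*"#sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.9"#' /etc/containerd/config.toml

# Verify the change
grep sandbox_image /etc/containerd/config.toml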

3. Start containerd

systemctl start containerd && systemctl enable containerd
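
Before moving on, it is worth confirming that the daemon and its socket are really up. The commands below are just a sanity check; ctr ships with containerd.io, while crictl is only pulled in later with the kubeadm/kubelet packages, so the last command may have to wait until after step 4.

# containerd service status and client/server version
systemctl status containerd --no-pager
ctr version

# Once crictl is available, this confirms the CRI endpoint responds
crictl --runtime-endpoint unix:///var/run/containerd/containerd.sock info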

4. Deploy Kubernetes

# Configure the yum repository
vi /etc/yum.repos.d/kubernetes.repo

[kubernetes]
name=kubernetes
enabled=1
gpgcheck=0
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/

# Install the required packages
yum -y install kubelet kubeadm kubectl
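
Because the Aliyun el7 repository tops out at 1.28.2, the unpinned install above already gives you that version. If you ever need to pin the three components explicitly (for example to keep kubelet, kubeadm and kubectl in lockstep on every node), yum accepts a versioned package name; this is just a sketch with 1.28.2 as the assumed target:

# Pin all three components to the same release (version shown is an example)
yum -y install kubelet-1.28.2 kubeadm-1.28.2 kubectl-1.28.2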

5. Start kubelet

systemctl start kubelet && systemctl enable kubelet
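
Do not be alarmed if kubelet keeps restarting at this point: /var/lib/kubelet/config.yaml does not exist yet and will only be generated by kubeadm init in the next step. A quick look (purely a sanity check) confirms this:

systemctl status kubelet --no-pager
# The unit log typically shows config-file errors until kubeadm init has run
journalctl -u kubelet --no-pager | tail -n 5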

6. Create the cluster

[root@master01 ~]# kubeadm init   --apiserver-advertise-address=192.168.31.213   --image-repository registry.aliyuncs.com/google_containers   --service-cidr=10.10.0.0/16   --pod-network-cidr=172.16.0.0/16   --kubernetes-version v1.28.2   --v=5
I1220 20:45:50.175353   31666 initconfiguration.go:117] detected and using CRI socket: unix:///var/run/containerd/containerd.sock
I1220 20:45:50.175408   31666 kubelet.go:196] the value of KubeletConfiguration.cgroupDriver is empty; setting it to "systemd"
[init] Using Kubernetes version: v1.28.2
[preflight] Running pre-flight checks
I1220 20:45:50.177802   31666 checks.go:563] validating Kubernetes and kubeadm version
I1220 20:45:50.177815   31666 checks.go:168] validating if the firewall is enabled and active
I1220 20:45:50.181463   31666 checks.go:203] validating availability of port 6443
I1220 20:45:50.181530   31666 checks.go:203] validating availability of port 10259
I1220 20:45:50.181540   31666 checks.go:203] validating availability of port 10257
I1220 20:45:50.181549   31666 checks.go:280] validating the existence of file /etc/kubernetes/manifests/kube-apiserver.yaml
I1220 20:45:50.181555   31666 checks.go:280] validating the existence of file /etc/kubernetes/manifests/kube-controller-manager.yaml
I1220 20:45:50.181559   31666 checks.go:280] validating the existence of file /etc/kubernetes/manifests/kube-scheduler.yaml
I1220 20:45:50.181562   31666 checks.go:280] validating the existence of file /etc/kubernetes/manifests/etcd.yaml
I1220 20:45:50.181568   31666 checks.go:430] validating if the connectivity type is via proxy or direct
I1220 20:45:50.181586   31666 checks.go:469] validating http connectivity to first IP address in the CIDR
I1220 20:45:50.181598   31666 checks.go:469] validating http connectivity to first IP address in the CIDR
I1220 20:45:50.181603   31666 checks.go:104] validating the container runtime
I1220 20:45:50.196043   31666 checks.go:639] validating whether swap is enabled or not
I1220 20:45:50.196097   31666 checks.go:370] validating the presence of executable crictl
I1220 20:45:50.196118   31666 checks.go:370] validating the presence of executable conntrack
I1220 20:45:50.196126   31666 checks.go:370] validating the presence of executable ip
I1220 20:45:50.196134   31666 checks.go:370] validating the presence of executable iptables
I1220 20:45:50.196142   31666 checks.go:370] validating the presence of executable mount
I1220 20:45:50.196149   31666 checks.go:370] validating the presence of executable nsenter
I1220 20:45:50.196158   31666 checks.go:370] validating the presence of executable ebtables
I1220 20:45:50.196163   31666 checks.go:370] validating the presence of executable ethtool
I1220 20:45:50.196170   31666 checks.go:370] validating the presence of executable socat
I1220 20:45:50.196176   31666 checks.go:370] validating the presence of executable tc
I1220 20:45:50.196182   31666 checks.go:370] validating the presence of executable touch
I1220 20:45:50.196191   31666 checks.go:516] running all checks
I1220 20:45:50.200409   31666 checks.go:401] checking whether the given node name is valid and reachable using net.LookupHost
I1220 20:45:50.200514   31666 checks.go:605] validating kubelet version
I1220 20:45:50.228325   31666 checks.go:130] validating if the "kubelet" service is enabled and active
I1220 20:45:50.232508   31666 checks.go:203] validating availability of port 10250
I1220 20:45:50.232554   31666 checks.go:329] validating the contents of file /proc/sys/net/bridge/bridge-nf-call-iptables
I1220 20:45:50.232586   31666 checks.go:329] validating the contents of file /proc/sys/net/ipv4/ip_forward
I1220 20:45:50.232596   31666 checks.go:203] validating availability of port 2379
I1220 20:45:50.232606   31666 checks.go:203] validating availability of port 2380
I1220 20:45:50.232616   31666 checks.go:243] validating the existence and emptiness of directory /var/lib/etcd
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
I1220 20:45:50.232703   31666 checks.go:828] using image pull policy: IfNotPresent
I1220 20:45:50.245859   31666 checks.go:846] image exists: registry.aliyuncs.com/google_containers/kube-apiserver:v1.28.2
I1220 20:45:50.259051   31666 checks.go:846] image exists: registry.aliyuncs.com/google_containers/kube-controller-manager:v1.28.2
I1220 20:45:50.272114   31666 checks.go:846] image exists: registry.aliyuncs.com/google_containers/kube-scheduler:v1.28.2
I1220 20:45:50.293045   31666 checks.go:846] image exists: registry.aliyuncs.com/google_containers/kube-proxy:v1.28.2
W1220 20:45:50.308144   31666 checks.go:835] detected that the sandbox image "registry.aliyuncs.com/google_containers/pause:3.10" of the container runtime is inconsistent with that used by kubeadm. It is recommended that using "registry.aliyuncs.com/google_containers/pause:3.9" as the CRI sandbox image.
I1220 20:45:50.322515   31666 checks.go:846] image exists: registry.aliyuncs.com/google_containers/pause:3.9
I1220 20:45:50.335660   31666 checks.go:846] image exists: registry.aliyuncs.com/google_containers/etcd:3.5.9-0
I1220 20:45:50.348960   31666 checks.go:846] image exists: registry.aliyuncs.com/google_containers/coredns:v1.10.1
[certs] Using certificateDir folder "/etc/kubernetes/pki"
I1220 20:45:50.349013   31666 certs.go:112] creating a new certificate authority for ca
[certs] Generating "ca" certificate and key
I1220 20:45:50.509139   31666 certs.go:519] validating certificate period for ca certificate
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local master01] and IPs [10.10.0.1 192.168.31.213]
[certs] Generating "apiserver-kubelet-client" certificate and key
I1220 20:45:50.793697   31666 certs.go:112] creating a new certificate authority for front-proxy-ca
[certs] Generating "front-proxy-ca" certificate and key
I1220 20:45:50.853565   31666 certs.go:519] validating certificate period for front-proxy-ca certificate
[certs] Generating "front-proxy-client" certificate and key
I1220 20:45:50.931644   31666 certs.go:112] creating a new certificate authority for etcd-ca
[certs] Generating "etcd/ca" certificate and key
I1220 20:45:51.026435   31666 certs.go:519] validating certificate period for etcd/ca certificate
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost master01] and IPs [192.168.31.213 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost master01] and IPs [192.168.31.213 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
I1220 20:45:51.231194   31666 certs.go:78] creating new public/private key files for signing service account users
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I1220 20:45:51.344477   31666 kubeconfig.go:103] creating kubeconfig file for admin.conf
[kubeconfig] Writing "admin.conf" kubeconfig file
I1220 20:45:51.385406   31666 kubeconfig.go:103] creating kubeconfig file for kubelet.conf
[kubeconfig] Writing "kubelet.conf" kubeconfig file
I1220 20:45:51.534822   31666 kubeconfig.go:103] creating kubeconfig file for controller-manager.conf
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
I1220 20:45:51.621462   31666 kubeconfig.go:103] creating kubeconfig file for scheduler.conf
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I1220 20:45:51.770394   31666 local.go:65] [etcd] wrote Static Pod manifest for a local etcd member to "/etc/kubernetes/manifests/etcd.yaml"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
I1220 20:45:51.770412   31666 manifests.go:102] [control-plane] getting StaticPodSpecs
I1220 20:45:51.770507   31666 certs.go:519] validating certificate period for CA certificate
I1220 20:45:51.770537   31666 manifests.go:128] [control-plane] adding volume "ca-certs" for component "kube-apiserver"
I1220 20:45:51.770541   31666 manifests.go:128] [control-plane] adding volume "etc-pki" for component "kube-apiserver"
I1220 20:45:51.770544   31666 manifests.go:128] [control-plane] adding volume "k8s-certs" for component "kube-apiserver"
I1220 20:45:51.770866   31666 manifests.go:157] [control-plane] wrote static Pod manifest for component "kube-apiserver" to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
I1220 20:45:51.770873   31666 manifests.go:102] [control-plane] getting StaticPodSpecs
I1220 20:45:51.770974   31666 manifests.go:128] [control-plane] adding volume "ca-certs" for component "kube-controller-manager"
I1220 20:45:51.770979   31666 manifests.go:128] [control-plane] adding volume "etc-pki" for component "kube-controller-manager"
I1220 20:45:51.770982   31666 manifests.go:128] [control-plane] adding volume "flexvolume-dir" for component "kube-controller-manager"
I1220 20:45:51.770984   31666 manifests.go:128] [control-plane] adding volume "k8s-certs" for component "kube-controller-manager"
I1220 20:45:51.770987   31666 manifests.go:128] [control-plane] adding volume "kubeconfig" for component "kube-controller-manager"
I1220 20:45:51.772037   31666 manifests.go:157] [control-plane] wrote static Pod manifest for component "kube-controller-manager" to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[control-plane] Creating static Pod manifest for "kube-scheduler"
I1220 20:45:51.772051   31666 manifests.go:102] [control-plane] getting StaticPodSpecs
I1220 20:45:51.772233   31666 manifests.go:128] [control-plane] adding volume "kubeconfig" for component "kube-scheduler"
I1220 20:45:51.772533   31666 manifests.go:157] [control-plane] wrote static Pod manifest for component "kube-scheduler" to "/etc/kubernetes/manifests/kube-scheduler.yaml"
I1220 20:45:51.772545   31666 kubelet.go:67] Stopping the kubelet
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
I1220 20:45:51.813388   31666 waitcontrolplane.go:83] [wait-control-plane] Waiting for the API server to be healthy
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 4.001252 seconds
I1220 20:45:55.815724   31666 uploadconfig.go:112] [upload-config] Uploading the kubeadm ClusterConfiguration to a ConfigMap
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I1220 20:45:55.827411   31666 uploadconfig.go:126] [upload-config] Uploading the kubelet component config to a ConfigMap
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
I1220 20:45:55.835539   31666 uploadconfig.go:131] [upload-config] Preserving the CRISocket information for the control-plane node
I1220 20:45:55.835567   31666 patchnode.go:31] [patchnode] Uploading the CRI Socket information "unix:///var/run/containerd/containerd.sock" to the Node API object "master01" as an annotation
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node master01 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node master01 as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: kv1zhw.zppidspgtkmx7oa7
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I1220 20:45:56.871704   31666 clusterinfo.go:47] [bootstrap-token] loading admin kubeconfig
I1220 20:45:56.871932   31666 clusterinfo.go:58] [bootstrap-token] copying the cluster from admin.conf to the bootstrap kubeconfig
I1220 20:45:56.872029   31666 clusterinfo.go:70] [bootstrap-token] creating/updating ConfigMap in kube-public namespace
I1220 20:45:56.875916   31666 clusterinfo.go:84] creating the RBAC rules for exposing the cluster-info ConfigMap in the kube-public namespace
I1220 20:45:56.881003   31666 kubeletfinalize.go:90] [kubelet-finalize] Assuming that kubelet client certificate rotation is enabled: found "/var/lib/kubelet/pki/kubelet-client-current.pem"
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I1220 20:45:56.881444   31666 kubeletfinalize.go:134] [kubelet-finalize] Restarting the kubelet to enable client certificate rotation
[addons] Applied essential addon: CoreDNS
I1220 20:45:57.259468   31666 request.go:629] Waited for 118.243416ms due to client-side throttling, not priority and fairness, request: POST:https://192.168.31.213:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings?timeout=10s
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.31.213:6443 --token kv1zhw.zppidspgtkmx7oa7 \
	--discovery-token-ca-cert-hash sha256:09e8d758444179d1eff42107578fd8b49c39bc7a36ffa4d74a3bda3d1ad7fc47 
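
Following the hints in the output above, a minimal verification on the master looks like the sketch below (run as root here; a regular user would copy admin.conf into ~/.kube/config instead). Until a pod network add-on matching the 172.16.0.0/16 pod CIDR is applied, the node will report NotReady and CoreDNS will stay Pending.

export KUBECONFIG=/etc/kubernetes/admin.conf
kubectl get nodes
kubectl get pods -n kube-system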

The remaining steps are the same as before, so I will not walk through them one by one; instead, here is a summary of the problems I ran into.

1. The br_netfilter kernel module was not loaded, so the kernel parameter changes did not take effect and the installation failed.

# For the purposes of this article I simply load it manually
# This command was covered earlier in the kernel-module chapter
modprobe br_netfilter
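
modprobe only lasts until the next reboot. To make the module and the related kernel parameters persistent, the usual approach looks roughly like this (a sketch; the file names under modules-load.d and sysctl.d are my own choice):

# Load br_netfilter automatically at boot
echo br_netfilter > /etc/modules-load.d/k8s.conf

# Kernel parameters the kubeadm preflight checks expect
cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF
sysctl --system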

2. My original document from a year ago also included a step that changed the cgroup driver; with that change the installation would not even complete, so that setting is not included in the steps above.

3. Because we are now using containerd while the earlier setup used Docker, the images themselves are interchangeable as long as the registry is reachable. If the registry is unavailable, however, you have to export the images with the docker command and import them with ctr (roughly as sketched below). The ctr command offers far fewer features and I rarely use it myself, so this cost me quite a bit of time. There is also the crictl command, which provides functionality broadly similar to docker; if you are not very familiar with containerd, weigh this carefully.
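
For reference, the export/import round trip described above looks roughly like this (a sketch; the image name is only an example, and the -n k8s.io namespace matters because that is where the kubelet's CRI looks for images):

# On a machine that still has Docker and the image locally
docker save registry.aliyuncs.com/google_containers/pause:3.9 -o pause.tar

# On the containerd node: import into the k8s.io namespace used by the kubelet
ctr -n k8s.io images import pause.tar

# crictl gives a more Docker-like view of what containerd now holds
crictl images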
