Setting Up a K8s Cluster Environment

1. Environment Planning

1.1 Cluster Types

Kubernetes clusters broadly fall into two types: one master with multiple nodes, and multiple masters with multiple nodes.

  • One master, multiple nodes: a single master plus several worker nodes. Simple to build, but the master is a single point of failure; suited to test environments.
  • Multiple masters, multiple nodes: several masters plus several worker nodes. More work to build, but highly available; suited to production environments.

1.2 Installation Methods

Kubernetes can be deployed in several ways; the mainstream options today are kubeadm, minikube, and binary packages.

Note: we need a kubernetes cluster environment but do not want too much hassle, so this walkthrough uses kubeadm.

1.3 Preparing the Environment

Role     IP Address         Components
master   192.168.111.100    docker, kubectl, kubeadm, kubelet
node1    192.168.111.101    docker, kubectl, kubeadm, kubelet
node2    192.168.111.102    docker, kubectl, kubeadm, kubelet

2. Building the Environment

Note: this build installs three Linux hosts (one master, two nodes) running CentOS Stream 8, then installs docker, kubeadm (1.25.4), kubelet (1.25.4), and kubectl (1.25.4) on each of them.

2.1 Host Installation

When installing the virtual machines, pay attention to the following settings (a scripted sketch of the network setup follows this list):
  • OS environment: 2 CPUs, 2 GB RAM, 50 GB disk, CentOS 7+
  • Language: Simplified Chinese / English
  • Software selection: Infrastructure Server
  • Partitioning: automatic / manual
  • Network configuration: use the address information below
  • IP address: 192.168.111.(100, 101, 102)
  • Subnet mask: 255.255.255.0
  • Default gateway: 192.168.111.254
  • DNS: 8.8.8.8
  • Hostnames:
  • Master node: master
  • Node: node1
  • Node: node2
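If you would rather script the network settings than click through the installer, here is a minimal sketch using nmcli; the connection name ens33 and the gateway are assumptions, so check yours with `nmcli con show` first.

# On the master (adjust the address for node1/node2); ens33 is an assumed NIC name
nmcli con mod ens33 ipv4.method manual \
  ipv4.addresses 192.168.111.100/24 \
  ipv4.gateway 192.168.111.254 \
  ipv4.dns 8.8.8.8
nmcli con up ens33
hostnamectl set-hostname master   # node1 / node2 on the other hosts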

2.2 Environment Initialization

Check the operating system version

# This kubeadm-based install requires CentOS 7.5 or later
[root@master ~]#cat /etc/redhat-release
CentOS Stream release 8

Hostname resolution (do this on all three nodes)

# To let cluster nodes call each other by name, configure hostname resolution here; in production an internal DNS server is recommended
[root@master ~]#cat >> /etc/hosts << EOF
> 192.168.111.100 master.example.com master
> 192.168.111.101 node1.example.com node1
> 192.168.111.102 node2.example.com node2
> EOF

[root@master ~]#scp /etc/hosts root@192.168.111.101:/etc/hosts
The authenticity of host '192.168.111.101 (192.168.111.101)' can't be established.
ECDSA key fingerprint is SHA256:0UQKIYmXwgllRaiKyKIR8RaO8bzS7GGb5180xGHoiMI.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added '192.168.111.101' (ECDSA) to the list of known hosts.
root@192.168.111.101's password: 
hosts                                                                                       100%  280   196.1KB/s   00:00    
[root@master ~]#scp /etc/hosts root@192.168.111.102:/etc/hosts
The authenticity of host '192.168.111.102 (192.168.111.102)' can't be established.
ECDSA key fingerprint is SHA256:0UQKIYmXwgllRaiKyKIR8RaO8bzS7GGb5180xGHoiMI.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added '192.168.111.102' (ECDSA) to the list of known hosts.
root@192.168.111.102's password: 
hosts                            
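A quick loop confirms that every name now resolves and responds; run it from any node:

for h in master node1 node2; do
  ping -c1 -W1 $h >/dev/null && echo "$h ok" || echo "$h FAILED"
done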

Time synchronization

# kubernetes requires the clocks of all cluster nodes to agree; here the chronyd service syncs time from the network. In production an internal time server is recommended.
- on master
[root@master ~]#vim /etc/chrony.conf
local stratum 10
[root@master ~]#systemctl restart chronyd.service
[root@master ~]#systemctl enable chronyd.service
[root@master ~]#hwclock -w

- on node1
[root@node1 ~]#vim /etc/chrony.conf
server master.example.com iburst
...
[root@node1 ~]#systemctl restart chronyd.service 
[root@node1 ~]#systemctl enable chronyd.service 
[root@node1 ~]#hwclock -w

- on node2
[root@node2 ~]#vim /etc/chrony.conf 
server master.example.com iburst
...
[root@node2 ~]#systemctl restart chronyd.service 
[root@node2 ~]#systemctl enable chronyd.service
[root@node2 ~]#hwclock -w
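To confirm the nodes really sync from the master rather than drifting, chrony can report its sources; on node1/node2 the master should appear with a `^*` marker once it is selected:

[root@node1 ~]#chronyc sources
[root@node1 ~]#chronyc tracking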

Disable firewalld, SELinux, and postfix (on all three nodes)

# Turn off the firewall, SELinux, and postfix -- configure all 3 hosts
- on master
[root@master ~]#systemctl disable --now firewalld
[root@master ~]#sed -i 's/enforcing/disabled/' /etc/selinux/config
[root@master ~]#setenforce 0
[root@master ~]#systemctl stop postfix
[root@master ~]#systemctl disable postfix

- on node1
[root@node1 ~]#systemctl disable --now firewalld
[root@node1 ~]#sed -i 's/enforcing/disabled/' /etc/selinux/config
[root@node1 ~]#setenforce 0
[root@node1 ~]#systemctl stop postfix
[root@node1 ~]#systemctl disable postfix

- on node2
[root@node2 ~]#systemctl disable --now firewalld
[root@node2 ~]#sed -i 's/enforcing/disabled/' /etc/selinux/config
[root@node2 ~]#setenforce 0
[root@node2 ~]#systemctl stop postfix
[root@node2 ~]#systemctl disable postfix
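A quick check that both are really off (sed only edits the config file, so SELinux reports Permissive until the next reboot):

getenforce                      # Permissive now, Disabled after a reboot
systemctl is-active firewalld   # should print: inactive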

Disable the swap partition (on all three nodes)

- on master
[root@master ~]#vim /etc/fstab # comment out the swap line
[root@master ~]#swapoff -a

- on node1
[root@node1 ~]#vim /etc/fstab # comment out the swap line
[root@node1 ~]#swapoff -a

- on node2
[root@node2 ~]#vim /etc/fstab # comment out the swap line
[root@node2 ~]#swapoff -a
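Editing fstab by hand is error-prone; an equivalent sed one-liner plus a verification step (a sketch, assuming the swap entry is an uncommented line containing "swap"):

sed -ri '/^[^#].*\sswap\s/s/^/#/' /etc/fstab
swapoff -a
swapon --show   # no output means swap is fully off
free -h         # the Swap line should read 0B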

Enable IP forwarding and adjust kernel parameters -- configure on all three nodes

- on master
[root@master ~]#vim /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
[root@master ~]#modprobe   br_netfilter
[root@master ~]#sysctl -p  /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1

- on node1
[root@node1 ~]#vim /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
[root@node1 ~]#modprobe   br_netfilter
[root@node1 ~]#sysctl -p  /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1

- on node2
[root@node2 ~]#vim /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
[root@node2 ~]#modprobe   br_netfilter
[root@node2 ~]#sysctl -p  /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
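Note that a module loaded with modprobe does not survive a reboot; one way to make br_netfilter persistent is a modules-load.d entry (a sketch; the sysctl.d file above is re-applied at boot automatically):

cat > /etc/modules-load.d/k8s.conf << EOF
br_netfilter
EOF
sysctl net.bridge.bridge-nf-call-iptables   # re-check after the next boot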

Configure IPVS support (on all three nodes)

- on master
[root@master ~]#vim /etc/sysconfig/modules/ipvs.modules
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh

[root@master ~]#chmod +x /etc/sysconfig/modules/ipvs.modules
[root@master ~]#bash /etc/sysconfig/modules/ipvs.modules
[root@master ~]#lsmod | grep -e ip_vs
ip_vs_sh               16384  0
ip_vs_wrr              16384  0
ip_vs_rr               16384  0
ip_vs                 172032  6 ip_vs_rr,ip_vs_sh,ip_vs_wrr
nf_conntrack          172032  1 ip_vs
nf_defrag_ipv6         20480  2 nf_conntrack,ip_vs
libcrc32c              16384  3 nf_conntrack,xfs,ip_vs
[root@master ~]#reboot

- on node1
[root@node1 ~]#vim /etc/sysconfig/modules/ipvs.modules
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh

[root@node1 ~]#chmod +x /etc/sysconfig/modules/ipvs.modules
[root@node1 ~]#bash /etc/sysconfig/modules/ipvs.modules
[root@node1 ~]#lsmod | grep -e ip_vs
ip_vs_sh               16384  0
ip_vs_wrr              16384  0
ip_vs_rr               16384  0
ip_vs                 172032  6 ip_vs_rr,ip_vs_sh,ip_vs_wrr
nf_conntrack          172032  1 ip_vs
nf_defrag_ipv6         20480  2 nf_conntrack,ip_vs
libcrc32c              16384  3 nf_conntrack,xfs,ip_vs
[root@node1 ~]#reboot

- on node2
[root@node2 ~]#vim /etc/sysconfig/modules/ipvs.modules
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh

[root@node2 ~]#chmod +x /etc/sysconfig/modules/ipvs.modules
[root@node2 ~]#bash /etc/sysconfig/modules/ipvs.modules
[root@node2 ~]#lsmod | grep -e ip_vs
ip_vs_sh               16384  0
ip_vs_wrr              16384  0
ip_vs_rr               16384  0
ip_vs                 172032  6 ip_vs_rr,ip_vs_sh,ip_vs_wrr
nf_conntrack          172032  1 ip_vs
nf_defrag_ipv6         20480  2 nf_conntrack,ip_vs
libcrc32c              16384  3 nf_conntrack,xfs,ip_vs
[root@node2 ~]#reboot
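The ipvs.modules script likewise only runs when invoked by hand; a modules-load.d entry loads the IPVS modules at every boot (a sketch):

cat > /etc/modules-load.d/ipvs.conf << EOF
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
EOF
systemctl restart systemd-modules-load.service
lsmod | grep ip_vs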

Passwordless SSH authentication

[root@master ~]#ssh-keygen 
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa): 
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:VcZ6m+gceBJxwysFWwM08526KiBoSt9qdbDQoMSx3kU root@master
The key's randomart image is:
+---[RSA 3072]----+
|...  E .*+o.o    |
| o...   .*==..   |
|... o.  .+o+o    |
|.....o  o.o..    |
| o .. o S+.o o   |
|.o. .o .o +.o    |
|+ ..o..  =..     |
|.  o ..  .o      |
|  ...  ..        |
+----[SHA256]-----+
[root@master ~]#ssh-copy-id -i ~/.ssh/id_rsa.pub root@node1
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host 'node1 (192.168.111.101)' can't be established.
ECDSA key fingerprint is SHA256:0UQKIYmXwgllRaiKyKIR8RaO8bzS7GGb5180xGHoiMI.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@node1's password: 

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'root@node1'"
and check to make sure that only the key(s) you wanted were added.

[root@master ~]#ssh-copy-id -i ~/.ssh/id_rsa.pub root@node2
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host 'node2 (192.168.111.102)' can't be established.
ECDSA key fingerprint is SHA256:0UQKIYmXwgllRaiKyKIR8RaO8bzS7GGb5180xGHoiMI.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@node2's password: 

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'root@node2'"
and check to make sure that only the key(s) you wanted were added.
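With the keys distributed, remote commands should now run from the master without a password prompt; a one-line check:

for n in node1 node2; do ssh root@$n hostname; done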

2.3 Installing Docker

Switch the package mirror

- on master
[root@master /etc/yum.repos.d]#curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-vault-8.5.2111.repo
[root@master /etc/yum.repos.d]# dnf -y install epel-release
[root@master /etc/yum.repos.d]#wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

- on node1
[root@node1 /etc/yum.repos.d]#curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-vault-8.5.2111.repo
[root@node1 /etc/yum.repos.d]# dnf -y install epel-release
[root@node1 /etc/yum.repos.d]#wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

- on node2
[root@node2 /etc/yum.repos.d]#curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-vault-8.5.2111.repo
[root@node2 /etc/yum.repos.d]# dnf -y install epel-release
[root@node2 /etc/yum.repos.d]#wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

Install docker-ce

- on master
[root@master ~]# dnf -y install docker-ce --allowerasing
[root@master ~]# systemctl restart docker
[root@master ~]# systemctl enable docker

- on node1
[root@node1 ~]# dnf -y install docker-ce --allowerasing
[root@node1 ~]# systemctl restart docker
[root@node1 ~]# systemctl enable docker

- on node2
[root@node2 ~]# dnf -y install docker-ce --allowerasing
[root@node2 ~]# systemctl restart docker
[root@node2 ~]# systemctl enable docker

Add a configuration file to set up a Docker registry mirror

- on master
[root@master ~]#cat > /etc/docker/daemon.json << EOF
 {
   "registry-mirrors": ["https://6vrrj6n2.mirror.aliyuncs.com"],
   "exec-opts": ["native.cgroupdriver=systemd"],
   "log-driver": "json-file",
   "log-opts": {
     "max-size": "100m"
   },
   "storage-driver": "overlay2"
 }
 EOF
[root@master ~]#systemctl daemon-reload
[root@master ~]#systemctl  restart docker

- on node1
[root@node1 ~]#cat > /etc/docker/daemon.json << EOF
 {
   "registry-mirrors": ["https://6vrrj6n2.mirror.aliyuncs.com"],
   "exec-opts": ["native.cgroupdriver=systemd"],
   "log-driver": "json-file",
   "log-opts": {
     "max-size": "100m"
   },
   "storage-driver": "overlay2"
 }
 EOF
[root@node1 ~]#systemctl daemon-reload
[root@node1 ~]#systemctl  restart docker

- on node2
[root@node2 ~]#cat > /etc/docker/daemon.json << EOF
 {
   "registry-mirrors": ["https://6vrrj6n2.mirror.aliyuncs.com"],
   "exec-opts": ["native.cgroupdriver=systemd"],
   "log-driver": "json-file",
   "log-opts": {
     "max-size": "100m"
   },
   "storage-driver": "overlay2"
 }
 EOF
[root@node2 ~]#systemctl daemon-reload
[root@node2 ~]#systemctl  restart docker
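Since kubelet also expects the systemd cgroup driver, it is worth confirming docker picked up the setting:

docker info | grep -i 'cgroup driver'   # expect: Cgroup Driver: systemd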

2.4 Installing the Kubernetes Components

The kubernetes images are hosted abroad and download slowly, so switch to a domestic mirror source:

- on master
[root@master ~]#cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

- on node1
[root@node1 ~]#cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

- on node2
[root@node2 ~]#cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
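The repo file is identical on all three machines, so instead of retyping it you could push it out from the master over the passwordless SSH configured earlier (a sketch):

for n in node1 node2; do
  scp /etc/yum.repos.d/kubernetes.repo root@$n:/etc/yum.repos.d/
done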

Install the kubeadm, kubelet, and kubectl tools

- on master
[root@master ~]#dnf  -y  install kubeadm  kubelet  kubectl
[root@master ~]#systemctl  restart  kubelet
[root@master ~]#systemctl  enable  kubelet

- on node1
[root@node1 ~]#dnf  -y  install kubeadm  kubelet  kubectl
[root@node1 ~]#systemctl  restart  kubelet
[root@node1 ~]#systemctl  enable  kubelet

- on node2
[root@node2 ~]#dnf  -y  install kubeadm  kubelet  kubectl
[root@node2 ~]#systemctl  restart  kubelet
[root@node2 ~]#systemctl  enable  kubelet
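A bare dnf install pulls whatever is newest in the repo; to be sure every node gets the 1.25.4 versions used below, you can pin the versions explicitly (a sketch):

dnf -y install kubeadm-1.25.4 kubelet-1.25.4 kubectl-1.25.4
kubeadm version -o short   # expect: v1.25.4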

Configure containerd

# To make cluster init and join succeed later, containerd's config file /etc/containerd/config.toml must be adjusted. Do this on every node.
- on master
[root@master ~]#containerd config default > /etc/containerd/config.toml
# Point the k8s image repository in /etc/containerd/config.toml at registry.aliyuncs.com/google_containers
[root@master ~]#vim /etc/containerd/config.toml
sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.6"
# Then restart and enable the containerd service
[root@master ~]#systemctl   restart  containerd
[root@master ~]#systemctl   enable  containerd

- on node1 (same edits as on master)
[root@node1 ~]#containerd config default > /etc/containerd/config.toml
# point sandbox_image at registry.aliyuncs.com/google_containers as above
[root@node1 ~]#vim /etc/containerd/config.toml
sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.6"
# then restart and enable containerd
[root@node1 ~]#systemctl   restart  containerd
[root@node1 ~]#systemctl   enable  containerd

- on node2 (same edits as on master)
[root@node2 ~]#containerd config default > /etc/containerd/config.toml
# point sandbox_image at registry.aliyuncs.com/google_containers as above
[root@node2 ~]#vim /etc/containerd/config.toml
sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.6"
# then restart and enable containerd
[root@node2 ~]#systemctl   restart  containerd
[root@node2 ~]#systemctl   enable  containerd
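The vim edit can also be scripted; the default sandbox_image value varies with the containerd version (k8s.gcr.io vs registry.k8s.io), so this sketch rewrites whatever is there and then verifies the result:

containerd config default > /etc/containerd/config.toml
sed -i -E 's#sandbox_image = ".*"#sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.6"#' /etc/containerd/config.toml
grep sandbox_image /etc/containerd/config.toml
systemctl restart containerd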

Deploy the k8s master node

- on master
[root@master ~]#kubeadm init \
  --apiserver-advertise-address=192.168.111.100 \
  --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version v1.25.4 \
  --service-cidr=10.96.0.0/12 \
  --pod-network-cidr=10.244.0.0/16
# It is a good idea to save the init output to a file for later reference
[root@master ~]#vim k8s 
To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.111.100:6443 --token eav8jn.zj2muv0thd7e8dad \
	--discovery-token-ca-cert-hash sha256:b38f8a6a6302e25c0bcba2a67c13b234fd0b9fdd8b0c0645154c79edf6555e09 

[root@master ~]#vim /etc/profile.d/k8s.sh
export KUBECONFIG=/etc/kubernetes/admin.conf
[root@master ~]#source /etc/profile.d/k8s.sh
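Before installing the network add-on, a quick look at the control-plane pods confirms the init worked; the coredns pods stay Pending until a pod network exists:

[root@master ~]#kubectl get pods -n kube-system
[root@master ~]#kubectl get nodes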

Install the pod network plugin

- on master
[root@master ~]#wget https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml
[root@master ~]#kubectl apply -f kube-flannel.yml 
namespace/kube-flannel created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created
[root@master ~]#kubectl get nodes
NAME     STATUS     ROLES           AGE     VERSION
master   NotReady   control-plane   6m41s   v1.25.4
[root@master ~]#kubectl get nodes
NAME     STATUS   ROLES           AGE     VERSION
master   Ready    control-plane   7m10s   v1.25.4
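The NotReady-to-Ready transition above corresponds to the flannel daemonset pod entering Running; you can watch it directly:

[root@master ~]#kubectl get pods -n kube-flannel -w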

Join the node machines to the k8s cluster

- on node1
[root@node1 ~]#kubeadm join 192.168.111.100:6443 --token eav8jn.zj2muv0thd7e8dad \
> --discovery-token-ca-cert-hash sha256:b38f8a6a6302e25c0bcba2a67c13b234fd0b9fdd8b0c0645154c79edf6555e09 

- on node2
[root@node2 ~]#kubeadm join 192.168.111.100:6443 --token eav8jn.zj2muv0thd7e8dad \
> --discovery-token-ca-cert-hash sha256:b38f8a6a6302e25c0bcba2a67c13b234fd0b9fdd8b0c0645154c79edf6555e09 
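The bootstrap token printed by kubeadm init expires after 24 hours by default; if you add a node later, generate a fresh join command on the master:

[root@master ~]#kubeadm token create --print-join-command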

Use kubectl get nodes to check node status

- on master
[root@master ~]#kubectl get nodes
NAME     STATUS     ROLES           AGE     VERSION
master   Ready      control-plane   9m37s   v1.25.4
node1    NotReady   <none>          51s     v1.25.4
node2    NotReady   <none>          31s     v1.25.4
[root@master ~]#kubectl get nodes
NAME     STATUS   ROLES           AGE     VERSION
master   Ready    control-plane   9m57s   v1.25.4
node1    Ready    <none>          71s     v1.25.4
node2    Ready    <none>          51s     v1.25.4

Create a pod on the k8s cluster running an nginx container, then test it

[root@master ~]#kubectl create  deployment  nginx  --image nginx
deployment.apps/nginx created
[root@master ~]#kubectl  get  pods
NAME                    READY   STATUS    RESTARTS   AGE
nginx-76d6c9b8c-z7p4l   1/1     Running   0          35s
[root@master ~]#kubectl  expose  deployment  nginx  --port 80  --type NodePort
service/nginx exposed
[root@master ~]#kubectl  get  pods  -o  wide
NAME                    READY   STATUS    RESTARTS   AGE    IP           NODE    NOMINATED NODE   READINESS GATES
nginx-76d6c9b8c-z7p4l   1/1     Running   0          119s   10.244.1.2   node1   <none>           <none>
[root@master ~]#kubectl  get  services
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP        15m
nginx        NodePort    10.109.37.202   <none>        80:31125/TCP   17s

Test access

Opening http://<node-IP>:31125 (the NodePort assigned above) in a browser should show the nginx welcome page.
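The same check from the command line; 31125 is the NodePort assigned above, so substitute yours if it differs:

[root@master ~]#curl -s http://192.168.111.100:31125 | grep -i '<title>'
<title>Welcome to nginx!</title>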

Change the default page

[root@master ~]# kubectl exec -it pod/nginx-76d6c9b8c-z7p4l -- /bin/bash
root@nginx-76d6c9b8c-z7p4l:/# cd /usr/share/nginx/html/
root@nginx-76d6c9b8c-z7p4l:/usr/share/nginx/html# echo "zhaoshulin" > index.html
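Hitting the service again should now return the new page content:

[root@master ~]#curl http://192.168.111.100:31125
zhaoshulin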