
Kubernetes Hands-On Introduction, Part 2

Yifans_Z
Published 2023-08-23 18:58:26

17 A Multi-Node Kubernetes Cluster

Tested on Tencent Cloud TencentOS Server 3.1 (TK4):

  • master: SA3.MEDIUM4, 2 cores, 4 GB RAM, 5 Mbps
  • worker: S5.SMALL2, 1 core, 2 GB RAM, 1 Mbps
  • worker: S5.SMALL2, 1 core, 2 GB RAM, 1 Mbps
# Switch the yum source to https://mirrors.cloud.tencent.com/
mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.backup
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.cloud.tencent.com/repo/centos7_base.repo
yum clean all
yum makecache

# install docker
yum remove docker \
                  docker-client \
                  docker-client-latest \
                  docker-common \
                  docker-latest \
                  docker-latest-logrotate \
                  docker-logrotate \
                  docker-engine

yum install -y yum-utils

yum-config-manager \
    --add-repo \
    https://download.docker.com/linux/centos/docker-ce.repo

yum install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin

systemctl start docker
docker -v
docker run hello-world

Some preparation work:

# Change the hostname
vi /etc/hostname
# reboot
# Change the cgroup driver to systemd
# (Docker is used as Kubernetes' underlying container runtime)
cat <<EOF | sudo tee /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF

systemctl enable docker
systemctl daemon-reload
systemctl restart docker
docker version
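
To confirm that the new cgroup driver is actually in effect, you can query the Docker daemon; a quick sanity check, not part of the original steps:

# Should print "Cgroup Driver: systemd" after the restart
docker info | grep -i 'cgroup driver'
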
# https://kubernetes.io/zh-cn/docs/setup/production-environment/container-runtimes/
# Forward IPv4 and let iptables see bridged traffic
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF

sudo modprobe overlay
sudo modprobe br_netfilter

# check
lsmod | grep br_netfilter
lsmod | grep overlay

# Set the required sysctl parameters; they persist across reboots
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF

# Apply the sysctl parameters without rebooting
sudo sysctl --system

sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward
# net.bridge.bridge-nf-call-iptables = 1
# net.bridge.bridge-nf-call-ip6tables = 1
# net.ipv4.ip_forward = 1
# Disable the Linux swap partition
swapoff -a
# comment out the swap entry in /etc/fstab so it stays off after a reboot
sed -ri '/\sswap\s/s/^#?/#/' /etc/fstab
# https://developer.aliyun.com/mirror/kubernetes
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.cloud.tencent.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
EOF

yum clean all
yum makecache

# Set SELinux to permissive mode (effectively disabling it)
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

# The newest versions could not be made to work; pin 1.23.16 instead
# yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes --nogpgcheck

yum --showduplicate list kubelet
yum install -y kubelet-1.23.16-0 kubeadm-1.23.16-0 kubectl-1.23.16-0 --disableexcludes=kubernetes --nogpgcheck

systemctl enable --now kubelet

kubeadm version
kubectl version --output=yaml
kubelet --version

Download the Kubernetes component images:

# kubeadm config images list
kubeadm config images list --kubernetes-version v1.23.16
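
The listed images can also be pre-pulled from a domestic mirror instead of k8s.gcr.io; a small sketch, using the same Aliyun repository that is passed to kubeadm init below:

# Pre-pull the control-plane images (same flags as the kubeadm init call later on)
kubeadm config images pull \
    --image-repository=registry.aliyuncs.com/google_containers \
    --kubernetes-version=v1.23.16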

Install the master node:

vim /etc/containerd/config.toml
# make sure this line is commented out:
#disabled_plugins = ["cri"]

systemctl enable containerd
systemctl restart containerd
systemctl status containerd

systemctl enable kubelet.service
systemctl restart kubelet
systemctl status kubelet

containerd config default > /etc/containerd/config.toml

yum install -y nc
nc 127.0.0.1 6443

kubeadm init -h
kubeadm reset -f
rm -rf ~/.kube/config

kubeadm init \
    --image-repository=registry.aliyuncs.com/google_containers \
    --pod-network-cidr=10.10.0.0/16 \
    --kubernetes-version=v1.23.16 \
    --v=9

# Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
#   https://kubernetes.io/docs/concepts/cluster-administration/addons/
# success
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

export KUBECONFIG=/etc/kubernetes/admin.conf
kubectl get node
# NAME     STATUS     ROLES                  AGE    VERSION
# master   NotReady   control-plane,master   2m4s   v1.23.16
# debug
systemctl restart docker
systemctl restart kubelet
systemctl restart containerd
journalctl -xeu kubelet
crictl ps -a
crictl --runtime-endpoint unix:///var/run/containerd/containerd.sock ps -a | grep kube | grep -v pause

kubectl get pods -n kube-system
kubectl describe pods -n kube-system
# Flannel network plugin: https://github.com/flannel-io/flannel/tree/master
# curl https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml --output kube-flannel.yml
#  net-conf.json: |
#     {
#       "Network": "10.10.0.0/16",
#       "Backend": {
#         "Type": "vxlan"
#       }
#     }
kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml
kubectl get node
# NAME     STATUS   ROLES                  AGE   VERSION
# master   Ready    control-plane,master   14h   v1.23.16
# show join command in control-plane
kubeadm token create --print-join-command
# worker join; on cloud servers remember to open the inbound port
telnet 172.21.0.5 6443
systemctl enable kubelet.service
kubeadm join 172.21.0.5:6443 --token xxx --discovery-token-ca-cert-hash sha256:xxx --v=9
# check in control-plane
kubectl get nodes
# NAME            STATUS   ROLES                  AGE     VERSION
# master          Ready    control-plane,master   14h     v1.23.16
# vm-0-9-centos   Ready    <none>                 3m27s   v1.23.16
# run nginx
kubectl run ngx --image=nginx:alpine
kubectl get pod -o wide
# NAME   READY   STATUS    RESTARTS   AGE   IP          NODE      NOMINATED NODE   READINESS GATES
# ngx    1/1     Running   0          52m   10.10.1.2   woker01   <none>           <none>

18 Deploying Applications with Deployment

"Single responsibility" and "object composition": since a Pod cannot manage itself, we create another object to manage Pods for it, in the same "object wrapping an object" style as Job/CronJob.

kubectl api-resources

export out="--dry-run=client -o yaml"
kubectl create deploy ngx-dep --image=nginx:alpine $out > ngx-dep.yml
vim ngx-dep.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: ngx-dep
  name: ngx-dep
spec:
  # The "desired" number of Pods; Kubernetes automatically keeps the actual count at this level
  replicas: 2
  # Label-based rules for selecting Pods; they must match the labels in the Pod template below
  selector:
    matchLabels:
      app: ngx-dep
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      # attach the labels
      labels:
        app: ngx-dep
    spec:
      containers:
        - image: nginx:alpine
          name: nginx
          resources: {}
status: {}

A Deployment does not actually "hold" its Pod objects; it only makes sure that enough replicas of them are running.

Through this label design, Kubernetes removes the tight binding between a Deployment and the Pods in its template, turning the composition into a "weak reference".
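
Once the manifest below has been applied, this weak reference is easy to see: the Deployment finds its Pods purely through the label query stored in its selector (a couple of quick checks, not in the original text):

# List exactly the Pods this Deployment manages, by the same label the selector uses
kubectl get pod -l app=ngx-dep
# The selector itself lives on the Deployment object
kubectl get deploy ngx-dep -o jsonpath='{.spec.selector.matchLabels}'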

# replicas: 2
kubectl apply -f ngx-dep.yml

kubectl get deploy
# NAME      READY   UP-TO-DATE   AVAILABLE   AGE
# ngx-dep   2/2     2            2           57s
kubectl get pod
# NAME                      READY   STATUS    RESTARTS   AGE
# ngx-dep-bfbb5f64b-96scb   1/1     Running   0          3m20s
# ngx-dep-bfbb5f64b-qnzbh   1/1     Running   0          3m20s
  • READY: the number of running Pods, shown as current/desired.
  • UP-TO-DATE: the number of Pods that have been updated to the latest state.
  • AVAILABLE: Pods that are not only running but also healthy and able to serve traffic; this is the Deployment metric we care about most.
  • AGE: how much time has passed since creation.
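
Because AVAILABLE is the column that really matters, kubectl can also block until the Deployment reports all replicas available; a couple of extra checks, not in the original text:

# Waits until the rollout is complete and all replicas are available
kubectl rollout status deploy ngx-dep
# Or read the field directly
kubectl get deploy ngx-dep -o jsonpath='{.status.availableReplicas}'
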
# Test self-healing
kubectl delete pod ngx-dep-bfbb5f64b-qnzbh
kubectl get pod
# NAME                      READY   STATUS    RESTARTS   AGE
# ngx-dep-bfbb5f64b-7n724   1/1     Running   0          33s
# ngx-dep-bfbb5f64b-96scb   1/1     Running   0          4m52s

# Test scaling
kubectl scale --replicas=5 deploy ngx-dep
kubectl get pod
# NAME                      READY   STATUS    RESTARTS   AGE
# ngx-dep-bfbb5f64b-7n724   1/1     Running   0          77s
# ngx-dep-bfbb5f64b-7xhbs   1/1     Running   0          7s
# ngx-dep-bfbb5f64b-96scb   1/1     Running   0          5m36s
# ngx-dep-bfbb5f64b-97qp5   1/1     Running   0          7s
# ngx-dep-bfbb5f64b-vjn4q   1/1     Running   0          7s
# Filter by label: ==, !=, in, notin
kubectl get pod -l app=nginx
kubectl get pod -l 'app in (ngx, nginx, ngx-dep)'

19 DaemonSet, the Watchdog

As far as a Deployment is concerned, where a Pod runs has nothing to do with its function; as long as there are enough Pods, the application should work fine.

In some scenarios, though, a Pod must run on every node in the cluster, i.e. the number of Pods stays in sync with the number of nodes, and the Pods must not drift around the cluster.

The goal of a DaemonSet is to run one and only one Pod on every node of the cluster.

kubectl api-resources
# export out="--dry-run=client -o yaml"
# kubectl create deploy redis-ds --image=redis:5-alpine $out
# change kind to DaemonSet and delete spec.replicas
# https://kubernetes.io/zh-cn/docs/concepts/workloads/controllers/daemonset/
# vim redis-ds.yml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: redis-ds
  labels:
    app: redis-ds
spec:
  # compared with a Deployment there is no replicas field
  selector:
    matchLabels:
      name: redis-ds
  template:
    metadata:
      labels:
        name: redis-ds
    spec:
      containers:
        - image: redis:5-alpine
          name: redis
          ports:
            - containerPort: 6379
kubectl apply -f redis-ds.yml
kubectl get ds
# NAME       DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
# redis-ds   2         2         2       2            2           <none>          30m
kubectl get pod -o wide
# with two worker nodes
# NAME                      READY   STATUS    RESTARTS   AGE   IP           NODE      NOMINATED NODE   READINESS GATES
# redis-ds-9r96k            1/1     Running   0          2m52s   10.10.3.2    woker02   <none>           <none>
# redis-ds-hdl28            1/1     Running   0          21m     10.10.1.11   woker01   <none>           <none>
# but the master node is left out

A taint is, again, a way of "labeling" a node; a toleration describes whether a Pod can "tolerate" a given taint.
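
For reference, a taint is added with the same kubectl taint command that removes it below, just without the trailing dash; a generic sketch (the key "dedicated=test" is made up for illustration):

# Add a NoSchedule taint to a worker, then remove it again
kubectl taint node woker01 dedicated=test:NoSchedule
kubectl taint node woker01 dedicated=test:NoSchedule-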

kubectl describe node master
# Taints:             node-role.kubernetes.io/master:NoSchedule
# a taint rejects Pods from being scheduled onto this node
kubectl describe node woker01
# Taints:             <none>
# Remove the taint from the master node (the trailing '-' deletes it)
kubectl taint node master node-role.kubernetes.io/master:NoSchedule-
kubectl get ds
# NAME       DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
# redis-ds   3         3         3       3            3           <none>          31m
# Add tolerations to the Pod template
# kubectl explain ds.spec.template.spec.tolerations
# vim redis-ds-t.yml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: redis-ds-t
  labels:
    app: redis-ds-t
spec:
  # compared with a Deployment there is no replicas field
  selector:
    matchLabels:
      name: redis-ds-t
  template:
    metadata:
      labels:
        name: redis-ds-t
    spec:
      containers:
        - image: redis:5-alpine
          name: redis
          ports:
            - containerPort: 6379
      # tolerate node-role.kubernetes.io/master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
          operator: Exists
kubectl apply -f redis-ds-t.yml
kubectl get ds
# NAME         DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
# redis-ds     2         2         2       2            2           <none>          41m
# redis-ds-t   3         3         3       3            3           <none>          6s
# the difference is on the master node
kubectl get pod -o wide
# NAME                      READY   STATUS    RESTARTS   AGE   IP           NODE      NOMINATED NODE   READINESS GATES
# redis-ds-9r96k            1/1     Running   0          23m   10.10.3.2    woker02   <none>           <none>
# redis-ds-hdl28            1/1     Running   0          42m   10.10.1.11   woker01   <none>           <none>
# redis-ds-t-4mptv          1/1     Running   0          80s   10.10.3.4    woker02   <none>           <none>
# redis-ds-t-dpcl8          1/1     Running   0          80s   10.10.1.12   woker01   <none>           <none>
# redis-ds-t-kdjmn          1/1     Running   0          80s   10.10.0.6    master    <none>           <none>

https://kubernetes.io/zh-cn/docs/concepts/scheduling-eviction/taint-and-toleration/

Static Pods:

ll -a /etc/kubernetes/manifests
# -rw------- 1 root root 2274 Feb 22 12:47 etcd.yaml
# -rw------- 1 root root 3358 Feb 22 12:47 kube-apiserver.yaml
# -rw------- 1 root root 2878 Feb 22 12:47 kube-controller-manager.yaml
# -rw------- 1 root root 1465 Feb 22 12:47 kube-scheduler.yaml

The four core Kubernetes components, apiserver, etcd, scheduler and controller-manager, all run as static Pods, which is why they can come up before the Kubernetes cluster itself does.

The kubelet periodically checks the files in this directory.
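
To see the mechanism in action you can drop a manifest of your own into that directory; a minimal sketch (the file and Pod names are made up), which the kubelet on that node runs directly, without going through the API server:

# /etc/kubernetes/manifests/static-nginx.yaml (hypothetical example)
apiVersion: v1
kind: Pod
metadata:
  name: static-nginx
spec:
  containers:
    - image: nginx:alpine
      name: nginx
      ports:
        - containerPort: 80

The kubelet then creates a "mirror Pod" for it, visible in kubectl get pod with the node name appended to the Pod name.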

# flannel itself is a DaemonSet
kubectl get ns
kubectl get ds -n kube-flannel
# NAME              DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
# kube-flannel-ds   3         3         3       3            3           <none>          3h54m

20 Service and Service Discovery

A Service is layer-4 load balancing controlled by kube-proxy, forwarding traffic at the TCP/IP level.

Pods are short-lived and are constantly created and destroyed, so Services are needed for load balancing; Kubernetes assigns each Service a fixed IP address that shields clients from changes in the backend Pods.

export out="--dry-run=client -o yaml"
kubectl expose deploy ngx-dep --port=80 --target-port=80 $out
# vim ngx-svc.yml
apiVersion: v1
kind: Service
metadata:
  name: ngx-svc
spec:
  ports:
    - port: 80
      protocol: TCP
      targetPort: 80
  selector:
    app: ngx-dep
status:
  loadBalancer: {}

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: ngx-conf
data:
  default.conf: |
    server {
      listen 80;
      location / {
        default_type text/plain;
        return 200
          'srv : $server_addr:$server_port\nhost: $hostname\nuri : $request_method $host $request_uri\ndate: $time_iso8601\n';
      }
    }

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ngx-dep
spec:
  replicas: 2
  selector:
    matchLabels:
      app: ngx-dep
  template:
    metadata:
      labels:
        app: ngx-dep
    spec:
      volumes:
        - name: ngx-conf-vol
          configMap:
            name: ngx-conf
      containers:
        - image: nginx:alpine
          name: nginx
          ports:
            - containerPort: 80
          volumeMounts:
            - mountPath: /etc/nginx/conf.d
              name: ngx-conf-vol
kubectl apply -f ngx-svc.yml
kubectl get svc
# NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
# kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP   5h10m
# ngx-svc      ClusterIP   10.109.131.132   <none>        80/TCP    35s
# virtual address 10.109.131.132
kubectl describe svc ngx-svc
# Name:              ngx-svc
# Namespace:         default
# Labels:            <none>
# Annotations:       <none>
# Selector:          app=ngx-dep
# Type:              ClusterIP
# IP Family Policy:  SingleStack
# IP Families:       IPv4
# IP:                10.109.131.132
# IPs:               10.109.131.132
# Port:              <unset>  80/TCP
# TargetPort:        80/TCP
# Endpoints:         10.10.1.13:80,10.10.3.5:80
# Session Affinity:  None
# Events:            <none>
kubectl get pod -o wide
# NAME                       READY   STATUS    RESTARTS   AGE    IP           NODE
# ngx-dep-6796688696-cwm8f   1/1     Running   0          2m4s   10.10.3.5    woker02
# ngx-dep-6796688696-khjnv   1/1     Running   0          2m2s   10.10.1.13   woker01
# same Endpoints

# Because the Service and Pod IP addresses are on the cluster's internal network,
#   we need kubectl exec to get inside a Pod
kubectl exec -it ngx-dep-6796688696-cwm8f -- sh
curl 10.109.131.132
# srv : 10.10.3.5:80
# host: ngx-dep-6796688696-cwm8f
# uri : GET 10.109.131.132 /
# date: 2023-02-22T10:09:49+00:00
curl 10.109.131.132
# srv : 10.10.1.13:80
# host: ngx-dep-6796688696-khjnv
# uri : GET 10.109.131.132 /
# date: 2023-02-22T10:09:50+00:00

# Test recovery
kubectl delete pod ngx-dep-6796688696-khjnv
kubectl describe svc ngx-svc
# Endpoints:         10.10.1.14:80,10.10.3.5:80
# previously 10.10.1.13:80,10.10.3.5:80

# Test scaling out
kubectl scale --replicas=5 deploy ngx-dep
kubectl describe svc ngx-svc
# Endpoints:         10.10.1.14:80,10.10.1.15:80,10.10.3.5:80 + 2 more...

The fully qualified domain name of a Service object is object-name.namespace.svc.cluster.local, but the trailing parts can usually be omitted: object-name.namespace or even just the object name is enough, in which case the object's own namespace (here, default) is used by default.

# Name:              ngx-svc
# Namespace:         default
kubectl exec -it ngx-dep-6796688696-cwm8f -- sh
curl ngx-svc
# srv : 10.10.3.5:80
# host: ngx-dep-6796688696-cwm8f
# uri : GET ngx-svc /
# date: 2023-02-22T10:19:25+00:00
curl ngx-svc.default
# srv : 10.10.3.6:80
# host: ngx-dep-6796688696-lpcfs
# uri : GET ngx-svc.default /
# date: 2023-02-22T10:19:41+00:00
curl ngx-svc.default.svc.cluster.local
# srv : 10.10.3.5:80
# host: ngx-dep-6796688696-cwm8f
# uri : GET ngx-svc.default.svc.cluster.local /
# date: 2023-02-22T10:20:04+00:00

Pods are also assigned domain names, of the form pod-ip.namespace.pod.cluster.local, with the dots in the IP address replaced by dashes.
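
Continuing inside the same Pod shell as above, the Pod at 10.10.3.5 from the earlier output would be resolved like this (a sketch; the IP will differ in your cluster):

# Dots in the Pod IP become dashes in its DNS name
curl 10-10-3-5.default.pod.cluster.local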

# kubectl explain svc.spec.type
# vim ngx-svc.yml
  ...
  type: NodePort
kubectl apply -f ngx-svc.yml
kubectl get svc
# NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
# kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP        5h56m
# ngx-svc      NodePort    10.109.131.132   <none>        80:30916/TCP   46m
# The default Service type is ClusterIP, reachable only from inside the cluster;
#   changing it to NodePort opens a random port on every node so the service can also be reached from outside.
curl localhost:30916
# srv : 10.10.1.15:80
# host: ngx-dep-6796688696-l6skl
# uri : GET localhost /
# date: 2023-02-22T10:48:32+00:00

21 Ingress, the Traffic Manager

A Service has no serving capability of its own; it is just a set of iptables rules, and the component that actually configures and applies those rules is kube-proxy on each node.
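
Those rules can be inspected directly on a node; a quick check, assuming kube-proxy runs in its default iptables mode (the Service's namespace/name shows up in the rule comments):

# kube-proxy tags its chains and rules with "default/ngx-svc"
iptables-save | grep ngx-svc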

Likewise, an Ingress is just a collection of HTTP routing rules, essentially a static description file; to actually enforce those rules in the cluster you need something else: the Ingress Controller. It plays the same role for Ingress that kube-proxy plays for Service, reading and applying the Ingress rules and handling and scheduling the traffic.

An Ingress Class sits between Ingress and Ingress Controller, acting as a coordinator between traffic rules and controllers and removing the tight binding between the two.

Kubernetes users can manage Ingress Classes instead, using them to define groups for different business concerns and to keep Ingress rules from getting too complex.

export out="--dry-run=client -o yaml"
kubectl create ing ngx-ing --rule="ngx.test/=ngx-svc:80" --class=ngx-ink $out
# vim ingress.yml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  creationTimestamp: null
  name: ngx-ing
spec:
  ingressClassName: ngx-ink
  rules:
    - host: ngx.test
      http:
        # how paths are matched
        paths:
          - backend:
              service:
                name: ngx-svc
                port:
                  number: 80
            path: /
            # exact match (Exact) or prefix match (Prefix)
            pathType: Exact
status:
  loadBalancer: {}

---
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: ngx-ink
spec:
  controller: nginx.org/ingress-controller
kubectl apply -f ingress.yml
kubectl get ingressclass
# NAME      CONTROLLER                     PARAMETERS   AGE
# ngx-ink   nginx.org/ingress-controller   <none>       15s
kubectl get ing
# NAME      CLASS     HOSTS      ADDRESS   PORTS   AGE
# ngx-ing   ngx-ink   ngx.test             80      84s
kubectl describe ing ngx-ing
# Name:             ngx-ing
# Labels:           <none>
# Namespace:        default
# Address:
# Default backend:  default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
# Rules:
#   Host        Path  Backends
#   ----        ----  --------
#   ngx.test
#               /   ngx-svc:80 (10.10.1.14:80,10.10.1.15:80)
# Annotations:  <none>
# Events:       <none>

Using an Ingress Controller in Kubernetes:

https://docs.nginx.com/nginx-ingress-controller/installation/installation-with-manifests/

git clone https://github.com/nginxinc/kubernetes-ingress.git --branch v3.0.2
cd kubernetes-ingress/deployments

# Configure RBAC
# Create a namespace and a service account for the Ingress Controller
kubectl apply -f common/ns-and-sa.yaml
# Create a cluster role and cluster role binding for the service account
kubectl apply -f rbac/rbac.yaml

# Create Common Resources
# Create a secret with a TLS certificate and key for NGINX's default server
kubectl apply -f common/default-server-secret.yaml
# Create a config map for customizing the NGINX configuration
kubectl apply -f common/nginx-config.yaml
# Create an IngressClass resource
kubectl apply -f common/ingress-class.yaml

# Create Custom Resources
# kubectl apply -f common/crds/
vim deployment/nginx-ingress.yaml
# args add:
# -enable-custom-resources=false

# Run the Ingress Controller
kubectl apply -f deployment/nginx-ingress.yaml

# check
kubectl get pods --namespace=nginx-ingress
# NAME                             READY   STATUS    RESTARTS   AGE
# nginx-ingress-5f98f8f5f9-nnkv7   1/1     Running   0          3m14s

# Get Access to the Ingress Controller
kubectl create -f service/nodeport.yaml
kubectl get service -n nginx-ingress
# NAME            TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
# nginx-ingress   NodePort   10.111.210.52   <none>        80:31754/TCP,443:30188/TCP   5s
# debug
kubectl get IngressClass
kubectl get ing -n nginx-ingress
kubectl get deploy -n nginx-ingress
kubectl get pod -n nginx-ingress -o wide

kubectl describe service -n nginx-ingress
kubectl describe pod -n nginx-ingress
# kubectl port-forward maps a local port directly to a Pod inside the Kubernetes cluster
kubectl port-forward -n nginx-ingress nginx-ingress-5f98f8f5f9-nnkv7 8080:80 &
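
With the port-forward in place, the Ingress rule can then be exercised from the local machine by sending the Host that the rule matches; a sketch, assuming the controller was started with -ingress-class=ngx-ink (as is done for wp-ink in the next section), otherwise it only serves its default 404 page:

# The rule matches on the Host header, so set it explicitly
curl -H 'Host: ngx.test' http://localhost:8080/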

22 Playing with Kubernetes, Part 2

Deploying WordPress on Kubernetes:

# vim wp-maria.yml
apiVersion: v1
kind: ConfigMap
metadata:
  name: maria-cm
data:
  DATABASE: "db"
  USER: "wp"
  PASSWORD: "123"
  ROOT_PASSWORD: "123"

---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: maria-dep
  name: maria-dep
spec:
  replicas: 1
  selector:
    matchLabels:
      app: maria-dep
  template:
    metadata:
      labels:
        app: maria-dep
    spec:
      containers:
        - image: mariadb:10
          name: mariadb
          ports:
            - containerPort: 3306
          envFrom:
            - prefix: "MARIADB_"
              configMapRef:
                name: maria-cm

---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: maria-dep
  name: maria-svc
spec:
  ports:
    - port: 3306
      protocol: TCP
      targetPort: 3306
  selector:
    app: maria-dep
kubectl apply -f wp-maria.yml
kubectl get pod
kubectl get deploy
kubectl get svc
# vim wp-app.yml
apiVersion: v1
kind: ConfigMap
metadata:
  name: wp-cm
data:
  # DNS HOST
  HOST: "maria-svc"
  USER: "wp"
  PASSWORD: "123"
  NAME: "db"

---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: wp-dep
  name: wp-dep
spec:
  replicas: 2
  selector:
    matchLabels:
      app: wp-dep
  template:
    metadata:
      labels:
        app: wp-dep
    spec:
      containers:
        - image: wordpress:5
          name: wordpress
          ports:
            - containerPort: 80
          envFrom:
            - prefix: "WORDPRESS_DB_"
              configMapRef:
                name: wp-cm

---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: wp-dep
  name: wp-svc
spec:
  ports:
    - name: http80
      port: 80
      protocol: TCP
      targetPort: 80
      # pin the node port
      nodePort: 30088
  selector:
    app: wp-dep
  # NodePort
  type: NodePort
kubectl apply -f wp-app.yml
kubectl get pod
kubectl get deploy
kubectl get svc

kubectl port-forward service/wp-svc 80:80 --address 0.0.0.0
# vim wp-ing.yml
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: wp-ink
spec:
  controller: nginx.org/ingress-controller

---
# kubectl create ing wp-ing --rule="wp.test/=wp-svc:80" --class=wp-ink $out
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: wp-ing
spec:
  ingressClassName: wp-ink
  rules:
    - host: wp.test
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: wp-svc
                port:
                  number: 80
# vim wp-kic-dep.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wp-kic-dep
  namespace: nginx-ingress
spec:
  replicas: 1
  selector:
    matchLabels:
      app: wp-kic-dep
  template:
    metadata:
      labels:
        app: wp-kic-dep
    spec:
      # kubectl explain Deployment.spec.template.spec.serviceAccountName
      serviceAccountName: nginx-ingress
      # kubectl explain Deployment.spec.template.spec.hostNetwork
      hostNetwork: true
      containers:
        - image: nginx/nginx-ingress:3.0.2
          imagePullPolicy: IfNotPresent
          name: nginx-ingress
          ports:
            - name: http
              containerPort: 80
            - name: https
              containerPort: 443
            - name: readiness-port
              containerPort: 8081
            - name: prometheus
              containerPort: 9113
          readinessProbe:
            httpGet:
              path: /nginx-ready
              port: readiness-port
            periodSeconds: 1
          securityContext:
            allowPrivilegeEscalation: true
            runAsUser: 101 #nginx
            runAsNonRoot: true
            capabilities:
              drop:
                - ALL
              add:
                - NET_BIND_SERVICE
          env:
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
          args:
            # the default ingress class is nginx
            - -ingress-class=wp-ink
            - -enable-custom-resources=false
            - -nginx-configmaps=$(POD_NAMESPACE)/nginx-config
            - -default-server-tls-secret=$(POD_NAMESPACE)/default-server-secret

---
apiVersion: v1
kind: Service
metadata:
  name: wp-kic-svc
  namespace: nginx-ingress

spec:
  ports:
    - port: 80
      protocol: TCP
      targetPort: 80
      nodePort: 30080

  selector:
    app: wp-kic-dep
  type: NodePort
kubectl apply -f wp-ing.yml -f wp-kic-dep.yml

kubectl get ing
kubectl get ingressclass
kubectl get pod -n=nginx-ingress
kubectl describe pod -n=nginx-ingress
kubectl get deploy -n=nginx-ingress
kubectl get svc -n=nginx-ingress
# On the server
kubectl get pod -n=nginx-ingress -o=wide
# NAME                          READY   STATUS    RESTARTS   AGE   IP           NODE
# wp-kic-dep-68579bc688-d64zs   1/1     Running   0          10m   172.21.0.9   woker01
curl 172.21.0.9 -H "HOST: wp.test"
# From outside the server
kubectl port-forward service/wp-kic-svc -n=nginx-ingress 80:80 --address 0.0.0.0

vim /etc/hosts
[master ip] wp.test
# open wp.test in a browser

23 Intermediate Hands-On Recap

# Generate a DaemonSet template from a Deployment dry run
kubectl create deploy redis-ds --image=redis:5-alpine $out \
  | sed 's/Deployment/DaemonSet/g' - \
  | sed -e '/replicas/d' -
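
The same dry-run trick covers the Service from section 20 as well; a sketch that generates the NodePort variant directly instead of editing the type field by hand:

# Generate a NodePort Service template in one step
kubectl expose deploy ngx-dep --port=80 --target-port=80 --type=NodePort $out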

– EOF –
