
Kubernetes 1.19.0: Other Controllers

Author: gz_naldo · Column: CloudComputing · Last modified 2020-09-29

DS---------DaemonSet

A DaemonSet does not need a replica count; by default it runs exactly one pod on every node in the cluster.

Typical use cases

1. Run a cluster storage daemon on every node, for example glusterd or ceph.

2. Run a log-collection daemon on every node, for example fluentd or logstash (see the sketch after this list).

3. Run a monitoring daemon on every node, for example Prometheus Node Exporter, collectd, the Datadog agent, the New Relic agent, or Ganglia gmond.
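To make use case 2 concrete, here is a minimal log-collector DaemonSet sketch. It is not from the original article: the name log-collector, the fluentd image tag, and the hostPath are assumptions for illustration. The point is that every node gets exactly one pod, and that pod mounts the node's own /var/log.

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-collector            # hypothetical name
spec:
  selector:
    matchLabels:
      app: log-collector
  template:
    metadata:
      labels:
        app: log-collector
    spec:
      containers:
      - name: fluentd
        image: fluentd:v1.11     # assumed tag; use whatever image your registry provides
        imagePullPolicy: IfNotPresent
        volumeMounts:
        - name: varlog
          mountPath: /var/log
          readOnly: true
      volumes:
      - name: varlog
        hostPath:
          path: /var/log         # the node's own log directory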

Create a DaemonSet

Compared with a Deployment manifest, the main change is the kind field; because a DaemonSet has no replica count, the replicas and strategy fields from the generated template are also removed.

Generate a Deployment template with a dry run, then copy and modify it:

[root@vms61 chap6-ds]# kubectl create deploy ds1 --image=nginx --dry-run=client -o yaml > ds1.yaml
[root@vms61 chap6-ds]# vi ds1.yaml 
apiVersion: apps/v1
kind: DaemonSet
metadata:
  creationTimestamp: null
  labels:
    app: ds1
  name: ds1
spec:
  selector:
    matchLabels:
      app: ds1
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: ds1
    spec:
      containers:
      - image: nginx
        imagePullPolicy: IfNotPresent
        name: nginx
        resources: {}
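The original article showed the result as a screenshot. A typical way to apply and verify the manifest would be the following commands; the exact output depends on your cluster, so it is omitted here.

kubectl apply -f ds1.yaml
kubectl get ds ds1 -o wide
kubectl get pods -o wide          # expect one ds1 pod per schedulable node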

The DaemonSet is created successfully.

Why are there only 2 DaemonSet pods? Because the master node, vms61, carries a taint, so the DaemonSet pod is not scheduled there.
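To confirm the taint, and optionally let the DaemonSet run on the master as well, something like the following can be used. The taint key node-role.kubernetes.io/master is the default kubeadm master taint on 1.19; this is an assumption about vms61, so check the describe output first.

# check which taints the master carries
kubectl describe node vms61 | grep -i Taint

# optional: add a toleration under the pod template's spec in ds1.yaml
      tolerations:
      - key: node-role.kubernetes.io/master    # assumed taint key; match your node's actual taint
        operator: Exists
        effect: NoSchedule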

RC-------ReplicationController

The RC is created successfully and scaled up to 5 pods:
[root@vms61 chap6-ds]# cat rc.yaml
apiVersion: v1 
kind: ReplicationController 
metadata: 
  name: rcx 
spec: 
  replicas: 3 
  selector: 
    app: nginx 
  template: 
    metadata: 
      name: nginx 
      labels: 
        app: nginx 
    spec: 
      containers: 
      - name: nginx 
        image: nginx
        imagePullPolicy: IfNotPresent
        ports: 
        - containerPort: 80
[root@vms61 chap6-ds]# kubectl apply -f rc.yaml 
replicationcontroller/rcx created
[root@vms61 chap6-ds]# kubectl get pods
NAME        READY   STATUS    RESTARTS   AGE
rcx-6vdkj   1/1     Running   0          6s
rcx-72vq8   1/1     Running   0          6s
rcx-8vrx4   1/1     Running   0          6s
[root@vms61 chap6-ds]# kubectl get pods -o wide
NAME        READY   STATUS    RESTARTS   AGE   IP              NODE    NOMINATED NODE   READINESS GATES
rcx-6vdkj   1/1     Running   0          12s   10.244.116.7    vms63   <none>           <none>
rcx-72vq8   1/1     Running   0          12s   10.244.196.17   vms62   <none>           <none>
rcx-8vrx4   1/1     Running   0          12s   10.244.116.22   vms63   <none>           <none>
[root@vms61 chap6-ds]# kubectl get rc -o wide
NAME   DESIRED   CURRENT   READY   AGE   CONTAINERS   IMAGES   SELECTOR
rcx    3         3         3       36s   nginx        nginx    app=nginx
[root@vms61 chap6-ds]# kubectl scale rc rcx --replicas=5
replicationcontroller/rcx scaled
[root@vms61 chap6-ds]# kubectl get pods -o wide
NAME        READY   STATUS    RESTARTS   AGE    IP              NODE    NOMINATED NODE   READINESS GATES
rcx-6vdkj   1/1     Running   0          106s   10.244.116.7    vms63   <none>           <none>
rcx-72vq8   1/1     Running   0          106s   10.244.196.17   vms62   <none>           <none>
rcx-8vrx4   1/1     Running   0          106s   10.244.116.22   vms63   <none>           <none>
rcx-rvslz   1/1     Running   0          4s     10.244.196.22   vms62   <none>           <none>
rcx-vmwk8   1/1     Running   0          4s     10.244.116.21   vms63   <none>           <none>
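The whole point of an RC (like an RS or a Deployment) is to keep the desired number of replicas running. A quick sketch to see that, reusing one of the pod names listed above (your pod names will differ), followed by scale-down and cleanup:

# delete one pod; the RC immediately creates a replacement so 5 pods remain
kubectl delete pod rcx-6vdkj
kubectl get pods                  # one pod now has a new name and a very young AGE

# scale back down or clean up when finished
kubectl scale rc rcx --replicas=3
kubectl delete -f rc.yaml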

RS-------ReplicaSet

The RS is created successfully and scaled up to 10 pods:
[root@vms61 chap6-ds]# kubectl apply -f  rs1.yaml 
replicaset.apps/rs created
[root@vms61 chap6-ds]# kubectl get pods -o wide
NAME       READY   STATUS    RESTARTS   AGE   IP              NODE    NOMINATED NODE   READINESS GATES
rs-9k4s8   1/1     Running   0          4s    10.244.196.20   vms62   <none>           <none>
rs-g5wl5   1/1     Running   0          4s    10.244.116.26   vms63   <none>           <none>
rs-pszg7   1/1     Running   0          4s    10.244.116.23   vms63   <none>           <none>
[root@vms61 chap6-ds]# cat rs1.yaml 
apiVersion: apps/v1 
kind: ReplicaSet 
metadata: 
  name: rs 
  labels: 
    app: guestbook 
spec: 
  replicas: 3 
  selector: 
    matchLabels: 
      tier: frontend 
  template: 
    metadata: 
      labels: 
        app: guestbook 
        tier: frontend 
    spec: 
      containers: 
      - name: nginx 
        image: nginx
        imagePullPolicy: IfNotPresent
[root@vms61 chap6-ds]# kubectl get rs
NAME   DESIRED   CURRENT   READY   AGE
rs     3         3         3       19s
[root@vms61 chap6-ds]# kubectl scale rs rs --replicas=10
replicaset.apps/rs scaled
[root@vms61 chap6-ds]# kubectl get pods -o wide
NAME       READY   STATUS              RESTARTS   AGE   IP              NODE    NOMINATED NODE   READINESS GATES
rs-62nmp   1/1     Running             0          2s    10.244.116.25   vms63   <none>           <none>
rs-6bn4w   0/1     ContainerCreating   0          2s    <none>          vms63   <none>           <none>
rs-9k4s8   1/1     Running             0          35s   10.244.196.20   vms62   <none>           <none>
rs-d6f62   0/1     ContainerCreating   0          2s    <none>          vms62   <none>           <none>
rs-g5wl5   1/1     Running             0          35s   10.244.116.26   vms63   <none>           <none>
rs-gc8r9   1/1     Running             0          2s    10.244.116.24   vms63   <none>           <none>
rs-pszg7   1/1     Running             0          35s   10.244.116.23   vms63   <none>           <none>
rs-qknnq   0/1     ContainerCreating   0          2s    <none>          vms62   <none>           <none>
rs-v9vbx   0/1     ContainerCreating   0          2s    <none>          vms62   <none>           <none>
rs-zvk8n   0/1     ContainerCreating   0          2s    <none>          vms62   <none>           <none>
[root@vms61 chap6-ds]# kubectl get pods -o wide
NAME       READY   STATUS    RESTARTS   AGE   IP              NODE    NOMINATED NODE   READINESS GATES
rs-62nmp   1/1     Running   0          45s   10.244.116.25   vms63   <none>           <none>
rs-6bn4w   1/1     Running   0          45s   10.244.116.27   vms63   <none>           <none>
rs-9k4s8   1/1     Running   0          78s   10.244.196.20   vms62   <none>           <none>
rs-d6f62   1/1     Running   0          45s   10.244.196.29   vms62   <none>           <none>
rs-g5wl5   1/1     Running   0          78s   10.244.116.26   vms63   <none>           <none>
rs-gc8r9   1/1     Running   0          45s   10.244.116.24   vms63   <none>           <none>
rs-pszg7   1/1     Running   0          78s   10.244.116.23   vms63   <none>           <none>
rs-qknnq   1/1     Running   0          45s   10.244.196.21   vms62   <none>           <none>
rs-v9vbx   1/1     Running   0          45s   10.244.196.27   vms62   <none>           <none>
rs-zvk8n   1/1     Running   0          45s   10.244.196.28   vms62   <none>           <none>

Memory tip

An RC has no matchLabels and its apiVersion is v1 (an RS uses matchLabels under spec.selector and apiVersion apps/v1).
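A minimal side-by-side sketch of the two selector styles, based on the manifests above; everything except apiVersion, kind and the selector is omitted. An RS selector may additionally use set-based matchExpressions, which an RC does not support.

# ReplicationController: core API group, equality-based selector
apiVersion: v1
kind: ReplicationController
spec:
  selector:
    app: nginx

# ReplicaSet: apps/v1, selector wrapped in matchLabels (matchExpressions also allowed)
apiVersion: apps/v1
kind: ReplicaSet
spec:
  selector:
    matchLabels:
      tier: frontend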

Original statement: this article was published on the Tencent Cloud Developer Community with the author's authorization and may not be reproduced without permission. For infringement concerns, please contact cloudcommunity@tencent.com for removal.
