Appendix 014: Kubernetes Prometheus + Grafana + EFK + Kibana + GlusterFS Integrated Solution

Author: 木二 | Published: 2020-03-24

1 GlusterFS Storage Cluster Deployment

Note: the steps below are abbreviated; for details see "Appendix 009: Kubernetes Persistent Storage with Standalone GlusterFS Deployment".

1.1 Architecture Overview

1.2 Planning

Host          IP            Disk   Notes
k8smaster01   172.24.8.71   ——     Kubernetes master node; Heketi host
k8smaster02   172.24.8.72   ——     Kubernetes master node; Heketi host
k8smaster03   172.24.8.73   ——     Kubernetes master node; Heketi host
k8snode01     172.24.8.74   sdb    Kubernetes worker node; GlusterFS node 01
k8snode02     172.24.8.75   sdb    Kubernetes worker node; GlusterFS node 02
k8snode03     172.24.8.76   sdb    Kubernetes worker node; GlusterFS node 03

Tip: this plan uses the raw (bare) disks directly.

1.3 Install GlusterFS

# yum -y install centos-release-gluster

# yum -y install glusterfs-server

# systemctl start glusterd

# systemctl enable glusterd

Tip: installing on all nodes is recommended.
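Before adding the trusted pool it is worth confirming that glusterd is actually running on every storage node. A minimal sketch, assuming passwordless root SSH to the three worker nodes (otherwise run the two commands locally on each node):

for node in k8snode01 k8snode02 k8snode03; do
  echo "== ${node} =="
  ssh root@${node} "systemctl is-active glusterd && gluster --version | head -1"
done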

1.4 Add Peers to the Trusted Pool

[root@k8snode01 ~]# gluster peer probe k8snode02

[root@k8snode01 ~]# gluster peer probe k8snode03

[root@k8snode01 ~]# gluster peer status #check the trusted pool status

[root@k8snode01 ~]# gluster pool list #list the trusted pool members

Tip: this only needs to be executed once, on any one of the GlusterFS nodes.
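Since peer probing is idempotent, the two probes can also be wrapped in a short loop followed immediately by the status checks; a sketch to be run once on k8snode01, assuming the peer hostnames resolve on that node:

for peer in k8snode02 k8snode03; do
  gluster peer probe ${peer}
done
gluster peer status
gluster pool list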

1.5 Install Heketi

[root@k8smaster01 ~]# yum -y install heketi heketi-client

1.6 Configure Heketi

[root@k8smaster01 ~]# vi /etc/heketi/heketi.json

{
  "_port_comment": "Heketi Server Port Number",
  "port": "8080",

  "_use_auth": "Enable JWT authorization. Please enable for deployment",
  "use_auth": true,

  "_jwt": "Private keys for access",
  "jwt": {
    "_admin": "Admin has access to all APIs",
    "admin": {
      "key": "admin123"
    },
    "_user": "User only has access to /volumes endpoint",
    "user": {
      "key": "xianghy"
    }
  },

  "_glusterfs_comment": "GlusterFS Configuration",
  "glusterfs": {
    "_executor_comment": [
      "Execute plugin. Possible choices: mock, ssh",
      "mock: This setting is used for testing and development.",
      "      It will not send commands to any node.",
      "ssh:  This setting will notify Heketi to ssh to the nodes.",
      "      It will need the values in sshexec to be configured.",
      "kubernetes: Communicate with GlusterFS containers over",
      "            Kubernetes exec api."
    ],
    "executor": "ssh",

    "_sshexec_comment": "SSH username and private key file information",
    "sshexec": {
      "keyfile": "/etc/heketi/heketi_key",
      "user": "root",
      "port": "22",
      "fstab": "/etc/fstab"
    },

    "_db_comment": "Database file name",
    "db": "/var/lib/heketi/heketi.db",

    "_loglevel_comment": [
      "Set log level. Choices are:",
      "  none, critical, error, warning, info, debug",
      "Default is warning"
    ],
    "loglevel" : "warning"
  }
}

1.7 Configure Passwordless SSH

[root@k8smaster01 ~]# ssh-keygen -t rsa -q -f /etc/heketi/heketi_key -N ""

[root@k8smaster01 ~]# chown heketi:heketi /etc/heketi/heketi_key

[root@k8smaster01 ~]# ssh-copy-id -i /etc/heketi/heketi_key.pub root@k8snode01

[root@k8smaster01 ~]# ssh-copy-id -i /etc/heketi/heketi_key.pub root@k8snode02

[root@k8smaster01 ~]# ssh-copy-id -i /etc/heketi/heketi_key.pub root@k8snode03
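Heketi will later SSH into the storage nodes with this key, so it helps to confirm passwordless root login works before starting the service. A quick sketch, assuming the hostnames from section 1.2:

for node in k8snode01 k8snode02 k8snode03; do
  ssh -i /etc/heketi/heketi_key -o BatchMode=yes root@${node} "hostname; gluster --version | head -1"
done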

1.8 Start Heketi

[root@k8smaster01 ~]# systemctl enable heketi.service

[root@k8smaster01 ~]# systemctl start heketi.service

[root@k8smaster01 ~]# systemctl status heketi.service

[root@k8smaster01 ~]# curl http://localhost:8080/hello #test access
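A healthy service answers the /hello request above with a short greeting (typically "Hello from Heketi"). The JWT credentials configured in heketi.json can be smoke-tested with an authenticated client call as well; a sketch:

heketi-cli --server http://localhost:8080 --user admin --secret admin123 cluster list   # an empty cluster list still proves authentication works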

1.9 Configure the Heketi Topology

[root@k8smaster01 ~]# vi /etc/heketi/topology.json

{
  "clusters": [
    {
      "nodes": [
        {
          "node": {
            "hostnames": {
              "manage": [
                "k8snode01"
              ],
              "storage": [
                "172.24.8.74"
              ]
            },
            "zone": 1
          },
          "devices": [
            "/dev/sdb"
          ]
        },
        {
          "node": {
            "hostnames": {
              "manage": [
                "k8snode02"
              ],
              "storage": [
                "172.24.8.75"
              ]
            },
            "zone": 1
          },
          "devices": [
            "/dev/sdb"
          ]
        },
        {
          "node": {
            "hostnames": {
              "manage": [
                "k8snode03"
              ],
              "storage": [
                "172.24.8.76"
              ]
            },
            "zone": 1
          },
          "devices": [
            "/dev/sdb"
          ]
        }
      ]
    }
  ]
}

[root@k8smaster01 ~]# echo "export HEKETI_CLI_SERVER=http://k8smaster01:8080" >> /etc/profile.d/heketi.sh

[root@k8smaster01 ~]# echo "alias heketi-cli='heketi-cli --user admin --secret admin123'" >> .bashrc

[root@k8smaster01 ~]# source /etc/profile.d/heketi.sh

[root@k8smaster01 ~]# source .bashrc

[root@k8smaster01 ~]# echo $HEKETI_CLI_SERVER

http://k8smaster01:8080

[root@k8smaster01 ~]# heketi-cli --server $HEKETI_CLI_SERVER --user admin --secret admin123 topology load --json=/etc/heketi/topology.json

1.10 Cluster Management and Testing

[root@heketi ~]# heketi-cli cluster list #list clusters

[root@heketi ~]# heketi-cli node list #list nodes

[root@heketi ~]# heketi-cli volume list #list volumes

[root@k8snode01 ~]# gluster volume info #inspect from a GlusterFS node
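A quick end-to-end test of the Heketi/GlusterFS provisioning path is to create and then delete a small replicated volume; a sketch, where <VOLUME_ID> is a placeholder for the Id printed by the create or list command:

heketi-cli volume create --size=1 --replica=3   # create a 1GiB test volume
heketi-cli volume list                          # note the Id of the new volume
heketi-cli volume delete <VOLUME_ID>            # clean up the test volume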

1.11 Create the StorageClass

[root@k8smaster01 study]# vi heketi-secret.yaml

apiVersion: v1
kind: Secret
metadata:
  name: heketi-secret
  namespace: heketi
data:
  key: YWRtaW4xMjM=
type: kubernetes.io/glusterfs
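The key field carries the Heketi admin key from heketi.json in base64 form; YWRtaW4xMjM= is simply admin123 encoded, which can be reproduced (or regenerated for a different password) as follows:

echo -n "admin123" | base64    # prints YWRtaW4xMjM=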

[root@k8smaster01 study]# kubectl create ns heketi

[root@k8smaster01 study]# kubectl create -f heketi-secret.yaml #create the heketi-secret Secret

[root@k8smaster01 study]# kubectl get secrets -n heketi

[root@k8smaster01 study]# vim gluster-heketi-storageclass.yaml #define the StorageClass to be created

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ghstorageclass
parameters:
  resturl: "http://172.24.8.71:8080"
  clusterid: "ad0f81f75f01d01ebd6a21834a2caa30"
  restauthenabled: "true"
  restuser: "admin"
  secretName: "heketi-secret"
  secretNamespace: "heketi"
  volumetype: "replicate:3"
provisioner: kubernetes.io/glusterfs
reclaimPolicy: Delete

[root@k8smaster01 study]# kubectl create -f gluster-heketi-storageclass.yaml

Note: a StorageClass is immutable once created; to change it, delete and recreate it.

[root@k8smaster01 heketi]# kubectl get storageclasses #verify

NAME             PROVISIONER               AGE
ghstorageclass   kubernetes.io/glusterfs   85s

[root@k8smaster01 heketi]# kubectl describe storageclasses ghstorageclass
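To confirm that dynamic provisioning through the new StorageClass works end to end, a small throw-away PVC can be created, checked, and removed; a sketch, where the name test-gluster-pvc is arbitrary:

kubectl create -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-gluster-pvc
  namespace: default
spec:
  storageClassName: ghstorageclass
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
EOF
kubectl get pvc test-gluster-pvc       # STATUS should become Bound
kubectl delete pvc test-gluster-pvc    # clean up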

2 Cluster Monitoring (Metrics)

Note: the steps below are abbreviated; for details see "049. Cluster Management - Cluster Monitoring with Metrics".

2.1 Enable the Aggregation Layer

The aggregation layer must be enabled; clusters deployed with kubeadm have it enabled by default, which can be verified as follows.

[root@k8smaster01 ~]# cat /etc/kubernetes/manifests/kube-apiserver.yaml
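Concretely, a kubeadm-generated API server manifest carries the aggregation-layer flags (requestheader-* and proxy-client-*); a quick filter such as the following should show them:

grep -E "requestheader|proxy-client" /etc/kubernetes/manifests/kube-apiserver.yaml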

2.2 Get the Deployment Files

[root@k8smaster01 ~]# git clone https://github.com/kubernetes-incubator/metrics-server.git

[root@k8smaster01 ~]# cd metrics-server/deploy/1.8+/

[root@k8smaster01 1.8+]# vi metrics-server-deployment.yaml

……
        image: mirrorgooglecontainers/metrics-server-amd64:v0.3.6    #switch to a mirror reachable from mainland China
        command:
        - /metrics-server
        - --metric-resolution=30s
        - --kubelet-insecure-tls
        - --kubelet-preferred-address-types=InternalIP,Hostname,InternalDNS,ExternalDNS,ExternalIP    #add the command arguments above
……

2.3 Deploy

[root@k8smaster01 1.8+]# kubectl apply -f .

[root@k8smaster01 1.8+]# kubectl -n kube-system get pods -l k8s-app=metrics-server

[root@k8smaster01 1.8+]# kubectl -n kube-system logs -l k8s-app=metrics-server -f #optionally follow the deployment logs

2.4 Verify

[root@k8smaster01 ~]# kubectl top nodes

[root@k8smaster01 ~]# kubectl top pods --all-namespaces
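If kubectl top returns data, the metrics API is registered correctly; this can also be checked directly against the APIService object and the raw API, as a sketch:

kubectl get apiservices v1beta1.metrics.k8s.io                       # AVAILABLE should be True
kubectl get --raw "/apis/metrics.k8s.io/v1beta1/nodes" | head -c 300; echo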

3 Prometheus Deployment

Note: the steps below are abbreviated; for details see "050. Cluster Management - Prometheus + Grafana Monitoring Solution".

3.1 Get the Deployment Files

[root@k8smaster01 ~]# git clone https://github.com/prometheus/prometheus

3.2 Create the Namespace

[root@k8smaster01 ~]# cd prometheus/documentation/examples/

[root@k8smaster01 examples]# vi monitor-namespace.yaml

apiVersion: v1
kind: Namespace
metadata:
  name: monitoring

[root@k8smaster01 examples]# kubectl create -f monitor-namespace.yaml

3.3 Create RBAC Resources

[root@k8smaster01 examples]# vi rbac-setup.yml

apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: prometheus
rules:
- apiGroups: [""]
  resources:
  - nodes
  - nodes/proxy
  - services
  - endpoints
  - pods
  verbs: ["get", "list", "watch"]
- apiGroups:
  - extensions
  resources:
  - ingresses
  verbs: ["get", "list", "watch"]
- nonResourceURLs: ["/metrics"]
  verbs: ["get"]
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: prometheus
  namespace: monitoring               #only the namespace needs changing
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: prometheus
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: prometheus
subjects:
- kind: ServiceAccount
  name: prometheus
  namespace: monitoring              #only the namespace needs changing

[root@k8smaster01 examples]# kubectl create -f rbac-setup.yml
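Whether the ClusterRoleBinding really grants the prometheus ServiceAccount the expected read access can be spot-checked with kubectl auth can-i; a short sketch:

kubectl auth can-i list pods --as=system:serviceaccount:monitoring:prometheus   # expect: yes
kubectl auth can-i get nodes --as=system:serviceaccount:monitoring:prometheus   # expect: yes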

3.4 Create the Prometheus ConfigMap

[root@k8smaster01 examples]# cat prometheus-kubernetes.yml | grep -v ^$ | grep -v "#" >> prometheus-config.yaml

[root@k8smaster01 examples]# vi prometheus-config.yaml

apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus-server-conf
  labels:
    name: prometheus-server-conf
  namespace: monitoring               #change the namespace
……

[root@k8smaster01 examples]# kubectl create -f prometheus-config.yaml

3.5 Create a Persistent PVC

[root@k8smaster01 examples]# vi prometheus-pvc.yaml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: prometheus-pvc
  namespace: monitoring
  annotations:
    volume.beta.kubernetes.io/storage-class: ghstorageclass
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 5Gi

[root@k8smaster01 examples]# kubectl create -f prometheus-pvc.yaml
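Because the claim is provisioned dynamically through the ghstorageclass created in section 1.11, it should reach the Bound state on its own; a quick check before deploying Prometheus:

kubectl -n monitoring get pvc prometheus-pvc        # STATUS should be Bound
kubectl -n monitoring describe pvc prometheus-pvc   # events explain the cause if it stays Pending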

3.6 Deploy Prometheus

[root@k8smaster01 examples]# vi prometheus-deployment.yml

apiVersion: apps/v1beta2
kind: Deployment
metadata:
  labels:
    name: prometheus-deployment
  name: prometheus-server
  namespace: monitoring
spec:
  replicas: 1
  selector:
    matchLabels:
      app: prometheus-server
  template:
    metadata:
      labels:
        app: prometheus-server
    spec:
      containers:
        - name: prometheus-server
          image: prom/prometheus:v2.14.0
          command:
          - "/bin/prometheus"
          args:
            - "--config.file=/etc/prometheus/prometheus.yml"
            - "--storage.tsdb.path=/prometheus/"
            - "--storage.tsdb.retention=72h"
          ports:
            - containerPort: 9090
              protocol: TCP
          volumeMounts:
            - name: prometheus-config-volume
              mountPath: /etc/prometheus/
            - name: prometheus-storage-volume
              mountPath: /prometheus/
      serviceAccountName: prometheus
      imagePullSecrets:
        - name: regsecret
      volumes:
        - name: prometheus-config-volume
          configMap:
            defaultMode: 420
            name: prometheus-server-conf
        - name: prometheus-storage-volume
          persistentVolumeClaim:
            claimName: prometheus-pvc

[root@k8smaster01 examples]# kubectl create -f prometheus-deployment.yml
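Before creating the Service, it helps to wait for the Deployment to become ready and skim the container log for configuration errors; a sketch:

kubectl -n monitoring rollout status deployment prometheus-server
kubectl -n monitoring logs deployment/prometheus-server | tail -n 20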

3.7 Create the Prometheus Service

[root@k8smaster01 examples]# vi prometheus-service.yaml

apiVersion: v1
kind: Service
metadata:
  labels:
    app: prometheus-service
  name: prometheus-service
  namespace: monitoring
spec:
  type: NodePort
  selector:
    app: prometheus-server
  ports:
    - port: 9090
      targetPort: 9090
      nodePort: 30001

[root@k8smaster01 examples]# kubectl create -f prometheus-service.yaml

[root@k8smaster01 examples]# kubectl get all -n monitoring

3.8 Verify Prometheus

Open http://172.24.8.100:30001/ directly in a browser.
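Besides the browser check, the NodePort can be probed from the command line: Prometheus 2.x exposes a readiness endpoint and a targets API. A sketch, assuming 172.24.8.100 is a reachable cluster address as used above:

curl -s http://172.24.8.100:30001/-/ready
curl -s http://172.24.8.100:30001/api/v1/targets | head -c 300; echo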


4 Grafana Deployment

Note: the steps below are abbreviated; for details see "050. Cluster Management - Prometheus + Grafana Monitoring Solution".

4.1 Get the Deployment Files

[root@k8smaster01 ~]# git clone https://github.com/liukuan73/kubernetes-addons

[root@k8smaster01 ~]# cd /root/kubernetes-addons/monitor/prometheus+grafana

4.2 Create a Persistent PVC

[root@k8smaster01 prometheus+grafana]# vi grafana-data-pvc.yaml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: grafana-data-pvc
  namespace: monitoring
  annotations:
    volume.beta.kubernetes.io/storage-class: ghstorageclass
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi

[root@k8smaster01 prometheus+grafana]# kubectl create -f grafana-data-pvc.yaml

4.3 Deploy Grafana

[root@k8smaster01 prometheus+grafana]# vi grafana.yaml

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: monitoring-grafana
  namespace: monitoring
spec:
  replicas: 1
  template:
    metadata:
      labels:
        task: monitoring
        k8s-app: grafana
    spec:
      containers:
      - name: grafana
        image: grafana/grafana:6.5.0
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 3000
          protocol: TCP
        volumeMounts:
        - mountPath: /var/lib/grafana
          name: grafana-storage
        env:
          - name: INFLUXDB_HOST
            value: monitoring-influxdb
          - name: GF_SERVER_HTTP_PORT
            value: "3000"
          - name: GF_AUTH_BASIC_ENABLED
            value: "false"
          - name: GF_AUTH_ANONYMOUS_ENABLED
            value: "true"
          - name: GF_AUTH_ANONYMOUS_ORG_ROLE
            value: Admin
          - name: GF_SERVER_ROOT_URL
            value: /
        readinessProbe:
          httpGet:
            path: /login
            port: 3000
      volumes:
      - name: grafana-storage
        persistentVolumeClaim:
          claimName: grafana-data-pvc
      nodeSelector:
        node-role.kubernetes.io/master: "true"
      tolerations:
      - key: "node-role.kubernetes.io/master"
        effect: "NoSchedule"
---
apiVersion: v1
kind: Service
metadata:
  labels:
    kubernetes.io/cluster-service: 'true'
    kubernetes.io/name: monitoring-grafana
  annotations:
    prometheus.io/scrape: 'true'
    prometheus.io/tcp-probe: 'true'
    prometheus.io/tcp-probe-port: '80'
  name: monitoring-grafana
  namespace: monitoring
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 3000
    nodePort: 30002
  selector:
    k8s-app: grafana

[root@k8smaster01 prometheus+grafana]# kubectl label nodes k8smaster01 node-role.kubernetes.io/master=true

[root@k8smaster01 prometheus+grafana]# kubectl label nodes k8smaster02 node-role.kubernetes.io/master=true

[root@k8smaster01 prometheus+grafana]# kubectl label nodes k8smaster03 node-role.kubernetes.io/master=true

[root@k8smaster01 prometheus+grafana]# kubectl create -f grafana.yaml

[root@k8smaster01 examples]# kubectl get all -n monitoring

4.4 Verify Grafana

Open http://172.24.8.100:30002/ directly in a browser.

4.5 Configure Grafana

  • Add a data source: omitted here (a non-interactive sketch follows the tip below)
  • Create users: omitted

Tip: for all Grafana configuration options, see https://grafana.com/docs/grafana/latest/installation/configuration/.
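Because the Deployment above enables anonymous access with the Admin org role, a Prometheus data source can also be registered non-interactively through Grafana's HTTP API instead of the UI; a sketch, assuming the NodePort address used above and that anonymous API access is permitted (otherwise add admin credentials with -u):

curl -s -X POST http://172.24.8.100:30002/api/datasources \
  -H "Content-Type: application/json" \
  -d '{
        "name": "Prometheus",
        "type": "prometheus",
        "url": "http://prometheus-service.monitoring.svc:9090",
        "access": "proxy",
        "isDefault": true
      }'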

4.6 View the Monitoring Dashboards

Visit http://172.24.8.100:30002/ again in a browser.
