Heketi provides a RESTful management interface for managing the lifecycle of GlusterFS volumes. Heketi dynamically selects bricks within the cluster to build the requested volumes, ensuring that data replicas are spread across different failure domains of the cluster. Heketi also supports any number of GlusterFS clusters.
Tip: In this lab, GlusterFS and Kubernetes are deployed separately. Heketi manages GlusterFS, and Kubernetes consumes the API that Heketi exposes to obtain persistent GlusterFS storage; GlusterFS is not deployed by Kubernetes itself.
Tip: In this lab, Heketi manages only a single-zone GlusterFS cluster.
Host | IP | Disk | Role
---|---|---|---
servera | 172.24.8.41 | sdb | GlusterFS node
serverb | 172.24.8.42 | sdb | GlusterFS node
serverc | 172.24.8.43 | sdb | GlusterFS node
heketi | 172.24.8.44 | - | Heketi host
 | servera | serverb | serverc
---|---|---|---
PV | sdb1 | sdb1 | sdb1
VG | vg0 | vg0 | vg0
LV | datalv | datalv | datalv
brick directory | /bricks/data | /bricks/data | /bricks/data
Configure NTP on all nodes.
Add the corresponding host-name entries on all nodes:
172.24.8.41 servera
172.24.8.42 serverb
172.24.8.43 serverc
172.24.8.44 heketi
Note: Unless there is a specific requirement, it is recommended to disable the firewall and SELinux for this lab.
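If you do disable them, a minimal sketch using standard CentOS commands (adjust to your own security policy):
[root@servera ~]# systemctl disable --now firewalld		#stop firewalld and disable it at boot
[root@servera ~]# setenforce 0					#switch SELinux to permissive for the current boot
[root@servera ~]# sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config	#persist across reboots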
[root@servera ~]# fdisk /dev/sdb		#create the LVM partition sdb1 (steps omitted)
[root@servera ~]# pvcreate /dev/sdb1		#create a PV on /dev/sdb1
[root@servera ~]# vgcreate vg0 /dev/sdb1	#create the VG
[root@servera ~]# lvcreate -L 15G -T vg0/thinpool	#create a thin-provisioning pool LV
[root@servera ~]# lvcreate -V 10G -T vg0/thinpool -n datalv	#create the LV for the brick
[root@servera ~]# vgdisplay			#verify VG information
[root@servera ~]# pvdisplay			#verify PV information
[root@servera ~]# lvdisplay			#verify LV information
Tip: Repeat the equivalent steps on serverb and serverc, creating all the LVM-backed bricks required by the plan.
[root@servera ~]# yum -y install centos-release-gluster
Tip: Repeat on serverb, serverc, and the client to install the GlusterFS repository.
After the repository package is installed, a new file, CentOS-Storage-common.repo, appears under /etc/yum.repos.d/ with the following content:
# CentOS-Storage.repo
#
# Please see http://wiki.centos.org/SpecialInterestGroup/Storage for more
# information

[centos-storage-debuginfo]
name=CentOS-$releasever - Storage SIG - debuginfo
baseurl=http://debuginfo.centos.org/$contentdir/$releasever/storage/$basearch/
gpgcheck=1
enabled=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-SIG-Storage
[root@servera ~]# yum -y install glusterfs-server
Tip: Repeat on serverb and serverc to install the GlusterFS server on all nodes.
[root@servera ~]# systemctl start glusterd
[root@servera ~]# systemctl enable glusterd
Tip: Repeat on serverb and serverc so that the GlusterFS server is running on all nodes.
After installing GlusterFS, it is recommended to exit and log in again so that shell completion for the gluster commands becomes available.
[root@servera ~]# gluster peer probe serverb
peer probe: success.
[root@servera ~]# gluster peer probe serverc
peer probe: success.
[root@servera ~]# gluster peer status		#check trusted pool status
[root@servera ~]# gluster pool list		#list trusted pool members
Tip: The trusted pool only needs to be formed once: on any one of servera, serverb, or serverc, probe the other two nodes.
Tip: If the firewall has not been disabled, the corresponding rules must be opened before forming the trusted pool:
[root@servera ~]# firewall-cmd --permanent --add-service=glusterfs
[root@servera ~]# firewall-cmd --permanent --add-service=nfs
[root@servera ~]# firewall-cmd --permanent --add-service=rpc-bind
[root@servera ~]# firewall-cmd --permanent --add-service=mountd
[root@servera ~]# firewall-cmd --permanent --add-port=5666/tcp
[root@servera ~]# firewall-cmd --reload
[root@heketi ~]# yum -y install centos-release-gluster
[root@heketi ~]# yum -y install heketi heketi-client
[root@heketi ~]# vi /etc/heketi/heketi.json
{
  "_port_comment": "Heketi Server Port Number",
  "port": "8080",				#default port

  "_use_auth": "Enable JWT authorization. Please enable for deployment",
  "use_auth": true,				#enable authentication for security

  "_jwt": "Private keys for access",
  "jwt": {
    "_admin": "Admin has access to all APIs",
    "admin": {
      "key": "admin123"				#admin password
    },
    "_user": "User only has access to /volumes endpoint",
    "user": {
      "key": "xianghy"				#ordinary user password
    }
  },

  "_glusterfs_comment": "GlusterFS Configuration",
  "glusterfs": {
    "_executor_comment": [
      "Execute plugin. Possible choices: mock, ssh",
      "mock: This setting is used for testing and development.",	#for testing only
      "    It will not send commands to any node.",
      "ssh: This setting will notify Heketi to ssh to the nodes.",	#ssh mode
      "    It will need the values in sshexec to be configured.",
      "kubernetes: Communicate with GlusterFS containers over",		#used when GlusterFS is deployed by Kubernetes
      "    Kubernetes exec api."
    ],
    "executor": "ssh",

    "_sshexec_comment": "SSH username and private key file information",
    "sshexec": {
      "keyfile": "/etc/heketi/heketi_key",
      "user": "root",
      "port": "22",
      "fstab": "/etc/fstab"
    },
    ……
    ……
    "loglevel" : "warning"
  }
}
[root@heketi ~]# ssh-keygen -t rsa -q -f /etc/heketi/heketi_key -N ""
[root@heketi ~]# chown heketi:heketi /etc/heketi/heketi_key
[root@heketi ~]# ssh-copy-id -i /etc/heketi/heketi_key.pub root@servera
[root@heketi ~]# ssh-copy-id -i /etc/heketi/heketi_key.pub root@serverb
[root@heketi ~]# ssh-copy-id -i /etc/heketi/heketi_key.pub root@serverc
[root@heketi ~]# systemctl enable heketi.service
[root@heketi ~]# systemctl start heketi.service
[root@heketi ~]# systemctl status heketi.service
[root@heketi ~]# curl http://localhost:8080/hello		#test access
Hello from Heketi
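The /hello endpoint does not require authentication; since JWT authentication is enabled, management calls need the admin credentials. A quick sanity check with heketi-cli (same flags as configured above), which should return an empty cluster list before any topology is loaded:
[root@heketi ~]# heketi-cli --server http://localhost:8080 --user admin --secret admin123 cluster list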
The topology tells Heketi which storage nodes, disks, and clusters it may use. You must determine each node's failure domain yourself: a failure domain is an integer value (zone) assigned to a group of nodes that share the same switch, power supply, or any other component whose failure would take them down together. You must also decide which nodes form a cluster; Heketi uses this information to place replicas across failure domains and thus provide data redundancy. Heketi supports multiple GlusterFS storage clusters.
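For illustration only, a minimal sketch of how two failure domains would be declared in the nodes list (hypothetical layout; this lab assigns all three nodes to zone 1, and the # annotations follow this article's convention for commenting config):
"nodes": [
  {
    "node": {
      "hostnames": { "manage": ["172.24.8.41"], "storage": ["172.24.8.41"] },
      "zone": 1					#failure domain A (e.g. rack/switch A)
    },
    "devices": ["/dev/mapper/vg0-datalv"]
  },
  {
    "node": {
      "hostnames": { "manage": ["172.24.8.42"], "storage": ["172.24.8.42"] },
      "zone": 2					#failure domain B (e.g. rack/switch B)
    },
    "devices": ["/dev/mapper/vg0-datalv"]
  }
]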
With the failure domains decided, create the Heketi topology file as follows:
[root@heketi ~]# vi /etc/heketi/topology.json
{
  "clusters": [
    {
      "nodes": [
        {
          "node": {
            "hostnames": {
              "manage": [
                "172.24.8.41"
              ],
              "storage": [
                "172.24.8.41"
              ]
            },
            "zone": 1
          },
          "devices": [
            "/dev/mapper/vg0-datalv"
          ]
        },
        {
          "node": {
            "hostnames": {
              "manage": [
                "172.24.8.42"
              ],
              "storage": [
                "172.24.8.42"
              ]
            },
            "zone": 1
          },
          "devices": [
            "/dev/mapper/vg0-datalv"
          ]
        },
        {
          "node": {
            "hostnames": {
              "manage": [
                "172.24.8.43"
              ],
              "storage": [
                "172.24.8.43"
              ]
            },
            "zone": 1
          },
          "devices": [
            "/dev/mapper/vg0-datalv"
          ]
        }
      ]
    }
  ]
}

[root@heketi ~]# echo "export HEKETI_CLI_SERVER=http://heketi:8080" >> /etc/profile.d/heketi.sh
[root@heketi ~]# echo "alias heketi-cli='heketi-cli --user admin --secret admin123'" >> .bashrc
[root@heketi ~]# source /etc/profile.d/heketi.sh
[root@heketi ~]# source .bashrc
[root@heketi ~]# echo $HEKETI_CLI_SERVER
http://heketi:8080
[root@heketi ~]# heketi-cli --server $HEKETI_CLI_SERVER --user admin --secret admin123 topology load --json=/etc/heketi/topology.json
[root@heketi ~]# heketi-cli cluster list		#list clusters
[root@heketi ~]# heketi-cli cluster info aa83b0045fafa362bfc7a8bfee0c24ad	#cluster details
Cluster id: aa83b0045fafa362bfc7a8bfee0c24ad
Nodes:
189ee41572ebf0bf1e297de2302cfb39
46429de5666fc4c6cc570da4b100465d
be0209387384299db34aaf8377c3964c
Volumes:

Block: true

File: true
[root@heketi ~]# heketi-cli topology info aa83b0045fafa362bfc7a8bfee0c24ad	#view topology information
[root@heketi ~]# heketi-cli node list		#list nodes
Id:189ee41572ebf0bf1e297de2302cfb39	Cluster:aa83b0045fafa362bfc7a8bfee0c24ad
Id:46429de5666fc4c6cc570da4b100465d	Cluster:aa83b0045fafa362bfc7a8bfee0c24ad
Id:be0209387384299db34aaf8377c3964c	Cluster:aa83b0045fafa362bfc7a8bfee0c24ad
[root@heketi ~]# heketi-cli node info 189ee41572ebf0bf1e297de2302cfb39	#node details
[root@heketi ~]# heketi-cli volume create --size=2 --replica=2	#without --replica, the default is a replica-3 volume
[root@heketi ~]# heketi-cli volume list		#list volumes
[root@heketi ~]# heketi-cli volume info 7da55685ebeeaaca60708cd797a5e391
[root@servera ~]# gluster volume info		#check from a GlusterFS node
[root@heketi ~]# yum -y install centos-release-gluster
[root@heketi ~]# yum -y install glusterfs-fuse		#install glusterfs-fuse
[root@heketi ~]# mount -t glusterfs 172.24.8.41:vol_7da55685ebeeaaca60708cd797a5e391 /mnt
[root@heketi ~]# umount /mnt
[root@heketi ~]# heketi-cli volume delete 7da55685ebeeaaca60708cd797a5e391	#delete after verification
Kubernetes shared-storage provisioning modes:
Static: the cluster administrator creates PVs manually, specifying the characteristics of the backing storage in each PV definition.
Dynamic: the administrator does not create PVs by hand; instead, a StorageClass describes the backing storage and labels it as a certain "class". A PVC then requests that class, and the system automatically creates a PV and binds it to the PVC. A PVC may declare its class as "" to explicitly opt out of dynamic provisioning, as the sketch after this list shows.
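A minimal sketch of the PVC-side difference between the two modes (illustrative names; storageClassName is the field-based equivalent of the volume.beta.kubernetes.io/storage-class annotation used later in this article):
# Static: opt out of dynamic provisioning and bind to a manually created PV
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: static-pvc			#hypothetical name
spec:
  storageClassName: ""			#empty class string disables dynamic provisioning
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
# Dynamic: name a StorageClass and let its provisioner create the PV
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: dynamic-pvc			#hypothetical name
spec:
  storageClassName: gluster-heketi-storageclass
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi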
The overall flow of StorageClass-based dynamic provisioning: the PVC requests a class, the provisioner (here kubernetes.io/glusterfs) calls the Heketi REST API to create a GlusterFS volume, and Kubernetes automatically creates the corresponding PV and binds it to the PVC.
Tip: For deploying Kubernetes, see 《附003.Kubeadm部署Kubernetes》.
The key fields of the StorageClass definition are annotated inline in the example below.
Tip: For the different types of GlusterFS volumes, see 《004.RHGS-创建volume》.
[root@k8smaster01 ~]# kubectl create ns heketi		#create the namespace
[root@k8smaster01 ~]# echo -n "admin123" | base64	#base64-encode the admin password
YWRtaW4xMjM=
[root@k8smaster01 ~]# mkdir -p heketi
[root@k8smaster01 ~]# cd heketi/
[root@k8smaster01 heketi]# vi heketi-secret.yaml	#create the Secret that stores the password
apiVersion: v1
kind: Secret
metadata:
  name: heketi-secret
  namespace: heketi
data:
  # base64 encoded password. E.g.: echo -n "mypassword" | base64
  key: YWRtaW4xMjM=
type: kubernetes.io/glusterfs
[root@k8smaster01 heketi]# kubectl create -f heketi-secret.yaml	#create the Secret
[root@k8smaster01 heketi]# kubectl get secrets -n heketi
NAME                  TYPE                                  DATA   AGE
default-token-5sn5d   kubernetes.io/service-account-token   3      43s
heketi-secret         kubernetes.io/glusterfs               1      5s
[root@k8smaster01 heketi]# vim gluster-heketi-storageclass.yaml	#create the StorageClass
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gluster-heketi-storageclass
parameters:
  resturl: "http://172.24.8.44:8080"
  clusterid: "aa83b0045fafa362bfc7a8bfee0c24ad"
  restauthenabled: "true"			#must be enabled here since Heketi has authentication enabled
  restuser: "admin"
  secretName: "heketi-secret"			#name/namespace must match the Secret resource defined above
  secretNamespace: "heketi"
  volumetype: "replicate:3"
provisioner: kubernetes.io/glusterfs
reclaimPolicy: Delete
[root@k8smaster01 heketi]# kubectl create -f gluster-heketi-storageclass.yaml
Note: A StorageClass resource cannot be modified after creation; to change it, it must be deleted and recreated.
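For example, a minimal sketch of the change workflow, reusing the manifest created above:
[root@k8smaster01 heketi]# kubectl delete storageclass gluster-heketi-storageclass
[root@k8smaster01 heketi]# kubectl create -f gluster-heketi-storageclass.yaml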
[root@k8smaster01 heketi]# kubectl get storageclasses	#verify
NAME                          PROVISIONER               AGE
gluster-heketi-storageclass   kubernetes.io/glusterfs   85s
[root@k8smaster01 heketi]# kubectl describe storageclasses gluster-heketi-storageclass
[root@k8smaster01 heketi]# cat gluster-heketi-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gluster-heketi-pvc
  annotations:
    volume.beta.kubernetes.io/storage-class: gluster-heketi-storageclass
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
Note: accessModes have the following abbreviations: ReadWriteOnce (RWO), ReadOnlyMany (ROX), and ReadWriteMany (RWX).
[root@k8smaster01 heketi]# kubectl create -f gluster-heketi-pvc.yaml
[root@k8smaster01 heketi]# kubectl get pvc
[root@k8smaster01 heketi]# kubectl describe pvc gluster-heketi-pvc
[root@k8smaster01 heketi]# kubectl get pv
[root@k8smaster01 heketi]# kubectl describe pv pvc-5f7420ef-082d-11ea-badf-000c29fa7a79
[root@k8smaster01 heketi]# kubectl describe endpoints glusterfs-dynamic-5f7420ef-082d-11ea-badf-000c29fa7a79
Tip: As shown above, the PVC status is Bound with a capacity of 1Gi. The PV details show, besides the capacity, referenced StorageClass, status, and reclaim policy, the GlusterFS Endpoints and path. The Endpoints name follows the fixed format glusterfs-dynamic-PV_NAME, and the Endpoints resource lists the concrete addresses used when mounting the storage.
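For reference, a sketch of what the auto-created Endpoints object roughly looks like (a reconstruction, not captured output; the addresses are this lab's storage IPs, and port 1 is assumed to be the placeholder port filled in by the GlusterFS provisioner):
apiVersion: v1
kind: Endpoints
metadata:
  name: glusterfs-dynamic-5f7420ef-082d-11ea-badf-000c29fa7a79
subsets:
- addresses:
  - ip: 172.24.8.41
  - ip: 172.24.8.42
  - ip: 172.24.8.43
  ports:
  - port: 1				#placeholder port; GlusterFS mounts do not use it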
Verify the resources created in 5.3 from the storage side:
[root@heketi ~]# heketi-cli topology info	#check on the Heketi host
[root@serverb ~]# lsblk				#check on a GlusterFS node
[root@serverb ~]# df -hT			#check on a GlusterFS node
[root@servera ~]# gluster volume list		#check on a GlusterFS node
[root@servera ~]# gluster volume info vol_e4c948687239df9833748d081ddb6fd5
[root@xxx ~]# yum -y install centos-release-gluster
[root@xxx ~]# yum -y install glusterfs-fuse		#install glusterfs-fuse
Tip: Every Kubernetes node that will use a GlusterFS volume must have glusterfs-fuse installed so the volume can be mounted, and the client version needs to match the version on the GlusterFS nodes.
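A quick way to compare the versions (standard commands; k8snode01 is a hypothetical node name):
[root@k8snode01 ~]# glusterfs --version		#FUSE client version on the Kubernetes node
[root@servera ~]# glusterd --version		#server version on the GlusterFS node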
[root@k8smaster01 heketi]# vi gluster-heketi-pod.yaml
kind: Pod
apiVersion: v1
metadata:
  name: gluster-heketi-pod
spec:
  containers:
  - name: gluster-heketi-container
    image: busybox
    command:
    - sleep
    - "3600"
    volumeMounts:
    - name: gluster-heketi-volume		#must match the name under volumes
      mountPath: "/pv-data"
      readOnly: false
  volumes:
  - name: gluster-heketi-volume
    persistentVolumeClaim:
      claimName: gluster-heketi-pvc		#must match the name of the PVC created in 5.3
[root@k8smaster01 heketi]# kubectl create -f gluster-heketi-pod.yaml -n heketi	#create the Pod
[root@k8smaster01 heketi]# kubectl get pod -n heketi | grep gluster
gluster-heketi-pod   1/1   Running   0   2m43s
[root@k8smaster01 heketi]# kubectl exec -it gluster-heketi-pod /bin/sh	#enter the Pod and write test files
/ # cd /pv-data/
/pv-data # echo "This is a file!" >> a.txt
/pv-data # echo "This is b file!" >> b.txt
/pv-data # ls
a.txt b.txt
[root@servera ~]# df -hT	#view the test files from a GlusterFS node
[root@servera ~]# cd /var/lib/heketi/mounts/vg_47c90d90e03de79696f90bd94cfccdde/brick_721243c3e0cf8a2372f05d5085a4338c/brick/
[root@servera brick]# ls
[root@servera brick]# cat a.txt
[root@servera brick]# cat b.txt
[root@k8smaster01 heketi]# kubectl delete -f gluster-heketi-pod.yaml
[root@k8smaster01 heketi]# kubectl delete -f gluster-heketi-pvc.yaml
[root@k8smaster01 heketi]# kubectl get pvc
[root@k8smaster01 heketi]# kubectl get pv
[root@servera ~]# gluster volume list
No volumes present in cluster