Outsourcing Mastery: Notes on a Complete Example Using Ceph RBD (Ceph or Rook)


Complete Example Using Ceph RBD

Original reference: Complete Example Using Ceph RBD - Persistent Storage Examples | Installation and Configuration | OpenShift Enterprise 3.1

The steps below have been tested and verified on both an external Ceph cluster and a Rook-managed cluster.

Kubernetes cluster

[root@kmaster ceph]# kubectl get node -o wide
NAME      STATUS   ROLES                  AGE     VERSION   INTERNAL-IP     EXTERNAL-IP   OS-IMAGE        KERNEL-VERSION               CONTAINER-RUNTIME
kmaster   Ready    control-plane,master   4h58m   v1.21.0   192.168.31.10   <none>        Rocky Linux 8   4.18.0-240.22.1.el8.x86_64   docker://20.10.6
knode01   Ready    <none>                 4h57m   v1.21.0   192.168.31.11   <none>        Rocky Linux 8   4.18.0-240.22.1.el8.x86_64   docker://20.10.6
knode02   Ready    <none>                 4h57m   v1.21.0   192.168.31.12   <none>        Rocky Linux 8   4.18.0-240.22.1.el8.x86_64   docker://20.10.6
[root@kmaster ceph]# 
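
The in-tree rbd volume plugin used later in this walkthrough relies on the rbd kernel module and the rbd CLI (from ceph-common) being present on every node that may run the pod. A quick pre-flight check, as a sketch; the package name and availability assume Rocky Linux 8 with a Ceph repository configured:

# run on each node (kmaster, knode01, knode02)
lsmod | grep rbd || sudo modprobe rbd                   # load the rbd kernel module if missing
rpm -q ceph-common || sudo dnf install -y ceph-common   # provides the rbd CLI that kubelet invokes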

Ceph cluster (deployed with Rook)

[root@kmaster ceph]# kubectl get pod -nrook-ceph
NAME                                                READY   STATUS      RESTARTS   AGE
csi-cephfsplugin-4wr2d                              3/3     Running     0          4h58m
csi-cephfsplugin-nztms                              3/3     Running     0          4h57m
csi-cephfsplugin-provisioner-6f75644874-jxb6n       6/6     Running     0          4h54m
csi-cephfsplugin-provisioner-6f75644874-shszs       6/6     Running     4          4h58m
csi-cephfsplugin-rbvfd                              3/3     Running     0          4h58m
csi-rbdplugin-6h8nb                                 3/3     Running     0          4h58m
csi-rbdplugin-htpnp                                 3/3     Running     0          4h57m
csi-rbdplugin-jzf6n                                 3/3     Running     0          4h58m
csi-rbdplugin-provisioner-67fb987799-fxj68          6/6     Running     4          4h58m
csi-rbdplugin-provisioner-67fb987799-zzm2x          6/6     Running     0          4h54m
rook-ceph-crashcollector-kmaster-7596c6f695-z784j   1/1     Running     0          4h56m
rook-ceph-crashcollector-knode01-5c75d4cbc8-z45lb   1/1     Running     0          4h57m
rook-ceph-crashcollector-knode02-67d58f7c55-m8rvj   1/1     Running     0          4h57m
rook-ceph-mgr-a-cfdb8d4b8-kxxr9                     1/1     Running     0          4h56m
rook-ceph-mon-a-77dbfbb9b6-wlzbw                    1/1     Running     0          4h57m
rook-ceph-mon-b-65d59f4667-hvdbt                    1/1     Running     0          4h57m
rook-ceph-mon-c-9c8b69b9c-4x86g                     1/1     Running     0          4h57m
rook-ceph-operator-6459f5dc4b-pq8gc                 1/1     Running     0          4h58m
rook-ceph-osd-0-6dd858b9b5-xqlc8                    1/1     Running     0          4h56m
rook-ceph-osd-1-8596cc946c-cldp5                    1/1     Running     0          4h56m
rook-ceph-osd-2-76c758bd77-gpwx9                    1/1     Running     0          4h56m
rook-ceph-osd-prepare-kmaster-4rh6n                 0/1     Completed   0          32m
rook-ceph-osd-prepare-knode01-lxm4c                 0/1     Completed   0          32m
rook-ceph-osd-prepare-knode02-cwfhm                 0/1     Completed   0          32m
rook-ceph-tools-7467d8bf8-zqqlj                     1/1     Running     0          104m
[root@kmaster ceph]# 
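
The ceph and rbd commands in the following steps run inside the toolbox pod listed above (rook-ceph-tools-...). One way to open a shell there, assuming the standard Rook toolbox Deployment name:

kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- bash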

Create a pool

[root@rook-ceph-tools-7467d8bf8-x7scq /]# ceph osd pool create k8s 128 128
pool 'k8s' created
[root@rook-ceph-tools-7467d8bf8-x7scq /]# 
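
Recent Ceph releases warn with POOL_APP_NOT_ENABLED until a pool is tagged with the application that will use it, so it is good practice to tag the new pool right away. A short follow-up in the same toolbox shell:

ceph osd pool application enable k8s rbd   # tag the pool for RBD use
ceph osd pool ls detail                    # confirm the pool exists with pg_num 128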

Create an RBD image

[root@master ~]# rbd create ceph-image --pool k8s --size=1G
[root@master ~]# rbd ls --pool k8s
ceph-image
[root@master ~]# 
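
If the pod later gets stuck in ContainerCreating with an event like "rbd: map failed ... feature set mismatch", the node's kernel RBD client does not support some of the image features enabled by default. Recreating the image with only the layering feature is a common workaround; a sketch:

rbd create ceph-image --pool k8s --size 1G --image-feature layering
rbd info k8s/ceph-image   # verify size and enabled features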

Retrieve the admin secret key

[root@rook-ceph-tools-7467d8bf8-x7scq /]# ceph auth get-key client.admin | base64
QVFCTjJDQmgwU2FKTXhBQXBPUklUYU5QZTJjaklVaG9TbXBTYnc9PQ==
[root@rook-ceph-tools-7467d8bf8-x7scq /]# 
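
The base64 string above is what goes into the data.key field of the Secret manifest below; values under data: must already be base64-encoded, which is why the key is piped through base64 here. Alternatively, the Secret can be created directly from the raw key, since kubectl performs the encoding itself. A sketch, run wherever both kubectl and the Ceph admin keyring are available:

kubectl create secret generic ceph-secret \
  --from-literal=key="$(ceph auth get-key client.admin)"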

Create the Secret, PV, and PVC

ceph-pv.yaml

apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret
data:
  key: QVFCTjJDQmgwU2FKTXhBQXBPUklUYU5QZTJjaklVaG9TbXBTYnc9PQ==   # base64-encoded client.admin key from the previous step
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: ceph-pv                  # name of the PV, referenced from the pod later
spec:
  capacity:
    storage: 2Gi                 # declared size; the usable size is that of the RBD image
  accessModes:
    - ReadWriteOnce              # an RBD image can be mounted read-write by a single node only
  rbd:                           # the in-tree rbd volume plugin
    monitors:                    # Ceph monitor addresses (here, rook-ceph-mon Service ClusterIPs)
      - 10.111.94.249:6789
      - 10.106.67.38:6789
      - 10.99.183.92:6789
    pool: k8s
    image: ceph-image
    user: admin
    secretRef:
      name: ceph-secret          # the Secret defined above
    fsType: ext4                 # filesystem created on the image at first mount
    readOnly: false
  persistentVolumeReclaimPolicy: Recycle
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: ceph-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi               # must be satisfiable by the PV above
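
The monitor addresses in the PV above are the ClusterIPs of the rook-ceph-mon Services; on an external cluster, use the real monitor addresses from ceph mon dump instead. A sketch of looking them up and applying the manifest:

kubectl -n rook-ceph get svc -l app=rook-ceph-mon   # one ClusterIP:6789 per monitor
kubectl apply -f ceph-pv.yaml                       # creates the Secret, PV, and PVC

Note that the Recycle reclaim policy is only implemented for NFS and HostPath volumes; for an RBD-backed PV, Retain (or Delete) is the safer choice. Recycle is kept here to match the recorded test output below.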

Create the pod (pod.yaml)

apiVersion: v1
kind: Pod
metadata:
  name: ceph-pod1                     # name of the test pod
spec:
  containers:
  - name: ceph-busybox
    image: busybox:latest
    command: ["sleep", "60000"]       # keep the container alive for inspection
    volumeMounts:
    - name: ceph-pv                   # must match the volume name below
      mountPath: /usr/share/busybox   # where the RBD volume appears inside the container
      readOnly: false
  volumes:
  - name: ceph-pv
    persistentVolumeClaim:
      claimName: ceph-claim           # binds to the PVC created above
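
Apply the pod manifest and confirm the mount. Because the capacity field of a static PV is not enforced, the usable size is that of the RBD image (1G), not the 2Gi declared on the PV. A sketch:

kubectl apply -f pod.yaml
kubectl exec ceph-pod1 -- df -h /usr/share/busybox   # should show a /dev/rbd* device with an ext4 filesystem of roughly 1G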

Test on the external Ceph cluster

[root@master ~]# kubectl get pod,pvc,pv
NAME            READY   STATUS    RESTARTS   AGE
pod/ceph-pod1   1/1     Running   0          7m46s

NAME                               STATUS   VOLUME    CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/ceph-claim   Bound    ceph-pv   2Gi        RWO                           35m

NAME                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                STORAGECLASS   REASON   AGE
persistentvolume/ceph-pv   2Gi        RWO            Recycle          Bound    default/ceph-claim                           35m
[root@master ~]# 

Test on the Rook cluster

Note: kg in the transcript below is a shell alias for kubectl get.

[root@kmaster ceph]# kg pod,pvc,pv -o wide
NAME            READY   STATUS    RESTARTS   AGE   IP            NODE      NOMINATED NODE   READINESS GATES
pod/ceph-pod1   1/1     Running   0          58s   10.244.2.26   knode02   <none>           <none>

NAME                               STATUS   VOLUME    CAPACITY   ACCESS MODES   STORAGECLASS   AGE   VOLUMEMODE
persistentvolumeclaim/ceph-claim   Bound    ceph-pv   2Gi        RWO                           14m   Filesystem

NAME                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                STORAGECLASS   REASON   AGE   VOLUMEMODE
persistentvolume/ceph-pv   2Gi        RWO            Recycle          Bound    default/ceph-claim                           14m   Filesystem
[root@kmaster ceph]#
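
To tear the test down afterwards, delete the Kubernetes objects and then remove the image; a sketch:

kubectl delete pod ceph-pod1
kubectl delete pvc ceph-claim
kubectl delete pv ceph-pv        # Recycle cannot scrub an RBD volume, so the released PV must be deleted manually
rbd rm k8s/ceph-image            # run in the toolbox or on a Ceph admin node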

