namespace: rook-ceph
spec:
  cephVersion:
    image: ceph/ceph:v14.2.4-20190917...
rook-ceph-operator-cb47c46bc-pszfh  # the deployment log can be viewed here
[root@k8smaster01 ceph]# kubectl get pods -n rook-ceph...
[root@k8smaster01 ceph]# kubectl create -f toolbox.yaml
[root@k8smaster01 ceph]# kubectl -n rook-ceph...exec -it $(kubectl -n rook-ceph get pod -l "app=rook-ceph-tools" -o jsonpath='{.items[0].metadata.name...
[root@k8smaster01 ~]# kubectl -n rook-ceph exec -it $(kubectl -n rook-ceph get pod -l "app=rook-ceph-tools
kubectl create secret tls testsoft-secret --cert=www.testsoft.com.crt --key=www.testsoft.com.key -n rook-ceph...
# get the access password
# dashboard URL: https://ceph.testsoft.com
kubectl get secrets -n rook-ceph rook-ceph-dashboard-password...
get pod -l app=rook-ceph-rgw
kubectl -n rook-ceph get svc -l app=rook-ceph-rgw
# create the StorageClasses
...rook-ceph
- devices:
  - name: sdb
  name: k8s-node01
- devices:
  - name: sdb
  name: k8s-node02
# restart the operator
kubectl -n rook-ceph rollout restart deployment rook-ceph-operator
created
customresourcedefinition.apiextensions.k8s.io/objectbuckets.objectbucket.io created
namespace/rook-ceph...
centos9 ~/rook/deploy/examples master kubectl create -f cluster.yaml
cephcluster.ceph.rook.io/rook-ceph...
local-path-storage  local-path-provisioner-9cd9bd544-v27p7                 1/1  Running   0  4m55s
rook-ceph...
rook-ceph-csi-detect-version-6djth                     0/1  Init:0/1  0  71s
rook-ceph...
rook-ceph-crashcollector-kind-worker-86774b8649-lgsg9  1/1  Running   0  98s
rook-ceph
cd rook/deploy/examples
kubectl create -f crds.yaml -f common.yaml -f operator.yaml
# check
kubectl -n rook-ceph...rook-ceph rook-release/rook-ceph -f values.yaml
3) Create the Rook Ceph cluster. Now that the Rook Operator is in the Running state, we can create...
https://:nodePort/
6) Check: kubectl get pods,svc -n rook-ceph
7) Inspect the Ceph cluster state through the ceph-tools pod:
kubectl exec -it `kubectl get pods -n rook-ceph|grep rook-ceph-tools|awk '{print $1}'` -n rook-ceph...filesystem.yaml
3) Object storage (RGW) test
1. Create the object store: kubectl create -f object.yaml
# verify the rgw pod is running
kubectl -n rook-ceph
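The grep|awk pipeline in step 7 can be exercised without a cluster; a minimal sketch, with the `kubectl get pods` output simulated by a shell variable (the pod names are illustrative):

```shell
# Simulated `kubectl get pods -n rook-ceph` output (names are made up for illustration).
pods_output='rook-ceph-mgr-a-5f6dcb5c8-abcde    1/1  Running  0  5m
rook-ceph-tools-7467d8bf8-x7scq    1/1  Running  0  3m
rook-ceph-osd-0-6c8f7d9b4-fghij    1/1  Running  0  4m'

# Same extraction the article uses: filter for the tools pod, print the first column.
tools_pod=$(printf '%s\n' "$pods_output" | grep rook-ceph-tools | awk '{print $1}')
echo "$tools_pod"

# Against a real cluster this name would feed:
#   kubectl -n rook-ceph exec -it "$tools_pod" -- ceph -s
```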
As a CNCF graduated project, Rook-Ceph's fit for cloud-native scenarios is beyond doubt. ...For how to deploy Rook-Ceph and connect it to Rainbond, see the "Rook-Ceph integration guide" document. 3. ...Edit the Ceph cluster configuration to disable the dashboard's built-in SSL: $ kubectl -n rook-ceph edit cephcluster rook-ceph  # set ssl...="{['data']['password']}" | base64 --decode && echo
3.4 Using object storage: see the "Rook-Ceph deployment and integration" document; object storage can be deployed inside Rook-Ceph... Given the usage experience described here and the final performance comparison, Rook-Ceph is set to become a primary direction in our exploration of cloud-native storage.
---
apiVersion: storage.k8s.io/v1
kind: StorageClass  # storage driver
metadata:
  name: rook-ceph-block
# Change "rook-ceph...
provisioner-secret-name: rook-csi-rbd-provisioner
csi.storage.k8s.io/provisioner-secret-namespace: rook-ceph
...so that multiple consumers can operate on the same share. Reference: Ceph Docs
apiVersion: ceph.rook.io/v1
kind: CephFilesystem
metadata:
  name: myfs
  namespace: rook-ceph
...
enabled: false
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-cephfs
# Change "rook-ceph...
clusterID: rook-ceph
# CephFS filesystem name into which the volume shall be created
fsName: myfs
# Note: adjust the operator image
# verify the rook-ceph-operator is in the `Running` state before proceeding
kubectl -n rook-ceph...
Apply the nodePort manifest.
# get the access password
kubectl -n rook-ceph get secret rook-ceph-dashboard-password -o jsonpath="{['data'...
kubectl get svc -n rook-ceph|grep dashboard
curl the dashboard to find out which mgr is unreachable, and create a reachable Service for it yourself. Then deploy the following Ingress:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ceph-rook-dash
  namespace: rook-ceph
...
get cephcluster
kubectl -n rook-ceph patch cephclusters.ceph.rook.io rook-ceph -p '{"metadata":{"finalizers
kubectl apply -f operator.yaml
Before proceeding, verify that rook-ceph-operator is in the "Running" state:
$ kubectl get pod -n rook-ceph...
Create the following resource manifest file (cluster.yaml):
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  cephVersion:
    # the latest ceph image; see https://hub.docker.com/r/ceph/ceph/tags
...
exec -it $(kubectl -n rook-ceph get pod -l "app=rook-ceph-tools" -o jsonpath='{.items[0].metadata.name...
labels:
  app: rook-ceph-mgr
  rook_cluster: rook-ceph
spec:
  ports:
  - name: dashboard
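The label/port fragment above belongs to a Service that exposes the mgr dashboard. A minimal NodePort sketch, assuming SSL has been disabled so the mgr serves HTTP on port 7000; the Service name and nodePort value are illustrative, not taken from the original:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: rook-ceph-mgr-dashboard-np   # illustrative name
  namespace: rook-ceph
  labels:
    app: rook-ceph-mgr
    rook_cluster: rook-ceph
spec:
  type: NodePort
  selector:
    app: rook-ceph-mgr
    rook_cluster: rook-ceph
  ports:
  - name: dashboard
    port: 7000        # mgr dashboard HTTP port when ssl is disabled
    targetPort: 7000
    nodePort: 30700   # illustrative; any free port in 30000-32767
```

After applying this, the dashboard is reachable at http://NodeIP:30700/.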
-7rvl4
namespace: rook-ceph, pod: csi-cephfsplugin-8slqr
namespace: rook-ceph, pod: csi-cephfsplugin-9mkkx...
This namespace contains...
csi-cephfsplugin-2pv6l  192.168.11.18  rook-ceph
csi-cephfsplugin-7c9rp  192.168.11.11  rook-ceph
csi-cephfsplugin-7rvl4  192.168.11.19  rook-ceph
csi-cephfsplugin-8slqr  192.168.11.15  rook-ceph
csi-cephfsplugin-9mkkx  192.168.11.20  rook-ceph
csi-cephfsplugin-css2r  192.168.11.12  rook-ceph
csi-cephfsplugin-dblnm  192.168.11.16  rook-ceph
csi-cephfsplugin-nsbsp  192.168.11.14  rook-ceph
$ kubectl -n rook-ceph get pods
# mgr x1, mon x3
# rook-ceph-crashcollector (one per node)
# rook-ceph-osd...pods, numbered from 0)
Ceph has plenty of rough edges, and the toolbox is often needed to inspect things; deploy it as follows:
$ kubectl create -f toolbox.yaml
$ kubectl -n rook-ceph...
labels:
  app: rook-ceph-mgr
  rook_cluster: rook-ceph
spec:
  ports:
  - name: dashboard
...
The account is admin; the password can be retrieved with:
$ kubectl -n rook-ceph get secret rook-ceph-dashboard-password -o jsonpath="{['data...
delete cephcluster rook-ceph
$ kubectl -n rook-ceph get cephcluster  # confirm rook-ceph has been deleted
$ kubectl delete
3.5 Create the Ceph cluster
kubectl create -f cluster.yaml
After creation, check the pod status:
[root@k8s-master01 ceph]# kubectl -n rook-ceph...
3.6 Install the Ceph client tools (the file still lives in the ceph directory):
kubectl create -f toolbox.yaml -n rook-ceph
Once the container is Running, the relevant commands can be run...
delete cephcluster rook-ceph
After confirming the deletion above, query again:
kubectl -n rook-ceph get cephcluster
4.2 Delete the Operator and related resources
...If the namespace shows Terminating and cannot be deleted:
NAMESPACE=rook-ceph
kubectl proxy &
kubectl get namespace $NAMESPACE -o json |...
get cephcluster
NAME        DATADIRHOSTPATH  MONCOUNT  AGE  PHASE  MESSAGE  HEALTH
rook-ceph
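The stuck-in-Terminating fix works by emptying `spec.finalizers` in the namespace JSON and PUTting the result back through `kubectl proxy`. The JSON edit itself can be sketched locally; here sed stands in for a JSON editor (against real cluster output, jq would be safer), and the curl line is only illustrative:

```shell
# Sample of the relevant part of `kubectl get namespace rook-ceph -o json`.
ns_json='{"spec":{"finalizers":["kubernetes"]}}'

# Empty the finalizers array, as the namespace-deletion workaround requires.
patched=$(printf '%s' "$ns_json" | sed 's/\["kubernetes"\]/[]/')
echo "$patched"

# Real usage (illustrative, needs `kubectl proxy` running on 8001):
# curl -k -H "Content-Type: application/json" -X PUT \
#   --data-binary "$patched" \
#   http://127.0.0.1:8001/api/v1/namespaces/rook-ceph/finalize
```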
Kubernetes resources are stored in etcd under the default prefix /registry/, for example:
/registry/secrets/rook-ceph/rook-ceph-rgw-token-vd98q
/registry/secrets/rook-ceph/rook-ceph-system-token-rnpms
/registry/secrets/rook-ceph/rook-csi-cephfs-plugin-sa-token-n5qdq
/registry/secrets/rook-ceph/rook-csi-cephfs-provisioner-sa-token-n7947
/registry/secrets/rook-ceph/rook-csi-rbd-plugin-sa-token-qcknl
/registry/secrets/rook-ceph/rook-csi-rbd-provisioner-sa-token-wrn6w
/registry/serviceaccounts/default...
/rook-ceph-rgw-token-vd98q: the value of this key, and the key itself, correspond to a Secret named rook-ceph-rgw-token-vd98q in the rook-ceph namespace in Kubernetes.
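The key layout above is mechanical: `/registry/<resource-plural>/<namespace>/<name>`. A minimal sketch that builds the key for one of the secrets listed; the etcdctl line is illustrative only, since it needs a live etcd and its client certificates:

```shell
resource=secrets
namespace=rook-ceph
name=rook-ceph-rgw-token-vd98q

# Kubernetes stores this object in etcd under this key (default /registry/ prefix).
key="/registry/${resource}/${namespace}/${name}"
echo "$key"

# Illustrative read against a live cluster:
# ETCDCTL_API=3 etcdctl get "$key"
```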
created
$ kubectl get cephcluster -n rook-ceph
NAME        DATADIRHOSTPATH  MONCOUNT  AGE  STATE
rook-ceph   /var/lib/rook    3         29m  Created
$ kubectl get pod -n rook-ceph
NAME...
namespace: rook-ceph
...
$ kubectl get svc -n rook-ceph |grep mgr-dashboard
rook-ceph-mgr-dashboard   ClusterIP...
A username and password are required, and there are two ways to obtain them. Option one: rook-ceph creates a rook-ceph-dashboard-password Secret by default, which can be used to get the password.
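The Secret stores the password base64-encoded, so the jsonpath output must be piped through `base64 --decode`. A local sketch with a made-up password standing in for the Secret's data field:

```shell
# Stand-in for:
#   kubectl -n rook-ceph get secret rook-ceph-dashboard-password \
#     -o jsonpath="{['data']['password']}"
encoded=$(printf '%s' 'examplePassw0rd' | base64)   # made-up password for illustration

# Decode exactly as the article does.
password=$(printf '%s' "$encoded" | base64 --decode)
echo "$password"
```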
The following command creates a role named developer in the rook-ceph namespace and grants it a set of operation permissions.
...
--namespace=rook-ceph: the namespace the role belongs to.
--resource=pods: the resource type the role grants access to.
...roles in the namespace:
[root@k8s-a-master api-user]# kubectl get roles -n rook-ceph
NAME...
--namespace=rook-ceph means the user credentials are used within the rook-ceph namespace. Namespaces divide Kubernetes resources into separate logical groups.
...The context's default namespace is also rook-ceph; in practice, specifying the namespace here is unnecessary: whether or not one is set, the context can be viewed and deleted either way.
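The imperative `kubectl create role` call above is equivalent to applying a Role manifest. A sketch of that manifest; the verb list is an assumption, since the original only says "some operation permissions" and does not show the exact --verb flags:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: developer
  namespace: rook-ceph
rules:
- apiGroups: [""]                   # core API group, where pods live
  resources: ["pods"]
  verbs: ["get", "list", "watch"]   # assumed verbs; the original's verb list is not shown
```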
v4.20.17.log
# rclone ls ftp:
 4548 hadoop-deploy.sh
24094 v4.20.17.log
12022 storage/rook-ceph/1-operator.yaml
 4280 storage/rook-ceph/2-cluster.yaml
  416 storage/ceph-tools/mon-add.sh
...
:mybackup1-1253766168/
  416 ceph-tools/mon-add.sh
  197 ceph-tools/mon-remove.sh
12022 rook-ceph/1-operator.yaml
 4280 rook-ceph/2-cluster.yaml
...
rclone copy copies all files under the given source directory; the destination does not gain the source directory name itself...
rclone copy can show live transfer statistics with -P/--progress.
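The note about `rclone copy` semantics (the contents of the source directory are copied, not the directory itself, which is why `storage/rook-ceph/...` becomes `rook-ceph/...` at the destination) mirrors the behaviour of `cp -r src/. dst/`. A local sketch using cp as a stand-in, since rclone needs a configured remote:

```shell
# Build a throwaway source tree resembling the listing above.
tmp=$(mktemp -d)
mkdir -p "$tmp/storage/rook-ceph"
echo operator > "$tmp/storage/rook-ceph/1-operator.yaml"

# Copy the *contents* of storage/ -- like `rclone copy storage remote:dest`,
# the destination does not gain a leading "storage/" path component.
mkdir -p "$tmp/dest"
cp -r "$tmp/storage/." "$tmp/dest/"

ls "$tmp/dest/rook-ceph"   # the file lands under dest/rook-ceph/, not dest/storage/rook-ceph/
rm -rf "$tmp"
```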
octopus-only functionality
2021-08-21 10:43:53.411730 I | op-osd: finished running OSDs in namespace "rook-ceph...
2021-08-21 10:43:53.411841 I | ceph-cluster-controller: done reconciling ceph cluster in namespace "rook-ceph...
op-mgr: successful modules: dashboard
Deployment complete; check the pod status in the namespace:
[root@kmaster ceph]# kubectl get pod -n rook-ceph...
-- cluster operation commands, for example:
kubectl exec -it rook-ceph-tools-7467d8bf8-x7scq /bin/bash -n rook-ceph -- ceph -s
kubectl...-7467d8bf8-x7scq /bin/bash -n rook-ceph -- ceph -s
cluster:
  id: 0ad47b5f-e055-4448-b8b6-5ab5ccd57799
sc
NAME              PROVISIONER          AGE
rook-ceph-block   ceph.rook.io/block   9s
$ kubectl -n rook-ceph...get cephfilesystem
NAME          MDSCOUNT  AGE
busy-box-fs   1         30s
$ kubectl -n rook-ceph...
$ kubectl -n rook-ceph get secret/rook-ceph-object-user-busy-box-obj-busy-box-obj-user -o jsonpath='{...
$ kubectl -n rook-ceph exec -it rook-ceph-tools-5bd5cdb949-d22ql bash
[root@node2 /]# yum install s3cmd...
labels:
  app: rook-ceph-rgw
  rook_cluster: rook-ceph
  rook_object_store: busy-box-obj
spec
88m
Then create the Ceph cluster:
kubectl create -f cluster.yaml
Check the Ceph cluster:
[root@dev-86-201 ~]# kubectl get pod -n rook-ceph...
Running 0 63m
Parameter notes:
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  cephVersion:
    # For the latest ceph images, see https://hub.docker.com...
The admin account is admin; get the login password:
kubectl -n rook-ceph get secret rook-ceph-dashboard-password -o yaml | grep "password...
clusterNamespace" MUST be the same as the one in which your rook cluster exist
clusterNamespace: rook-ceph
[root@kmaster ceph]# kubectl get pod -n rook-ceph
NAME   READY   STATUS...
2021-08-21 10:43:53.411841 I | ceph-cluster-controller: done reconciling ceph cluster in namespace "rook-ceph...
[root@kmaster ceph]# kubectl exec -it rook-ceph-tools-7467d8bf8-x7scq /bin/bash -n rook-ceph
kubectl...-- cluster operation commands, for example:
kubectl exec -it rook-ceph-tools-7467d8bf8-x7scq /bin/bash -n rook-ceph -- ceph -s
kubectl...ceph]#
Check the cluster status again:
[root@kmaster ceph]# kubectl exec -it rook-ceph-tools-7467d8bf8-x7scq /bin/bash -n rook-ceph