Installing and Deploying a Jewel Ceph Cluster on CentOS 7.5

Author: 三杯水Plus | Published 2018-11-14 17:19:16

References

https://www.linuxidc.com/Linux/2017-09/146760.htm
https://www.cnblogs.com/luohaixian/p/8087591.html
http://docs.ceph.com/docs/master/start/quick-start-preflight/#rhel-centos

Overview

The core Ceph components are Ceph OSD, Ceph Monitor, Ceph MDS, and Ceph RGW.

Ceph OSD: OSD stands for Object Storage Device. Its main jobs are storing, replicating, rebalancing, and recovering data, exchanging heartbeats with other OSDs, and reporting state changes to the Ceph Monitor. Normally one disk corresponds to one OSD, which manages that disk's storage, although a single partition can also back an OSD.

Ceph Monitor: as the name suggests, it monitors the Ceph cluster and maintains its health state, along with the various cluster maps, such as the OSD Map, Monitor Map, PG Map, and CRUSH Map. Collectively these are called the Cluster Map, the key data structure in RADOS, which records all cluster members, their relationships and attributes, and how data is distributed. For example, when data is to be stored in the Ceph cluster, the latest maps are first fetched from a Monitor, and the final storage location is then computed from the maps and the object ID.

Ceph MDS: short for Ceph Metadata Server, it holds the metadata for the file system service; object storage and block storage do not need it.

Ceph RGW: RGW is short for RADOS Gateway; Ceph uses it to provide object storage to internet cloud service providers. RGW sits on top of librados and exposes a REST API for applications to access the Ceph cluster, supporting both the Amazon S3 and OpenStack Swift interfaces. The most direct way to think of RGW is as a protocol translation layer: it converts S3- or Swift-compliant requests from upper-layer applications into RADOS requests and stores the data in the RADOS cluster.
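As a quick illustration (not part of the original walkthrough), once the cluster is up, the individual maps that make up the Cluster Map can be inspected from any node holding an admin keyring with the standard ceph CLI:

ceph mon dump          # Monitor Map
ceph osd dump          # OSD Map
ceph pg dump           # PG Map (verbose)
ceph osd crush dump    # CRUSH Map, dumped as JSON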

Architecture Diagram

Installation and Deployment

I. Base Environment

0. Service Layout

mon: ceph0, ceph2, ceph3 (note: the MON count must be odd)
osd: ceph0, ceph1, ceph2, ceph3
rgw: ceph1
deploy: ceph0

1. Host Resolution (/etc/hosts)

root@idcv-ceph0 ~# cat /etc/hosts
127.0.0.1    localhost localhost.localdomain localhost4 localhost4.localdomain4
172.20.1.138 idcv-ceph0
172.20.1.139 idcv-ceph1
172.20.1.140 idcv-ceph2
172.20.1.141 idcv-ceph3

2. NTP Time Synchronization

root@idcv-ceph0 ~# ntpdate 172.20.0.63
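ntpdate is a one-shot sync. Because MON clock drift matters later on, one option (an illustrative sketch, not from the original article) is to repeat the sync from cron on every node:

root@idcv-ceph0 ~# (crontab -l 2>/dev/null; echo "*/30 * * * * /usr/sbin/ntpdate 172.20.0.63 >/dev/null 2>&1") | crontab -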

3. Passwordless SSH Login

root@idcv-ceph0 ~# ssh-keygen
root@idcv-ceph0 ~# ssh-copy-id root@idcv-ceph1
root@idcv-ceph0 ~# ssh-copy-id root@idcv-ceph2
root@idcv-ceph0 ~# ssh-copy-id root@idcv-ceph3
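The three ssh-copy-id calls can also be written as a single loop (equivalent, shown only for convenience):

root@idcv-ceph0 ~# for h in idcv-ceph1 idcv-ceph2 idcv-ceph3; do ssh-copy-id root@$h; done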

4. Update the System

root@idcv-ceph0 ~# yum update

5. Disable SELinux

root@idcv-ceph0 ~# sed -i 's/enforcing/disabled/g' /etc/selinux/config
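The sed command only edits the on-disk config, so it takes effect after the reboot in step 7. To also switch SELinux to permissive mode immediately, you can additionally run:

root@idcv-ceph0 ~# setenforce 0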

6. Disable the Firewall (firewalld)

root@idcv-ceph0 ~# systemctl disable firewalld
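systemctl disable only prevents firewalld from starting at boot. To stop the running service right away (before the reboot), you can also run:

root@idcv-ceph0 ~# systemctl stop firewalld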

7. Reboot

root@idcv-ceph0 ~# reboot

II. Set Up the Deploy Node

1. Configure a Domestic (Aliyun) Yum Repository

root@idcv-ceph0 ~# cat /etc/yum.repos.d/ceph.repo
[Ceph]
name=Ceph packages for $basearch
baseurl=http://mirrors.aliyun.com/ceph/rpm-jewel/el7/$basearch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://mirrors.aliyun.com/ceph/keys/release.asc
priority=1

[Ceph-noarch]
name=Ceph noarch packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-jewel/el7/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://mirrors.aliyun.com/ceph/keys/release.asc
priority=1

[ceph-source]
name=Ceph source packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-jewel/el7/SRPMS
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://mirrors.aliyun.com/ceph/keys/release.asc
priority=1
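After writing the repo file, rebuilding the yum metadata cache makes sure the Aliyun mirror is picked up (an optional extra step, not in the original text):

root@idcv-ceph0 ~# yum clean all && yum makecache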

2. Install ceph-deploy

root@idcv-ceph0 ~# yum install ceph-deploy
root@idcv-ceph0 ~# ceph-deploy --version
1.5.39
root@idcv-ceph0 ~# ceph -v
ceph version 10.2.10 (5dc1e4c05cb68dbf62ae6fce3f0700e4654fdbbe)

3. Create a Deployment Directory and Create the Cluster

root@idcv-ceph0 ~# mkdir cluster root@idcv-ceph0 ~# cd cluster root@idcv-ceph0 cluster# ceph-deploy new idcv-ceph0 idcv-ceph1 idcv-ceph2 idcv-ceph3 ceph_deploy.conf found configuration file at: /root/.cephdeploy.conf ceph_deploy.cli Invoked (1.5.39): /usr/bin/ceph-deploy new idcv-ceph0 idcv-ceph1 idcv-ceph2 idcv-ceph3 ceph_deploy.cli ceph-deploy options: ceph_deploy.cli username : None ceph_deploy.cli func : <function new at 0x7f7c607aa5f0> ceph_deploy.cli verbose : False ceph_deploy.cli overwrite_conf : False ceph_deploy.cli quiet : False ceph_deploy.cli cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f7c5ff1bcf8> ceph_deploy.cli cluster : ceph ceph_deploy.cli ssh_copykey : True ceph_deploy.cli mon : 'idcv-ceph0', 'idcv-ceph1', 'idcv-ceph2', 'idcv-ceph3'INFO public_network : None ceph_deploy.cli ceph_conf : None ceph_deploy.cli cluster_network : None ceph_deploy.cli default_release : False ceph_deploy.cli fsid : None ceph_deploy.new Creating new cluster named ceph ceph_deploy.new making sure passwordless SSH succeeds idcv-ceph0 connected to host: idcv-ceph0 idcv-ceph0 detect platform information from remote host idcv-ceph0 detect machine type idcv-ceph0 find the location of an executable idcv-ceph0 Running command: /usr/sbin/ip link show idcv-ceph0 Running command: /usr/sbin/ip addr show idcv-ceph0 IP addresses found: u'172.20.1.138'DEBUG Resolving host idcv-ceph0 ceph_deploy.new Monitor idcv-ceph0 at 172.20.1.138 ceph_deploy.new making sure passwordless SSH succeeds idcv-ceph1 connected to host: idcv-ceph0 idcv-ceph1 Running command: ssh -CT -o BatchMode=yes idcv-ceph1 idcv-ceph1 connection detected need for sudo idcv-ceph1 connected to host: idcv-ceph1 idcv-ceph1 detect platform information from remote host idcv-ceph1 detect machine type idcv-ceph1 find the location of an executable idcv-ceph1 Running command: sudo /usr/sbin/ip link show idcv-ceph1 Running command: sudo /usr/sbin/ip addr show idcv-ceph1 IP addresses found: u'172.20.1.139'DEBUG Resolving host idcv-ceph1 ceph_deploy.new Monitor idcv-ceph1 at 172.20.1.139 ceph_deploy.new making sure passwordless SSH succeeds idcv-ceph2 connected to host: idcv-ceph0 idcv-ceph2 Running command: ssh -CT -o BatchMode=yes idcv-ceph2 idcv-ceph2 connection detected need for sudo idcv-ceph2 connected to host: idcv-ceph2 idcv-ceph2 detect platform information from remote host idcv-ceph2 detect machine type idcv-ceph2 find the location of an executable idcv-ceph2 Running command: sudo /usr/sbin/ip link show idcv-ceph2 Running command: sudo /usr/sbin/ip addr show idcv-ceph2 IP addresses found: u'172.20.1.140'DEBUG Resolving host idcv-ceph2 ceph_deploy.new Monitor idcv-ceph2 at 172.20.1.140 ceph_deploy.new making sure passwordless SSH succeeds idcv-ceph3 connected to host: idcv-ceph0 idcv-ceph3 Running command: ssh -CT -o BatchMode=yes idcv-ceph3 idcv-ceph3 connection detected need for sudo idcv-ceph3 connected to host: idcv-ceph3 idcv-ceph3 detect platform information from remote host idcv-ceph3 detect machine type idcv-ceph3 find the location of an executable idcv-ceph3 Running command: sudo /usr/sbin/ip link show idcv-ceph3 Running command: sudo /usr/sbin/ip addr show idcv-ceph3 IP addresses found: u'172.20.1.141'DEBUG Resolving host idcv-ceph3 ceph_deploy.new Monitor idcv-ceph3 at 172.20.1.141 ceph_deploy.new Monitor initial members are 'idcv-ceph0', 'idcv-ceph1', 'idcv-ceph2', 'idcv-ceph3'DEBUG Monitor addrs are '172.20.1.138', '172.20.1.139', '172.20.1.140', '172.20.1.141'DEBUG Creating a random mon key... 
ceph_deploy.new Writing monitor keyring to ceph.mon.keyring... ceph_deploy.new Writing initial config to ceph.conf...

III. Install the MON Service

1. Edit the ceph.conf File

Note: the number of MONs must be odd; with an even number, one of them will fail to install. Also set public_network, and slightly increase the allowed clock drift between MONs (the default is 0.05 s; here it is raised to 2 s).

root@idcv-ceph0 cluster# cat ceph.conf
[global]
fsid = 812d3acb-eaa8-4355-9a74-64f2cd5209b3
mon_initial_members = idcv-ceph0, idcv-ceph1, idcv-ceph2, idcv-ceph3
mon_host = 172.20.1.138,172.20.1.139,172.20.1.140,172.20.1.141
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
public_network = 172.20.0.0/20
mon_clock_drift_allowed = 2

2. Deploy the MON Service

root@idcv-ceph0 cluster# ceph-deploy mon create-initial ceph_deploy.conf found configuration file at: /root/.cephdeploy.conf ceph_deploy.cli Invoked (1.5.39): /usr/bin/ceph-deploy mon create-initial ceph_deploy.cli ceph-deploy options: ceph_deploy.cli username : None ceph_deploy.cli verbose : False ceph_deploy.cli overwrite_conf : False ceph_deploy.cli subcommand : create-initial ceph_deploy.cli quiet : False ceph_deploy.cli cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7fd263377368> ceph_deploy.cli cluster : ceph ceph_deploy.cli func : <function mon at 0x7fd26335c6e0> ceph_deploy.cli ceph_conf : None ceph_deploy.cli default_release : False ceph_deploy.cli keyrings : None ceph_deploy.mon Deploying mon, cluster ceph hosts idcv-ceph0 idcv-ceph1 idcv-ceph2 idcv-ceph3 ceph_deploy.mon detecting platform for host idcv-ceph0 ... idcv-ceph0 connected to host: idcv-ceph0 idcv-ceph0 detect platform information from remote host idcv-ceph0 detect machine type idcv-ceph0 find the location of an executable ceph_deploy.mon distro info: CentOS Linux 7.5.1804 Core idcv-ceph0 determining if provided host has same hostname in remote idcv-ceph0 get remote short hostname idcv-ceph0 deploying mon to idcv-ceph0 idcv-ceph0 get remote short hostname idcv-ceph0 remote hostname: idcv-ceph0 idcv-ceph0 write cluster configuration to /etc/ceph/{cluster}.conf idcv-ceph0 create the mon path if it does not exist idcv-ceph0 checking for done path: /var/lib/ceph/mon/ceph-idcv-ceph0/done idcv-ceph0 done path does not exist: /var/lib/ceph/mon/ceph-idcv-ceph0/done idcv-ceph0 creating keyring file: /var/lib/ceph/tmp/ceph-idcv-ceph0.mon.keyring idcv-ceph0 create the monitor keyring file idcv-ceph0 Running command: ceph-mon --cluster ceph --mkfs -i idcv-ceph0 --keyring /var/lib/ceph/tmp/ceph-idcv-ceph0.mon.keyring --setuser 167 --setgroup 167 idcv-ceph0 ceph-mon: renaming mon.noname-a 172.20.1.138:6789/0 to mon.idcv-ceph0 idcv-ceph0 ceph-mon: set fsid to 812d3acb-eaa8-4355-9a74-64f2cd5209b3 idcv-ceph0 ceph-mon: created monfs at /var/lib/ceph/mon/ceph-idcv-ceph0 for mon.idcv-ceph0 idcv-ceph0 unlinking keyring file /var/lib/ceph/tmp/ceph-idcv-ceph0.mon.keyring idcv-ceph0 create a done file to avoid re-doing the mon deployment idcv-ceph0 create the init path if it does not exist idcv-ceph0 Running command: systemctl enable ceph.target idcv-ceph0 Running command: systemctl enable ceph-mon@idcv-ceph0 idcv-ceph0 Created symlink from /etc/systemd/system/ceph-mon.target.wants/ceph-mon@idcv-ceph0.service to /usr/lib/systemd/system/ceph-mon@.service. 
idcv-ceph0 Running command: systemctl start ceph-mon@idcv-ceph0 idcv-ceph0 Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.idcv-ceph0.asok mon_status idcv-ceph0 **** idcv-ceph0 status for monitor: mon.idcv-ceph0 idcv-ceph0 { idcv-ceph0 "election_epoch": 0, idcv-ceph0 "extra_probe_peers": [ idcv-ceph0 "172.20.1.139:6789/0", idcv-ceph0 "172.20.1.140:6789/0", idcv-ceph0 "172.20.1.141:6789/0" idcv-ceph0 ], idcv-ceph0 "monmap": { idcv-ceph0 "created": "2018-07-03 11:06:12.249491", idcv-ceph0 "epoch": 0, idcv-ceph0 "fsid": "812d3acb-eaa8-4355-9a74-64f2cd5209b3", idcv-ceph0 "modified": "2018-07-03 11:06:12.249491", idcv-ceph0 "mons": [idcv-ceph0][DEBUG ] { [idcv-ceph0][DEBUG ] "addr": "172.20.1.138:6789/0", [idcv-ceph0][DEBUG ] "name": "idcv-ceph0", [idcv-ceph0][DEBUG ] "rank": 0 [idcv-ceph0][DEBUG ] }, [idcv-ceph0][DEBUG ] { [idcv-ceph0][DEBUG ] "addr": "0.0.0.0:0/1", [idcv-ceph0][DEBUG ] "name": "idcv-ceph1", [idcv-ceph0][DEBUG ] "rank": 1 [idcv-ceph0][DEBUG ] }, [idcv-ceph0][DEBUG ] { [idcv-ceph0][DEBUG ] "addr": "0.0.0.0:0/2", [idcv-ceph0][DEBUG ] "name": "idcv-ceph2", [idcv-ceph0][DEBUG ] "rank": 2 [idcv-ceph0][DEBUG ] }, [idcv-ceph0][DEBUG ] { [idcv-ceph0][DEBUG ] "addr": "0.0.0.0:0/3", [idcv-ceph0][DEBUG ] "name": "idcv-ceph3", [idcv-ceph0][DEBUG ] "rank": 3 [idcv-ceph0][DEBUG ] } [idcv-ceph0][DEBUG ] DEBUG }, idcv-ceph0 "name": "idcv-ceph0", idcv-ceph0 "outside_quorum": [ idcv-ceph0 "idcv-ceph0" idcv-ceph0 ], idcv-ceph0 "quorum": [], idcv-ceph0 "rank": 0, idcv-ceph0 "state": "probing", idcv-ceph0 "sync_provider": DEBUG } idcv-ceph0 **** idcv-ceph0 monitor: mon.idcv-ceph0 is running idcv-ceph0 Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.idcv-ceph0.asok mon_status ceph_deploy.mon detecting platform for host idcv-ceph1 ... idcv-ceph1 connection detected need for sudo idcv-ceph1 connected to host: idcv-ceph1 idcv-ceph1 detect platform information from remote host idcv-ceph1 detect machine type idcv-ceph1 find the location of an executable ceph_deploy.mon distro info: CentOS Linux 7.5.1804 Core idcv-ceph1 determining if provided host has same hostname in remote idcv-ceph1 get remote short hostname idcv-ceph1 deploying mon to idcv-ceph1 idcv-ceph1 get remote short hostname idcv-ceph1 remote hostname: idcv-ceph1 idcv-ceph1 write cluster configuration to /etc/ceph/{cluster}.conf ceph_deploy.mon RuntimeError: config file /etc/ceph/ceph.conf exists with different content; use --overwrite-conf to overwrite ceph_deploy.mon detecting platform for host idcv-ceph2 ... 
idcv-ceph2 connection detected need for sudo idcv-ceph2 connected to host: idcv-ceph2 idcv-ceph2 detect platform information from remote host idcv-ceph2 detect machine type idcv-ceph2 find the location of an executable ceph_deploy.mon distro info: CentOS Linux 7.5.1804 Core idcv-ceph2 determining if provided host has same hostname in remote idcv-ceph2 get remote short hostname idcv-ceph2 deploying mon to idcv-ceph2 idcv-ceph2 get remote short hostname idcv-ceph2 remote hostname: idcv-ceph2 idcv-ceph2 write cluster configuration to /etc/ceph/{cluster}.conf idcv-ceph2 create the mon path if it does not exist idcv-ceph2 checking for done path: /var/lib/ceph/mon/ceph-idcv-ceph2/done idcv-ceph2 done path does not exist: /var/lib/ceph/mon/ceph-idcv-ceph2/done idcv-ceph2 creating keyring file: /var/lib/ceph/tmp/ceph-idcv-ceph2.mon.keyring idcv-ceph2 create the monitor keyring file idcv-ceph2 Running command: sudo ceph-mon --cluster ceph --mkfs -i idcv-ceph2 --keyring /var/lib/ceph/tmp/ceph-idcv-ceph2.mon.keyring --setuser 167 --setgroup 167 idcv-ceph2 ceph-mon: renaming mon.noname-c 172.20.1.140:6789/0 to mon.idcv-ceph2 idcv-ceph2 ceph-mon: set fsid to 812d3acb-eaa8-4355-9a74-64f2cd5209b3 idcv-ceph2 ceph-mon: created monfs at /var/lib/ceph/mon/ceph-idcv-ceph2 for mon.idcv-ceph2 idcv-ceph2 unlinking keyring file /var/lib/ceph/tmp/ceph-idcv-ceph2.mon.keyring idcv-ceph2 create a done file to avoid re-doing the mon deployment idcv-ceph2 create the init path if it does not exist idcv-ceph2 Running command: sudo systemctl enable ceph.target idcv-ceph2 Running command: sudo systemctl enable ceph-mon@idcv-ceph2 idcv-ceph2 Created symlink from /etc/systemd/system/ceph-mon.target.wants/ceph-mon@idcv-ceph2.service to /usr/lib/systemd/system/ceph-mon@.service. idcv-ceph2 Running command: sudo systemctl start ceph-mon@idcv-ceph2 idcv-ceph2 Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.idcv-ceph2.asok mon_status idcv-ceph2 **** idcv-ceph2 status for monitor: mon.idcv-ceph2 idcv-ceph2 { idcv-ceph2 "election_epoch": 0, idcv-ceph2 "extra_probe_peers": [ idcv-ceph2 "172.20.1.138:6789/0", idcv-ceph2 "172.20.1.139:6789/0", idcv-ceph2 "172.20.1.141:6789/0" idcv-ceph2 ], idcv-ceph2 "monmap": { idcv-ceph2 "created": "2018-07-03 11:06:15.703352", idcv-ceph2 "epoch": 0, idcv-ceph2 "fsid": "812d3acb-eaa8-4355-9a74-64f2cd5209b3", idcv-ceph2 "modified": "2018-07-03 11:06:15.703352", idcv-ceph2 "mons": [idcv-ceph2][DEBUG ] { [idcv-ceph2][DEBUG ] "addr": "172.20.1.138:6789/0", [idcv-ceph2][DEBUG ] "name": "idcv-ceph0", [idcv-ceph2][DEBUG ] "rank": 0 [idcv-ceph2][DEBUG ] }, [idcv-ceph2][DEBUG ] { [idcv-ceph2][DEBUG ] "addr": "172.20.1.140:6789/0", [idcv-ceph2][DEBUG ] "name": "idcv-ceph2", [idcv-ceph2][DEBUG ] "rank": 1 [idcv-ceph2][DEBUG ] }, [idcv-ceph2][DEBUG ] { [idcv-ceph2][DEBUG ] "addr": "0.0.0.0:0/2", [idcv-ceph2][DEBUG ] "name": "idcv-ceph1", [idcv-ceph2][DEBUG ] "rank": 2 [idcv-ceph2][DEBUG ] }, [idcv-ceph2][DEBUG ] { [idcv-ceph2][DEBUG ] "addr": "0.0.0.0:0/3", [idcv-ceph2][DEBUG ] "name": "idcv-ceph3", [idcv-ceph2][DEBUG ] "rank": 3 [idcv-ceph2][DEBUG ] } [idcv-ceph2][DEBUG ] DEBUG }, idcv-ceph2 "name": "idcv-ceph2", idcv-ceph2 "outside_quorum": [ idcv-ceph2 "idcv-ceph0", idcv-ceph2 "idcv-ceph2" idcv-ceph2 ], idcv-ceph2 "quorum": [], idcv-ceph2 "rank": 1, idcv-ceph2 "state": "probing", idcv-ceph2 "sync_provider": DEBUG } idcv-ceph2 **** idcv-ceph2 monitor: mon.idcv-ceph2 is running idcv-ceph2 Running command: sudo ceph --cluster=ceph --admin-daemon 
/var/run/ceph/ceph-mon.idcv-ceph2.asok mon_status ceph_deploy.mon detecting platform for host idcv-ceph3 ... idcv-ceph3 connection detected need for sudo idcv-ceph3 connected to host: idcv-ceph3 idcv-ceph3 detect platform information from remote host idcv-ceph3 detect machine type idcv-ceph3 find the location of an executable ceph_deploy.mon distro info: CentOS Linux 7.5.1804 Core idcv-ceph3 determining if provided host has same hostname in remote idcv-ceph3 get remote short hostname idcv-ceph3 deploying mon to idcv-ceph3 idcv-ceph3 get remote short hostname idcv-ceph3 remote hostname: idcv-ceph3 idcv-ceph3 write cluster configuration to /etc/ceph/{cluster}.conf idcv-ceph3 create the mon path if it does not exist idcv-ceph3 checking for done path: /var/lib/ceph/mon/ceph-idcv-ceph3/done idcv-ceph3 done path does not exist: /var/lib/ceph/mon/ceph-idcv-ceph3/done idcv-ceph3 creating keyring file: /var/lib/ceph/tmp/ceph-idcv-ceph3.mon.keyring idcv-ceph3 create the monitor keyring file idcv-ceph3 Running command: sudo ceph-mon --cluster ceph --mkfs -i idcv-ceph3 --keyring /var/lib/ceph/tmp/ceph-idcv-ceph3.mon.keyring --setuser 167 --setgroup 167 idcv-ceph3 ceph-mon: renaming mon.noname-d 172.20.1.141:6789/0 to mon.idcv-ceph3 idcv-ceph3 ceph-mon: set fsid to 812d3acb-eaa8-4355-9a74-64f2cd5209b3 idcv-ceph3 ceph-mon: created monfs at /var/lib/ceph/mon/ceph-idcv-ceph3 for mon.idcv-ceph3 idcv-ceph3 unlinking keyring file /var/lib/ceph/tmp/ceph-idcv-ceph3.mon.keyring idcv-ceph3 create a done file to avoid re-doing the mon deployment idcv-ceph3 create the init path if it does not exist idcv-ceph3 Running command: sudo systemctl enable ceph.target idcv-ceph3 Running command: sudo systemctl enable ceph-mon@idcv-ceph3 idcv-ceph3 Created symlink from /etc/systemd/system/ceph-mon.target.wants/ceph-mon@idcv-ceph3.service to /usr/lib/systemd/system/ceph-mon@.service. 
idcv-ceph3 Running command: sudo systemctl start ceph-mon@idcv-ceph3 idcv-ceph3 Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.idcv-ceph3.asok mon_status idcv-ceph3 **** idcv-ceph3 status for monitor: mon.idcv-ceph3 idcv-ceph3 { idcv-ceph3 "election_epoch": 1, idcv-ceph3 "extra_probe_peers": [ idcv-ceph3 "172.20.1.138:6789/0", idcv-ceph3 "172.20.1.139:6789/0", idcv-ceph3 "172.20.1.140:6789/0" idcv-ceph3 ], idcv-ceph3 "monmap": { idcv-ceph3 "created": "2018-07-03 11:06:18.695039", idcv-ceph3 "epoch": 0, idcv-ceph3 "fsid": "812d3acb-eaa8-4355-9a74-64f2cd5209b3", idcv-ceph3 "modified": "2018-07-03 11:06:18.695039", idcv-ceph3 "mons": [idcv-ceph3][DEBUG ] { [idcv-ceph3][DEBUG ] "addr": "172.20.1.138:6789/0", [idcv-ceph3][DEBUG ] "name": "idcv-ceph0", [idcv-ceph3][DEBUG ] "rank": 0 [idcv-ceph3][DEBUG ] }, [idcv-ceph3][DEBUG ] { [idcv-ceph3][DEBUG ] "addr": "172.20.1.140:6789/0", [idcv-ceph3][DEBUG ] "name": "idcv-ceph2", [idcv-ceph3][DEBUG ] "rank": 1 [idcv-ceph3][DEBUG ] }, [idcv-ceph3][DEBUG ] { [idcv-ceph3][DEBUG ] "addr": "172.20.1.141:6789/0", [idcv-ceph3][DEBUG ] "name": "idcv-ceph3", [idcv-ceph3][DEBUG ] "rank": 2 [idcv-ceph3][DEBUG ] }, [idcv-ceph3][DEBUG ] { [idcv-ceph3][DEBUG ] "addr": "0.0.0.0:0/2", [idcv-ceph3][DEBUG ] "name": "idcv-ceph1", [idcv-ceph3][DEBUG ] "rank": 3 [idcv-ceph3][DEBUG ] } [idcv-ceph3][DEBUG ] DEBUG }, idcv-ceph3 "name": "idcv-ceph3", idcv-ceph3 "outside_quorum": [], idcv-ceph3 "quorum": [], idcv-ceph3 "rank": 2, idcv-ceph3 "state": "electing", idcv-ceph3 "sync_provider": DEBUG } idcv-ceph3 **** idcv-ceph3 monitor: mon.idcv-ceph3 is running idcv-ceph3 Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.idcv-ceph3.asok mon_status ceph_deploy GenericError: Failed to create 1 monitors

3. The number of MON nodes must be odd. From the error above, one node failed to get the MON service installed, so idcv-ceph1 needs to be removed

root@idcv-ceph0 cluster# cat ceph.conf
[global]
fsid = 812d3acb-eaa8-4355-9a74-64f2cd5209b3
mon_initial_members = idcv-ceph0, idcv-ceph1, idcv-ceph2, idcv-ceph3
mon_host = 172.20.1.138,172.20.1.139,172.20.1.140,172.20.1.141
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
public_network = 172.20.0.0/20
mon_clock_drift_allowed = 2
root@idcv-ceph0 cluster# ceph mon remove idcv-ceph1
removing mon.idcv-ceph1 at 0.0.0.0:0/1, there will be 3 monitors
root@idcv-ceph0 cluster# ceph -s
    cluster 812d3acb-eaa8-4355-9a74-64f2cd5209b3
     health HEALTH_ERR
            64 pgs are stuck inactive for more than 300 seconds
            64 pgs stuck inactive
            64 pgs stuck unclean
            no osds
     monmap e2: 3 mons at {idcv-ceph0=172.20.1.138:6789/0,idcv-ceph2=172.20.1.140:6789/0,idcv-ceph3=172.20.1.141:6789/0}
            election epoch 8, quorum 0,1,2 idcv-ceph0,idcv-ceph2,idcv-ceph3
     osdmap e1: 0 osds: 0 up, 0 in
            flags sortbitwise,require_jewel_osds
      pgmap v2: 64 pgs, 1 pools, 0 bytes data, 0 objects
            0 kB used, 0 kB / 0 kB avail
                  64 creating
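To double-check that the remaining three monitors have formed quorum, the monitor state can also be queried directly (illustrative commands):

ceph mon stat
ceph quorum_status --format json-pretty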

4. Alternatively, Edit ceph.conf and Redeploy Once with --overwrite-conf

root@idcv-ceph0 cluster# cat ceph.conf global fsid = 812d3acb-eaa8-4355-9a74-64f2cd5209b3 mon_initial_members = idcv-ceph0, idcv-ceph2, idcv-ceph3 mon_host = 172.20.1.138,172.20.1.140,172.20.1.141 auth_cluster_required = cephx auth_service_required = cephx auth_client_required = cephx public_network = 172.20.0.0/20 mon_clock_drift_allowed = 2 root@idcv-ceph0 cluster# ceph-deploy --overwrite-conf mon create-initial ceph_deploy.conf found configuration file at: /root/.cephdeploy.conf ceph_deploy.cli Invoked (1.5.39): /usr/bin/ceph-deploy --overwrite-conf mon create-initial ceph_deploy.cli ceph-deploy options: ceph_deploy.cli username : None ceph_deploy.cli verbose : False ceph_deploy.cli overwrite_conf : True ceph_deploy.cli subcommand : create-initial ceph_deploy.cli quiet : False ceph_deploy.cli cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7fce9cf7a368> ceph_deploy.cli cluster : ceph ceph_deploy.cli func : <function mon at 0x7fce9cf5f6e0> ceph_deploy.cli ceph_conf : None ceph_deploy.cli default_release : False ceph_deploy.cli keyrings : None ceph_deploy.mon Deploying mon, cluster ceph hosts idcv-ceph0 idcv-ceph2 idcv-ceph3 ceph_deploy.mon detecting platform for host idcv-ceph0 ... idcv-ceph0 connected to host: idcv-ceph0 idcv-ceph0 detect platform information from remote host idcv-ceph0 detect machine type idcv-ceph0 find the location of an executable ceph_deploy.mon distro info: CentOS Linux 7.5.1804 Core idcv-ceph0 determining if provided host has same hostname in remote idcv-ceph0 get remote short hostname idcv-ceph0 deploying mon to idcv-ceph0 idcv-ceph0 get remote short hostname idcv-ceph0 remote hostname: idcv-ceph0 idcv-ceph0 write cluster configuration to /etc/ceph/{cluster}.conf idcv-ceph0 create the mon path if it does not exist idcv-ceph0 checking for done path: /var/lib/ceph/mon/ceph-idcv-ceph0/done idcv-ceph0 create a done file to avoid re-doing the mon deployment idcv-ceph0 create the init path if it does not exist idcv-ceph0 Running command: systemctl enable ceph.target idcv-ceph0 Running command: systemctl enable ceph-mon@idcv-ceph0 idcv-ceph0 Running command: systemctl start ceph-mon@idcv-ceph0 idcv-ceph0 Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.idcv-ceph0.asok mon_status idcv-ceph0 **** idcv-ceph0 status for monitor: mon.idcv-ceph0 idcv-ceph0 { idcv-ceph0 "election_epoch": 8, idcv-ceph0 "extra_probe_peers": [ idcv-ceph0 "172.20.1.139:6789/0", idcv-ceph0 "172.20.1.140:6789/0", idcv-ceph0 "172.20.1.141:6789/0" idcv-ceph0 ], idcv-ceph0 "monmap": { idcv-ceph0 "created": "2018-07-03 11:06:12.249491", idcv-ceph0 "epoch": 2, idcv-ceph0 "fsid": "812d3acb-eaa8-4355-9a74-64f2cd5209b3", idcv-ceph0 "modified": "2018-07-03 11:21:27.254076", idcv-ceph0 "mons": [idcv-ceph0][DEBUG ] { [idcv-ceph0][DEBUG ] "addr": "172.20.1.138:6789/0", [idcv-ceph0][DEBUG ] "name": "idcv-ceph0", [idcv-ceph0][DEBUG ] "rank": 0 [idcv-ceph0][DEBUG ] }, [idcv-ceph0][DEBUG ] { [idcv-ceph0][DEBUG ] "addr": "172.20.1.140:6789/0", [idcv-ceph0][DEBUG ] "name": "idcv-ceph2", [idcv-ceph0][DEBUG ] "rank": 1 [idcv-ceph0][DEBUG ] }, [idcv-ceph0][DEBUG ] { [idcv-ceph0][DEBUG ] "addr": "172.20.1.141:6789/0", [idcv-ceph0][DEBUG ] "name": "idcv-ceph3", [idcv-ceph0][DEBUG ] "rank": 2 [idcv-ceph0][DEBUG ] } [idcv-ceph0][DEBUG ] DEBUG }, idcv-ceph0 "name": "idcv-ceph0", idcv-ceph0 "outside_quorum": [], idcv-ceph0 "quorum": [ idcv-ceph0 0, idcv-ceph0 1, idcv-ceph0 2 idcv-ceph0 ], idcv-ceph0 "rank": 0, idcv-ceph0 "state": "leader", idcv-ceph0 "sync_provider": DEBUG } idcv-ceph0 
**** idcv-ceph0 monitor: mon.idcv-ceph0 is running idcv-ceph0 Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.idcv-ceph0.asok mon_status ceph_deploy.mon detecting platform for host idcv-ceph2 ... idcv-ceph2 connection detected need for sudo idcv-ceph2 connected to host: idcv-ceph2 idcv-ceph2 detect platform information from remote host idcv-ceph2 detect machine type idcv-ceph2 find the location of an executable ceph_deploy.mon distro info: CentOS Linux 7.5.1804 Core idcv-ceph2 determining if provided host has same hostname in remote idcv-ceph2 get remote short hostname idcv-ceph2 deploying mon to idcv-ceph2 idcv-ceph2 get remote short hostname idcv-ceph2 remote hostname: idcv-ceph2 idcv-ceph2 write cluster configuration to /etc/ceph/{cluster}.conf idcv-ceph2 create the mon path if it does not exist idcv-ceph2 checking for done path: /var/lib/ceph/mon/ceph-idcv-ceph2/done idcv-ceph2 create a done file to avoid re-doing the mon deployment idcv-ceph2 create the init path if it does not exist idcv-ceph2 Running command: sudo systemctl enable ceph.target idcv-ceph2 Running command: sudo systemctl enable ceph-mon@idcv-ceph2 idcv-ceph2 Running command: sudo systemctl start ceph-mon@idcv-ceph2 idcv-ceph2 Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.idcv-ceph2.asok mon_status idcv-ceph2 **** idcv-ceph2 status for monitor: mon.idcv-ceph2 idcv-ceph2 { idcv-ceph2 "election_epoch": 8, idcv-ceph2 "extra_probe_peers": [ idcv-ceph2 "172.20.1.138:6789/0", idcv-ceph2 "172.20.1.139:6789/0", idcv-ceph2 "172.20.1.141:6789/0" idcv-ceph2 ], idcv-ceph2 "monmap": { idcv-ceph2 "created": "2018-07-03 11:06:12.249491", idcv-ceph2 "epoch": 2, idcv-ceph2 "fsid": "812d3acb-eaa8-4355-9a74-64f2cd5209b3", idcv-ceph2 "modified": "2018-07-03 11:21:27.254076", idcv-ceph2 "mons": [idcv-ceph2][DEBUG ] { [idcv-ceph2][DEBUG ] "addr": "172.20.1.138:6789/0", [idcv-ceph2][DEBUG ] "name": "idcv-ceph0", [idcv-ceph2][DEBUG ] "rank": 0 [idcv-ceph2][DEBUG ] }, [idcv-ceph2][DEBUG ] { [idcv-ceph2][DEBUG ] "addr": "172.20.1.140:6789/0", [idcv-ceph2][DEBUG ] "name": "idcv-ceph2", [idcv-ceph2][DEBUG ] "rank": 1 [idcv-ceph2][DEBUG ] }, [idcv-ceph2][DEBUG ] { [idcv-ceph2][DEBUG ] "addr": "172.20.1.141:6789/0", [idcv-ceph2][DEBUG ] "name": "idcv-ceph3", [idcv-ceph2][DEBUG ] "rank": 2 [idcv-ceph2][DEBUG ] } [idcv-ceph2][DEBUG ] DEBUG }, idcv-ceph2 "name": "idcv-ceph2", idcv-ceph2 "outside_quorum": [], idcv-ceph2 "quorum": [ idcv-ceph2 0, idcv-ceph2 1, idcv-ceph2 2 idcv-ceph2 ], idcv-ceph2 "rank": 1, idcv-ceph2 "state": "peon", idcv-ceph2 "sync_provider": DEBUG } idcv-ceph2 **** idcv-ceph2 monitor: mon.idcv-ceph2 is running idcv-ceph2 Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.idcv-ceph2.asok mon_status ceph_deploy.mon detecting platform for host idcv-ceph3 ... 
idcv-ceph3 connection detected need for sudo idcv-ceph3 connected to host: idcv-ceph3 idcv-ceph3 detect platform information from remote host idcv-ceph3 detect machine type idcv-ceph3 find the location of an executable ceph_deploy.mon distro info: CentOS Linux 7.5.1804 Core idcv-ceph3 determining if provided host has same hostname in remote idcv-ceph3 get remote short hostname idcv-ceph3 deploying mon to idcv-ceph3 idcv-ceph3 get remote short hostname idcv-ceph3 remote hostname: idcv-ceph3 idcv-ceph3 write cluster configuration to /etc/ceph/{cluster}.conf idcv-ceph3 create the mon path if it does not exist idcv-ceph3 checking for done path: /var/lib/ceph/mon/ceph-idcv-ceph3/done idcv-ceph3 create a done file to avoid re-doing the mon deployment idcv-ceph3 create the init path if it does not exist idcv-ceph3 Running command: sudo systemctl enable ceph.target idcv-ceph3 Running command: sudo systemctl enable ceph-mon@idcv-ceph3 idcv-ceph3 Running command: sudo systemctl start ceph-mon@idcv-ceph3 idcv-ceph3 Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.idcv-ceph3.asok mon_status idcv-ceph3 **** idcv-ceph3 status for monitor: mon.idcv-ceph3 idcv-ceph3 { idcv-ceph3 "election_epoch": 8, idcv-ceph3 "extra_probe_peers": [ idcv-ceph3 "172.20.1.138:6789/0", idcv-ceph3 "172.20.1.139:6789/0", idcv-ceph3 "172.20.1.140:6789/0" idcv-ceph3 ], idcv-ceph3 "monmap": { idcv-ceph3 "created": "2018-07-03 11:06:12.249491", idcv-ceph3 "epoch": 2, idcv-ceph3 "fsid": "812d3acb-eaa8-4355-9a74-64f2cd5209b3", idcv-ceph3 "modified": "2018-07-03 11:21:27.254076", idcv-ceph3 "mons": [idcv-ceph3][DEBUG ] { [idcv-ceph3][DEBUG ] "addr": "172.20.1.138:6789/0", [idcv-ceph3][DEBUG ] "name": "idcv-ceph0", [idcv-ceph3][DEBUG ] "rank": 0 [idcv-ceph3][DEBUG ] }, [idcv-ceph3][DEBUG ] { [idcv-ceph3][DEBUG ] "addr": "172.20.1.140:6789/0", [idcv-ceph3][DEBUG ] "name": "idcv-ceph2", [idcv-ceph3][DEBUG ] "rank": 1 [idcv-ceph3][DEBUG ] }, [idcv-ceph3][DEBUG ] { [idcv-ceph3][DEBUG ] "addr": "172.20.1.141:6789/0", [idcv-ceph3][DEBUG ] "name": "idcv-ceph3", [idcv-ceph3][DEBUG ] "rank": 2 [idcv-ceph3][DEBUG ] } [idcv-ceph3][DEBUG ] DEBUG }, idcv-ceph3 "name": "idcv-ceph3", idcv-ceph3 "outside_quorum": [], idcv-ceph3 "quorum": [ idcv-ceph3 0, idcv-ceph3 1, idcv-ceph3 2 idcv-ceph3 ], idcv-ceph3 "rank": 2, idcv-ceph3 "state": "peon", idcv-ceph3 "sync_provider": DEBUG } idcv-ceph3 **** idcv-ceph3 monitor: mon.idcv-ceph3 is running idcv-ceph3 Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.idcv-ceph3.asok mon_status ceph_deploy.mon processing monitor mon.idcv-ceph0 idcv-ceph0 connected to host: idcv-ceph0 idcv-ceph0 detect platform information from remote host idcv-ceph0 detect machine type idcv-ceph0 find the location of an executable idcv-ceph0 Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.idcv-ceph0.asok mon_status ceph_deploy.mon mon.idcv-ceph0 monitor has reached quorum! ceph_deploy.mon processing monitor mon.idcv-ceph2 idcv-ceph2 connection detected need for sudo idcv-ceph2 connected to host: idcv-ceph2 idcv-ceph2 detect platform information from remote host idcv-ceph2 detect machine type idcv-ceph2 find the location of an executable idcv-ceph2 Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.idcv-ceph2.asok mon_status ceph_deploy.mon mon.idcv-ceph2 monitor has reached quorum! 
ceph_deploy.mon processing monitor mon.idcv-ceph3 idcv-ceph3 connection detected need for sudo idcv-ceph3 connected to host: idcv-ceph3 idcv-ceph3 detect platform information from remote host idcv-ceph3 detect machine type idcv-ceph3 find the location of an executable idcv-ceph3 Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.idcv-ceph3.asok mon_status ceph_deploy.mon mon.idcv-ceph3 monitor has reached quorum! ceph_deploy.mon all initial monitors are running and have formed quorum ceph_deploy.mon Running gatherkeys... ceph_deploy.gatherkeys Storing keys in temp directory /tmp/tmpBqY1be idcv-ceph0 connected to host: idcv-ceph0 idcv-ceph0 detect platform information from remote host idcv-ceph0 detect machine type idcv-ceph0 get remote short hostname idcv-ceph0 fetch remote file idcv-ceph0 Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --admin-daemon=/var/run/ceph/ceph-mon.idcv-ceph0.asok mon_status idcv-ceph0 Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-idcv-ceph0/keyring auth get client.admin idcv-ceph0 Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-idcv-ceph0/keyring auth get client.bootstrap-mds idcv-ceph0 Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-idcv-ceph0/keyring auth get client.bootstrap-mgr idcv-ceph0 Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-idcv-ceph0/keyring auth get-or-create client.bootstrap-mgr mon allow profile bootstrap-mgr idcv-ceph0 Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-idcv-ceph0/keyring auth get client.bootstrap-osd idcv-ceph0 Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-idcv-ceph0/keyring auth get client.bootstrap-rgw ceph_deploy.gatherkeys Storing ceph.client.admin.keyring ceph_deploy.gatherkeys Storing ceph.bootstrap-mds.keyring ceph_deploy.gatherkeys Storing ceph.bootstrap-mgr.keyring ceph_deploy.gatherkeys keyring 'ceph.mon.keyring' already exists ceph_deploy.gatherkeys Storing ceph.bootstrap-osd.keyring ceph_deploy.gatherkeys Storing ceph.bootstrap-rgw.keyring ceph_deploy.gatherkeys Destroy temp directory /tmp/tmpBqY1be root@idcv-ceph0 cluster# ls ceph.bootstrap-mds.keyring ceph.bootstrap-osd.keyring ceph.client.admin.keyring ceph-deploy-ceph.lo

IV. Deploy the OSD Role

Prepare the disks first, then activate them:

ceph-deploy --overwrite-conf osd prepare idcv-ceph0:/dev/sdb idcv-ceph1:/dev/sdb idcv-ceph2:/dev/sdb idcv-ceph3:/dev/sdb --zap-disk

ceph-deploy --overwrite-conf osd activate idcv-ceph0:/dev/sdb1 idcv-ceph1:/dev/sdb1 idcv-ceph2:/dev/sdb1 idcv-ceph3:/dev/sdb1
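Once both steps complete, the new OSDs can be verified, for example with (illustrative):

ceph osd tree
ceph -s

The full output of the prepare step follows.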

root@idcv-ceph0 cluster# ceph-deploy --overwrite-conf osd prepare idcv-ceph0:/dev/sdb idcv-ceph1:/dev/sdb idcv-ceph2:/dev/sdb idcv-ceph3:/dev/sdb --zap-disk ceph_deploy.conf found configuration file at: /root/.cephdeploy.conf ceph_deploy.cli Invoked (1.5.39): /usr/bin/ceph-deploy --overwrite-conf osd prepare idcv-ceph0:/dev/sdb idcv-ceph1:/dev/sdb idcv-ceph2:/dev/sdb idcv-ceph3:/dev/sdb --zap-disk ceph_deploy.cli ceph-deploy options: ceph_deploy.cli username : None ceph_deploy.cli block_db : None ceph_deploy.cli disk : ('idcv-ceph0', '/dev/sdb', None), ('idcv-ceph1', '/dev/sdb', None), ('idcv-ceph2', '/dev/sdb', None), ('idcv-ceph3', '/dev/sdb', None)INFO dmcrypt : False ceph_deploy.cli verbose : False ceph_deploy.cli bluestore : None ceph_deploy.cli block_wal : None ceph_deploy.cli overwrite_conf : True ceph_deploy.cli subcommand : prepare ceph_deploy.cli dmcrypt_key_dir : /etc/ceph/dmcrypt-keys ceph_deploy.cli quiet : False ceph_deploy.cli cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f103c7f35a8> ceph_deploy.cli cluster : ceph ceph_deploy.cli fs_type : xfs ceph_deploy.cli filestore : None ceph_deploy.cli func : <function osd at 0x7f103c846f50> ceph_deploy.cli ceph_conf : None ceph_deploy.cli default_release : False ceph_deploy.cli zap_disk : True ceph_deploy.osd Preparing cluster ceph disks idcv-ceph0:/dev/sdb: idcv-ceph1:/dev/sdb: idcv-ceph2:/dev/sdb: idcv-ceph3:/dev/sdb: idcv-ceph0 connected to host: idcv-ceph0 idcv-ceph0 detect platform information from remote host idcv-ceph0 detect machine type idcv-ceph0 find the location of an executable ceph_deploy.osd Distro info: CentOS Linux 7.5.1804 Core ceph_deploy.osd Deploying osd to idcv-ceph0 idcv-ceph0 write cluster configuration to /etc/ceph/{cluster}.conf ceph_deploy.osd Preparing host idcv-ceph0 disk /dev/sdb journal None activate False idcv-ceph0 find the location of an executable idcv-ceph0 Running command: /usr/sbin/ceph-disk -v prepare --zap-disk --cluster ceph --fs-type xfs -- /dev/sdb idcv-ceph0 command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid idcv-ceph0 command: Running command: /usr/bin/ceph-osd --check-allows-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph idcv-ceph0 command: Running command: /usr/bin/ceph-osd --check-wants-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph idcv-ceph0 command: Running command: /usr/bin/ceph-osd --check-needs-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph idcv-ceph0 get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid idcv-ceph0 set_type: Will colocate journal with data on /dev/sdb idcv-ceph0 command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size idcv-ceph0 get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid idcv-ceph0 get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid idcv-ceph0 get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid idcv-ceph0 command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs idcv-ceph0 command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mkfs_options_xfs idcv-ceph0 command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs idcv-ceph0 command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. 
--lookup osd_fs_mount_options_xfs idcv-ceph0 get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid idcv-ceph0 zap: Zapping partition table on /dev/sdb idcv-ceph0 command_check_call: Running command: /usr/sbin/sgdisk --zap-all -- /dev/sdb idcv-ceph0 Caution: invalid backup GPT header, but valid main header; regenerating idcv-ceph0 backup header from main header. idcv-ceph0 idcv-ceph0 **** idcv-ceph0 Caution: Found protective or hybrid MBR and corrupt GPT. Using GPT, but disk idcv-ceph0 verification and recovery are STRONGLY recommended. idcv-ceph0 **** idcv-ceph0 GPT data structures destroyed! You may now partition the disk using fdisk or idcv-ceph0 other utilities. idcv-ceph0 command_check_call: Running command: /usr/sbin/sgdisk --clear --mbrtogpt -- /dev/sdb idcv-ceph0 Creating new GPT entries. idcv-ceph0 The operation has completed successfully. idcv-ceph0 update_partition: Calling partprobe on zapped device /dev/sdb idcv-ceph0 command_check_call: Running command: /usr/bin/udevadm settle --timeout=600 idcv-ceph0 command: Running command: /usr/bin/flock -s /dev/sdb /usr/sbin/partprobe /dev/sdb idcv-ceph0 command_check_call: Running command: /usr/bin/udevadm settle --timeout=600 idcv-ceph0 get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid idcv-ceph0 get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid idcv-ceph0 ptype_tobe_for_name: name = journal idcv-ceph0 get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid idcv-ceph0 create_partition: Creating journal partition num 2 size 5120 on /dev/sdb idcv-ceph0 command_check_call: Running command: /usr/sbin/sgdisk --new=2:0:+5120M --change-name=2:ceph journal --partition-guid=2:ca6594bd-a4b2-4be7-9aa5-69ba91ce7441 --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 --mbrtogpt -- /dev/sdb idcv-ceph0 The operation has completed successfully. idcv-ceph0 update_partition: Calling partprobe on created device /dev/sdb idcv-ceph0 command_check_call: Running command: /usr/bin/udevadm settle --timeout=600 idcv-ceph0 command: Running command: /usr/bin/flock -s /dev/sdb /usr/sbin/partprobe /dev/sdb idcv-ceph0 command_check_call: Running command: /usr/bin/udevadm settle --timeout=600 idcv-ceph0 get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid idcv-ceph0 get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid idcv-ceph0 get_dm_uuid: get_dm_uuid /dev/sdb2 uuid path is /sys/dev/block/8:18/dm/uuid idcv-ceph0 prepare_device: Journal is GPT partition /dev/disk/by-partuuid/ca6594bd-a4b2-4be7-9aa5-69ba91ce7441 idcv-ceph0 prepare_device: Journal is GPT partition /dev/disk/by-partuuid/ca6594bd-a4b2-4be7-9aa5-69ba91ce7441 idcv-ceph0 get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid idcv-ceph0 set_data_partition: Creating osd partition on /dev/sdb idcv-ceph0 get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid idcv-ceph0 ptype_tobe_for_name: name = data idcv-ceph0 get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid idcv-ceph0 create_partition: Creating data partition num 1 size 0 on /dev/sdb idcv-ceph0 command_check_call: Running command: /usr/sbin/sgdisk --largest-new=1 --change-name=1:ceph data --partition-guid=1:3b210c8e-b2ac-4266-9e59-623c031ebb89 --typecode=1:89c57f98-2fe5-4dc0-89c1-f3ad0ceff2be --mbrtogpt -- /dev/sdb idcv-ceph0 The operation has completed successfully. 
idcv-ceph0 update_partition: Calling partprobe on created device /dev/sdb idcv-ceph0 command_check_call: Running command: /usr/bin/udevadm settle --timeout=600 idcv-ceph0 command: Running command: /usr/bin/flock -s /dev/sdb /usr/sbin/partprobe /dev/sdb idcv-ceph0 command_check_call: Running command: /usr/bin/udevadm settle --timeout=600 idcv-ceph0 get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid idcv-ceph0 get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid idcv-ceph0 get_dm_uuid: get_dm_uuid /dev/sdb1 uuid path is /sys/dev/block/8:17/dm/uuid idcv-ceph0 populate_data_path_device: Creating xfs fs on /dev/sdb1 idcv-ceph0 command_check_call: Running command: /usr/sbin/mkfs -t xfs -f -i size=2048 -- /dev/sdb1 idcv-ceph0 meta-data=/dev/sdb1 isize=2048 agcount=4, agsize=6225855 blks idcv-ceph0 = sectsz=512 attr=2, projid32bit=1 idcv-ceph0 = crc=1 finobt=0, sparse=0 idcv-ceph0 data = bsize=4096 blocks=24903419, imaxpct=25 idcv-ceph0 = sunit=0 swidth=0 blks idcv-ceph0 naming =version 2 bsize=4096 ascii-ci=0 ftype=1 idcv-ceph0 log =internal log bsize=4096 blocks=12159, version=2 idcv-ceph0 = sectsz=512 sunit=0 blks, lazy-count=1 idcv-ceph0 realtime =none extsz=4096 blocks=0, rtextents=0 idcv-ceph0 mount: Mounting /dev/sdb1 on /var/lib/ceph/tmp/mnt.kvs_nq with options noatime,inode64 idcv-ceph0 command_check_call: Running command: /usr/bin/mount -t xfs -o noatime,inode64 -- /dev/sdb1 /var/lib/ceph/tmp/mnt.kvs_nq idcv-ceph0 command: Running command: /usr/sbin/restorecon /var/lib/ceph/tmp/mnt.kvs_nq idcv-ceph0 populate_data_path: Preparing osd data dir /var/lib/ceph/tmp/mnt.kvs_nq idcv-ceph0 command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.kvs_nq/ceph_fsid.2933.tmp idcv-ceph0 command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.kvs_nq/ceph_fsid.2933.tmp idcv-ceph0 command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.kvs_nq/fsid.2933.tmp idcv-ceph0 command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.kvs_nq/fsid.2933.tmp idcv-ceph0 command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.kvs_nq/magic.2933.tmp idcv-ceph0 command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.kvs_nq/magic.2933.tmp idcv-ceph0 command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.kvs_nq/journal_uuid.2933.tmp idcv-ceph0 command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.kvs_nq/journal_uuid.2933.tmp idcv-ceph0 adjust_symlink: Creating symlink /var/lib/ceph/tmp/mnt.kvs_nq/journal -> /dev/disk/by-partuuid/ca6594bd-a4b2-4be7-9aa5-69ba91ce7441 idcv-ceph0 command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.kvs_nq idcv-ceph0 command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.kvs_nq idcv-ceph0 unmount: Unmounting /var/lib/ceph/tmp/mnt.kvs_nq idcv-ceph0 command_check_call: Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.kvs_nq idcv-ceph0 get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid idcv-ceph0 command_check_call: Running command: /usr/sbin/sgdisk --typecode=1:4fbd7e29-9d25-41b8-afd0-062c0ceff05d -- /dev/sdb idcv-ceph0 Warning: The kernel is still using the old partition table. idcv-ceph0 The new table will be used at the next reboot. idcv-ceph0 The operation has completed successfully. 
idcv-ceph0 update_partition: Calling partprobe on prepared device /dev/sdb idcv-ceph0 command_check_call: Running command: /usr/bin/udevadm settle --timeout=600 idcv-ceph0 command: Running command: /usr/bin/flock -s /dev/sdb /usr/sbin/partprobe /dev/sdb idcv-ceph0 command_check_call: Running command: /usr/bin/udevadm settle --timeout=600 idcv-ceph0 command_check_call: Running command: /usr/bin/udevadm trigger --action=add --sysname-match sdb1 idcv-ceph0 checking OSD status... idcv-ceph0 find the location of an executable idcv-ceph0 Running command: /bin/ceph --cluster=ceph osd stat --format=json ceph_deploy.osd Host idcv-ceph0 is now ready for osd use. idcv-ceph1 connection detected need for sudo idcv-ceph1 connected to host: idcv-ceph1 idcv-ceph1 detect platform information from remote host idcv-ceph1 detect machine type idcv-ceph1 find the location of an executable ceph_deploy.osd Distro info: CentOS Linux 7.5.1804 Core ceph_deploy.osd Deploying osd to idcv-ceph1 idcv-ceph1 write cluster configuration to /etc/ceph/{cluster}.conf ceph_deploy.osd Preparing host idcv-ceph1 disk /dev/sdb journal None activate False idcv-ceph1 find the location of an executable idcv-ceph1 Running command: sudo /usr/sbin/ceph-disk -v prepare --zap-disk --cluster ceph --fs-type xfs -- /dev/sdb idcv-ceph1 command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid idcv-ceph1 command: Running command: /usr/bin/ceph-osd --check-allows-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph idcv-ceph1 command: Running command: /usr/bin/ceph-osd --check-wants-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph idcv-ceph1 command: Running command: /usr/bin/ceph-osd --check-needs-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph idcv-ceph1 get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid idcv-ceph1 set_type: Will colocate journal with data on /dev/sdb idcv-ceph1 command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size idcv-ceph1 get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid idcv-ceph1 get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid idcv-ceph1 get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid idcv-ceph1 command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs idcv-ceph1 command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mkfs_options_xfs idcv-ceph1 command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs idcv-ceph1 command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mount_options_xfs idcv-ceph1 get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid idcv-ceph1 zap: Zapping partition table on /dev/sdb idcv-ceph1 command_check_call: Running command: /sbin/sgdisk --zap-all -- /dev/sdb idcv-ceph1 Creating new GPT entries. idcv-ceph1 GPT data structures destroyed! You may now partition the disk using fdisk or idcv-ceph1 other utilities. idcv-ceph1 command_check_call: Running command: /sbin/sgdisk --clear --mbrtogpt -- /dev/sdb idcv-ceph1 Creating new GPT entries. idcv-ceph1 The operation has completed successfully. 
idcv-ceph1 update_partition: Calling partprobe on zapped device /dev/sdb idcv-ceph1 command_check_call: Running command: /usr/bin/udevadm settle --timeout=600 idcv-ceph1 command: Running command: /usr/bin/flock -s /dev/sdb /sbin/partprobe /dev/sdb idcv-ceph1 command_check_call: Running command: /usr/bin/udevadm settle --timeout=600 idcv-ceph1 get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid idcv-ceph1 get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid idcv-ceph1 ptype_tobe_for_name: name = journal idcv-ceph1 get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid idcv-ceph1 create_partition: Creating journal partition num 2 size 5120 on /dev/sdb idcv-ceph1 command_check_call: Running command: /sbin/sgdisk --new=2:0:+5120M --change-name=2:ceph journal --partition-guid=2:09dad07a-985e-4733-a228-f7b1105b7385 --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 --mbrtogpt -- /dev/sdb idcv-ceph1 The operation has completed successfully. idcv-ceph1 update_partition: Calling partprobe on created device /dev/sdb idcv-ceph1 command_check_call: Running command: /usr/bin/udevadm settle --timeout=600 idcv-ceph1 command: Running command: /usr/bin/flock -s /dev/sdb /sbin/partprobe /dev/sdb idcv-ceph1 command_check_call: Running command: /usr/bin/udevadm settle --timeout=600 idcv-ceph1 get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid idcv-ceph1 get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid idcv-ceph1 get_dm_uuid: get_dm_uuid /dev/sdb2 uuid path is /sys/dev/block/8:18/dm/uuid idcv-ceph1 prepare_device: Journal is GPT partition /dev/disk/by-partuuid/09dad07a-985e-4733-a228-f7b1105b7385 idcv-ceph1 prepare_device: Journal is GPT partition /dev/disk/by-partuuid/09dad07a-985e-4733-a228-f7b1105b7385 idcv-ceph1 get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid idcv-ceph1 set_data_partition: Creating osd partition on /dev/sdb idcv-ceph1 get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid idcv-ceph1 ptype_tobe_for_name: name = data idcv-ceph1 get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid idcv-ceph1 create_partition: Creating data partition num 1 size 0 on /dev/sdb idcv-ceph1 command_check_call: Running command: /sbin/sgdisk --largest-new=1 --change-name=1:ceph data --partition-guid=1:2809f370-e6ad-4d29-bf6b-57fe1f2004c6 --typecode=1:89c57f98-2fe5-4dc0-89c1-f3ad0ceff2be --mbrtogpt -- /dev/sdb idcv-ceph1 The operation has completed successfully. 
idcv-ceph1 update_partition: Calling partprobe on created device /dev/sdb idcv-ceph1 command_check_call: Running command: /usr/bin/udevadm settle --timeout=600 idcv-ceph1 command: Running command: /usr/bin/flock -s /dev/sdb /sbin/partprobe /dev/sdb idcv-ceph1 command_check_call: Running command: /usr/bin/udevadm settle --timeout=600 idcv-ceph1 get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid idcv-ceph1 get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid idcv-ceph1 get_dm_uuid: get_dm_uuid /dev/sdb1 uuid path is /sys/dev/block/8:17/dm/uuid idcv-ceph1 populate_data_path_device: Creating xfs fs on /dev/sdb1 idcv-ceph1 command_check_call: Running command: /sbin/mkfs -t xfs -f -i size=2048 -- /dev/sdb1 idcv-ceph1 meta-data=/dev/sdb1 isize=2048 agcount=4, agsize=6225855 blks idcv-ceph1 = sectsz=512 attr=2, projid32bit=1 idcv-ceph1 = crc=1 finobt=0, sparse=0 idcv-ceph1 data = bsize=4096 blocks=24903419, imaxpct=25 idcv-ceph1 = sunit=0 swidth=0 blks idcv-ceph1 naming =version 2 bsize=4096 ascii-ci=0 ftype=1 idcv-ceph1 log =internal log bsize=4096 blocks=12159, version=2 idcv-ceph1 = sectsz=512 sunit=0 blks, lazy-count=1 idcv-ceph1 realtime =none extsz=4096 blocks=0, rtextents=0 idcv-ceph1 mount: Mounting /dev/sdb1 on /var/lib/ceph/tmp/mnt.HAg1vC with options noatime,inode64 idcv-ceph1 command_check_call: Running command: /usr/bin/mount -t xfs -o noatime,inode64 -- /dev/sdb1 /var/lib/ceph/tmp/mnt.HAg1vC idcv-ceph1 command: Running command: /sbin/restorecon /var/lib/ceph/tmp/mnt.HAg1vC idcv-ceph1 populate_data_path: Preparing osd data dir /var/lib/ceph/tmp/mnt.HAg1vC idcv-ceph1 command: Running command: /sbin/restorecon -R /var/lib/ceph/tmp/mnt.HAg1vC/ceph_fsid.2415.tmp idcv-ceph1 command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.HAg1vC/ceph_fsid.2415.tmp idcv-ceph1 command: Running command: /sbin/restorecon -R /var/lib/ceph/tmp/mnt.HAg1vC/fsid.2415.tmp idcv-ceph1 command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.HAg1vC/fsid.2415.tmp idcv-ceph1 command: Running command: /sbin/restorecon -R /var/lib/ceph/tmp/mnt.HAg1vC/magic.2415.tmp idcv-ceph1 command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.HAg1vC/magic.2415.tmp idcv-ceph1 command: Running command: /sbin/restorecon -R /var/lib/ceph/tmp/mnt.HAg1vC/journal_uuid.2415.tmp idcv-ceph1 command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.HAg1vC/journal_uuid.2415.tmp idcv-ceph1 adjust_symlink: Creating symlink /var/lib/ceph/tmp/mnt.HAg1vC/journal -> /dev/disk/by-partuuid/09dad07a-985e-4733-a228-f7b1105b7385 idcv-ceph1 command: Running command: /sbin/restorecon -R /var/lib/ceph/tmp/mnt.HAg1vC idcv-ceph1 command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.HAg1vC idcv-ceph1 unmount: Unmounting /var/lib/ceph/tmp/mnt.HAg1vC idcv-ceph1 command_check_call: Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.HAg1vC idcv-ceph1 get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid idcv-ceph1 command_check_call: Running command: /sbin/sgdisk --typecode=1:4fbd7e29-9d25-41b8-afd0-062c0ceff05d -- /dev/sdb idcv-ceph1 The operation has completed successfully. 
idcv-ceph1 update_partition: Calling partprobe on prepared device /dev/sdb idcv-ceph1 command_check_call: Running command: /usr/bin/udevadm settle --timeout=600 idcv-ceph1 command: Running command: /usr/bin/flock -s /dev/sdb /sbin/partprobe /dev/sdb idcv-ceph1 command_check_call: Running command: /usr/bin/udevadm settle --timeout=600 idcv-ceph1 command_check_call: Running command: /usr/bin/udevadm trigger --action=add --sysname-match sdb1 idcv-ceph1 checking OSD status... idcv-ceph1 find the location of an executable idcv-ceph1 Running command: sudo /bin/ceph --cluster=ceph osd stat --format=json ceph_deploy.osd Host idcv-ceph1 is now ready for osd use. idcv-ceph2 connection detected need for sudo idcv-ceph2 connected to host: idcv-ceph2 idcv-ceph2 detect platform information from remote host idcv-ceph2 detect machine type idcv-ceph2 find the location of an executable ceph_deploy.osd Distro info: CentOS Linux 7.5.1804 Core ceph_deploy.osd Deploying osd to idcv-ceph2 idcv-ceph2 write cluster configuration to /etc/ceph/{cluster}.conf ceph_deploy.osd Preparing host idcv-ceph2 disk /dev/sdb journal None activate False idcv-ceph2 find the location of an executable idcv-ceph2 Running command: sudo /usr/sbin/ceph-disk -v prepare --zap-disk --cluster ceph --fs-type xfs -- /dev/sdb idcv-ceph2 command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid idcv-ceph2 command: Running command: /usr/bin/ceph-osd --check-allows-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph idcv-ceph2 command: Running command: /usr/bin/ceph-osd --check-wants-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph idcv-ceph2 command: Running command: /usr/bin/ceph-osd --check-needs-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph idcv-ceph2 get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid idcv-ceph2 set_type: Will colocate journal with data on /dev/sdb idcv-ceph2 command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size idcv-ceph2 get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid idcv-ceph2 get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid idcv-ceph2 get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid idcv-ceph2 command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs idcv-ceph2 command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mkfs_options_xfs idcv-ceph2 command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs idcv-ceph2 command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mount_options_xfs idcv-ceph2 get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid idcv-ceph2 zap: Zapping partition table on /dev/sdb idcv-ceph2 command_check_call: Running command: /sbin/sgdisk --zap-all -- /dev/sdb idcv-ceph2 Creating new GPT entries. idcv-ceph2 GPT data structures destroyed! You may now partition the disk using fdisk or idcv-ceph2 other utilities. idcv-ceph2 command_check_call: Running command: /sbin/sgdisk --clear --mbrtogpt -- /dev/sdb idcv-ceph2 Creating new GPT entries. idcv-ceph2 The operation has completed successfully. 
idcv-ceph2 update_partition: Calling partprobe on zapped device /dev/sdb idcv-ceph2 command_check_call: Running command: /usr/bin/udevadm settle --timeout=600 idcv-ceph2 command: Running command: /usr/bin/flock -s /dev/sdb /sbin/partprobe /dev/sdb idcv-ceph2 command_check_call: Running command: /usr/bin/udevadm settle --timeout=600 idcv-ceph2 get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid idcv-ceph2 get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid idcv-ceph2 ptype_tobe_for_name: name = journal idcv-ceph2 get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid idcv-ceph2 create_partition: Creating journal partition num 2 size 5120 on /dev/sdb idcv-ceph2 command_check_call: Running command: /sbin/sgdisk --new=2:0:+5120M --change-name=2:ceph journal --partition-guid=2:857f0966-30d5-4ad1-9e0c-abff0fbbbc4e --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 --mbrtogpt -- /dev/sdb idcv-ceph2 The operation has completed successfully. idcv-ceph2 update_partition: Calling partprobe on created device /dev/sdb idcv-ceph2 command_check_call: Running command: /usr/bin/udevadm settle --timeout=600 idcv-ceph2 command: Running command: /usr/bin/flock -s /dev/sdb /sbin/partprobe /dev/sdb idcv-ceph2 command_check_call: Running command: /usr/bin/udevadm settle --timeout=600 idcv-ceph2 get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid idcv-ceph2 get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid idcv-ceph2 get_dm_uuid: get_dm_uuid /dev/sdb2 uuid path is /sys/dev/block/8:18/dm/uuid idcv-ceph2 prepare_device: Journal is GPT partition /dev/disk/by-partuuid/857f0966-30d5-4ad1-9e0c-abff0fbbbc4e idcv-ceph2 prepare_device: Journal is GPT partition /dev/disk/by-partuuid/857f0966-30d5-4ad1-9e0c-abff0fbbbc4e idcv-ceph2 get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid idcv-ceph2 set_data_partition: Creating osd partition on /dev/sdb idcv-ceph2 get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid idcv-ceph2 ptype_tobe_for_name: name = data idcv-ceph2 get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid idcv-ceph2 create_partition: Creating data partition num 1 size 0 on /dev/sdb idcv-ceph2 command_check_call: Running command: /sbin/sgdisk --largest-new=1 --change-name=1:ceph data --partition-guid=1:dac63cc2-6876-4004-ba3b-7786be39d392 --typecode=1:89c57f98-2fe5-4dc0-89c1-f3ad0ceff2be --mbrtogpt -- /dev/sdb idcv-ceph2 The operation has completed successfully. 
idcv-ceph2 update_partition: Calling partprobe on created device /dev/sdb idcv-ceph2 command_check_call: Running command: /usr/bin/udevadm settle --timeout=600 idcv-ceph2 command: Running command: /usr/bin/flock -s /dev/sdb /sbin/partprobe /dev/sdb idcv-ceph2 command_check_call: Running command: /usr/bin/udevadm settle --timeout=600 idcv-ceph2 get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid idcv-ceph2 get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid idcv-ceph2 get_dm_uuid: get_dm_uuid /dev/sdb1 uuid path is /sys/dev/block/8:17/dm/uuid idcv-ceph2 populate_data_path_device: Creating xfs fs on /dev/sdb1 idcv-ceph2 command_check_call: Running command: /sbin/mkfs -t xfs -f -i size=2048 -- /dev/sdb1 idcv-ceph2 meta-data=/dev/sdb1 isize=2048 agcount=4, agsize=6225855 blks idcv-ceph2 = sectsz=512 attr=2, projid32bit=1 idcv-ceph2 = crc=1 finobt=0, sparse=0 idcv-ceph2 data = bsize=4096 blocks=24903419, imaxpct=25 idcv-ceph2 = sunit=0 swidth=0 blks idcv-ceph2 naming =version 2 bsize=4096 ascii-ci=0 ftype=1 idcv-ceph2 log =internal log bsize=4096 blocks=12159, version=2 idcv-ceph2 = sectsz=512 sunit=0 blks, lazy-count=1 idcv-ceph2 mount: Mounting /dev/sdb1 on /var/lib/ceph/tmp/mnt.jhzVmR with options noatime,inode64 idcv-ceph2 realtime =none extsz=4096 blocks=0, rtextents=0 idcv-ceph2 command_check_call: Running command: /usr/bin/mount -t xfs -o noatime,inode64 -- /dev/sdb1 /var/lib/ceph/tmp/mnt.jhzVmR idcv-ceph2 command: Running command: /sbin/restorecon /var/lib/ceph/tmp/mnt.jhzVmR idcv-ceph2 populate_data_path: Preparing osd data dir /var/lib/ceph/tmp/mnt.jhzVmR idcv-ceph2 command: Running command: /sbin/restorecon -R /var/lib/ceph/tmp/mnt.jhzVmR/ceph_fsid.2354.tmp idcv-ceph2 command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.jhzVmR/ceph_fsid.2354.tmp idcv-ceph2 command: Running command: /sbin/restorecon -R /var/lib/ceph/tmp/mnt.jhzVmR/fsid.2354.tmp idcv-ceph2 command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.jhzVmR/fsid.2354.tmp idcv-ceph2 command: Running command: /sbin/restorecon -R /var/lib/ceph/tmp/mnt.jhzVmR/magic.2354.tmp idcv-ceph2 command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.jhzVmR/magic.2354.tmp idcv-ceph2 command: Running command: /sbin/restorecon -R /var/lib/ceph/tmp/mnt.jhzVmR/journal_uuid.2354.tmp idcv-ceph2 command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.jhzVmR/journal_uuid.2354.tmp idcv-ceph2 adjust_symlink: Creating symlink /var/lib/ceph/tmp/mnt.jhzVmR/journal -> /dev/disk/by-partuuid/857f0966-30d5-4ad1-9e0c-abff0fbbbc4e idcv-ceph2 command: Running command: /sbin/restorecon -R /var/lib/ceph/tmp/mnt.jhzVmR idcv-ceph2 command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.jhzVmR idcv-ceph2 unmount: Unmounting /var/lib/ceph/tmp/mnt.jhzVmR idcv-ceph2 command_check_call: Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.jhzVmR idcv-ceph2 get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid idcv-ceph2 command_check_call: Running command: /sbin/sgdisk --typecode=1:4fbd7e29-9d25-41b8-afd0-062c0ceff05d -- /dev/sdb idcv-ceph2 Warning: The kernel is still using the old partition table. idcv-ceph2 The new table will be used at the next reboot. idcv-ceph2 The operation has completed successfully. 
idcv-ceph2 update_partition: Calling partprobe on prepared device /dev/sdb idcv-ceph2 command_check_call: Running command: /usr/bin/udevadm settle --timeout=600 idcv-ceph2 command: Running command: /usr/bin/flock -s /dev/sdb /sbin/partprobe /dev/sdb idcv-ceph2 command_check_call: Running command: /usr/bin/udevadm settle --timeout=600 idcv-ceph2 command_check_call: Running command: /usr/bin/udevadm trigger --action=add --sysname-match sdb1 idcv-ceph2 checking OSD status... idcv-ceph2 find the location of an executable idcv-ceph2 Running command: sudo /bin/ceph --cluster=ceph osd stat --format=json ceph_deploy.osd Host idcv-ceph2 is now ready for osd use. idcv-ceph3 connection detected need for sudo idcv-ceph3 connected to host: idcv-ceph3 idcv-ceph3 detect platform information from remote host idcv-ceph3 detect machine type idcv-ceph3 find the location of an executable ceph_deploy.osd Distro info: CentOS Linux 7.5.1804 Core ceph_deploy.osd Deploying osd to idcv-ceph3 idcv-ceph3 write cluster configuration to /etc/ceph/{cluster}.conf ceph_deploy.osd Preparing host idcv-ceph3 disk /dev/sdb journal None activate False idcv-ceph3 find the location of an executable idcv-ceph3 Running command: sudo /usr/sbin/ceph-disk -v prepare --zap-disk --cluster ceph --fs-type xfs -- /dev/sdb idcv-ceph3 command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid idcv-ceph3 command: Running command: /usr/bin/ceph-osd --check-allows-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph idcv-ceph3 command: Running command: /usr/bin/ceph-osd --check-wants-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph idcv-ceph3 command: Running command: /usr/bin/ceph-osd --check-needs-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph idcv-ceph3 get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid idcv-ceph3 set_type: Will colocate journal with data on /dev/sdb idcv-ceph3 command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size idcv-ceph3 get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid idcv-ceph3 get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid idcv-ceph3 get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid idcv-ceph3 command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs idcv-ceph3 command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mkfs_options_xfs idcv-ceph3 command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs idcv-ceph3 command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mount_options_xfs idcv-ceph3 get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid idcv-ceph3 zap: Zapping partition table on /dev/sdb idcv-ceph3 command_check_call: Running command: /sbin/sgdisk --zap-all -- /dev/sdb idcv-ceph3 Creating new GPT entries. idcv-ceph3 GPT data structures destroyed! You may now partition the disk using fdisk or idcv-ceph3 other utilities. idcv-ceph3 command_check_call: Running command: /sbin/sgdisk --clear --mbrtogpt -- /dev/sdb idcv-ceph3 Creating new GPT entries. idcv-ceph3 The operation has completed successfully. 
idcv-ceph3 update_partition: Calling partprobe on zapped device /dev/sdb idcv-ceph3 command_check_call: Running command: /usr/bin/udevadm settle --timeout=600 idcv-ceph3 command: Running command: /usr/bin/flock -s /dev/sdb /sbin/partprobe /dev/sdb idcv-ceph3 command_check_call: Running command: /usr/bin/udevadm settle --timeout=600 idcv-ceph3 get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid idcv-ceph3 get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid idcv-ceph3 ptype_tobe_for_name: name = journal idcv-ceph3 get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid idcv-ceph3 create_partition: Creating journal partition num 2 size 5120 on /dev/sdb idcv-ceph3 command_check_call: Running command: /sbin/sgdisk --new=2:0:+5120M --change-name=2:ceph journal --partition-guid=2:52677a68-3cf4-4d9a-b2d4-8c823e1cb901 --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 --mbrtogpt -- /dev/sdb idcv-ceph3 The operation has completed successfully. idcv-ceph3 update_partition: Calling partprobe on created device /dev/sdb idcv-ceph3 command_check_call: Running command: /usr/bin/udevadm settle --timeout=600 idcv-ceph3 command: Running command: /usr/bin/flock -s /dev/sdb /sbin/partprobe /dev/sdb idcv-ceph3 command_check_call: Running command: /usr/bin/udevadm settle --timeout=600 idcv-ceph3 get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid idcv-ceph3 get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid idcv-ceph3 get_dm_uuid: get_dm_uuid /dev/sdb2 uuid path is /sys/dev/block/8:18/dm/uuid idcv-ceph3 prepare_device: Journal is GPT partition /dev/disk/by-partuuid/52677a68-3cf4-4d9a-b2d4-8c823e1cb901 idcv-ceph3 prepare_device: Journal is GPT partition /dev/disk/by-partuuid/52677a68-3cf4-4d9a-b2d4-8c823e1cb901 idcv-ceph3 get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid idcv-ceph3 set_data_partition: Creating osd partition on /dev/sdb idcv-ceph3 get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid idcv-ceph3 ptype_tobe_for_name: name = data idcv-ceph3 get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid idcv-ceph3 create_partition: Creating data partition num 1 size 0 on /dev/sdb idcv-ceph3 command_check_call: Running command: /sbin/sgdisk --largest-new=1 --change-name=1:ceph data --partition-guid=1:a85b0288-85ce-4887-8249-497ba880fe10 --typecode=1:89c57f98-2fe5-4dc0-89c1-f3ad0ceff2be --mbrtogpt -- /dev/sdb idcv-ceph3 The operation has completed successfully. 
idcv-ceph3 update_partition: Calling partprobe on created device /dev/sdb idcv-ceph3 command_check_call: Running command: /usr/bin/udevadm settle --timeout=600 idcv-ceph3 command: Running command: /usr/bin/flock -s /dev/sdb /sbin/partprobe /dev/sdb idcv-ceph3 command_check_call: Running command: /usr/bin/udevadm settle --timeout=600 idcv-ceph3 get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid idcv-ceph3 get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid idcv-ceph3 get_dm_uuid: get_dm_uuid /dev/sdb1 uuid path is /sys/dev/block/8:17/dm/uuid idcv-ceph3 populate_data_path_device: Creating xfs fs on /dev/sdb1 idcv-ceph3 command_check_call: Running command: /sbin/mkfs -t xfs -f -i size=2048 -- /dev/sdb1 idcv-ceph3 meta-data=/dev/sdb1 isize=2048 agcount=4, agsize=6225855 blks idcv-ceph3 = sectsz=512 attr=2, projid32bit=1 idcv-ceph3 = crc=1 finobt=0, sparse=0 idcv-ceph3 data = bsize=4096 blocks=24903419, imaxpct=25 idcv-ceph3 = sunit=0 swidth=0 blks idcv-ceph3 naming =version 2 bsize=4096 ascii-ci=0 ftype=1 idcv-ceph3 log =internal log bsize=4096 blocks=12159, version=2 idcv-ceph3 = sectsz=512 sunit=0 blks, lazy-count=1 idcv-ceph3 realtime =none extsz=4096 blocks=0, rtextents=0 idcv-ceph3 mount: Mounting /dev/sdb1 on /var/lib/ceph/tmp/mnt.gjITlj with options noatime,inode64 idcv-ceph3 command_check_call: Running command: /usr/bin/mount -t xfs -o noatime,inode64 -- /dev/sdb1 /var/lib/ceph/tmp/mnt.gjITlj idcv-ceph3 command: Running command: /sbin/restorecon /var/lib/ceph/tmp/mnt.gjITlj idcv-ceph3 populate_data_path: Preparing osd data dir /var/lib/ceph/tmp/mnt.gjITlj idcv-ceph3 command: Running command: /sbin/restorecon -R /var/lib/ceph/tmp/mnt.gjITlj/ceph_fsid.2372.tmp idcv-ceph3 command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.gjITlj/ceph_fsid.2372.tmp idcv-ceph3 command: Running command: /sbin/restorecon -R /var/lib/ceph/tmp/mnt.gjITlj/fsid.2372.tmp idcv-ceph3 command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.gjITlj/fsid.2372.tmp idcv-ceph3 command: Running command: /sbin/restorecon -R /var/lib/ceph/tmp/mnt.gjITlj/magic.2372.tmp idcv-ceph3 command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.gjITlj/magic.2372.tmp idcv-ceph3 command: Running command: /sbin/restorecon -R /var/lib/ceph/tmp/mnt.gjITlj/journal_uuid.2372.tmp idcv-ceph3 command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.gjITlj/journal_uuid.2372.tmp idcv-ceph3 adjust_symlink: Creating symlink /var/lib/ceph/tmp/mnt.gjITlj/journal -> /dev/disk/by-partuuid/52677a68-3cf4-4d9a-b2d4-8c823e1cb901 idcv-ceph3 command: Running command: /sbin/restorecon -R /var/lib/ceph/tmp/mnt.gjITlj idcv-ceph3 command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.gjITlj idcv-ceph3 unmount: Unmounting /var/lib/ceph/tmp/mnt.gjITlj idcv-ceph3 command_check_call: Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.gjITlj idcv-ceph3 get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid idcv-ceph3 command_check_call: Running command: /sbin/sgdisk --typecode=1:4fbd7e29-9d25-41b8-afd0-062c0ceff05d -- /dev/sdb idcv-ceph3 Warning: The kernel is still using the old partition table. idcv-ceph3 The new table will be used at the next reboot. idcv-ceph3 The operation has completed successfully. 
idcv-ceph3 update_partition: Calling partprobe on prepared device /dev/sdb idcv-ceph3 command_check_call: Running command: /usr/bin/udevadm settle --timeout=600 idcv-ceph3 command: Running command: /usr/bin/flock -s /dev/sdb /sbin/partprobe /dev/sdb idcv-ceph3 command_check_call: Running command: /usr/bin/udevadm settle --timeout=600 idcv-ceph3 command_check_call: Running command: /usr/bin/udevadm trigger --action=add --sysname-match sdb1 idcv-ceph3 checking OSD status... idcv-ceph3 find the location of an executable idcv-ceph3 Running command: sudo /bin/ceph --cluster=ceph osd stat --format=json ceph_deploy.osd Host idcv-ceph3 is now ready for osd use. root@idcv-ceph0 cluster# ceph-deploy --overwrite-conf osd activate idcv-ceph0:/dev/sdb1 idcv-ceph1:/dev/sdb1 idcv-ceph2:/dev/sdb1 idcv-ceph3:/dev/sdb1 ceph_deploy.conf found configuration file at: /root/.cephdeploy.conf ceph_deploy.cli Invoked (1.5.39): /usr/bin/ceph-deploy --overwrite-conf osd activate idcv-ceph0:/dev/sdb1 idcv-ceph1:/dev/sdb1 idcv-ceph2:/dev/sdb1 idcv-ceph3:/dev/sdb1 ceph_deploy.cli ceph-deploy options: ceph_deploy.cli username : None ceph_deploy.cli verbose : False ceph_deploy.cli overwrite_conf : True ceph_deploy.cli subcommand : activate ceph_deploy.cli quiet : False ceph_deploy.cli cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7fc94a47f5a8> ceph_deploy.cli cluster : ceph ceph_deploy.cli func : <function osd at 0x7fc94a4d2f50> ceph_deploy.cli ceph_conf : None ceph_deploy.cli default_release : False ceph_deploy.cli disk : ('idcv-ceph0', '/dev/sdb1', None), ('idcv-ceph1', '/dev/sdb1', None), ('idcv-ceph2', '/dev/sdb1', None), ('idcv-ceph3', '/dev/sdb1', None)DEBUG Activating cluster ceph disks idcv-ceph0:/dev/sdb1: idcv-ceph1:/dev/sdb1: idcv-ceph2:/dev/sdb1: idcv-ceph3:/dev/sdb1: idcv-ceph0 connected to host: idcv-ceph0 idcv-ceph0 detect platform information from remote host idcv-ceph0 detect machine type idcv-ceph0 find the location of an executable ceph_deploy.osd Distro info: CentOS Linux 7.5.1804 Core ceph_deploy.osd activating host idcv-ceph0 disk /dev/sdb1 ceph_deploy.osd will use init type: systemd idcv-ceph0 find the location of an executable idcv-ceph0 Running command: /usr/sbin/ceph-disk -v activate --mark-init systemd --mount /dev/sdb1 idcv-ceph0 main_activate: path = /dev/sdb1 idcv-ceph0 get_dm_uuid: get_dm_uuid /dev/sdb1 uuid path is /sys/dev/block/8:17/dm/uuid idcv-ceph0 command: Running command: /usr/sbin/blkid -o udev -p /dev/sdb1 idcv-ceph0 command: Running command: /sbin/blkid -p -s TYPE -o value -- /dev/sdb1 idcv-ceph0 command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs idcv-ceph0 command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. 
--lookup osd_fs_mount_options_xfs idcv-ceph0 mount: Mounting /dev/sdb1 on /var/lib/ceph/tmp/mnt.X6wbv9 with options noatime,inode64 idcv-ceph0 command_check_call: Running command: /usr/bin/mount -t xfs -o noatime,inode64 -- /dev/sdb1 /var/lib/ceph/tmp/mnt.X6wbv9 idcv-ceph0 command: Running command: /usr/sbin/restorecon /var/lib/ceph/tmp/mnt.X6wbv9 idcv-ceph0 activate: Cluster uuid is 812d3acb-eaa8-4355-9a74-64f2cd5209b3 idcv-ceph0 command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid idcv-ceph0 activate: Cluster name is ceph idcv-ceph0 activate: OSD uuid is 3b210c8e-b2ac-4266-9e59-623c031ebb89 idcv-ceph0 activate: OSD id is 0 idcv-ceph0 activate: Marking with init system systemd idcv-ceph0 command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.X6wbv9/systemd idcv-ceph0 command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.X6wbv9/systemd idcv-ceph0 activate: ceph osd.0 data dir is ready at /var/lib/ceph/tmp/mnt.X6wbv9 idcv-ceph0 mount_activate: ceph osd.0 already mounted in position; unmounting ours. idcv-ceph0 unmount: Unmounting /var/lib/ceph/tmp/mnt.X6wbv9 idcv-ceph0 command_check_call: Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.X6wbv9 idcv-ceph0 start_daemon: Starting ceph osd.0... idcv-ceph0 command_check_call: Running command: /usr/bin/systemctl disable ceph-osd@0 idcv-ceph0 Removed symlink /etc/systemd/system/ceph-osd.target.wants/ceph-osd@0.service. idcv-ceph0 command_check_call: Running command: /usr/bin/systemctl disable ceph-osd@0 --runtime idcv-ceph0 command_check_call: Running command: /usr/bin/systemctl enable ceph-osd@0 idcv-ceph0 Created symlink from /etc/systemd/system/ceph-osd.target.wants/ceph-osd@0.service to /usr/lib/systemd/system/ceph-osd@.service. idcv-ceph0 command_check_call: Running command: /usr/bin/systemctl start ceph-osd@0 idcv-ceph0 checking OSD status... idcv-ceph0 find the location of an executable idcv-ceph0 Running command: /bin/ceph --cluster=ceph osd stat --format=json idcv-ceph0 Running command: systemctl enable ceph.target idcv-ceph1 connection detected need for sudo idcv-ceph1 connected to host: idcv-ceph1 idcv-ceph1 detect platform information from remote host idcv-ceph1 detect machine type idcv-ceph1 find the location of an executable ceph_deploy.osd Distro info: CentOS Linux 7.5.1804 Core ceph_deploy.osd activating host idcv-ceph1 disk /dev/sdb1 ceph_deploy.osd will use init type: systemd idcv-ceph1 find the location of an executable idcv-ceph1 Running command: sudo /usr/sbin/ceph-disk -v activate --mark-init systemd --mount /dev/sdb1 idcv-ceph1 main_activate: path = /dev/sdb1 idcv-ceph1 get_dm_uuid: get_dm_uuid /dev/sdb1 uuid path is /sys/dev/block/8:17/dm/uuid idcv-ceph1 command: Running command: /sbin/blkid -o udev -p /dev/sdb1 idcv-ceph1 command: Running command: /sbin/blkid -p -s TYPE -o value -- /dev/sdb1 idcv-ceph1 command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs idcv-ceph1 command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. 
--lookup osd_fs_mount_options_xfs idcv-ceph1 mount: Mounting /dev/sdb1 on /var/lib/ceph/tmp/mnt.zUV3_1 with options noatime,inode64 idcv-ceph1 command_check_call: Running command: /usr/bin/mount -t xfs -o noatime,inode64 -- /dev/sdb1 /var/lib/ceph/tmp/mnt.zUV3_1 idcv-ceph1 command: Running command: /sbin/restorecon /var/lib/ceph/tmp/mnt.zUV3_1 idcv-ceph1 activate: Cluster uuid is 812d3acb-eaa8-4355-9a74-64f2cd5209b3 idcv-ceph1 command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid idcv-ceph1 activate: Cluster name is ceph idcv-ceph1 activate: OSD uuid is 2809f370-e6ad-4d29-bf6b-57fe1f2004c6 idcv-ceph1 allocate_osd_id: Allocating OSD id... idcv-ceph1 command: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring osd create --concise 2809f370-e6ad-4d29-bf6b-57fe1f2004c6 idcv-ceph1 mount_activate: Failed to activate idcv-ceph1 unmount: Unmounting /var/lib/ceph/tmp/mnt.zUV3_1 idcv-ceph1 command_check_call: Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.zUV3_1 idcv-ceph1 Traceback (most recent call last): idcv-ceph1 File "/usr/sbin/ceph-disk", line 9, in <module> idcv-ceph1 load_entry_point('ceph-disk==1.0.0', 'console_scripts', 'ceph-disk')() idcv-ceph1 File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 5371, in run idcv-ceph1 main(sys.argv1:) idcv-ceph1 File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 5322, in main idcv-ceph1 args.func(args) idcv-ceph1 File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 3445, in main_activate idcv-ceph1 reactivate=args.reactivate, idcv-ceph1 File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 3202, in mount_activate idcv-ceph1 (osd_id, cluster) = activate(path, activate_key_template, init) idcv-ceph1 File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 3365, in activate idcv-ceph1 keyring=keyring, idcv-ceph1 File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 1013, in allocate_osd_id idcv-ceph1 raise Error('ceph osd create failed', e, e.output) idcv-ceph1 ceph_disk.main.Error: Error: ceph osd create failed: Command '/usr/bin/ceph' returned non-zero exit status 1: 2018-07-03 11:47:35.463545 7f8310450700 0 librados: client.bootstrap-osd authentication error (1) Operation not permitted idcv-ceph1 Error connecting to cluster: PermissionError idcv-ceph1 idcv-ceph1 RuntimeError: command returned non-zero exit status: 1 ceph_deploy RuntimeError: Failed to execute command: /usr/sbin/ceph-disk -v activate --mark-init systemd --mount /dev/sdb1

2、Checking the cluster shows that the OSD on idcv-ceph1 was not added

root@idcv-ceph0 cluster# lsblk
NAME            MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
fd0               2:0    1    4K  0 disk
sda               8:0    0  100G  0 disk
├─sda1            8:1    0  500M  0 part /boot
└─sda2            8:2    0 99.5G  0 part
  └─centos-root 253:0    0 99.5G  0 lvm  /
sdb               8:16   0  100G  0 disk
├─sdb1            8:17   0   95G  0 part /var/lib/ceph/osd/ceph-0
└─sdb2            8:18   0    5G  0 part
sr0              11:0    1 1024M  0 rom
root@idcv-ceph0 cluster# ceph -s
    cluster 812d3acb-eaa8-4355-9a74-64f2cd5209b3
     health HEALTH_OK
     monmap e2: 3 mons at {idcv-ceph0=172.20.1.138:6789/0,idcv-ceph2=172.20.1.140:6789/0,idcv-ceph3=172.20.1.141:6789/0}
            election epoch 8, quorum 0,1,2 idcv-ceph0,idcv-ceph2,idcv-ceph3
     osdmap e14: 3 osds: 3 up, 3 in
            flags sortbitwise,require_jewel_osds
      pgmap v27: 64 pgs, 1 pools, 0 bytes data, 0 objects
            100 MB used, 284 GB / 284 GB avail
                  64 active+clean
root@idcv-ceph0 cluster#
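The activate failure above ends with "client.bootstrap-osd authentication error (1) Operation not permitted", which usually means the bootstrap-osd keyring on idcv-ceph1 does not match the key registered with the monitors, so ceph-disk cannot allocate an OSD id. The article fixes this below by reinstalling the node; as an alternative, a minimal troubleshooting sketch (not part of the original steps, assuming the default keyring path) would be:

# On the deploy/mon node: show the bootstrap-osd key the monitors know about
ceph auth get client.bootstrap-osd
# On idcv-ceph1: show the local key that ceph-disk activate authenticates with
cat /var/lib/ceph/bootstrap-osd/ceph.keyring
# If the two keys differ, export the cluster key to idcv-ceph1 and retry the activate
ceph auth get client.bootstrap-osd -o /var/lib/ceph/bootstrap-osd/ceph.keyring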

3、Grant idcv-ceph1 the OSD role with ceph-deploy install

root@idcv-ceph0 cluster# ceph-deploy install --no-adjust-repos --osd idcv-ceph1 ceph_deploy.conf found configuration file at: /root/.cephdeploy.conf ceph_deploy.cli Invoked (1.5.39): /usr/bin/ceph-deploy install --no-adjust-repos --osd idcv-ceph1 ceph_deploy.cli ceph-deploy options: ceph_deploy.cli verbose : False ceph_deploy.cli testing : None ceph_deploy.cli cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f19c0ebd440> ceph_deploy.cli cluster : ceph ceph_deploy.cli dev_commit : None ceph_deploy.cli install_mds : False ceph_deploy.cli stable : None ceph_deploy.cli default_release : False ceph_deploy.cli username : None ceph_deploy.cli adjust_repos : False ceph_deploy.cli func : <function install at 0x7f19c1f96d70> ceph_deploy.cli install_mgr : False ceph_deploy.cli install_all : False ceph_deploy.cli repo : False ceph_deploy.cli host : 'idcv-ceph1'INFO install_rgw : False ceph_deploy.cli install_tests : False ceph_deploy.cli repo_url : None ceph_deploy.cli ceph_conf : None ceph_deploy.cli install_osd : True ceph_deploy.cli version_kind : stable ceph_deploy.cli install_common : False ceph_deploy.cli overwrite_conf : False ceph_deploy.cli quiet : False ceph_deploy.cli dev : master ceph_deploy.cli nogpgcheck : False ceph_deploy.cli local_mirror : None ceph_deploy.cli release : None ceph_deploy.cli install_mon : False ceph_deploy.cli gpg_url : None ceph_deploy.install Installing stable version jewel on cluster ceph hosts idcv-ceph1 ceph_deploy.install Detecting platform for host idcv-ceph1 ... idcv-ceph1 connection detected need for sudo idcv-ceph1 connected to host: idcv-ceph1 idcv-ceph1 detect platform information from remote host idcv-ceph1 detect machine type ceph_deploy.install Distro info: CentOS Linux 7.5.1804 Core idcv-ceph1 installing Ceph on idcv-ceph1 idcv-ceph1 Running command: sudo yum clean all idcv-ceph1 Loaded plugins: fastestmirror, priorities idcv-ceph1 Cleaning repos: Ceph Ceph-noarch base ceph-source epel extras updates idcv-ceph1 Cleaning up everything idcv-ceph1 Maybe you want: rm -rf /var/cache/yum, to also free up space taken by orphaned data from disabled or removed repos idcv-ceph1 Cleaning up list of fastest mirrors idcv-ceph1 Running command: sudo yum -y install ceph idcv-ceph1 Loaded plugins: fastestmirror, priorities idcv-ceph1 Determining fastest mirrors idcv-ceph1 base: mirrors.tuna.tsinghua.edu.cn idcv-ceph1 epel: mirrors.huaweicloud.com idcv-ceph1 extras: mirror.bit.edu.cn idcv-ceph1 updates: mirrors.huaweicloud.com idcv-ceph1 12 packages excluded due to repository priority protections idcv-ceph1 Package 1:ceph-10.2.10-0.el7.x86_64 already installed and latest version idcv-ceph1 Nothing to do idcv-ceph1 Running command: sudo ceph --version idcv-ceph1 ceph version 10.2.10 (5dc1e4c05cb68dbf62ae6fce3f0700e4654fdbbe)

4、The idcv-ceph1 node still cannot be activated as an OSD, so re-initialize it and add it again

ceph-deploy purge <node> and ceph-deploy purgedata <node> remove the installed packages and any leftover data; ceph-deploy install --no-adjust-repos --osd <node> then reinstalls just the packages and grants the OSD role, after which the OSD can be prepared and activated again. The concrete steps are:
ceph-deploy purge idcv-ceph1
ceph-deploy purgedata idcv-ceph1
ceph-deploy install --no-adjust-repos --osd idcv-ceph1
ceph-deploy --overwrite-conf osd prepare idcv-ceph1:/dev/sdb
ceph-deploy --overwrite-conf osd activate idcv-ceph1:/dev/sdb1
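After the OSD on idcv-ceph1 is activated again, it is worth confirming that the daemon actually started on that node before looking at the cluster as a whole. A small verification sketch (not from the original article; the id assigned to the new OSD is assumed to be 3 here, since it is the fourth OSD in this cluster):

# On idcv-ceph1: the systemd unit for the new OSD should be active (osd id assumed to be 3)
systemctl status ceph-osd@3
# and /dev/sdb1 should be mounted as its data directory
lsblk /dev/sdb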

5、After the OSD is deployed successfully, check the cluster status

root@idcv-ceph0 cluster# ceph -s
    cluster 812d3acb-eaa8-4355-9a74-64f2cd5209b3
     health HEALTH_OK
     monmap e2: 3 mons at {idcv-ceph0=172.20.1.138:6789/0,idcv-ceph2=172.20.1.140:6789/0,idcv-ceph3=172.20.1.141:6789/0}
            election epoch 8, quorum 0,1,2 idcv-ceph0,idcv-ceph2,idcv-ceph3
     osdmap e27: 4 osds: 4 up, 4 in
            flags sortbitwise,require_jewel_osds
      pgmap v64: 104 pgs, 6 pools, 1588 bytes data, 171 objects
            138 MB used, 379 GB / 379 GB avail
                 104 active+clean
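Besides ceph -s, the following commands (a supplementary check, not shown in the original text) confirm where each OSD lives in the CRUSH map and how full it is:

# CRUSH view: one OSD per host, all up and weighted in
ceph osd tree
# Per-OSD utilization (supported in Jewel)
ceph osd df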

六、Deploy the RGW service

1、Install idcv-ceph1 as the object gateway

root@idcv-ceph0 cluster# ceph-deploy install --no-adjust-repos --rgw idcv-ceph1 ceph_deploy.conf found configuration file at: /root/.cephdeploy.conf ceph_deploy.cli Invoked (1.5.39): /usr/bin/ceph-deploy install --no-adjust-repos --rgw idcv-ceph1 ceph_deploy.cli ceph-deploy options: ceph_deploy.cli verbose : False ceph_deploy.cli testing : None ceph_deploy.cli cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7fba6af12440> ceph_deploy.cli cluster : ceph ceph_deploy.cli dev_commit : None ceph_deploy.cli install_mds : False ceph_deploy.cli stable : None ceph_deploy.cli default_release : False ceph_deploy.cli username : None ceph_deploy.cli adjust_repos : False ceph_deploy.cli func : <function install at 0x7fba6bfe9d70> ceph_deploy.cli install_mgr : False ceph_deploy.cli install_all : False ceph_deploy.cli repo : False ceph_deploy.cli host : 'idcv-ceph1'INFO install_rgw : True ceph_deploy.cli install_tests : False ceph_deploy.cli repo_url : None ceph_deploy.cli ceph_conf : None ceph_deploy.cli install_osd : False ceph_deploy.cli version_kind : stable ceph_deploy.cli install_common : False ceph_deploy.cli overwrite_conf : False ceph_deploy.cli quiet : False ceph_deploy.cli dev : master ceph_deploy.cli nogpgcheck : False ceph_deploy.cli local_mirror : None ceph_deploy.cli release : None ceph_deploy.cli install_mon : False ceph_deploy.cli gpg_url : None ceph_deploy.install Installing stable version jewel on cluster ceph hosts idcv-ceph1 ceph_deploy.install Detecting platform for host idcv-ceph1 ... idcv-ceph1 connection detected need for sudo idcv-ceph1 connected to host: idcv-ceph1 idcv-ceph1 detect platform information from remote host idcv-ceph1 detect machine type ceph_deploy.install Distro info: CentOS Linux 7.5.1804 Core idcv-ceph1 installing Ceph on idcv-ceph1 idcv-ceph1 Running command: sudo yum clean all idcv-ceph1 Loaded plugins: fastestmirror, priorities idcv-ceph1 Cleaning repos: Ceph Ceph-noarch base ceph-source epel extras updates idcv-ceph1 Cleaning up everything idcv-ceph1 Maybe you want: rm -rf /var/cache/yum, to also free up space taken by orphaned data from disabled or removed repos idcv-ceph1 Cleaning up list of fastest mirrors idcv-ceph1 Running command: sudo yum -y install ceph-radosgw idcv-ceph1 Loaded plugins: fastestmirror, priorities idcv-ceph1 Determining fastest mirrors idcv-ceph1 base: mirrors.aliyun.com idcv-ceph1 epel: mirrors.aliyun.com idcv-ceph1 extras: mirrors.aliyun.com idcv-ceph1 updates: mirror.bit.edu.cn idcv-ceph1 12 packages excluded due to repository priority protections idcv-ceph1 Resolving Dependencies idcv-ceph1 --> Running transaction check idcv-ceph1 ---> Package ceph-radosgw.x86_64 1:10.2.10-0.el7 will be installed idcv-ceph1 --> Finished Dependency Resolution idcv-ceph1 idcv-ceph1 Dependencies Resolved idcv-ceph1 idcv-ceph1 ================================================================================ idcv-ceph1 Package Arch Version Repository Size idcv-ceph1 ================================================================================ idcv-ceph1 Installing: idcv-ceph1 ceph-radosgw x86_64 1:10.2.10-0.el7 Ceph 266 k idcv-ceph1 idcv-ceph1 Transaction Summary idcv-ceph1 ================================================================================ idcv-ceph1 Install 1 Package idcv-ceph1 idcv-ceph1 Total download size: 266 k idcv-ceph1 Installed size: 795 k idcv-ceph1 Downloading packages: idcv-ceph1 Running transaction check idcv-ceph1 Running transaction test idcv-ceph1 Transaction test succeeded idcv-ceph1 Running transaction 
idcv-ceph1 Installing : 1:ceph-radosgw-10.2.10-0.el7.x86_64 1/1 idcv-ceph1 Verifying : 1:ceph-radosgw-10.2.10-0.el7.x86_64 1/1 idcv-ceph1 idcv-ceph1 Installed: idcv-ceph1 ceph-radosgw.x86_64 1:10.2.10-0.el7 idcv-ceph1 idcv-ceph1 Complete! idcv-ceph1 Running command: sudo ceph --version idcv-ceph1 ceph version 10.2.10 (5dc1e4c05cb68dbf62ae6fce3f0700e4654fdbbe)

2、Push the admin configuration and keyring to idcv-ceph1

root@idcv-ceph0 cluster# ceph-deploy admin idcv-ceph1 ceph_deploy.conf found configuration file at: /root/.cephdeploy.conf ceph_deploy.cli Invoked (1.5.39): /usr/bin/ceph-deploy admin idcv-ceph1 ceph_deploy.cli ceph-deploy options: ceph_deploy.cli username : None ceph_deploy.cli verbose : False ceph_deploy.cli overwrite_conf : False ceph_deploy.cli quiet : False ceph_deploy.cli cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f5f91222fc8> ceph_deploy.cli cluster : ceph ceph_deploy.cli client : 'idcv-ceph1'INFO func : <function admin at 0x7f5f9234f9b0> ceph_deploy.cli ceph_conf : None ceph_deploy.cli default_release : False ceph_deploy.admin Pushing admin keys and conf to idcv-ceph1 idcv-ceph1 connection detected need for sudo idcv-ceph1 connected to host: idcv-ceph1 idcv-ceph1 detect platform information from remote host idcv-ceph1 detect machine type idcv-ceph1 write cluster configuration to /etc/ceph/{cluster}.conf
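A quick way to confirm the push worked (a sketch, not part of the original steps) is to run an admin command directly on idcv-ceph1 with the keyring that was just copied:

# On idcv-ceph1
ls /etc/ceph/        # should now contain ceph.conf and ceph.client.admin.keyring
sudo ceph -s         # should report the same cluster state as on the deploy node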

3、Create the gateway instance on idcv-ceph1

root@idcv-ceph0 cluster# ceph-deploy rgw create idcv-ceph1 ceph_deploy.conf found configuration file at: /root/.cephdeploy.conf ceph_deploy.cli Invoked (1.5.39): /usr/bin/ceph-deploy rgw create idcv-ceph1 ceph_deploy.cli ceph-deploy options: ceph_deploy.cli username : None ceph_deploy.cli verbose : False ceph_deploy.cli rgw : ('idcv-ceph1', 'rgw.idcv-ceph1')INFO overwrite_conf : False ceph_deploy.cli subcommand : create ceph_deploy.cli quiet : False ceph_deploy.cli cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f6c86f85128> ceph_deploy.cli cluster : ceph ceph_deploy.cli func : <function rgw at 0x7f6c8805a7d0> ceph_deploy.cli ceph_conf : None ceph_deploy.cli default_release : False ceph_deploy.rgw Deploying rgw, cluster ceph hosts idcv-ceph1:rgw.idcv-ceph1 idcv-ceph1 connection detected need for sudo idcv-ceph1 connected to host: idcv-ceph1 idcv-ceph1 detect platform information from remote host idcv-ceph1 detect machine type ceph_deploy.rgw Distro info: CentOS Linux 7.5.1804 Core ceph_deploy.rgw remote host will use systemd ceph_deploy.rgw deploying rgw bootstrap to idcv-ceph1 idcv-ceph1 write cluster configuration to /etc/ceph/{cluster}.conf idcv-ceph1 rgw keyring does not exist yet, creating one idcv-ceph1 create a keyring file idcv-ceph1 create path recursively if it doesn't exist idcv-ceph1 Running command: sudo ceph --cluster ceph --name client.bootstrap-rgw --keyring /var/lib/ceph/bootstrap-rgw/ceph.keyring auth get-or-create client.rgw.idcv-ceph1 osd allow rwx mon allow rw -o /var/lib/ceph/radosgw/ceph-rgw.idcv-ceph1/keyring idcv-ceph1 Running command: sudo systemctl enable ceph-radosgw@rgw.idcv-ceph1 idcv-ceph1 Created symlink from /etc/systemd/system/ceph-radosgw.target.wants/ceph-radosgw@rgw.idcv-ceph1.service to /usr/lib/systemd/system/ceph-radosgw@.service. idcv-ceph1 Running command: sudo systemctl start ceph-radosgw@rgw.idcv-ceph1 idcv-ceph1 Running command: sudo systemctl enable ceph.target ceph_deploy.rgw The Ceph Object Gateway (RGW) is now running on host idcv-ceph1 and default port 7480
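The gateway comes up on the civetweb default port 7480. If a different port is needed, a commonly used override (shown here only as an example; the original article keeps the default) is to add an rgw_frontends setting for this instance to /etc/ceph/ceph.conf on idcv-ceph1 and restart the service:

# /etc/ceph/ceph.conf on idcv-ceph1 (example: move the gateway to port 80)
[client.rgw.idcv-ceph1]
rgw_frontends = "civetweb port=80"

# then restart the gateway instance
sudo systemctl restart ceph-radosgw@rgw.idcv-ceph1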

4、Test the gateway service

root@idcv-ceph0 cluster# curl 172.20.1.139:7480
<?xml version="1.0" encoding="UTF-8"?><ListAllMyBucketsResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/"><Owner><ID>anonymous</ID><DisplayName></DisplayName></Owner><Buckets></Buckets></ListAllMyBucketsResult>
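The anonymous ListAllMyBuckets response above confirms the gateway is answering S3 requests. To exercise it with a real S3 client, a user with access keys has to exist first; a minimal sketch (the uid and display name below are arbitrary examples, not from the article):

# On a node with the admin keyring (e.g. idcv-ceph1)
sudo radosgw-admin user create --uid=testuser --display-name="Test User"
# the JSON output contains the access_key and secret_key for s3cmd or other S3 clients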

Summary

At this point all the required services have been deployed. If you are familiar with ceph.conf and set the parameters correctly, the deployment should go fairly smoothly. The next article will test the OSD block storage and RGW object storage functionality: https://cloud.tencent.com/developer/article/1363079
