I have set up a Ceph cluster (Quincy) that already has 2 nodes and 4 OSDs. Now I have added a third host, running Debian (Bullseye), to the cluster. The new host is detected correctly and runs a mon.
The problem is that no OSDs are listed on the new host, even though there should be 2 disks available. When I run the following command on one of my nodes:
$ sudo ceph orch device ls
I only see the devices of the other nodes; the new node is not listed at all.
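(Worth noting as an aside: cephadm caches the device inventory per host, so forcing a refresh and confirming that the new host is actually managed by the orchestrator are standard first checks. These are ordinary cephadm commands, listed here as suggestions rather than something from the original report:)
$ sudo ceph orch device ls --refresh
$ sudo ceph orch host ls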
lsblk, on the other hand, shows two available disks on the new host:
$ lsblk
NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda      8:0    1 476.9G  0 disk
sdb      8:16   1 476.9G  0 disk
├─sdb1   8:17   1    16G  0 part [SWAP]
├─sdb2   8:18   1     1G  0 part /boot
└─sdb3   8:19   1 459.9G  0 part /
sdc      8:32   1 476.9G  0 disk
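(Only sda and sdc are candidates here, since sdb holds the OS. A disk is only reported as available when it carries no partitions, filesystems, or LVM state; both disks already look blank, so this likely does not apply, but for reference cephadm can wipe leftover data with the zap command below, where host1 stands in for the host's name:)
$ sudo ceph orch device zap host1 /dev/sda --force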
I also tried the ceph-volume command on the new host, but it does not find any disks either:
$ sudo cephadm ceph-volume inventory
Inferring fsid e79............
Device Path Size Device nodes rotates available Model name
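(The plain output above is empty apart from the header. ceph-volume inventory also accepts --format json-pretty, whose per-device output includes a rejected_reasons field that can explain why a disk is skipped; this is a suggested check, not output captured from this host:)
$ sudo cephadm ceph-volume inventory --format json-pretty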
I have since removed the new host and reinstalled the operating system from scratch, but I still don't know what prevents Ceph from finding any disks.
Could it be that Ceph does not allow mixing nodes with SSD/SATA and SSD/NVMe?
The cephadm.log output during the ceph-volume inventory call does not seem to provide any further information:
2022-12-08 00:15:15,432 7fdca25ac740 DEBUG --------------------------------------------------------------------------------
cephadm ['ceph-volume', 'inventory']
2022-12-08 00:15:15,432 7fdca25ac740 DEBUG Using default config /etc/ceph/ceph.conf
2022-12-08 00:15:16,131 7fee4d4c8740 DEBUG --------------------------------------------------------------------------------
cephadm ['check-host']
2022-12-08 00:15:16,131 7fee4d4c8740 INFO docker (/usr/bin/docker) is present
2022-12-08 00:15:16,131 7fee4d4c8740 INFO systemctl is present
2022-12-08 00:15:16,131 7fee4d4c8740 INFO lvcreate is present
2022-12-08 00:15:16,176 7fee4d4c8740 INFO Unit ntp.service is enabled and running
2022-12-08 00:15:16,176 7fee4d4c8740 INFO Host looks OK
2022-12-08 00:15:16,444 7f370bfbf740 DEBUG --------------------------------------------------------------------------------
cephadm ['--image', 'quay.io/ceph/ceph@sha256:0560b16bec6e84345f29fb6693cd2430884e6efff16a95d5bdd0bb06d7661c45', 'ls']
2022-12-08 00:15:20,100 7fdca25ac740 INFO Inferring fsid 0f3cd66c-74e5-11ed-813b-901b0e95a162
2022-12-08 00:15:20,121 7fdca25ac740 DEBUG /usr/bin/docker: stdout quay.io/ceph/ceph@sha256:0560b16bec6e84345f29fb6693cd2430884e6efff16a95d5bdd0bb06d7661c45|cc65afd6173a|v17|2022-10-18 01:41:41 +0200 CEST
2022-12-08 00:15:22,253 7f6f2e30a740 DEBUG --------------------------------------------------------------------------------
cephadm ['gather-facts']
2022-12-08 00:15:22,482 7f82221ce740 DEBUG --------------------------------------------------------------------------------
cephadm ['--image', 'quay.io/ceph/ceph@sha256:0560b16bec6e84345f29fb6693cd2430884e6efff16a95d5bdd0bb06d7661c45', 'list-networks']
2022-12-08 00:15:24,261 7fdca25ac740 DEBUG Using container info for daemon 'mon'
2022-12-08 00:15:24,261 7fdca25ac740 INFO Using ceph image with id 'cc65afd6173a' and tag 'v17' created on 2022-10-18 01:41:41 +0200 CEST
quay.io/ceph/ceph@sha256:0560b16bec6e84345f29fb6693cd2430884e6efff16a95d5bdd0bb06d7661c45
ceph-volume.log output:
[2022-12-07 23:24:00,496][ceph_volume.main][INFO ] Running command: ceph-volume inventory
[2022-12-07 23:24:00,499][ceph_volume.util.system][INFO ] Executable lvs found on the host, will use /sbin/lvs
[2022-12-07 23:24:00,499][ceph_volume.process][INFO ] Running command: nsenter --mount=/rootfs/proc/1/ns/mnt --ipc=/rootfs/proc/1/ns/ipc --net=/rootfs/proc/1/ns/net --uts=/rootfs/proc/1/ns/uts /sbin/lvs --noheadings --readonly --separator=";" -a --units=b --nosuffix -S -o lv_tags,lv_path,lv_name,vg_name,lv_uuid,lv_size
[2022-12-07 23:24:00,560][ceph_volume.process][INFO ] Running command: /usr/bin/lsblk -P -o NAME,KNAME,PKNAME,MAJ:MIN,FSTYPE,MOUNTPOINT,LABEL,UUID,RO,RM,MODEL,SIZE,STATE,OWNER,GROUP,MODE,ALIGNMENT,PHY-SEC,LOG-SEC,ROTA,SCHED,TYPE,DISC-ALN,DISC-GRAN,DISC-MAX,DISC-ZERO,PKNAME,PARTLABEL
[2022-12-07 23:24:00,569][ceph_volume.process][INFO ] stdout NAME="sda" KNAME="sda" PKNAME="" MAJ:MIN="8:0" FSTYPE="" MOUNTPOINT="" LABEL="" UUID="" RO="0" RM="1" MODEL="Crucial_CT500MX2" SIZE="465.8G" STATE="running" OWNER="root" GROUP="disk" MODE="brw-rw----" ALIGNMENT="0" PHY-SEC="4096" LOG-SEC="512" ROTA="0" SCHED="mq-deadline" TYPE="disk" DISC-ALN="0" DISC-GRAN="4K" DISC-MAX="2G" DISC-ZERO="0" PKNAME="" PARTLABEL=""
[2022-12-07 23:24:00,570][ceph_volume.process][INFO ] stdout NAME="sda1" KNAME="sda1" PKNAME="sda" MAJ:MIN="8:1" FSTYPE="swap" MOUNTPOINT="[SWAP]" LABEL="" UUID="51f95805-2d5f-4cba-a885-775a0c19ad53" RO="0" RM="1" MODEL="" SIZE="32G" STATE="" OWNER="root" GROUP="disk" MODE="brw-rw----" ALIGNMENT="0" PHY-SEC="4096" LOG-SEC="512" ROTA="0" SCHED="mq-deadline" TYPE="part" DISC-ALN="0" DISC-GRAN="4K" DISC-MAX="2G" DISC-ZERO="0" PKNAME="sda" PARTLABEL=""
[2022-12-07 23:24:00,570][ceph_volume.process][INFO ] stdout NAME="sda2" KNAME="sda2" PKNAME="sda" MAJ:MIN="8:2" FSTYPE="ext3" MOUNTPOINT="/rootfs/boot" LABEL="" UUID="676438b6-3214-4c05-bc6b-94bd7a88c26f" RO="0" RM="1" MODEL="" SIZE="1G" STATE="" OWNER="root" GROUP="disk" MODE="brw-rw----" ALIGNMENT="0" PHY-SEC="4096" LOG-SEC="512" ROTA="0" SCHED="mq-deadline" TYPE="part" DISC-ALN="0" DISC-GRAN="4K" DISC-MAX="2G" DISC-ZERO="0" PKNAME="sda" PARTLABEL=""
[2022-12-07 23:24:00,570][ceph_volume.process][INFO ] stdout NAME="sda3" KNAME="sda3" PKNAME="sda" MAJ:MIN="8:3" FSTYPE="ext4" MOUNTPOINT="/rootfs" LABEL="" UUID="a251c9b0-a91c-4768-bd42-5730e032ce58" RO="0" RM="1" MODEL="" SIZE="432.8G" STATE="" OWNER="root" GROUP="disk" MODE="brw-rw----" ALIGNMENT="0" PHY-SEC="4096" LOG-SEC="512" ROTA="0" SCHED="mq-deadline" TYPE="part" DISC-ALN="0" DISC-GRAN="4K" DISC-MAX="2G" DISC-ZERO="0" PKNAME="sda" PARTLABEL=""
[2022-12-07 23:24:00,570][ceph_volume.process][INFO ] stdout NAME="sdb" KNAME="sdb" PKNAME="" MAJ:MIN="8:16" FSTYPE="" MOUNTPOINT="" LABEL="" UUID="" RO="0" RM="1" MODEL="Crucial_CT500MX2" SIZE="465.8G" STATE="running" OWNER="root" GROUP="disk" MODE="brw-rw----" ALIGNMENT="0" PHY-SEC="4096" LOG-SEC="512" ROTA="0" SCHED="mq-deadline" TYPE="disk" DISC-ALN="0" DISC-GRAN="4K" DISC-MAX="2G" DISC-ZERO="0" PKNAME="" PARTLABEL=""
[2022-12-07 23:24:00,573][ceph_volume.util.system][INFO ] Executable pvs found on the host, will use /sbin/pvs
[2022-12-07 23:24:00,573][ceph_volume.process][INFO ] Running command: nsenter --mount=/rootfs/proc/1/ns/mnt --ipc=/rootfs/proc/1/ns/ipc --net=/rootfs/proc/1/ns/net --uts=/rootfs/proc/1/ns/uts /sbin/pvs --noheadings --readonly --units=b --nosuffix --separator=";" -o pv_name,vg_name,pv_count,lv_count,vg_attr,vg_extent_count,vg_free_count,vg_extent_size
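(One detail that stands out in this log: lsblk reports RM="1", i.e. the removable flag, for these disks. Whether that is related I cannot say, but the flags are easy to inspect directly on the host:)
$ lsblk -o NAME,TYPE,RM,ROTA,SIZE,MODEL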
Answered on 2023-03-29 19:51:23
After searching quite a while for why the SAS devices in the node could not be detected, I succeeded in bringing the HDDs up as OSDs by adding them manually with the following commands:
cephadm shell
ceph orch daemon add osd --method raw host1:/dev/sda
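(--method raw consumes the device directly instead of creating LVM volumes. As a follow-up check, the new OSDs should then appear in the standard listings; these are ordinary Ceph commands, added as a pointer rather than output captured from this cluster:)
$ sudo ceph osd tree
$ sudo ceph orch ps --daemon-type osd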
https://serverfault.com/questions/1117213