
No block devices detected using current configuration

Ask Ubuntu user
Asked on 2018-09-12 10:59:19
2 answers · 3.2K views · 0 followers · 1 vote

After deploying OpenStack with Juju, the ceph-osd units are blocked.

$: juju status 
ceph-osd/0                blocked   idle       1        10.20.253.197                      No block devices detected using current configuration
ceph-osd/1*               blocked   idle       2        10.20.253.199                      No block devices detected using current configuration
ceph-osd/2                blocked   idle       0        10.20.253.200                      No block devices detected using current configuration

I connected to ceph-osd/0 on the first machine:

$: juju ssh ceph-osd/0

Then I ran the following commands:

$: sudo fdisk -l
Disk /dev/vda: 500 GiB, 536870912000 bytes, 1048576000 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xaa276e23

Device     Boot Start        End    Sectors  Size Id Type
/dev/vda1        2048 1048575966 1048573919  500G 83 Linux


Disk /dev/vdb: 500 GiB, 536870912000 bytes, 1048576000 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: CAA6111D-5ECF-48EB-B4BF-9EC58E38AD64

Device     Start        End    Sectors  Size Type
/dev/vdb1   2048       4095       2048    1M BIOS boot
/dev/vdb2   4096 1048563711 1048559616  500G Linux filesystem

$: df -h
Filesystem      Size  Used Avail Use% Mounted on
udev            7.9G     0  7.9G   0% /dev
tmpfs           1.6G  856K  1.6G   1% /run
/dev/vda1       492G   12G  455G   3% /
tmpfs           7.9G     0  7.9G   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           7.9G     0  7.9G   0% /sys/fs/cgroup
tmpfs           100K     0  100K   0% /var/lib/lxd/shmounts
tmpfs           100K     0  100K   0% /var/lib/lxd/devlxd
tmpfs           1.6G     0  1.6G   0% /run/user/1000  

$: lsblk 
    NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
    vda    252:0    0  500G  0 disk 
    └─vda1 252:1    0  500G  0 part /
    vdb    252:16   0  500G  0 disk 
    ├─vdb1 252:17   0    1M  0 part 
    └─vdb2 252:18   0  500G  0 part 
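For reference, the device path the ceph-osd charm scans for can be inspected with juju config; as the answers below show, it is set to /dev/sdb here, while the actual data disk is /dev/vdb:

$: juju config ceph-osd osd-devices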
2 Answers

Ask Ubuntu user

Accepted answer

Posted on 2018-09-12 11:56:45

If your environment is already deployed, I solved this with the following two tasks:

Task 1

$: juju ssh ceph-osd/0 
$: sudo fdisk /dev/vdb

Welcome to fdisk (util-linux 2.31.1).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.


Command (m for help): d
Partition number (1,2, default 2): 1

Partition 1 has been deleted.

Command (m for help): d
Selected partition 2
Partition 2 has been deleted.

Command (m for help): w

The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.
select "d" to delete all partitions and then "w" to write the new change. 

Then:

$: sudo fdisk -l
Disk /dev/vda: 500 GiB, 536870912000 bytes, 1048576000 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x2fa2c9a8

Device     Boot Start        End    Sectors  Size Id Type
/dev/vda1        2048 1048575966 1048573919  500G 83 Linux


Disk /dev/vdb: 500 GiB, 536870912000 bytes, 1048576000 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 146912CF-FC27-4FDC-A202-24F05DC00E69

Then:

$: sudo fdisk /dev/vdb

Welcome to fdisk (util-linux 2.31.1).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.


Command (m for help): n
Partition number (1-128, default 1): 
First sector (34-1048575966, default 2048): 
Last sector, +sectors or +size{K,M,G,T,P} (2048-1048575966, default 1048575966): 

Created a new partition 1 of type 'Linux filesystem' and of size 500 GiB.

Command (m for help): p
Disk /dev/vdb: 500 GiB, 536870912000 bytes, 1048576000 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 146912CF-FC27-4FDC-A202-24F05DC00E69

Device     Start        End    Sectors  Size Type
/dev/vdb1   2048 1048575966 1048573919  500G Linux filesystem

Command (m for help): w
The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.

Then:

$: sudo fdisk -l
Disk /dev/vda: 500 GiB, 536870912000 bytes, 1048576000 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x2fa2c9a8

Device     Boot Start        End    Sectors  Size Id Type
/dev/vda1        2048 1048575966 1048573919  500G 83 Linux


Disk /dev/vdb: 500 GiB, 536870912000 bytes, 1048576000 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 146912CF-FC27-4FDC-A202-24F05DC00E69

Device     Start        End    Sectors  Size Type
/dev/vdb1   2048 1048575966 1048573919  500G Linux filesystem

I repeated this task on the other machines, ceph-osd/1 and ceph-osd/2.
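To avoid repeating the interactive fdisk session by hand, a minimal sketch (using the non-interactive wipe suggested above) that runs the cleanup on the remaining units:

$: for unit in ceph-osd/1 ceph-osd/2; do
       juju ssh "$unit" 'sudo wipefs --all /dev/vdb'   # wipe the data disk on each unit
   done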

Task 2

In Juju, I changed the osd-devices string from /dev/sdb to /dev/vdb for the 3 ceph-osd units, then saved and committed.
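The same change can be made from the command line, as the second answer below also shows:

$: juju config ceph-osd osd-devices='/dev/vdb'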

Now the units report active/idle:

$: juju status
Model      Controller             Cloud/Region  Version  SLA          Timestamp
openstack  maas-cloud-controller  maas-cloud    2.4.2    unsupported  13:54:02+02:00

App                    Version        Status  Scale  Charm                  Store       Rev  OS      Notes
ceph-mon               13.2.1+dfsg1   active      3  ceph-mon               jujucharms   26  ubuntu  
ceph-osd               13.2.1+dfsg1   active      3  ceph-osd               jujucharms  269  ubuntu  
ceph-radosgw           13.2.1+dfsg1   active      1  ceph-radosgw           jujucharms  259  ubuntu  
cinder                 13.0.0         active      1  cinder                 jujucharms  273  ubuntu  
cinder-ceph            13.0.0         active      1  cinder-ceph            jujucharms  234  ubuntu  
glance                 17.0.0         active      1  glance                 jujucharms  268  ubuntu  
keystone               14.0.0         active      1  keystone               jujucharms  283  ubuntu  
mysql                  5.7.20-29.24   active      1  percona-cluster        jujucharms  269  ubuntu  
neutron-api            13.0.0         active      1  neutron-api            jujucharms  262  ubuntu  
neutron-gateway        13.0.0         active      1  neutron-gateway        jujucharms  253  ubuntu  
neutron-openvswitch    13.0.0         active      3  neutron-openvswitch    jujucharms  251  ubuntu  
nova-cloud-controller  18.0.0         active      1  nova-cloud-controller  jujucharms  311  ubuntu  
nova-compute           18.0.0         active      3  nova-compute           jujucharms  287  ubuntu  
ntp                    4.2.8p10+dfsg  active      4  ntp                    jujucharms   27  ubuntu  
openstack-dashboard    14.0.0         active      1  openstack-dashboard    jujucharms  266  ubuntu  
rabbitmq-server        3.6.10         active      1  rabbitmq-server        jujucharms   78  ubuntu  

Unit                      Workload  Agent  Machine  Public address  Ports              Message
ceph-mon/0                active    idle   2/lxd/1  10.20.253.216                      Unit is ready and clustered
ceph-mon/1                active    idle   0/lxd/0  10.20.253.95                       Unit is ready and clustered
ceph-mon/2*               active    idle   1/lxd/0  10.20.253.83                       Unit is ready and clustered
ceph-osd/0                active    idle   1        10.20.253.197                      Unit is ready (1 OSD)
ceph-osd/1*               active    idle   2        10.20.253.199                      Unit is ready (1 OSD)
ceph-osd/2                active    idle   0        10.20.253.200                      Unit is ready (1 OSD)
ceph-radosgw/0*           active    idle   3/lxd/0  10.20.253.87    80/tcp             Unit is ready
cinder/0*                 active    idle   0/lxd/1  10.20.253.188   8776/tcp           Unit is ready
  cinder-ceph/0*          active    idle            10.20.253.188                      Unit is ready
glance/0*                 active    idle   2/lxd/0  10.20.253.217   9292/tcp           Unit is ready
keystone/0*               active    idle   1/lxd/1  10.20.253.134   5000/tcp           Unit is ready
mysql/0*                  active    idle   3/lxd/1  10.20.253.96    3306/tcp           Unit is ready
neutron-api/0*            active    idle   0/lxd/2  10.20.253.189   9696/tcp           Unit is ready
neutron-gateway/0*        active    idle   3        10.20.253.198                      Unit is ready
  ntp/3                   active    idle            10.20.253.198   123/udp            Ready
nova-cloud-controller/0*  active    idle   2/lxd/2  10.20.253.218   8774/tcp,8778/tcp  Unit is ready
nova-compute/0            active    idle   1        10.20.253.197                      Unit is ready
  neutron-openvswitch/0*  active    idle            10.20.253.197                      Unit is ready
  ntp/0*                  active    idle            10.20.253.197   123/udp            Ready
nova-compute/1*           active    idle   0        10.20.253.200                      Unit is ready
  neutron-openvswitch/1   active    idle            10.20.253.200                      Unit is ready
  ntp/1                   active    idle            10.20.253.200   123/udp            Ready
nova-compute/2            active    idle   2        10.20.253.199                      Unit is ready
  neutron-openvswitch/2   active    idle            10.20.253.199                      Unit is ready
  ntp/2                   active    idle            10.20.253.199   123/udp            Ready
openstack-dashboard/0*    active    idle   1/lxd/2  10.20.253.13    80/tcp,443/tcp     Unit is ready
rabbitmq-server/0*        active    idle   3/lxd/2  10.20.253.86    5672/tcp           Unit is ready

Machine  State    DNS            Inst id              Series  AZ         Message
0        started  10.20.253.200  fxbapd               bionic  Openstack  Deployed
0/lxd/0  started  10.20.253.95   juju-53dcb3-0-lxd-0  bionic  Openstack  Container started
0/lxd/1  started  10.20.253.188  juju-53dcb3-0-lxd-1  bionic  Openstack  Container started
0/lxd/2  started  10.20.253.189  juju-53dcb3-0-lxd-2  bionic  Openstack  Container started
1        started  10.20.253.197  mqdnxt               bionic  Openstack  Deployed
1/lxd/0  started  10.20.253.83   juju-53dcb3-1-lxd-0  bionic  Openstack  Container started
1/lxd/1  started  10.20.253.134  juju-53dcb3-1-lxd-1  bionic  Openstack  Container started
1/lxd/2  started  10.20.253.13   juju-53dcb3-1-lxd-2  bionic  Openstack  Container started
2        started  10.20.253.199  ysg683               bionic  Openstack  Deployed
2/lxd/0  started  10.20.253.217  juju-53dcb3-2-lxd-0  bionic  Openstack  Container started
2/lxd/1  started  10.20.253.216  juju-53dcb3-2-lxd-1  bionic  Openstack  Container started
2/lxd/2  started  10.20.253.218  juju-53dcb3-2-lxd-2  bionic  Openstack  Container started
3        started  10.20.253.198  scycac               bionic  Openstack  Deployed
3/lxd/0  started  10.20.253.87   juju-53dcb3-3-lxd-0  bionic  Openstack  Container started
3/lxd/1  started  10.20.253.96   juju-53dcb3-3-lxd-1  bionic  Openstack  Container started
3/lxd/2  started  10.20.253.86   juju-53dcb3-3-lxd-2  bionic  Openstack  Container started

Alternatively, when we have to run an OpenStack deployment from scratch, we should first change the osd-devices string from /dev/sdb to /dev/vdb in Juju (for the 3 ceph-osd units) before committing; the deployment can then proceed.
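A minimal sketch of setting this option at deploy time instead (assuming the standard ceph-osd charm option name, osd-devices):

$: juju deploy -n 3 ceph-osd --config osd-devices='/dev/vdb'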

Votes: 0

Ask Ubuntu user

Posted on 2021-07-21 11:51:26

The ceph disk path is currently set to the default, '/dev/sdb'. You have to set it to the disk path that holds your data ('/dev/vdb'):

$ juju config ceph-osd osd-devices
/dev/sdb
$ juju config ceph-osd osd-devices='/dev/vdb'

The disk should have no partitions when you configure it. After that, ceph should become active.
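To verify, a quick sketch of checks (using the unit names from the status output above) that the disk is clean and the OSDs came up:

$ juju ssh ceph-osd/0 'lsblk /dev/vdb'       # should list no partitions under vdb
$ juju ssh ceph-mon/0 'sudo ceph osd tree'   # each OSD should show as "up"
$ juju status ceph-osd                       # units should report active/idle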

Votes: -1
The original content of this page was provided by Ask Ubuntu.
Original link: https://askubuntu.com/questions/1074575
