1. Set your hostname
Don't use an exotic hostname; exotic hostnames invite exotic problems. Stick to something plain like node1: letters plus a digit.
On CentOS 7, set the hostname with:
hostnamectl set-hostname node1

2. Prepare a reference configuration file
[global]
auth service required = cephx
filestore xattr use omap = true
auth client required = cephx
auth cluster required = cephx
mon host = 192.168.0.1,192.168.0.2,192.168.0.3
mon initial members = node1,node2,node3
fsid = 87619a71-34dc-4203-8c52-995c374562a6
[mon.node1]
host = node1
mon addr = 192.168.0.1:6789
[mon.node2]
host = node2
mon addr = 192.168.0.2:6789
[mon.node3]
host = node3
mon addr = 192.168.0.3:6789
Write this to /etc/ceph/ceph.conf.

3. Clean up the environment
rm -fr /var/lib/ceph/*
rm -fr /tmp/monmap /tmp/ceph.mon.keyring
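Every node in the cluster needs the same /etc/ceph/ceph.conf. A minimal sketch for pushing it out from node1, assuming passwordless SSH between the nodes:

# copy the reference config to the other mon nodes
for h in node2 node3; do
    scp /etc/ceph/ceph.conf $h:/etc/ceph/ceph.conf
done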
4. Generate a new cluster uuid
[root@node1 /data]# uuidgen
87619a71-34dc-4203-8c52-995c374562a6
Put the generated value into your ceph.conf: "fsid = 87619a71-34dc-4203-8c52-995c374562a6"

5. Decide on your set of mons
List as many mons in the configuration file as you intend to run; use an odd number, e.g. 1 mon or 3 mons:
mon initial members = node1,node2,node3
mon host = 192.168.0.1,192.168.0.2,192.168.0.3

6. Create the keyrings
Create the mon keyring:
ceph-authtool --create-keyring /tmp/ceph.mon.keyring --gen-key -n mon. --cap mon 'allow *'
Create the admin keyring and merge it into the mon keyring:
ceph-authtool --create-keyring /etc/ceph/ceph.client.admin.keyring --gen-key -n client.admin --set-uid=0 --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow'
ceph-authtool /tmp/ceph.mon.keyring --import-keyring /etc/ceph/ceph.client.admin.keyring

7. Build the monmap. This is important: every mon node must be added to it.
monmaptool --create --add node1 192.168.0.1 --add node2 192.168.0.2 --add node3 192.168.0.3 --fsid 87619a71-34dc-4203-8c52-995c374562a6 /tmp/monmap

8. Create the mon data directory
mkdir -p /var/lib/ceph/mon/ceph-node1

9. Initialize and create the mon filesystem
ceph-mon --mkfs -i node1 --monmap /tmp/monmap --keyring /tmp/ceph.mon.keyring
touch /var/lib/ceph/mon/ceph-node1/done  # required: marks this mon as ready

10. Start the services
/etc/init.d/ceph start mon.node1
/etc/init.d/ceph start mon.node2
/etc/init.d/ceph start mon.node3
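With all three mons started, it is worth verifying that they have formed a quorum before moving on:

ceph mon stat
ceph quorum_status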
#!/bin/sh
for disk in /dev/sd*1; do
    # Disk devices to skip (system disks)
    if [ "$disk" = "/dev/sda1" -o "$disk" = "/dev/sdb1" ]; then
        echo "skip $disk"
    else
        # Each OSD needs its own uuid; register it with the cluster to get an OSD id
        uuid=$(uuidgen)
        i=$(ceph osd create $uuid)
        echo "mkfs.xfs $disk"
        mkfs.xfs -f $disk
        mkdir -p /var/lib/ceph/osd/ceph-$i
        mount -t xfs -o noatime,inode64 -- $disk /var/lib/ceph/osd/ceph-$i
        # Initialize the OSD data directory and generate its key
        ceph-osd -i $i --mkfs --mkkey --osd-uuid $uuid
        # Register the key and add the OSD to the CRUSH map with weight 1.0
        ceph auth add osd.$i osd 'allow *' mon 'allow profile osd' -i /var/lib/ceph/osd/ceph-$i/keyring
        ceph osd crush add osd.$i 1.0 host=node1
        echo "[osd.$i]" >> /etc/ceph/ceph.conf
        echo "host = node1" >> /etc/ceph/ceph.conf
        /etc/init.d/ceph start osd.$i
    fi
done
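After the loop finishes, confirm that the new OSDs registered, came up, and landed under host=node1 in the CRUSH map:

ceph osd tree
ceph -s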
1. Add the following to the configuration file /etc/ceph/ceph.conf
[mds]
mds data = /var/lib/ceph/mds/mds.$id
keyring = /var/lib/ceph/mds/mds.$id/mds.$id.keyring
[mds.0]
host = {hostname}
2. Create the directory and the key
mkdir -p /var/lib/ceph/mds/mds.0
ceph auth get-or-create mds.0 mds 'allow' osd 'allow *' mon 'allow rwx' > /var/lib/ceph/mds/mds.0/mds.0.keyring

3. Start the mds
/etc/init.d/ceph start mds.0

4. Check the cluster status
ceph -s
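Besides ceph -s, the MDS state can be checked directly:

ceph mds stat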
Creating the Ceph filesystem
1. Create the metadata pool and the data pool
$ ceph osd pool create cephfs_data <pg_num>
$ ceph osd pool create cephfs_metadata <pg_num>
About pg_num: it is mandatory to choose the value of pg_num yourself because it cannot be calculated automatically. Here are a few values commonly used:
Less than 5 OSDs: set pg_num to 128
Between 5 and 10 OSDs: set pg_num to 512
Between 10 and 50 OSDs: set pg_num to 4096
If you have more than 50 OSDs, you need to understand the tradeoffs and how to calculate the pg_num value by yourself:

Total PGs = (OSDs * 100) / pool size
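Worked example: with 9 OSDs and a pool size (replica count) of 3, the formula gives (9 * 100) / 3 = 300; rounding up to the next power of two, as is common practice for pg_num, yields 512, which agrees with the 5-10 OSD rule above.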
2. Create the filesystem
$ ceph osd lspools
5 cephfs_data,6 cephfs_metadata,
$ ceph mds newfs 6 5 --yes-i-really-mean-it
(Argument order: metadata pool id first, then data pool id.)
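ceph mds newfs is the legacy interface; on Ceph releases that ship the newer fs commands, the equivalent uses pool names instead of ids (the filesystem name "cephfs" here is an arbitrary choice):

ceph fs new cephfs cephfs_metadata cephfs_data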
Mounting the filesystem
1. Mounting requires a client section in /etc/ceph/ceph.conf:
[client]
log file = /data/logs/ceph-client.log
keyring = /etc/ceph/keyring
The keyring can be obtained from ceph auth list.
2. Mount it:
ceph-fuse -m 192.168.0.1:6789 /mnt/ceph/
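As an alternative to ceph-fuse, the kernel client can mount the same filesystem. A sketch, assuming the client.admin key has been saved to /etc/ceph/admin.secret (a path chosen here for illustration):

# /etc/ceph/admin.secret holds just the base64 key shown by `ceph auth list`
mount -t ceph 192.168.0.1:6789:/ /mnt/ceph -o name=admin,secretfile=/etc/ceph/admin.secret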
Without performance tuning, Ceph stores the journal on the same disk as the OSD data. A performance-oriented OSD can keep its journal on a separate disk; an SSD, for example, provides a high-performance journal.
The default for osd journal size is 0, so you have to set it in ceph.conf. The journal size should be the product of filestore max sync interval and the expected throughput, multiplied by 2:
osd journal size = {2 * (expected throughput * filestore max sync interval)}
Expected throughput should take into account both the disk throughput (i.e. the sustained data transfer rate) and the network throughput; a 7200 rpm disk, for example, manages roughly 100MB/s. The smaller (min()) of the disk and network throughput is a reasonable expected throughput. Some users simply start with a 10GB journal size, e.g.:
osd journal size = 10000
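To make the formula concrete: with filestore max sync interval left at its default of 5 seconds and a disk capped at 100MB/s, the result is

osd journal size = 2 * 100 * 5 = 1000

i.e. a 1000MB (~1GB) journal, since the value is expressed in megabytes.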