
Functional and Performance Testing of a Jewel-Version Ceph Cluster

References

http://docs.ceph.com/docs/master/start/quick-start-preflight/#rhel-centos
https://www.linuxidc.com/Linux/2017-09/146760.htm
http://s3browser.com/
http://docs.ceph.org.cn/man/8/rbd/
https://hub.packtpub.com/working-ceph-block-device/#
https://github.com/s3fs-fuse/s3fs-fuse
https://blog.csdn.net/miaodichiyou/article/details/76050361
http://mathslinux.org/?p=717
http://elf8848.iteye.com/blog/2089055

Test Objectives

- Map and mount block storage with rbd and test performance
- Map and mount striped block storage with rbd-nbd and test performance
- Test object storage reads and writes with S3 Browser
- Mount object storage with s3fs
- Write through object storage, read through block storage

I. Map and mount block storage with rbd and test performance

1. Create images

[root@idcv-ceph0 cluster]# ceph osd pool create test_pool 100
pool 'test_pool' created
[root@idcv-ceph0 cluster]# rados lspools
rbd
.rgw.root
default.rgw.control
default.rgw.data.root
default.rgw.gc
default.rgw.log
default.rgw.users.uid
default.rgw.users.keys
default.rgw.buckets.index
default.rgw.buckets.data
test_pool
[root@idcv-ceph0 cluster]# rbd list
[root@idcv-ceph0 cluster]# rbd create test_pool/testimage1 --size 40960
[root@idcv-ceph0 cluster]# rbd create test_pool/testimage2 --size 40960
[root@idcv-ceph0 cluster]# rbd create test_pool/testimage3 --size 40960
[root@idcv-ceph0 cluster]# rbd create test_pool/testimage4 --size 40960
[root@idcv-ceph0 cluster]# rbd list
[root@idcv-ceph0 cluster]# rbd list test_pool
testimage1
testimage2
testimage3
testimage4
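Creating several identically sized images is easily scripted. A minimal sketch that echoes the rbd commands instead of executing them (it assumes a live cluster, so review before piping to a shell):

```shell
# Generate "rbd create" commands for N identically sized test images.
# Echoes instead of executing so the list can be reviewed before running
# against a real cluster.
make_image_cmds() {
  local pool=$1 size_mb=$2 count=$3
  for i in $(seq 1 "$count"); do
    echo "rbd create ${pool}/testimage${i} --size ${size_mb}"
  done
}

make_image_cmds test_pool 40960 4
```

Piping the output to `sh` would reproduce the four creations above.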

2. Map the images

[root@idcv-ceph0 cluster]# rbd map test_pool/testimage1
rbd: sysfs write failed
RBD image feature set mismatch. You can disable features unsupported by the kernel with "rbd feature disable".
In some cases useful info is found in syslog - try "dmesg | tail" or so.
rbd: map failed: (6) No such device or address
[root@idcv-ceph0 cluster]# dmesg | tail
[113320.926463] rbd: loaded (major 252)
[113320.931044] libceph: mon2 172.20.1.141:6789 session established
[113320.931364] libceph: client4193 fsid 812d3acb-eaa8-4355-9a74-64f2cd5209b3
[113320.936922] rbd: image testimage1: image uses unsupported features: 0x38
[113339.870548] libceph: mon1 172.20.1.140:6789 session established
[113339.870906] libceph: client4168 fsid 812d3acb-eaa8-4355-9a74-64f2cd5209b3
[113339.877109] rbd: image testimage1: image uses unsupported features: 0x38
[113381.405453] libceph: mon2 172.20.1.141:6789 session established
[113381.405784] libceph: client4202 fsid 812d3acb-eaa8-4355-9a74-64f2cd5209b3
[113381.411625] rbd: image testimage1: image uses unsupported features: 0x38

Fix: disable the new image features that the kernel client does not support

[root@idcv-ceph0 cluster]# rbd info test_pool/testimage1
rbd image 'testimage1':
        size 40960 MB in 10240 objects
        order 22 (4096 kB objects)
        block_name_prefix: rbd_data.10802ae8944a
        format: 2
        features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
        flags:
[root@idcv-ceph0 cluster]# rbd feature disable test_pool/testimage1
rbd: at least one feature name must be specified
[root@idcv-ceph0 cluster]# rbd feature disable test_pool/testimage1 fast-diff
[root@idcv-ceph0 cluster]# rbd feature disable test_pool/testimage1 object-map
[root@idcv-ceph0 cluster]# rbd feature disable test_pool/testimage1 exclusive-lock
[root@idcv-ceph0 cluster]# rbd feature disable test_pool/testimage1 deep-flatten
[root@idcv-ceph0 cluster]# rbd info test_pool/testimage1
rbd image 'testimage1':
        size 40960 MB in 10240 objects
        order 22 (4096 kB objects)
        block_name_prefix: rbd_data.10802ae8944a
        format: 2
        features: layering
        flags:
[root@idcv-ceph0 cluster]# rbd map test_pool/testimage1
/dev/rbd0
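Disabling the four features one image at a time is tedious. As I recall the Jewel-era syntax, `rbd feature disable` also accepts several feature names in one call; a sketch that echoes one combined command per image (echo-only, since it assumes a live cluster):

```shell
# Echo one "rbd feature disable" command per image, listing all the features
# the kernel rbd client rejected above in a single invocation.
# Assumes the Jewel-era multi-feature syntax; verify on your rbd version.
disable_feature_cmds() {
  local pool=$1; shift
  for img in "$@"; do
    echo "rbd feature disable ${pool}/${img} fast-diff object-map exclusive-lock deep-flatten"
  done
}

disable_feature_cmds test_pool testimage2 testimage3 testimage4
```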

Repeat the same steps for testimage2/3/4; the final mappings:

[root@idcv-ceph0 cluster]# rbd showmapped
id pool      image      snap device
0  test_pool testimage1 -    /dev/rbd0
1  test_pool testimage2 -    /dev/rbd1
2  test_pool testimage3 -    /dev/rbd2
3  test_pool testimage4 -    /dev/rbd3

Note: shrinking an image

[root@idcv-ceph0 ceph-disk0]# rbd resize -p test_pool --image testimage1 -s 10240 --allow-shrink
Resizing image: 100% complete...done.
[root@idcv-ceph0 ceph-disk0]# rbd resize -p test_pool --image testimage2 -s 10240 --allow-shrink
Resizing image: 100% complete...done.
[root@idcv-ceph0 ceph-disk0]# rbd resize -p test_pool --image testimage3 -s 10240 --allow-shrink
Resizing image: 100% complete...done.
[root@idcv-ceph0 ceph-disk0]# rbd resize -p test_pool --image testimage4 -s 10240 --allow-shrink
Resizing image: 100% complete...done.

3. Format and mount

[root@idcv-ceph0 ceph-disk0]# mkfs.xfs /dev/rbd0
[root@idcv-ceph0 ceph-disk0]# mkfs.xfs -f /dev/rbd0

4. dd test

[root@idcv-ceph0 ceph-disk0]# dd if=/dev/zero of=/mnt/ceph-disk0/file0 count=1000 bs=4M conv=fsync
1000+0 records in
1000+0 records out
4194304000 bytes (4.2 GB) copied, 39.1407 s, 107 MB/s
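For reference, dd's MB/s figure is simply bytes copied divided by elapsed seconds, in decimal megabytes (10^6 bytes). A small helper reproduces the number reported above:

```shell
# dd reports throughput in decimal MB/s: bytes / seconds / 1,000,000.
throughput_mbs() {
  awk -v bytes="$1" -v secs="$2" 'BEGIN { printf "%.0f\n", bytes / secs / 1000000 }'
}

throughput_mbs 4194304000 39.1407   # the 4 GB fsync'd write above; prints 107
```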

II. Map and mount striped block storage with rbd-nbd and test performance

1. Create images. Per the official docs, striping tests require the --stripe-unit and --stripe-count parameters. The plan: test block performance with object sizes of 4M and 4K at the default stripe count of 1, and with an object size of 32M at stripe counts of 8 and 16.
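For intuition about what these parameters do, here is a sketch of the layout arithmetic as I read the striping docs (my reconstruction, not output from any rbd tool): stripe units of --stripe-unit bytes rotate across --stripe-count objects, and each object holds object-size bytes.

```shell
# Map a byte offset to its backing object number under RADOS-style striping.
# Stripe units of size $su rotate across $sc objects; each object holds
# $osz bytes, i.e. osz/su stripe units.
object_for_offset() {
  local off=$1 su=$2 sc=$3 osz=$4
  local unit=$(( off / su ))          # which stripe unit overall
  local obj_in_set=$(( unit % sc ))   # which object inside the object set
  local stripe=$(( unit / sc ))       # which full stripe
  local per_obj=$(( osz / su ))       # stripe units per object
  local set=$(( stripe / per_obj ))   # which object set
  echo $(( set * sc + obj_in_set ))
}

# testimage8 geometry below: 32M objects, 2M stripe unit, count 16
object_for_offset 0        2097152 16 33554432   # first unit  -> object 0
object_for_offset 2097152  2097152 16 33554432   # second unit -> object 1
object_for_offset 33554432 2097152 16 33554432   # unit 16 wraps back to object 0
```

With the default (no striping) layout, stripe unit equals object size and count is 1, so offsets simply fill object 0, then object 1, and so on.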

[root@idcv-ceph0 ceph-disk0]# rbd create test_pool/testimage5 --size 10240 --stripe-unit 2097152 --stripe-count 16
[root@idcv-ceph0 ceph-disk0]# rbd info test_pool/testimage5
rbd image 'testimage5':
        size 10240 MB in 2560 objects
        order 22 (4096 kB objects)
        block_name_prefix: rbd_data.10c52ae8944a
        format: 2
        features: layering, striping, exclusive-lock, object-map, fast-diff, deep-flatten
        flags:
        stripe unit: 2048 kB
        stripe count: 16
[root@idcv-ceph0 ceph-disk0]# rbd create test_pool/testimage6 --size 10240 --stripe-unit 4096 --stripe-count 4
[root@idcv-ceph0 ceph-disk0]# rbd info test_pool/testimage6
rbd image 'testimage6':
        size 10240 MB in 2560 objects
        order 22 (4096 kB objects)
        block_name_prefix: rbd_data.10c82ae8944a
        format: 2
        features: layering, striping, exclusive-lock, object-map, fast-diff, deep-flatten
        flags:
        stripe unit: 4096 bytes
        stripe count: 4
[root@idcv-ceph0 ceph-disk0]# rbd create test_pool/testimage7 --size 10240 --object-size 32M --stripe-unit 4194304 --stripe-count 4
[root@idcv-ceph0 ceph-disk0]# rbd info test_pool/testimage7
rbd image 'testimage7':
        size 10240 MB in 320 objects
        order 25 (32768 kB objects)
        block_name_prefix: rbd_data.107e238e1f29
        format: 2
        features: layering, striping, exclusive-lock, object-map, fast-diff, deep-flatten
        flags:
        stripe unit: 4096 kB
        stripe count: 4
[root@idcv-ceph0 ceph-disk0]# rbd create test_pool/testimage8 --size 10240 --object-size 32M --stripe-unit 2097152 --stripe-count 16
[root@idcv-ceph0 ceph-disk0]# rbd info test_pool/testimage8
rbd image 'testimage8':
        size 10240 MB in 320 objects
        order 25 (32768 kB objects)
        block_name_prefix: rbd_data.109d2ae8944a
        format: 2
        features: layering, striping, exclusive-lock, object-map, fast-diff, deep-flatten
        flags:
        stripe unit: 2048 kB
        stripe count: 16
[root@idcv-ceph0 ceph-disk0]# rbd create test_pool/testimage11 --size 10240 --object-size 4M
[root@idcv-ceph0 ceph-disk0]# rbd create test_pool/testimage12 --size 10240 --object-size 4K
[root@idcv-ceph0 ceph-disk0]# rbd info test_pool/testimage11
rbd image 'testimage11':
        size 10240 MB in 2560 objects
        order 22 (4096 kB objects)
        block_name_prefix: rbd_data.10ac238e1f29
        format: 2
        features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
        flags:
[root@idcv-ceph0 ceph-disk0]# rbd info test_pool/testimage12
rbd image 'testimage12':
        size 10240 MB in 2621440 objects
        order 12 (4096 bytes objects)
        block_name_prefix: rbd_data.10962ae8944a
        format: 2
        features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
        flags:

2. Map the image

[root@idcv-ceph2 mnt]# rbd map test_pool/testimage8
rbd: sysfs write failed
In some cases useful info is found in syslog - try "dmesg | tail" or so.
rbd: map failed: (22) Invalid argument
[root@idcv-ceph2 mnt]# dmesg | tail
[118760.024660] XFS (rbd0): Log I/O Error Detected. Shutting down filesystem
[118760.024710] XFS (rbd0): Please umount the filesystem and rectify the problem(s)
[118760.024766] XFS (rbd0): Unable to update superblock counters. Freespace may not be correct on next mount.
[118858.837102] XFS (rbd0): Mounting V5 Filesystem
[118858.872345] XFS (rbd0): Ending clean mount
[173522.968410] rbd: rbd0: encountered watch error: -107
[176701.031429] rbd: image testimage8: unsupported stripe unit (got 2097152 want 33554432)
[176827.317008] rbd: image testimage8: unsupported stripe unit (got 2097152 want 33554432)
[177423.107103] rbd: image testimage8: unsupported stripe unit (got 2097152 want 33554432)
[177452.820032] rbd: image testimage8: unsupported stripe unit (got 2097152 want 33554432)

3. Troubleshooting shows that the kernel rbd client does not support the striping feature, so rbd-nbd is required. rbd-nbd supports all of the new image features, so later maps need no features disabled; however, the stock Linux kernel ships without the nbd module, which must be built from the kernel source. See https://blog.csdn.net/miaodichiyou/article/details/76050361 for reference.

[root@idcv-ceph2 ~]# wget http://vault.centos.org/7.5.1804/updates/Source/SPackages/kernel-3.10.0-862.2.3.el7.src.rpm
[root@idcv-ceph2 ~]# rpm -ivh kernel-3.10.0-862.2.3.el7.src.rpm
[root@idcv-ceph2 ~]# cd /root/rpmbuild/
[root@idcv-ceph0 rpmbuild]# cd SOURCES/
[root@idcv-ceph0 SOURCES]# tar Jxvf linux-3.10.0-862.2.3.el7.tar.xz -C /usr/src/kernels/
[root@idcv-ceph0 SOURCES]# cd /usr/src/kernels/
[root@idcv-ceph0 kernels]# mv 3.10.0-862.6.3.el7.x86_64 3.10.0-862.6.3.el7.x86_64-old
[root@idcv-ceph0 kernels]# mv linux-3.10.0-862.2.3.el7 3.10.0-862.6.3.el7.x86_64
[root@idcv-ceph0 kernels]# cd 3.10.0-862.6.3.el7.x86_64
[root@idcv-ceph0 3.10.0-862.6.3.el7.x86_64]# make mrproper
[root@idcv-ceph0 3.10.0-862.6.3.el7.x86_64]# cp ../3.10.0-862.6.3.el7.x86_64-old/Module.symvers ./
[root@idcv-ceph0 3.10.0-862.6.3.el7.x86_64]# cp /boot/config-3.10.0-862.2.3.el7.x86_64 ./.config
[root@idcv-ceph0 3.10.0-862.6.3.el7.x86_64]# yum install elfutils-libelf-devel
[root@idcv-ceph0 3.10.0-862.6.3.el7.x86_64]# make prepare
[root@idcv-ceph0 3.10.0-862.6.3.el7.x86_64]# make scripts
[root@idcv-ceph0 3.10.0-862.6.3.el7.x86_64]# make CONFIG_BLK_DEV_NBD=m M=drivers/block
[root@idcv-ceph0 3.10.0-862.6.3.el7.x86_64]# modinfo nbd
[root@idcv-ceph0 3.10.0-862.6.3.el7.x86_64]# cp drivers/block/nbd.ko /lib/modules/3.10.0-862.2.3.el7.x86_64/kernel/drivers/block/
[root@idcv-ceph0 3.10.0-862.6.3.el7.x86_64]# depmod -a
[root@idcv-ceph0 3.10.0-862.6.3.el7.x86_64]# modprobe nbd
[root@idcv-ceph0 3.10.0-862.6.3.el7.x86_64]# lsmod | grep nbd
nbd                    17554  5

4. Map an image with rbd-nbd

[root@idcv-ceph0 ~]# rbd-nbd map test_pool/testimage17
/dev/nbd0
[root@idcv-ceph0 ~]# rbd info test_pool/testimage17
rbd image 'testimage17':
        size 10240 MB in 1280 objects
        order 23 (8192 kB objects)
        block_name_prefix: rbd_data.112d74b0dc51
        format: 2
        features: layering, striping
        flags:
        stripe unit: 1024 kB
        stripe count: 8
[root@idcv-ceph0 ~]# mkfs.xfs /dev/nbd0
meta-data=/dev/nbd0              isize=512    agcount=4, agsize=655360 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=2621440, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
[root@idcv-ceph0 ~]# mount /dev/nbd0 /mnt/ceph-8M/
[root@idcv-ceph0 ~]# df -h
Filesystem               Size  Used Avail Use% Mounted on
/dev/mapper/centos-root  100G  3.5G   96G   4% /
devtmpfs                 7.8G     0  7.8G   0% /dev
tmpfs                    7.8G     0  7.8G   0% /dev/shm
tmpfs                    7.8G   12M  7.8G   1% /run
tmpfs                    7.8G     0  7.8G   0% /sys/fs/cgroup
/dev/sda1                497M  150M  348M  31% /boot
tmpfs                    1.6G     0  1.6G   0% /run/user/0
/dev/sdb1                 95G   40G   56G  42% /var/lib/ceph/osd/ceph-0
/dev/rbd0                 10G  7.9G  2.2G  79% /mnt/ceph-disk0
/dev/rbd1                 10G  7.9G  2.2G  79% /mnt/ceph-4M
/dev/nbd0                 10G   33M   10G   1% /mnt/ceph-8M

5. dd performance test. Object size 8M:

[root@idcv-ceph0 ~]# dd if=/dev/zero of=/mnt/ceph-8M/file0-1 count=800 bs=10M conv=fsync
800+0 records in
800+0 records out
8388608000 bytes (8.4 GB) copied, 50.964 s, 165 MB/s
[root@idcv-ceph0 ~]# dd if=/dev/zero of=/mnt/ceph-8M/file0-1 count=80 bs=100M conv=fsync
80+0 records in
80+0 records out
8388608000 bytes (8.4 GB) copied, 26.3178 s, 319 MB/s

Object size 32M:

[root@idcv-ceph0 ceph-32M]# rbd info test_pool/testimage18
rbd image 'testimage18':
        size 40960 MB in 1280 objects
        order 25 (32768 kB objects)
        block_name_prefix: rbd_data.11052ae8944a
        format: 2
        features: layering, striping, exclusive-lock, object-map, fast-diff, deep-flatten
        flags:
        stripe unit: 2048 kB
        stripe count: 8
[root@idcv-ceph0 ceph-32M]# dd if=/dev/zero of=/mnt/ceph-32M/file0-1 count=2000 bs=10M conv=fsync
2000+0 records in
2000+0 records out
20971520000 bytes (21 GB) copied, 67.4266 s, 311 MB/s
[root@idcv-ceph0 ceph-32M]# dd if=/dev/zero of=/mnt/ceph-32M/file0-1 count=20000 bs=1M conv=fsync
20000+0 records in
20000+0 records out
20971520000 bytes (21 GB) copied, 61.7757 s, 339 MB/s

6. Test matrix summary. Object sizes: 4M (stripe count 1), 4K (stripe count 1), 32M (stripe counts 8 and 16); dd block sizes: 1M and 100M.

32M: /mnt/ceph-32M-8 and /mnt/ceph-32M-16

rbd create test_pool/testimage8 --size 10240 --object-size 32M --stripe-unit 2097152 --stripe-count 16
dd if=/dev/zero of=/mnt/ceph-32M-16/file32M count=80 bs=100M conv=fsync
dd if=/dev/zero of=/mnt/ceph-32M-16/file32M count=8000 bs=1M conv=fsync
rbd create test_pool/testimage19 --size 10240 --object-size 32M --stripe-unit 4194304 --stripe-count 8
dd if=/dev/zero of=/mnt/ceph-32M-8/file32M count=80 bs=100M conv=fsync
dd if=/dev/zero of=/mnt/ceph-32M-8/file32M count=8000 bs=1M conv=fsync

4M: /mnt/ceph-4M

rbd create test_pool/testimage11 --size 10240 --object-size 4M
dd if=/dev/zero of=/mnt/ceph-4M/file4M count=80 bs=100M conv=fsync
dd if=/dev/zero of=/mnt/ceph-4M/file4M count=8000 bs=1M conv=fsync

4K: /mnt/ceph-4K

rbd create test_pool/testimage12 --size 10240 --object-size 4K
dd if=/dev/zero of=/mnt/ceph-4K/file4K count=80 bs=100M conv=fsync
dd if=/dev/zero of=/mnt/ceph-4K/file4K count=8000 bs=1M conv=fsync
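The dd runs in the matrix above can be generated with a loop. A sketch that echoes each command (the file name `file` is a hypothetical stand-in for the per-mount names), keeping the total written at 8000 MB for both block sizes so the runs stay comparable:

```shell
# Emit the dd command matrix: for each mount point, one 100M-block run and
# one 1M-block run, both writing the same 8000 MB total.
dd_matrix() {
  for mnt in "$@"; do
    echo "dd if=/dev/zero of=${mnt}/file count=80 bs=100M conv=fsync"
    echo "dd if=/dev/zero of=${mnt}/file count=8000 bs=1M conv=fsync"
  done
}

dd_matrix /mnt/ceph-32M-16 /mnt/ceph-32M-8 /mnt/ceph-4M /mnt/ceph-4K
```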

7. dd test results summary

8. Random-write testing with fio. Install fio first:

yum install libaio-devel
wget http://brick.kernel.dk/snaps/fio-2.1.10.tar.gz
tar zxf fio-2.1.10.tar.gz
cd fio-2.1.10/
make
make install

32M-8

fio -ioengine=libaio -bs=1m -direct=1 -thread -rw=randwrite -size=4G -filename=/dev/nbd4 -name="EBS 1m randwrite test" -iodepth=1 -runtime=60
Run status group 0 (all jobs):
  WRITE: io=4096.0MB, aggrb=272729KB/s, minb=272729KB/s, maxb=272729KB/s, mint=15379msec, maxt=15379msec
Disk stats (read/write):
  nbd4: ios=0/32280, merge=0/0, ticks=0/36624, in_queue=36571, util=97.61%
fio -ioengine=libaio -bs=100m -direct=1 -thread -rw=randwrite -size=4G -filename=/dev/nbd4 -name="EBS 100m randwrite test" -iodepth=1 -runtime=60
Run status group 0 (all jobs):
  WRITE: io=4000.0MB, aggrb=326504KB/s, minb=326504KB/s, maxb=326504KB/s, mint=12545msec, maxt=12545msec
Disk stats (read/write):
  nbd4: ios=0/31391, merge=0/0, ticks=0/1592756, in_queue=1597878, util=97.04%

32M-16

fio -ioengine=libaio -bs=1m -direct=1 -thread -rw=randwrite -size=4G -filename=/dev/nbd3 -name="EBS 1m randwrite test" -iodepth=1 -runtime=60
fio -ioengine=libaio -bs=100m -direct=1 -thread -rw=randwrite -size=4G -filename=/dev/nbd3 -name="EBS 100m randwrite test" -iodepth=1 -runtime=60

4M

fio -ioengine=libaio -bs=1m -direct=1 -thread -rw=randwrite -size=4G -filename=/dev/rbd1 -name="EBS 1m randwrite test" -iodepth=1 -runtime=60
fio -ioengine=libaio -bs=100m -direct=1 -thread -rw=randwrite -size=4G -filename=/dev/rbd1 -name="EBS 100m randwrite test" -iodepth=1 -runtime=60

4K

fio -ioengine=libaio -bs=1m -direct=1 -thread -rw=randwrite -size=400M -filename=/dev/rbd2 -name="EBS 1m randwrite test" -iodepth=1 -runtime=60
fio -ioengine=libaio -bs=100m -direct=1 -thread -rw=randwrite -size=400M -filename=/dev/rbd2 -name="EBS 100m randwrite test" -iodepth=1 -runtime=60

9. fio test results summary

III. Test object storage reads and writes with S3 Browser

1. Create an object storage user and keys

[root@idcv-ceph0 cluster]# radosgw-admin user create --uid=test --display-name="test" --access-key=123456 --secret=123456
[root@idcv-ceph0 cluster]# radosgw-admin user info --uid=test
{
    "user_id": "test",
    "display_name": "test",
    "email": "",
    "suspended": 0,
    "max_buckets": 1000,
    "auid": 0,
    "subusers": [],
    "keys": [
        {
            "user": "test",
            "access_key": "123456",
            "secret_key": "123456"
        }
    ],
    "swift_keys": [],
    "caps": [],
    "op_mask": "read, write, delete",
    "default_placement": "",
    "placement_tags": [],
    "bucket_quota": {
        "enabled": false,
        "max_size_kb": -1,
        "max_objects": -1
    },
    "user_quota": {
        "enabled": false,
        "max_size_kb": -1,
        "max_objects": -1
    },
    "temp_url_keys": []
}
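Scripts often need just the access key out of radosgw-admin's JSON. A minimal sed-based sketch (crude extraction that assumes the key layout shown above; use a real JSON parser for anything serious):

```shell
# Pull the first access_key value out of radosgw-admin's JSON output.
extract_access_key() {
  sed -n 's/.*"access_key": "\([^"]*\)".*/\1/p' | head -n 1
}

echo '{ "keys": [ { "user": "test", "access_key": "123456", "secret_key": "123456" } ] }' \
  | extract_access_key    # prints 123456
```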

2. Install and configure S3 Browser

3. Create a bucket and test uploads and downloads

IV. Mount object storage with s3fs and test reads and writes

The test: write files through object storage, then read the directory through rbd. 1. Installation and deployment: https://github.com/s3fs-fuse/s3fs-fuse/releases

Install: per the README, on CentOS 7:

sudo yum install automake fuse fuse-devel gcc-c++ git libcurl-devel libxml2-devel make openssl-devel

Then compile from master via the following commands:

git clone https://github.com/s3fs-fuse/s3fs-fuse.git
cd s3fs-fuse
./autogen.sh
./configure
make
sudo make install

[root@idcv-ceph0 ~]# wget https://github.com/s3fs-fuse/s3fs-fuse/archive/v1.83.tar.gz
[root@idcv-ceph0 ~]# ls
[root@idcv-ceph0 ~]# tar zxvf v1.83.tar.gz
[root@idcv-ceph0 ~]# cd s3fs-fuse-1.83/
[root@idcv-ceph0 s3fs-fuse-1.83]# ls
[root@idcv-ceph0 s3fs-fuse-1.83]# vi README.md
[root@idcv-ceph0 s3fs-fuse-1.83]# yum install automake fuse fuse-devel gcc-c++ git libcurl-devel libxml2-devel make openssl-devel
[root@idcv-ceph0 s3fs-fuse-1.83]# ./autogen.sh
[root@idcv-ceph0 s3fs-fuse-1.83]# ls
[root@idcv-ceph0 s3fs-fuse-1.83]# ./configure
[root@idcv-ceph0 s3fs-fuse-1.83]# make
[root@idcv-ceph0 s3fs-fuse-1.83]# make install
[root@idcv-ceph0 s3fs-fuse-1.83]# mkdir /mnt/s3
[root@idcv-ceph0 s3fs-fuse-1.83]# vi /root/.passwd-s3fs
[root@idcv-ceph0 s3fs-fuse-1.83]# chmod 600 /root/.passwd-s3fs
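The /root/.passwd-s3fs file edited above holds a single ACCESS_KEY:SECRET_KEY line, and s3fs refuses it unless it is mode 600. A sketch that writes it non-interactively (the /tmp path and the 123456 keys are just the demo values from section III):

```shell
# Write an s3fs credential file: one "ACCESS_KEY:SECRET_KEY" line, mode 600.
write_s3fs_creds() {
  local file=$1 access=$2 secret=$3
  printf '%s:%s\n' "$access" "$secret" > "$file"
  chmod 600 "$file"
}

write_s3fs_creds /tmp/.passwd-s3fs-demo 123456 123456
cat /tmp/.passwd-s3fs-demo    # prints 123456:123456
```

For real use, point it at /root/.passwd-s3fs with the keys from radosgw-admin.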

2. Mount

[root@idcv-ceph0 ~]# s3fs testbucket /mnt/s3 -o url=http://172.20.1.139:7480 -o umask=0022 -o use_path_request_style
[root@idcv-ceph0 ~]# df -h
Filesystem               Size  Used Avail Use% Mounted on
/dev/sdb1                 95G   75G   21G  79% /var/lib/ceph/osd/ceph-0
/dev/rbd1                 10G  7.9G  2.2G  79% /mnt/ceph-4M
/dev/rbd2                 10G  814M  9.2G   8% /mnt/ceph-4K
/dev/nbd3                 10G  7.9G  2.2G  79% /mnt/ceph-32M-16
/dev/nbd4                 10G   33M   10G   1% /mnt/ceph-32M-8
s3fs                     256T     0  256T   0% /mnt/s3

3. Verify reads and writes

[root@idcv-ceph0 ~]# ls /mnt/s3/images/
kernel-3.10.0-862.2.3.el7.src.rpm  nbd.ko  test.jpg
[root@idcv-ceph0 ~]# cp /etc/hosts hosts hosts.allow hosts.deny
[root@idcv-ceph0 ~]# cp /etc/hosts /mnt/s3/images/
[root@idcv-ceph0 ~]# ls /mnt/s3/images/
hosts  kernel-3.10.0-862.2.3.el7.src.rpm  nbd.ko  test.jpg
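To close the loop on the last objective (write through object storage, read through block storage), comparing checksums on both sides is the simplest check. A sketch using two local directories as stand-ins for the real s3fs and rbd mount points:

```shell
# Verify that a file written through one mount reads back identically through
# another by comparing md5 checksums. The /tmp directories are local stand-ins
# for the actual /mnt/s3 and rbd mount points.
verify_same() {
  local a=$1 b=$2
  [ "$(md5sum < "$a")" = "$(md5sum < "$b")" ] && echo OK || echo MISMATCH
}

mkdir -p /tmp/demo-s3 /tmp/demo-rbd
echo "hello ceph" > /tmp/demo-s3/hosts
cp /tmp/demo-s3/hosts /tmp/demo-rbd/hosts
verify_same /tmp/demo-s3/hosts /tmp/demo-rbd/hosts    # prints OK
```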
