In this lab's VIOC, each disk has four VSCSI paths. First, view the disk's default attributes:
# lsattr -El hdisk1
PCM             PCM/friend/vscsi  Path Control Module         False
PR_key_value    none              N/A                         True
algorithm       fail_over         Algorithm                   True
hcheck_cmd      test_unit_rdy     Health Check Command        True
hcheck_interval 0                 Health Check Interval       True
hcheck_mode     nonactive         Health Check Mode           True
max_transfer    0x40000           Maximum TRANSFER Size       True
pvid            none              Physical volume identifier  False
queue_depth     8                 Queue DEPTH                 True
reserve_policy  no_reserve        Reserve Policy              True
Among the attributes listed, the ones of interest are algorithm, hcheck_interval, and hcheck_mode:
algorithm is set to fail_over, meaning the system sends all I/O down a single path. If that path is determined to have failed, an alternate path is selected and all I/O is sent there. The algorithm keeps an ordered list of all enabled paths; when the path currently carrying I/O is marked Failed or Disabled, the next enabled path in the list is selected. The order of the list is determined by the path priority attribute.
In short, when one path to hdisk1 fails, I/O switches to another path; at any given moment, I/O flows over only one path.
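Because fail_over walks the path list in priority order, the preferred path can be steered by changing each path's priority attribute (on AIX, a lower value means higher preference). A sketch, using the four paths from this lab; the exact priority values chosen here are illustrative:

```shell
# Make vscsi0 the preferred path for hdisk1; the others become ordered standbys.
# Illustrative only -- adapt the path (parent) names to your own configuration.
chpath -l hdisk1 -p vscsi0 -a priority=1
chpath -l hdisk1 -p vscsi1 -a priority=2
chpath -l hdisk1 -p vscsi2 -a priority=3
chpath -l hdisk1 -p vscsi3 -a priority=4

# Verify the resulting priorities:
for p in vscsi0 vscsi1 vscsi2 vscsi3; do
    lspath -AE -l hdisk1 -p $p
done
```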
hcheck_interval defines how often a health check is run on the disk's paths. Supported values range from 0 to 3600 seconds; a value of 0 disables health checking. In other words, with hcheck_interval set to 0, the system will not detect that a failed path has been repaired and will continue to treat it as unavailable.
hcheck_mode determines which paths are probed when health checking is enabled. The attribute supports the following modes: enabled (check only paths in the Enabled state), failed (check only paths in the Failed state), and nonactive (check paths with no active I/O, including failed paths).
For MPIO in this configuration, hcheck_mode should be set to nonactive and hcheck_interval to 60, i.e. 60 seconds.
Change hcheck_interval to 60:
# chdev -l hdisk1 -a hcheck_interval=60
hdisk1 changed
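When many VSCSI disks need the same settings, the chdev call can be looped over all of them. A sketch, assuming every disk returned by lsdev should be updated (adjust the filter to your naming); the -P flag defers the change to the next reboot for disks that are currently in use:

```shell
# Apply the recommended health-check settings to every disk on the client.
# Sketch only: narrow the lsdev output if some disks should be excluded.
for d in $(lsdev -Cc disk -F name); do
    chdev -l $d -a hcheck_interval=60 -a hcheck_mode=nonactive -P
done
```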
By default, all VSCSI paths have a priority of 1:
# lspath -AE -l hdisk1 -p vscsi0
priority 1 Priority True
# lspath -AE -l hdisk1 -p vscsi1
priority 1 Priority True
# lspath -AE -l hdisk1 -p vscsi2
priority 1 Priority True
# lspath -AE -l hdisk1 -p vscsi3
priority 1 Priority True
Next, we drive load against hdisk1 on the VIOC with dd and start nmon, then stop the SSP cluster nodes one at a time, in the order vios1 stop -> vios2 stop -> vios3 stop -> vios1 start -> vios4 stop, while observing the disk I/O traffic and the path failover times:
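The load and monitoring just described can be sketched as follows; the dd and nmon invocations are illustrative, with only the device names taken from this lab:

```shell
# Drive continuous reads against the raw VSCSI disk in the background.
dd if=/dev/rhdisk1 of=/dev/null bs=1m &

# Record statistics (including disk I/O) every 2 seconds, 300 samples (~10 min).
nmon -f -d -s 2 -c 300

# While the VIOS nodes are stopped one by one, watch the path states flip
# between Enabled and Failed:
lspath -l hdisk1
```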
As the figure above shows, I/O on the VIOC's hdisk1 failed over four times: vscsi0 -> vscsi1 -> vscsi2 -> vscsi3 -> vscsi1. VSCSI failover is fast, on the order of 2-3 seconds.
To compare the performance of SSP logical units against disks mapped through physical Fibre Channel adapters and against ordinary VSCSI mapping, we provisioned a 50 GB disk through SSP and mapped a 50 GB LUN to the VIOS through a physical FC adapter. All SAN disks used in the test come from IBM XIV storage.
On VIOS:
# lsdev -Cc disk |grep -i hdisk5
hdisk5 Available 01-00-02 MPIO 2810 XIV Disk
Physical FC: hdisk5 is a 50 GB LUN on the XIV storage, mapped to the VIOS through its physical FC adapter.
On VIOC:
# lsdev -Cc disk |grep -i hdisk4
hdisk4 Available 15-T1-01 MPIO 2810 XIV Disk
NPIV: hdisk4 is a 50 GB LUN on the XIV, mapped to the VIOC through NPIV.
# lsdev -Cc disk |grep -i hdisk8
hdisk8 Available Virtual SCSI Disk Drive
SSP: hdisk8 is a 50 GB logical unit provided to the VIOC from the SSP.
# lsdev -Cc disk |grep -i hdisk10
hdisk10 Available Virtual SCSI Disk Drive
Normal VSCSI: hdisk10 is a local 146 GB SAS disk on the VIOS, mapped to the VIOC as a PV-backed VSCSI device.
To keep the comparison fair, the ndisk tool is used to run read/write tests against each disk with both 8 KB and 128 KB block sizes. In the results, the left number is the disk's IOPS and the right number is its throughput.
The tests show that SSP logical units perform somewhat below disks mapped directly through a physical FC adapter (or via NPIV), but the gap is not large: roughly 20% in IOPS and within 15% in throughput. If SSP is used only for the VIOC's system disks, performance is entirely adequate, and reliability is very high.
PowerVM environment I/O performance test

| Test type / disk type | FC IOPS / throughput (MB/s) | NPIV IOPS / throughput (MB/s) | SSP IOPS / throughput (MB/s) | VSCSI IOPS / throughput (MB/s) |
|---|---|---|---|---|
| 8K sequential, read:write 8:2 | 3859.5 / 30.15 | 3501.9 / 27.36 | 2924.5 / 22.85 | 926.1 / 7.24 |
| 128K sequential, read:write 8:2 | 1394.5 / 174.31 | 1268.4 / 158.55 | 1185.6 / 148.20 | 405.0 / 50.62 |
| 8K random, read:write 8:2 | 3707.1 / 28.96 | 3146.2 / 24.58 | 2097.1 / 22.71 | 402.0 / 3.14 |
| 128K random, read:write 8:2 | 1330.1 / 166.26 | 1212.1 / 151.51 | 1136.2 / 142.02 | 306.5 / 38.31 |
| 8K sequential, read:write 2:8 | 3188.5 / 24.91 | 2532.2 / 19.78 | 2523.2 / 19.71 | 293.6 / 2.29 |
| 128K sequential, read:write 2:8 | 1184.4 / 148.06 | 1094.0 / 136.74 | 1054.6 / 131.83 | 222.0 / 27.75 |
| 8K random, read:write 2:8 | 3044.1 / 23.78 | 2805.3 / 21.92 | 2135.5 / 16.68 | 291.1 / 2.27 |
| 128K random, read:write 2:8 | 1214.8 / 155.23 | 1140.0 / 142.50 | 1089.2 / 136.15 | 243.9 / 30.49 |
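As a quick sanity check, the two numbers in each cell are mutually consistent: throughput (MB/s) ≈ IOPS × block size / 1024. For example, for the FC 8K and 128K sequential rows:

```shell
# Throughput (MB/s) = IOPS * block size (KB) / 1024
awk 'BEGIN { printf "%.2f\n", 3859.5 * 8 / 1024 }'    # FC, 8K sequential  -> 30.15
awk 'BEGIN { printf "%.2f\n", 1394.5 * 128 / 1024 }'  # FC, 128K sequential -> 174.31
```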
To save space, only one of the many test runs is reproduced below.
On VIOS (Physical FC):
# ./ndisk -f /dev/rhdisk5 -S -r80 -b 8k -t 180&
# Command: ./ndisk -f /dev/rhdisk5 -S -r80 -b 8k -t 180
Synchronous Disk test (regular read/write)
No. of processes = 1
I/O type = Sequential
Block size = 8192
Read-WriteRatio: 80:20 = read mostly
Sync type: none = just close the file
Number of files = 1
File size = 33554432 bytes = 32768 KB = 32 MB
Run time = 180 seconds
Snooze % = 0 percent
----> Running test with block Size=8192 (8KB) .
Proc - <-----Disk IO----> | <-----Throughput------> RunTime
Num - TOTAL IO/sec | MB/sec KB/sec Seconds
1 - 694718 3859.5 | 30.15 30876.37 180.00
On VIOC (NPIV):
# ./ndisk -f /dev/rhdisk4 -S -r80 -b 8k -t 180
Command: ./ndisk -f /dev/rhdisk4 -S -r80 -b 8k -t 180
Synchronous Disk test (regular read/write)
No. of processes = 1
I/O type = Sequential
Block size = 8192
Read-WriteRatio: 80:20 = read mostly
Sync type: none = just close the file
Number of files = 1
File size = 33554432 bytes = 32768 KB = 32 MB
Run time = 180 seconds
Snooze % = 0 percent
----> Running test with block Size=8192 (8KB) .
Proc - <-----Disk IO----> | <-----Throughput------> RunTime
Num - TOTAL IO/sec | MB/sec KB/sec Seconds
1 - 630317 3501.9 | 27.36 28015.56 179.99
#
On VIOC (SSP):
# ./ndisk -f /dev/rhdisk8 -S -r80 -b 8k -t 180&
[1] 7274718
# Command: ./ndisk -f /dev/rhdisk8 -S -r80 -b 8k -t 180
Synchronous Disk test (regular read/write)
No. of processes = 1
I/O type = Sequential
Block size = 8192
Read-WriteRatio: 80:20 = read mostly
Sync type: none = just close the file
Number of files = 1
File size = 33554432 bytes = 32768 KB = 32 MB
Run time = 180 seconds
Snooze % = 0 percent
----> Running test with block Size=8192 (8KB) .
Proc - <-----Disk IO----> | <-----Throughput------> RunTime
Num - TOTAL IO/sec | MB/sec KB/sec Seconds
1 - 526398 2924.5 | 22.85 23395.65 180.00
On VIOC (Normal VSCSI):
# ./ndisk -f /dev/rhdisk10 -S -r80 -b 8k -t 180
Command: ./ndisk -f /dev/rhdisk10 -S -r80 -b 8k -t 180
Synchronous Disk test (regular read/write)
No. of processes = 1
I/O type = Sequential
Block size = 8192
Read-WriteRatio: 80:20 = read mostly
Sync type: none = just close the file
Number of files = 1
File size = 33554432 bytes = 32768 KB = 32 MB
Run time = 180 seconds
Snooze % = 0 percent
----> Running test with block Size=8192 (8KB) .
Proc - <-----Disk IO----> | <-----Throughput------> RunTime
Num - TOTAL IO/sec | MB/sec KB/sec Seconds
1 - 166707 926.1 | 7.24 7409.13 180.00
Next, ndisk is used to drive large sequential I/O against the disks to verify the throughput figures:
On VIOS(Physical FC):
# ./ndisk -f /dev/rhdisk5 -S -r80 -b 128k -t 180&
[1] 5898646
# Command: ./ndisk -f /dev/rhdisk5 -S -r80 -b 128k -t 180
Synchronous Disk test (regular read/write)
No. of processes = 1
I/O type = Sequential
Block size = 131072
Read-WriteRatio: 80:20 = read mostly
Sync type: none = just close the file
Number of files = 1
File size = 33554432 bytes = 32768 KB = 32 MB
Run time = 180 seconds
Snooze % = 0 percent
----> Running test with block Size=131072 (128KB) .
Proc - <-----Disk IO----> | <-----Throughput------> RunTime
Num - TOTAL IO/sec | MB/sec KB/sec Seconds
1 - 251012 1394.5 | 174.31 178497.56 180.00
On VIOC (NPIV):
# ./ndisk -f /dev/rhdisk4 -S -r80 -b 128k -t 180
Command: ./ndisk -f /dev/rhdisk4 -S -r80 -b 128k -t 180
Synchronous Disk test (regular read/write)
No. of processes = 1
I/O type = Sequential
Block size = 131072
Read-WriteRatio: 80:20 = read mostly
Sync type: none = just close the file
Number of files = 1
File size = 33554432 bytes = 32768 KB = 32 MB
Run time = 180 seconds
Snooze % = 0 percent
----> Running test with block Size=131072 (128KB) .
Proc - <-----Disk IO----> | <-----Throughput------> RunTime
Num - TOTAL IO/sec | MB/sec KB/sec Seconds
1 - 228308 1268.4 | 158.55 162360.27 179.99
On VIOC(SSP):
# ./ndisk -f /dev/rhdisk8 -S -r80 -b 128k -t 180&
[1] 7602292
# Command: ./ndisk -f /dev/rhdisk8 -S -r80 -b 128k -t 180
Synchronous Disk test (regular read/write)
No. of processes = 1
I/O type = Sequential
Block size = 131072
Read-WriteRatio: 80:20 = read mostly
Sync type: none = just close the file
Number of files = 1
File size = 33554432 bytes = 32768 KB = 32 MB
Run time = 180 seconds
Snooze % = 0 percent
----> Running test with block Size=131072 (128KB) .
Proc - <-----Disk IO----> | <-----Throughput------> RunTime
Num - TOTAL IO/sec | MB/sec KB/sec Seconds
1 - 213403 1185.6 | 148.20 151753.80 180.00
[1] + Done ./ndisk -f /dev/rhdisk8 -S -r80 -b 128k -t 180&
On VIOC(Normal VSCSI):
# ./ndisk -f /dev/rhdisk10 -S -r80 -b 128k -t 180
Command: ./ndisk -f /dev/rhdisk10 -S -r80 -b 128k -t 180
Synchronous Disk test (regular read/write)
No. of processes = 1
I/O type = Sequential
Block size = 131072
Read-WriteRatio: 80:20 = read mostly
Sync type: none = just close the file
Number of files = 1
File size = 33554432 bytes = 32768 KB = 32 MB
Run time = 180 seconds
Snooze % = 0 percent
----> Running test with block Size=131072 (128KB) .
Proc - <-----Disk IO----> | <-----Throughput------> RunTime
Num - TOTAL IO/sec | MB/sec KB/sec Seconds
1 - 72896 405.0 | 50.62 51834.47 180.01
In older versions of PowerVM, SSP saw limited use because of its many restrictions (for example, a pool could be managed by only one VIOS) and its awkward management. Starting with VIOS 2.2.2.0, SSP has improved greatly in both functionality and manageability, and in future customer solutions it is a very meaningful option for delivering highly available, manageable VSCSI in PowerVM environments.