I recently ran a set of tests that may be useful to everyone (test data)

#1 · Posted 2009-08-12 13:56
OS: AIX 5.3 (oslevel 530902)
Array: DS4800-82A, initial configuration
datavg1 contains five 100 GB PVs (hdisk2, hdisk3, hdisk4, hdisk5, hdisk10), spread across the DS4800's controller A and controller B; PP size is 128 MB.
datavg2 contains four 100 GB PVs (hdisk6, hdisk7, hdisk8, hdisk9), likewise spread across controller A and controller B; PP size is 128 MB.
LVs data1 and data2 were created with a 32 KB stripe, with file systems /data1 and /data2 mounted with the default rw options (options=rw in /etc/filesystems).
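The layout above can be sketched with standard AIX LVM commands. The exact commands were not posted, so the PP count and the flag values below are illustrative assumptions, not the original setup:

```shell
# Sketch only -- the original commands were not posted. Assumes AIX 5.3,
# the hdisk names above, and a 100 GB LV (800 PPs at 128 MB each).
mkvg -y datavg1 -s 128 hdisk2 hdisk3 hdisk4 hdisk5 hdisk10  # PP size 128 MB
mklv -y data1 -t jfs2 -S 32K -u 5 datavg1 800               # 32 KB strip across 5 PVs
crfs -v jfs2 -d data1 -m /data1 -A yes                      # default rw options
mount /data1
```

These are AIX-only administration commands and will not run outside an AIX host with the disks attached.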


| Config | Operation | if= | of= | Avg throughput |
|---|---|---|---|---|
| no stripe | write, 2 streams | /dev/zero | /data1/test-20g.dd, /data1/test-40g.dd | 232 MB/s |
| no stripe | read, 2 streams | /data1/test-40g.dd, /data1/test-20g.dd | /dev/null | 207 MB/s |
| stripe | write ×1 | /dev/zero | /data1/test-20g1.dd | 202 MB/s |
| stripe | write ×2 | /dev/zero | /data1/test-20g1.dd, /data1/test-20g2.dd | 265 MB/s |
| stripe | write ×3 | /dev/zero | /data1/test-20g1.dd, /data1/test-20g2.dd, /data1/test-20g3.dd | 280 MB/s |
| stripe | read ×1 | /data1/test-20g1.dd | /dev/null | 34 MB/s |
| stripe | read ×2 | /data1/test-20g1.dd, /data1/test-20g2.dd | /dev/null | 120 MB/s |
| stripe | read ×3 | /data1/test-20g1.dd, /data1/test-20g2.dd, /data1/test-20g3.dd | /dev/null | 228 MB/s |
| stripe, fs=cio,rbrw | write ×1 | /dev/zero | /data1/test-20g1.dd | 187 MB/s |
| stripe, fs=cio,rbrw | write ×2 | /dev/zero | /data1/test-20g1.dd, /data1/test-20g2.dd | 214 MB/s |
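The "write ×N" rows above were produced by running N concurrent dd streams. A minimal sketch of the method (file sizes shrunk from 20 GB to 8 MB and paths moved to /tmp so it can run anywhere; the real test wrote 20 GB files to /data1):

```shell
# Parallel-write sketch: N concurrent dd streams, as in the "write xN" rows.
# Sizes and paths are stand-ins; the original wrote 20 GB files to /data1.
for i in 1 2 3; do
  dd if=/dev/zero of=/tmp/test-20g$i.dd bs=1048576 count=8 2>/dev/null &
done
wait
# Aggregate throughput = total bytes written / wall-clock time for all streams.
ls -l /tmp/test-20g1.dd /tmp/test-20g2.dd /tmp/test-20g3.dd
```

The read rows reverse the direction: if= the test files, of=/dev/null.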

#2 · Posted 2009-08-12 14:00
 
Columns as in the DS4800 performance-monitor output: Total IOs, Read percentage, Cache hit percentage, Maximum KB/second, Maximum IO/second.

| Test | Device | Total IOs | Read % | Cache hit % | Max KB/s | Max IO/s |
|---|---|---|---|---|---|---|
| disktest-strip-6 (dd 40G St=128K wr bs=128k) | CONTROLLER IN SLOT A | 192154 | 0 | 0 | 90112 | 599.6 |
| | CONTROLLER IN SLOT B | 191387 | 0 | 0 | 104832 | 658.5 |
| | Logical Drive lun0 | 96078 | 0 | 0 | 45056 | 300.2 |
| | Logical Drive lun1 | 95653 | 0 | 0 | 52416 | 320.5 |
| | Logical Drive lun2 | 96056 | 0 | 0 | 45056 | 299 |
| | Logical Drive lun3 | 95723 | 0 | 0 | 52416 | 337.8 |
| | STORAGE SUBSYSTEM TOTALS | 383541 | 0 | 0 | 185932.8 | 1258.1 |
| disktest-strip-7 (dd×2 40G 128K wr blocksize=128k) | CONTROLLER IN SLOT A | 278991 | 0 | 0 | 93798.4 | 732.8 |
| | Logical Drive lun0 | 140440 | 0 | 0 | 46899.2 | 382.8 |
| | Logical Drive lun2 | 138531 | 0 | 0 | 46899.2 | 366.4 |
| | CONTROLLER IN SLOT B | 277563 | 0 | 0 | 91008 | 716.2 |
| | Logical Drive lun1 | 138658 | 0 | 0 | 45516.8 | 356.6 |
| | Logical Drive lun3 | 138895 | 0 | 0 | 45491.2 | 369 |
| | STORAGE SUBSYSTEM TOTALS | 556554 | 0 | 0 | 183833.6 | 1436.2 |
| disktest-strip-9 (filecopy strip=32K) | CONTROLLER IN SLOT A | 8718934 | 3 | 30.6 | 77597.6 | 6841.2 |
| | Logical Drive Lun18 | 261853 | 100 | 30.6 | 51609.6 | 364.8 |
| | CONTROLLER IN SLOT B | 8430982 | 0 | 62.3 | 32167 | 8041.8 |
| | Logical Drive lun3 | 1592379 | 0 | 62.5 | 4414 | 1103.5 |
| | Logical Drive lun1 | 1592385 | 0 | 61.9 | 4414 | 1103.5 |
| | Logical Drive lun0 | 1592672 | 0 | 27.6 | 4280 | 1070 |
| | Logical Drive lun2 | 1592714 | 0 | 25.9 | 4282 | 1070.5 |
| | Logical Drive lun4 | 2648083 | 0 | 20.2 | 10418.4 | 2604.6 |
| | Logical Drive lun7 | 2623056 | 0 | 63.7 | 12749 | 3187.2 |
| | Logical Drive lun5 | 2623127 | 0 | 60.9 | 12764 | 3191 |
| | Logical Drive lun6 | 2623612 | 0 | 19 | 10283.2 | 2570.8 |
| | STORAGE SUBSYSTEM TOTALS | 17149916 | 1.5 | 30.7 | 109102.8 | 14783.4 |

#3 · Posted 2009-08-12 13:59
Columns: Total IOs, Read percentage, Cache hit percentage, Maximum KB/second, Maximum IO/second.

| Test | Device | Total IOs | Read % | Cache hit % | Max KB/s | Max IO/s |
|---|---|---|---|---|---|---|
| disktest-nostrip-1 | CONTROLLER IN SLOT A | 0 | 0 | 0 | 0 | 0 |
| | CONTROLLER IN SLOT B | 22344 | 2.8 | 32.1 | 234320 | 532.4 |
| | Logical Drive Lun19 | 2388 | 26 | 32.1 | 1541.6 | 358.6 |
| | Logical Drive lun10 | 19954 | 0 | 0 | 234291.2 | 231.6 |
| | Logical Drive lun5 | 0 | 0 | 0 | 0 | 0 |
| | STORAGE SUBSYSTEM TOTALS | 22409 | 2.8 | 32.1 | 234320 | 544.8 |
| disktest-nostrip-2 (copyfile) | CONTROLLER IN SLOT A | 1747917 | 11.7 | 99.9 | 330976.8 | 2320.2 |
| | Logical Drive Lun18 | 1542915 | 0 | 0 | 231094.4 | 2320.2 |
| | CONTROLLER IN SLOT B | 360157 | 100 | 99.3 | 313353 | 615.8 |
| | Logical Drive lun1 | 207047 | 100 | 99.1 | 195840 | 553.8 |
| | Logical Drive lun5 | 115109 | 100 | 99.4 | 186892 | 368 |
| | Logical Drive lun2 | 204951 | 100 | 99.9 | 152473.6 | 297.8 |
| | Logical Drive lun3 | 37976 | 100 | 100 | 155545.6 | 304.4 |
| | STORAGE SUBSYSTEM TOTALS | 2108074 | 26.8 | 99.5 | 483667.4 | 2804.2 |
| disktest-nostrip-2 (application load) | Logical Drive lun5 | 570399 | 100 | 39 | 126988 | 1530.3 |
| | Logical Drive lun4 | 24 | 0 | 0 | 2.7 | 0.7 |
| | Logical Drive lun3 | 63797 | 95.5 | 10.3 | 4282.7 | 301.7 |
| | Logical Drive lun2 | 20824 | 85.9 | 2.9 | 7296 | 548 |
| | Logical Drive lun1 | 243330 | 99.8 | 99 | 193536 | 584.3 |
| | Logical Drive Lun18 | 715166 | 0 | 16.9 | 140586.7 | 2693.7 |
| | CONTROLLER IN SLOT B | 877548 | 99.6 | 53.6 | 195984 | 1530.3 |
| | CONTROLLER IN SLOT A | 736046 | 2.4 | 3 | 145202.7 | 2755.3 |
| | STORAGE SUBSYSTEM TOTALS | 1613594 | 55.3 | 52.6 | 330128 | 3981.7 |
| disktest-nostrip-3 (dd×2 BS=1M) | CONTROLLER IN SLOT A | 25469 | 0 | 0 | 229990.4 | 585.2 |
| | Logical Drive exlun1 | 25430 | 0 | 0 | 229990.4 | 585.2 |
| | CONTROLLER IN SLOT B | 8074 | 0 | 0 | 114685.6 | 273.6 |
| | Logical Drive lun8 | 8067 | 0 | 0 | 114682.4 | 273.6 |
| | STORAGE SUBSYSTEM TOTALS | 33543 | 0 | 0 | 259686.4 | 697.2 |
| disktest-strip-1 (dd 40G 128K wr).perf | CONTROLLER IN SLOT B | 83028 | 0.8 | 90.7 | 107270 | 420.5 |
| | CONTROLLER IN SLOT A | 83056 | 0.8 | 88.9 | 89609.6 | 352.4 |
| | Logical Drive lun3 | 41516 | 0.8 | 90.3 | 53638 | 211 |
| | Logical Drive lun1 | 41505 | 0.8 | 91.3 | 53632 | 209.8 |
| | Logical Drive lun2 | 41505 | 0.8 | 90.3 | 44809.6 | 177.4 |
| | Logical Drive lun0 | 41538 | 0.8 | 87.7 | 44800 | 176.2 |
| | STORAGE SUBSYSTEM TOTALS | 166084 | 0.8 | 89.8 | 190845.2 | 751.1 |
| disktest-strip-2 (dd 40G 32K wr) | CONTROLLER IN SLOT A | 82582 | 0.8 | 98.3 | 88588.8 | 349.2 |
| | Logical Drive lun0 | 41288 | 0.8 | 99.1 | 44294.4 | 174.6 |
| | Logical Drive lun2 | 41284 | 0.8 | 97.5 | 44294.4 | 174.6 |
| | CONTROLLER IN SLOT B | 82566 | 0.8 | 69.8 | 90325.6 | 355 |
| | Logical Drive lun1 | 41280 | 0.8 | 99.1 | 45164.8 | 178 |
| | Logical Drive lun3 | 41281 | 0.8 | 40.6 | 45160.8 | 177 |
| | STORAGE SUBSYSTEM TOTALS | 165148 | 0.8 | 84.1 | 173793.6 | 684 |
| disktest-strip-3 (dd 40G 32K re) | CONTROLLER IN SLOT A | 81926 | 100 | 100 | 261939.2 | 1023.2 |
| | Logical Drive lun0 | 40962 | 100 | 100 | 130969.6 | 511.6 |
| | Logical Drive lun2 | 40960 | 100 | 100 | 130969.6 | 511.6 |
| | CONTROLLER IN SLOT B | 81923 | 100 | 100 | 265728 | 1038 |
| | Logical Drive lun1 | 40960 | 100 | 100 | 132864 | 519 |
| | Logical Drive lun3 | 40961 | 100 | 100 | 132864 | 519 |
| | STORAGE SUBSYSTEM TOTALS | 163849 | 100 | 100 | 525004.8 | 2050.8 |
| disktest-strip-4 (dd 40G 128K re) | CONTROLLER IN SLOT A | 81930 | 100 | 100 | 264704 | 1034 |
| | CONTROLLER IN SLOT B | 81924 | 100 | 100 | 317952 | 1242 |
| | Logical Drive lun0 | 40963 | 100 | 100 | 132352 | 517 |
| | Logical Drive lun2 | 40960 | 100 | 100 | 132352 | 517 |
| | Logical Drive lun1 | 40960 | 100 | 100 | 158976 | 621 |
| | Logical Drive lun3 | 40961 | 100 | 100 | 158976 | 621 |
| | STORAGE SUBSYSTEM TOTALS | 163854 | 100 | 100 | 577945.6 | 2257.6 |
| disktest-strip-5 (dd 40G 128K wr) | CONTROLLER IN SLOT A | 69261 | 0.8 | 89.1 | 89712 | 353.4 |
| | Logical Drive lun0 | 34619 | 0.8 | 87.2 | 44851.2 | 180 |
| | Logical Drive lun2 | 34633 | 0.8 | 91.2 | 44860.8 | 177.6 |
| | CONTROLLER IN SLOT B | 69159 | 0.8 | 89.9 | 91092.8 | 358.2 |
| | Logical Drive lun1 | 34565 | 0.7 | 90.3 | 45516.8 | 181.8 |
| | Logical Drive lun3 | 34589 | 0.8 | 90 | 45576 | 181.6 |
| | STORAGE SUBSYSTEM TOTALS | 138420 | 0.8 | 89.5 | 178403.2 | 702.4 |
| disktest-strip-8 (dd×3 40G strip=128K wr bs=1m) | CONTROLLER IN SLOT A | 175544 | 0 | 0 | 163840 | 640 |
| | CONTROLLER IN SLOT B | 175107 | 0 | 0 | 165376 | 646 |
| | Logical Drive lun0 | 87851 | 0 | 0 | 81920 | 320 |
| | Logical Drive lun2 | 87678 | 0 | 0 | 81920 | 320 |
| | Logical Drive lun1 | 87484 | 0 | 0 | 82688 | 323 |
| | Logical Drive lun3 | 87616 | 0 | 0 | 82688 | 323 |
| | STORAGE SUBSYSTEM TOTALS | 350651 | 0 | 0 | 329216 | 1286 |

#4 · Posted 2009-08-12 14:19

Another test, on a Windows system: EXP810, 14 HDDs, RAID 5, 2 hot spares.

| No. | File type / cache configuration | Cache hit % | Current KPS | Current IOPS | Max KPS | Max IOPS | Elapsed time |
|---|---|---|---|---|---|---|---|
| 1 | Large-file read (52.2 GB) | | | | | | |
| 1.1 | Default configuration | 100% | 234,593 | 3,718 | | 3,756 | 4' |
| 1.2 | All caching disabled | 0 | | | | | 8' |
| 1.3 | Only "Enable Read Caching" | 100% | 126,515 | 1,976 | 137,817 | 2,153 | 8' |
| 1.4 | Only "Enable dynamic cache read prefetch" | 100% | 130,012 | 2,032 | 132,288 | 2,067 | 8' |
| 1.5 | "Enable Read Caching" + "Enable dynamic cache read prefetch" | 100% | 230,144 | 3,596 | 237,593 | 3,712 | 4' |
| 2 | Large-file write (52.2 GB) | | | | | | |
| 2.1 | Default configuration | 100% | 152,794 | 2,387 | 176,973 | 2,756 | 7' |
| 2.2 | All caching disabled | 0% | 30,619 | 478 | 33,996 | 531 | 170' |
| 2.3 | Only "Enable write Caching" | 0% | 157,301 | 2,458 | 161,064 | 2,517 | 7' |
| 2.4 | "Enable write Caching" + "Enable write caching with mirroring" | 0% | 137,372 | 2,147 | 143,080 | 2,236 | 6' |
| 2.5 | "Enable write Caching" + "Enable write caching without batteries" | 0% | 154,690 | 2,417 | 177,166 | 2,768 | 6' |
| 2.6 | "Enable write Caching" + "Enable write caching with mirroring" + "Enable write caching without batteries" | 0% | 147,276 | 2,301 | 164,352 | 2,568 | |


[ Last edited by spook at 2009-8-12 15:01 ]

#5 · Posted 2009-08-12 15:02
Same columns as the table above.

| No. | File type / cache configuration | Cache hit % | Current KPS | Current IOPS | Max KPS | Max IOPS | Elapsed time |
|---|---|---|---|---|---|---|---|
| 3 | Small-file read (8.71 GB; 1019 directories, 13532 files) | | | | | | |
| 3.1 | Default configuration | 98.6 | 53,972.2 | 987.6 | 59,782.2 | 1,077.4 | 3' |
| 3.2 | All caching disabled | 0 | 49,333.6 | 851.2 | 49,333.6 | 890.2 | 4' |
| 3.3 | Only "Enable Read Caching" | 0 | 23,063.6 | 590.4 | 30,776.8 | 590.4 | 7' |
| 3.4 | Only "Enable dynamic cache read prefetch" | 0 | 31,553.4 | 592 | 42,297.2 | 842.6 | 3'~4' |
| 3.5 | "Enable Read Caching" + "Enable dynamic cache read prefetch" | 98.3 | 56,257 | 1,068.2 | 66,808 | 1,176.6 | 3' |
| 4 | Small-file write | | | | | | |
| 4.1 | Default configuration | 99.9 | 33,683.4 | 593.8 | 48,261.2 | 851.8 | 4'30 |
| 4.2 | All caching disabled | 0 | 33,535.8 | 567 | 34,486.6 | 585.4 | 6' |
| 4.3 | Only "Enable write Caching" | 0 | 50,520.4 | 889.6 | 54,627.6 | 955.4 | 4' |
| 4.4 | "Enable write Caching" + "Enable write caching with mirroring" | 0 | 54,982.6 | 1,028.8 | 54,982.6 | 1,028.8 | 4' |
| 4.5 | "Enable write Caching" + "Enable write caching without batteries" | 0 | 51,239.8 | 880.6 | 54,233.2 | 957 | 4' |
| 4.6 | "Enable write Caching" + "Enable write caching with mirroring" + "Enable write caching without batteries" | 0 | 33,820.8 | 624.8 | 48,262.8 | 854.2 | 6' |
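The cache options toggled in these runs are per-logical-drive settings in DS4000 Storage Manager; they can also be scripted through SMcli. The parameter names below are recalled from the Storage Manager script reference, not taken from this thread, and should be verified against the firmware's command documentation before use:

```shell
# Assumed sketch: enable read caching plus dynamic prefetch on one LUN via
# SMcli. The array address, LUN name, and parameter names are assumptions.
SMcli 192.168.1.10 -c 'set logicalDrive ["lun0"] readCacheEnabled=TRUE cacheReadPrefetch=TRUE;'
```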

#6 · Posted 2009-08-12 15:03

Conclusions

1. A DS4800 with 14 × 300 GB FC disks in RAID 10 delivers roughly 200 MB/s as a reference figure; the file system has a large impact on disk performance.
2. Newer RDAC versions support multipath parallelism.
3. Under JFS2, disk writes are faster than reads; under NTFS, reads are faster than writes.
4. On AIX, striping clearly increases disk IOPS.
5. On AIX, striping lowers single-stream read performance, but throughput scales almost proportionally as the number of concurrent tasks grows.
6. File-system data gets broken up and reassembled in the array cache.
7. Under NTFS, the type of files involved strongly affects how the array behaves.

Any comments on these conclusions, or other conclusions worth discussing?
Please don't suggest re-running the tests; the environment is gone.
Discussion welcome!

[ Last edited by 无牙 at 2009-8-12 16:26 ]

#7 · Posted 2009-08-12 17:03
EXP810-2 (slots 1–16):
- slot 1 in both enclosures: hot spare
- slots 2–7 in both enclosures: RAID 0+1

EXP810-1 (slots 1–8 shown):
- slots 9 and 10 in both enclosures: RAID 5
- slots 11–16: empty slots

[ Last edited by spook at 2009-8-12 17:11 ]

#8 · Posted 2009-08-12 17:08
| RAID group | LUN name | Size | Mapped to | Controller | VG | Mount point |
|---|---|---|---|---|---|---|
| RAID 0+1 (1673G) | cache-vg1-1 | 100G | p6 570-1 | Ctrl-A | datavg1 | /data1 |
| | cache-vg1-2 | 100G | p6 570-1 | Ctrl-B | datavg1 | /data1 |
| | cache-vg1-3 | 100G | p6 570-1 | Ctrl-A | datavg1 | /data1 |
| | cache-vg1-4 | 100G | p6 570-1 | Ctrl-B | datavg1 | /data1 |
| | cache-vg2-1 | 100G | p6 570-1 | Ctrl-A | datavg2 | /data2 |
| | cache-vg2-2 | 100G | p6 570-1 | Ctrl-B | datavg2 | /data2 |
| | cache-vg2-3 | 100G | p6 570-1 | Ctrl-A | datavg2 | /data2 |
| | cache-vg2-4 | 100G | p6 570-1 | Ctrl-B | datavg2 | /data2 |
| | cache-vg1-5 | 100G | p6 570-1 | Ctrl-A | datavg1 | /data1 |
| | Shadow-vg1-1 | 100G | p6 570-2 (reserved) | Ctrl-A | datavg1 | /data1 |
| | Shadow-vg1-2 | 100G | | Ctrl-B | datavg1 | /data1 |
| | Shadow-vg1-3 | 100G | | Ctrl-A | datavg1 | /data1 |
| | Shadow-vg1-4 | 100G | | Ctrl-B | datavg1 | /data1 |
| | Shadow-vg1-5 | 100G | | Ctrl-A | datavg2 | /data2 |
| | shadow-vg2-1 | 100G | | Ctrl-B | datavg2 | /data2 |
| | shadow-vg2-2 | 100G | | Ctrl-A | datavg2 | /data2 |
| | shadow-vg2-3 | 100G | | Ctrl-B | datavg2 | /data2 |
| | shadow-vg2-4 | 100G | | | | |
| | (unallocated) | 178G | | | | |
| RAID 5 (8340G) | Cache-bakvg | 600G | p6 570-1 | Ctrl-A | bakvg | /databak |
| | Cache-jfvg | 210G | p6 570-1 | Ctrl-B | jfvg | /jfile |
| | (unallocated) | 0G | | | | |

Notes on 570-1: 12 LUNs are planned, Lun0–7 at 100 GB each and Lun8–11 at 50 GB each. Lun0/2/4/6 belong to Ctrl-A; Lun1/3/5/7 belong to Ctrl-B. Lun0–3 go to datavg1; Lun4–7 go to datavg2; Lun8–11 are reserved. datavg1 usable space 400 GB, datavg2 usable space 400 GB, 50 GB unallocated.

Notes on 570-2: 12 LUNs are planned, Lun12–19 at 100 GB each and Lun20–23 at 50 GB each. Lun12/14/16/18 belong to Ctrl-A; Lun13/15/17/19 belong to Ctrl-B. Lun12–15 go to datavg1; Lun16–19 go to datavg2; Lun20–23 are reserved. datavg1 usable space 400 GB, datavg2 usable space 400 GB.

Cache-bakvg stores backup files (256K block size); Cache-jfvg stores system log files (64K block size).
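The mapping above can be cross-checked from the AIX side. A sketch using standard AIX 5.3 admin commands (the hdisk numbers are whatever the host assigned; names follow the first post):

```shell
# Cross-check the LUN mapping from the AIX host (AIX-only commands).
lsdev -Cc disk      # hdisks presented by the DS4800
lsvg -p datavg1     # which PVs (LUNs) back datavg1
lslv data1          # stripe size and PV spread of the data1 LV
lsattr -El hdisk2   # per-disk attributes (size, reserve policy, etc.)
```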

#9 · Posted 2009-08-12 17:17
Originally posted by mike79 at 2009-8-12 16:13:

> 1. Fourteen disks in RAID 10 can certainly sustain more than 200 MB/s.
> 3. JFS2 writes faster than it reads: did you account for file-system caching? Was DIO used?
> 4 and 5: choosing a suitable stripe size can clearly improve performance, and stripe size should be considered together with the LTG.

The peak did reach 300 MB/s or more, but my intent was a reference figure for applications.

On JFS2 writing faster than reading: the cache certainly has an effect, but 40 GB of data was used precisely so the system file cache would be exhausted, and the figures are averages.

Two stripe sizes were tested, 128 KB and 32 KB; the file-system timing runs were done at 32 KB.

The DS4800 monitor output covers the 128 KB runs.

Under AIX, all file reads and writes were done with dd, except where a file copy is explicitly noted…
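On the DIO question: the file-cache effect can also be removed at mount time on AIX. A minimal sketch using standard JFS2 mount options (the cio,rbrw variant matches the fs=cio,rbrw runs in the first post):

```shell
# Remount /data1 around the JFS2 file cache -- AIX-only mount options.
umount /data1
mount -o dio /data1       # direct I/O: bypass the file cache entirely
# or, as in the cio,rbrw runs:
umount /data1
mount -o cio,rbrw /data1  # concurrent I/O plus release-behind on read and write
```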



[ Last edited by spook at 2009-8-12 17:23 ]

#10 · Posted 2009-08-12 19:24
To make browsing easier, all the data has been gathered into this one thread; comments are in a separate thread:

http://bbs2.chinaunix.net/thread-1538454-1-1.html

[ Last edited by 无牙 at 2009-8-12 19:26 ]