[RAID & Disk Arrays] Choosing between IBM XIV and HDS USP VM

#1 | Posted 2011-05-31 10:47
This year our organization is planning a system upgrade. Since the existing minicomputers and storage are IBM, we called IBM in first. Picking the server model was easy; the trouble came with the storage, where the IBM sales rep keeps pushing the XIV. I am wary of a product with so few success stories in China, so I brought in HDS for comparison.
My concern with the IBM XIV: if two disks fail in different modules, the data on the whole frame is lost.
My concern with the HDS USP VM: from what I have read online recently, some users consider the USP's 256 KB cache segment size a performance bottleneck (other storage vendors are generally around 64 KB), and because of this some users felt the performance did not even match the IBM DS4000 series.
Our current configuration is a p6 570 with a DS4800. Given the XIV data-loss worry, I am inclined toward the USP VM: its virtualization could put the DS4800 back to work as a data mirror, one more layer of protection than using the XIV alone. But the customer reports I found online still give me pause. The requirement is an Oracle RAC cluster. I would appreciate the experts' advice.

#2 | Posted 2011-05-31 11:30
Selecting the way you do, you will never settle on anything; every product has flaws. Even if a perfect product showed up, you would surely think: why is it so expensive?

#3 | Posted 2011-05-31 11:41
If you think 256 KB is big, isn't the XIV's 1 MB an even bigger rip-off?

The XIV double-disk failure you are worried about will not happen; it can even lose two whole disk cages without a problem.

If what worries you is being locked to a single vendor, then use Veritas SF (Storage Foundation)...

And dear, there is also Fujitsu: buy Fujitsu, dear, and shipping, installation, and tuning are all included; for Fujitsu, come to me, dear...

#4 | Posted 2011-05-31 12:06
If you think 256 KB is big, isn't the XIV's 1 MB an even bigger rip-off?

The XIV double-disk failure you are worried about will not happen; it can even lose two whole disk cages ...
spook, posted 2011-05-31 11:41

The XIV's cache size is not that large; per the Redbook excerpt quoted in #8 below, the cache pages are 4 KB, and 1 MB is the partition size that prefetch can grow to.

#5 | Posted 2011-05-31 12:47
ORION VERSION 11.1.0.7.0

Commandline:
-run advanced -testname mytest -num_disks 6 -size_small 8 -size_large 8 -type rand

This maps to this test:
Test: mytest
Small IO size: 8 KB
Large IO size: 8 KB
IO Types: Small Random IOs, Large Random IOs
Simulated Array Type: CONCAT
Write: 0%
Cache Size: Not Entered
Duration for each Data Point: 60 seconds
Small Columns:,      0
Large Columns:,      0,      1,      2,      3,      4,      5,      6,      7,      8,      9,     10,     11,     12
Total Data Points: 43

Name: /dev/rhdisk2        Size: 206158430208
Name: /dev/rhdisk3        Size: 206158430208
Name: /dev/rhdisk4        Size: 206158430208
Name: /dev/rhdisk5        Size: 206158430208
Name: /dev/rhdisk6        Size: 206158430208
Name: /dev/rhdisk7        Size: 206158430208
6 FILEs found.

Maximum Large MBPS=94.58 @ Small=0 and Large=12
Maximum Small IOPS=18855 @ Small=30 and Large=0
Minimum Small Latency=0.60 @ Small=2 and Large=0
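
For anyone reproducing these runs: ORION takes its target devices from a file named after the -testname argument, so this test presumably reads a mytest.lun in the working directory, one raw device per line, matching the disks reported above:

/dev/rhdisk2
/dev/rhdisk3
/dev/rhdisk4
/dev/rhdisk5
/dev/rhdisk6
/dev/rhdisk7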

#6 | Posted 2011-05-31 12:48 | Last edited by spook on 2011-05-31 12:50

./orion_aix_ppc64 -run advanced -testname mytest -num_disks 6 -size_small 1024 -size_large 1024 -type seq

Small Columns:,      0
Large Columns:,      0,      1,      2,      3,      4,      5,      6,      7,      8,      9,     10,     11,     12
Total Data Points: 43

Name: /dev/rhdisk2        Size: 206158430208
Name: /dev/rhdisk3        Size: 206158430208
Name: /dev/rhdisk4        Size: 206158430208
Name: /dev/rhdisk5        Size: 206158430208
Name: /dev/rhdisk6        Size: 206158430208
Name: /dev/rhdisk7        Size: 206158430208
6 FILEs found.

Maximum Large MBPS=261.48 @ Small=0 and Large=11
Maximum Small IOPS=191 @ Small=25 and Large=0
Minimum Small Latency=11.56 @ Small=1 and Large=0

#7 | Posted 2011-05-31 12:49
Hope this helps.

#8 | Posted 2011-05-31 14:13
6.1.2 Caching mechanisms
The XIV Storage System's cache management is unique in that it disperses the cache into each module rather than using a central memory cache. The distributed cache enables each module to concurrently service host I/Os and cache-to-disk access, whereas a central memory caching algorithm must implement memory-locking algorithms that generate access contention.
To improve memory management, each Data Module uses a PCI Express (PCI-e) bus between the cache and the disk modules, which provides a sizable interconnect between the disks and the cache. This design allows large amounts of data to be transferred quickly between the disks and the cache over the bus.
Having a large bus "pipe" permits the XIV Storage System to use small cache pages. Moreover, a large bus "pipe" between the disk and the cache allows the system to perform many small requests in parallel, again improving performance.
A Least Recently Used (LRU) algorithm is the basis for the cache management algorithm. This allows the system to achieve a high hit ratio for frequently used data; in other words, the efficiency of the cache for small transfers is very high when the host is accessing the same data set.
The cache algorithm starts with a single 4 KB page and gradually increases the number of pages prefetched until an entire partition, 1 MB, is read into cache. If the access results in a cache hit, the algorithm doubles the amount of data prefetched.
The prefetching algorithm continues to double the prefetch size until a cache miss occurs, or the prefetch size maximum of 1 MB is reached. Because the modules are managed independently, if a prefetch crosses a module boundary, the logically adjacent module (for that volume) is notified so that it can begin pre-staging the data into its local cache.
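
As a rough illustrative sketch (not from the Redbook), the prefetch growth described above can be simulated in a few lines of shell: start at one 4 KB page, double on each consecutive cache hit, and cap at the 1 MB partition size.

# prefetch size after each consecutive cache hit, capped at 1 MB (1024 KB)
size=4
for hit in 1 2 3 4 5 6 7 8 9 10; do
    echo "hit $hit: prefetch ${size} KB"
    size=$((size * 2))
    [ "$size" -gt 1024 ] && size=1024
done
# prints 4, 8, 16, 32, 64, 128, 256, 512, 1024, 1024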

#9 | Posted 2011-05-31 14:15
6.2.2 Host configuration considerations
There are several key points when configuring the host for optimal performance. Because the XIV Storage System distributes the data across all the disks, an additional layer of volume management at the host, such as Logical Volume Manager (LVM) striping, might hinder performance for some workloads. Multiple levels of striping can create an imbalance across a specific resource. Therefore, it is best to disable host striping of data for XIV Storage System volumes and allow the XIV Storage System to manage the data.
Based on your host workload, you might need to modify the maximum transfer size that the host generates to the disk to obtain peak performance. For applications with large transfer sizes, if a smaller maximum host transfer size is selected, the transfers are broken up, causing multiple round trips between the host and the XIV Storage System. Making the host transfer size as large as or larger than the application transfer size means fewer round trips and improved performance. If the transfer is smaller than the maximum host transfer size, the host only transfers the amount of data that it has to send.
Due to the distributed data features of the XIV Storage System, high performance is achieved through parallelism. Specifically, the system maintains a high level of performance as the number of parallel transactions to the volumes increases. Ideally, the host workload is tailored to use multiple threads or to spread the work across multiple volumes.
Changing the queue depth
The XIV Storage architecture was designed to perform under real-world customer production workloads, with many I/O requests in flight at the same time. Queue depth is an important host bus adapter (HBA) setting because it essentially controls how much data is allowed to be "in flight" onto the SAN from the HBA. A queue depth of 1 requires that each I/O request be completed before another is started. A queue depth greater than 1 means that multiple host I/O requests can be waiting for responses from the storage system. So, the higher the host HBA queue depth, the more parallel I/O goes to the XIV Storage System.
The XIV Storage architecture eliminates the legacy storage concept of a large central cache.
Instead, each component in the XIV grid has its own dedicated cache. The XIV algorithms
that stage data between disk and cache work most efficiently when multiple I/O requests arrive in parallel; this is where the queue depth host parameter becomes an important factor in maximizing XIV Storage I/O performance.
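
On AIX, the platform used for the ORION runs earlier in this thread, queue depth is a per-disk device attribute. A minimal sketch of checking and changing it, with hdisk2 as a placeholder device name (verify the attribute for your particular disk driver before applying):

lsattr -El hdisk2 -a queue_depth        # show the current queue depth
chdev -l hdisk2 -a queue_depth=64 -P    # stage 64 for the next reboot
# without -P the change takes effect immediately, but the disk must not be in use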
Sample queue depth comparison
Figure 6-1 shows a queue depth comparison for a database I/O workload (70 percent reads, 30 percent writes, 8 KB block size, DBO = Database Open).
Note that the performance numbers in this example are valid only for this particular test in an IBM lab. The numbers do not describe the general capabilities of the IBM XIV Storage System as you might observe them in your environment.
[Figure 6-1: Host-side queue depth comparison; chart not reproduced]
A good practice is to start with a queue depth of 64 per HBA to exploit the XIV's parallel architecture.
Nevertheless, the initial queue depth value might need to be adjusted over time. While a higher queue depth generally yields better performance with XIV, one must consider the per-port limitations on the XIV side. Each HBA port on the XIV Interface Module is designed and set to sustain up to 1400 concurrent I/Os (except for port 3 when port 4 is defined as an initiator, in which case port 3 is set to sustain up to 1000 concurrent I/Os). With a queue depth of 64 per host port as suggested, one XIV port is limited to 21 concurrent host ports, assuming each host port fills its entire 64-deep queue on each request.
If, in a very large environment, 21 host ports per XIV port is not sufficient, lower queue depth values have to be configured. This method can also be used as a "poor man's" Quality of Service (QoS) mechanism.
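
The 21-port figure follows directly from the per-port limits quoted above; a quick sanity check in shell:

echo "$((1400 / 64)) host ports per standard XIV port"        # 21
echo "$((1000 / 64)) host ports on a 1000-I/O-limited port"   # 15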

#10 | Posted 2011-05-31 16:42
My advice is to buy everything from one vendor, either all IBM or all HP; otherwise, just wait for the headaches when something breaks. Don't expect one vendor to design its products for full compatibility with everyone else's; no standard is detailed enough for that. And don't worry about being tied to a single vendor; fighting monopolies is not your company's responsibility. Save yourself the trouble.