XIV - Storage Presentation

Management and Use
The second part of our XIV tour outlines how the XIV's storage capacity is organized, presented, managed and used. It follows on from the initial XIV system introduction here.
XIV Storage Presentation
All disk capacity is placed in a storage pool. There is, at a minimum, a default pool, and others can be created to isolate capacity, though not for performance reasons. The minimum pool size is 17GB (decimal; 16GB binary) and the maximum is the entire system. Pools can be dynamically resized by sysadmins using the CLI or GUI management software interface; such resizing is limited by the amount of free and consumed space.
No individual drives or groups of drives can be allocated to a pool; a pool is just a logical construct. As new pools are created, the default pool shrinks accordingly.
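To make that pool bookkeeping concrete, here is a minimal sketch in Python, assuming hypothetical names and a simplified capacity model rather than the actual XIV management interface: creating a pool carves logical capacity out of the default pool, and growing a pool is limited by free capacity.

    # Illustrative sketch only: a simplified model of the pool bookkeeping
    # described above, not the real XIV CLI or GUI. All names are hypothetical.
    POOL_INCREMENT_GB = 17  # minimum pool size (decimal GB)

    class XIVCapacity:
        def __init__(self, total_gb):
            # All capacity starts out in the default pool.
            self.pools = {"default": total_gb}

        def create_pool(self, name, size_gb):
            # Pools are purely logical; no specific drives are assigned to them.
            size_gb = max(size_gb, POOL_INCREMENT_GB)
            if size_gb > self.pools["default"]:
                raise ValueError("not enough free capacity in the default pool")
            self.pools["default"] -= size_gb  # the default pool shrinks as pools are created
            self.pools[name] = size_gb

        def resize_pool(self, name, new_size_gb):
            # Growing is limited by free capacity; consumed-space checks are omitted here.
            delta = new_size_gb - self.pools[name]
            if delta > self.pools["default"]:
                raise ValueError("not enough free capacity to grow the pool")
            self.pools["default"] -= delta
            self.pools[name] = new_size_gb

    system = XIVCapacity(total_gb=79_000)    # an assumed usable capacity figure
    system.create_pool("oracle_pool", 1_700)
    system.resize_pool("oracle_pool", 3_400)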
Host server applications are assigned volumes, or logical units, which are carved out of a pool. The virtualization stack runs from LUN (volume) to pool to physical disks, without the pools actually owning specific physical disks. The actual disks are owned in a global sense by the XIV Manager software and in a specific sense by individual data modules (DMs).
LUNs are allocated in GB or blocks, in 17GB (decimal) increments; smaller requested sizes are rounded up. Volume size can be changed dynamically.
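The rounding rule is simple arithmetic; a quick illustrative sketch (the function name is made up):

    # Illustrative only: the 17GB (decimal) allocation increment with round-up.
    ALLOCATION_UNIT_GB = 17

    def allocated_size_gb(requested_gb):
        """Round a requested volume size up to the next 17GB increment."""
        units = -(-requested_gb // ALLOCATION_UNIT_GB)  # ceiling division
        return units * ALLOCATION_UNIT_GB

    print(allocated_size_gb(10))   # 17  -- smaller sizes are rounded up
    print(allocated_size_gb(50))   # 51
    print(allocated_size_gb(170))  # 170 -- already a multiple of 17GB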
Host volumes are mapped to LUNs in LUN maps, which are similar to a DS8000's Volume Groups. Up to 1,000 hosts can be connected to a single XIV and hence have LUN maps.
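Conceptually, a LUN map is just a per-host table that translates LUN IDs into XIV volumes. The sketch below is a hypothetical illustration of that idea, not the actual XIV data model:

    # Hypothetical illustration of a LUN map: each host gets its own table of
    # LUN IDs to XIV volumes, similar in spirit to a DS8000 Volume Group.
    lun_maps = {
        "dbserver01": {0: "oracle_data_vol", 1: "oracle_logs_vol"},
        "appserver02": {0: "app_vol"},
    }

    def volume_for(host, lun_id):
        """Resolve the volume a given host addresses at a given LUN ID."""
        return lun_maps[host].get(lun_id)

    print(volume_for("dbserver01", 1))  # oracle_logs_vol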
There is no way to attach specific LUNs to specific data modules or to direct
I/Os to specific data modules.
A volume can be moved between pools. Any snapshot targets and point-in-time
copies reside in the same pool as their source. All volumes in a consistency
group will reside in the same pool.
A volume is divided into 1MB partitions, also called chunks or segments. Each partition is mirrored to two different data modules. The 1MB size was chosen as large enough to fetch a good amount of data from a drive in a single I/O, yet small enough to facilitate the random distribution that avoids creating I/O hot spots. A 1MB chunk can also be loaded into a DM's cache if it is accessed often enough, which helps increase the cache hit ratio.
The IM distribution map knows where each LUN's many 1MB partitions are; that is how read requests are directed to the right DM. This level of granularity is, IBM claims, unique, although Compellent's Dynamic Block Architecture is another highly granular mapping system.
A distribution algorithm in the IM Manager software automatically distributes partitions across all disks in the system so that all spindles are utilized, there are no I/O access hot spots, and no optimization software is needed. The distribution algorithm is a key piece of intellectual property for the XIV system.
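IBM does not publish that algorithm, so the following is only a toy sketch of the general idea: every 1MB partition of a volume gets a primary and a mirror location on two different data modules, and a distribution map lets a read be steered to the module that owns the partition. The names and the random placement policy here are assumptions made for illustration.

    # Toy sketch of the distribution idea, not IBM's proprietary algorithm.
    import random

    DATA_MODULES = [f"DM{n}" for n in range(1, 16)]  # an assumed count of data modules
    PARTITION_MB = 1

    def build_distribution_map(volume_size_mb, seed=42):
        """Give every 1MB partition a primary and a mirror on two different DMs."""
        rng = random.Random(seed)
        dist_map = {}
        for part in range(volume_size_mb // PARTITION_MB):
            primary, mirror = rng.sample(DATA_MODULES, 2)  # always two distinct modules
            dist_map[part] = (primary, mirror)
        return dist_map

    dist_map = build_distribution_map(volume_size_mb=17_000)  # a 17GB volume

    def module_for_read(partition):
        """Reads are steered to the module holding the primary copy."""
        return dist_map[partition][0]

    print(module_for_read(0))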
There is no short-stroking here, nor positioning of tier 1 data closer to the read heads as in the Pillar Axiom implementation. Sysadmins have no control over how data is actually laid out in an XIV system, which simplifies their job because that aspect is simply removed; there is no need to tune the system continually.
If more drives are added to an XIV, they are added to the base default pool, and all existing data in the XIV is dynamically extended onto the new drives by being striped across them as well. This rebalances the I/O workload across the increased number of drives in the XIV, and it is done in the background without any host involvement. The distribution algorithm IP includes this rebalancing capability.
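Again as a toy illustration rather than the real algorithm, the sketch below shows the rebalancing idea: when a module is added, just enough partitions migrate onto it so that every module carries roughly the same share, and nothing else has to move.

    # Toy sketch of rebalancing when a new module is added; not the real algorithm.
    import random
    from collections import Counter

    def rebalance(dist_map, modules, new_module):
        """Migrate just enough primary copies onto the new module so that every
        module ends up holding roughly the same number of partitions."""
        target = len(dist_map) // (len(modules) + 1)   # desired share per module
        load = Counter(primary for primary, _ in dist_map.values())
        moved = 0
        for part, (primary, mirror) in dist_map.items():
            if load[new_module] >= target:
                break
            if load[primary] > target and mirror != new_module:
                dist_map[part] = (new_module, mirror)  # background copy, no host involvement
                load[primary] -= 1
                load[new_module] += 1
                moved += 1
        return moved

    # Build a toy distribution map over 15 modules, then add a 16th.
    rng = random.Random(0)
    modules = [f"DM{n}" for n in range(1, 16)]
    dist_map = {p: tuple(rng.sample(modules, 2)) for p in range(17_000)}
    print(rebalance(dist_map, modules, "DM16"), "partitions migrated to DM16")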
Thin Provisioning
The XIV system implements thin provisioning on a per-storage-pool basis, not system-wide. A storage pool has a hard (physical) provisioning limit and a soft one.
The hard limit is bounded by what is physically available. The soft limit is the amount of space that can be allocated logically; it caps the total size of all volumes. Storage pools are independent of each other, so thin provisioning problems in one pool cannot affect critical applications in another.
Each storage pool can have its own hard and soft capacity limits defined. A pool's hard limit is the total physical capacity that can be used; its soft limit is the total size of all the volumes in that pool. Different pools can be fully or thinly provisioned.
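To make the hard/soft distinction concrete, here is a minimal sketch with hypothetical names: defining volumes is checked against the pool's soft limit, while actual writes consume against the hard (physical) limit.

    # Hypothetical illustration of per-pool hard and soft limits.
    class ThinPool:
        def __init__(self, name, hard_limit_gb, soft_limit_gb):
            self.name = name
            self.hard_limit_gb = hard_limit_gb  # physical capacity backing the pool
            self.soft_limit_gb = soft_limit_gb  # total logical size volumes may reach
            self.allocated_gb = 0               # sum of defined volume sizes (logical)
            self.used_gb = 0                    # physically consumed space

        def create_volume(self, size_gb):
            # Defining volumes is bounded by the soft limit.
            if self.allocated_gb + size_gb > self.soft_limit_gb:
                raise ValueError("soft limit exceeded: cannot define more volume capacity")
            self.allocated_gb += size_gb

        def write(self, size_gb):
            # Actual writes are bounded by the hard (physical) limit.
            if self.used_gb + size_gb > self.hard_limit_gb:
                raise ValueError("hard limit exceeded: pool is physically full")
            self.used_gb += size_gb

    # A thinly provisioned pool: 1,700GB physical backing 5,100GB of logical space.
    pool = ThinPool("dev_pool", hard_limit_gb=1_700, soft_limit_gb=5_100)
    pool.create_volume(3_400)  # allowed: within the soft limit
    pool.write(1_000)          # allowed: within the hard limit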
Protection beyond Mirroring
The XIV supports differential snapshots, full copy snapshots, multiple target
copies and consistency groups. Snapshots are read-only by default but can be
made writable. There is a 16,000 limit on the number of snapshot volumes.
There is synchronous remote replication support in the XIV system, using either iSCSI or FC. Volumes are paired as a primary and its secondary volume, which must be the same size. This function mirrors one XIV to a remote one for disaster recovery purposes.
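As a small illustrative sketch of the pairing rule (hypothetical types, not the XIV replication API): a synchronous mirror binds a primary volume to a same-size secondary on the remote system, over FC or iSCSI.

    # Hypothetical types; not the XIV replication API.
    from dataclasses import dataclass

    @dataclass
    class Volume:
        system: str
        name: str
        size_gb: int

    def create_sync_mirror(primary, secondary, link):
        """Pair a primary volume with a same-size remote secondary for synchronous replication."""
        if link not in ("FC", "iSCSI"):
            raise ValueError("replication links are FC or iSCSI")
        if primary.size_gb != secondary.size_gb:
            raise ValueError("primary and secondary volumes must be the same size")
        return {"primary": primary, "secondary": secondary, "link": link}

    pair = create_sync_mirror(
        Volume("xiv-prod", "oracle_data", 1_700),
        Volume("xiv-dr", "oracle_data_dr", 1_700),
        link="FC",
    )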
Source Link: http://www.blocksandfiles.co.uk/article/6283


This article comes from the ChinaUnix blog; the original is at: http://blog.chinaunix.net/u2/75575/showart_1117531.html