Original source:
http://www-128.ibm.com/developerworks/tivoli/library/t-snaptsm1/index.html
The two key types are copy-on-write and redirect-on-write. They are key because the two major NAS vendors, EMC and NetApp, each use one of them: EMC's NAS uses copy-on-write, while NetApp uses redirect-on-write. If I had to point out differences between the two implementations, I can think of two:
1. With copy-on-write, a single write operation results in two writes in the background, plus several reads (the reads come from RAID, not from the snapshot). Redirect-on-write does not have to copy the old block out first; it simply writes a new block and updates the pointer. On this point, redirect-on-write has a performance advantage.
2. Copy-on-write can keep its snapshots on a separate volume, with their own dedicated space; redirect-on-write stores production data and snapshot data interleaved on the same volume, which makes it hard to manage the actual space the snapshots consume. On this point, I think copy-on-write wins one back.
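The two write paths above can be sketched with a toy in-memory model. All names here (`CopyOnWrite`, `RedirectOnWrite`, the dict-based "volumes") are illustrative, not any vendor's API; real implementations work at the block-device or filesystem layer:

```python
class CopyOnWrite:
    """First write to a block after a snapshot copies the old block into a
    separate snapshot area, then overwrites in place: one extra write."""
    def __init__(self, blocks):
        self.blocks = dict(blocks)   # production volume
        self.snapshot_area = {}      # separate volume holding preserved blocks
        self.io_writes = 0

    def write(self, addr, data):
        if addr not in self.snapshot_area:            # first touch since snapshot
            self.snapshot_area[addr] = self.blocks[addr]  # copy old block out
            self.io_writes += 1
        self.blocks[addr] = data                      # overwrite in place
        self.io_writes += 1

    def read_snapshot(self, addr):
        return self.snapshot_area.get(addr, self.blocks[addr])


class RedirectOnWrite:
    """A write goes straight to a fresh block; only the pointer map changes.
    Old blocks stay in the shared pool and keep serving the snapshot."""
    def __init__(self, blocks):
        self.store = dict(blocks)            # shared pool: production + snapshot
        self.live = {a: a for a in blocks}   # production pointer map
        self.snap = dict(self.live)          # frozen pointer map (the snapshot)
        self.next_addr = max(blocks) + 1
        self.io_writes = 0

    def write(self, addr, data):
        self.store[self.next_addr] = data    # single write to a new block
        self.live[addr] = self.next_addr     # redirect the pointer
        self.next_addr += 1
        self.io_writes += 1

    def read_snapshot(self, addr):
        return self.store[self.snap[addr]]


cow = CopyOnWrite({0: "a", 1: "b"})
row = RedirectOnWrite({0: "a", 1: "b"})
cow.write(0, "A")
row.write(0, "A")
print(cow.io_writes, row.io_writes)   # copy-on-write pays 2 writes, redirect-on-write pays 1
print(cow.read_snapshot(0), row.read_snapshot(0))   # both snapshots still see "a"
```

Note how the model also reflects the second difference: `CopyOnWrite` keeps its preserved blocks in a separate `snapshot_area`, while `RedirectOnWrite` mixes live and snapshot blocks in one `store`, distinguishable only through the pointer maps.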
The table below lists several other approaches; strictly speaking, some of them are clones rather than snapshots.
Snapshot requires original copy of data
- Copy-on-write: Yes (the unchanged data is accessed from the original copy)
- Redirect-on-write: Yes (the unchanged data is accessed from the original copy)
- Split mirror: No (the mirror contains a full copy of the data)
- Log-structured file architecture: Yes (the unchanged data is accessed from the original copy)
- Copy-on-write with background copy (IBM FlashCopy): Only until the background copy is complete
- IBM incremental FlashCopy: Only until the background copy is complete
- Continuous data protection: No (most implementations include a replica of the original copy)

Space-efficient
- Copy-on-write: Yes (in most cases space is required only for changed data; exceptions such as IBM FlashCopy exist, so check with the vendor)
- Redirect-on-write: Yes (in most cases space is required only for changed data; check with the vendor)
- Split mirror: No (requires the same amount of space as the original data)
- Log-structured file architecture: Yes (space is required only for the changed data)
- Copy-on-write with background copy (IBM FlashCopy): No (requires the same amount of space as the original data)
- IBM incremental FlashCopy: No (requires the same amount of space as the original data)
- Continuous data protection: Yes (space required depends on the amount and frequency of changes when multiple point-in-time copies need to be kept)

I/O and CPU performance overhead on the system with the original copy of the data
- Copy-on-write: High for software-based snapshots; none for hardware-based snapshots (performed by the storage hardware)
- Redirect-on-write: High for software-based snapshots; none for hardware-based snapshots (the impact falls on the storage hardware)
- Split mirror: High prior to the split (to keep the mirror synchronized); low after the mirror is split
- Log-structured file architecture: High (overhead is incurred in logging the writes)
- Copy-on-write with background copy (IBM FlashCopy): Low (performed by the storage hardware)
- IBM incremental FlashCopy: Low (performed by the storage hardware)
- Continuous data protection: Implementation-specific; check with the vendor

Write overhead on the original copy of the data
- Copy-on-write: High (the first write to a data block results in an additional write)
- Redirect-on-write: None (writes are directed to new blocks)
- Split mirror: None (the write overhead is incurred before the split)
- Log-structured file architecture: High (writes must be logged)
- Copy-on-write with background copy (IBM FlashCopy): High (the first write to a data block results in an additional write)
- IBM incremental FlashCopy: High (the first write to a data block results in an additional write)
- Continuous data protection: High (each write results in a corresponding write to the storage space)

Protection against logical data errors
- Copy-on-write: Yes (changes can be rolled back or synced back into the original copy)
- Redirect-on-write: Yes (changes can be rolled back or synced back into the original copy)
- Split mirror: Yes (data from the mirror must be copied; typically slower, since changes are not tracked)
- Log-structured file architecture: Yes (the changes can be rolled back)
- Copy-on-write with background copy (IBM FlashCopy): Yes (another FlashCopy can be created in the reverse direction)
- IBM incremental FlashCopy: Yes (another FlashCopy can be created in the reverse direction; typically faster, since only the changed blocks are copied)
- Continuous data protection: Yes (changes can be synced back into the original copy)

Protection against physical media failures of the original data
- Copy-on-write: None (a valid original copy must exist)
- Redirect-on-write: None (a valid original copy must exist)
- Split mirror: Yes (the split mirror is a full clone)
- Log-structured file architecture: None (a valid original copy must exist)
- Copy-on-write with background copy (IBM FlashCopy): Full protection after the background copy is complete
- IBM incremental FlashCopy: Full protection after the background copy is complete
- Continuous data protection: Implementation-specific; check with the vendor
This post is from the ChinaUnix blog; the original is at http://blog.chinaunix.net/u/14914/showart_476561.html