Sun StorEdge 99xx: Using Shadow Image and SVM


Keyword(s): Shadow Image, SVM, Split Mirror Backup, StorEdge 99x0
Description:
This document describes the entire process of backup and restore using Shadow Image for the StorEdge[TM] 99x0 and Solaris[TM] Volume Manager. Although BluePrint documents are available, they provide only a broad guideline on how a 'Split Mirror Backup' is configured.
This document is intended for those already familiar with the RAID Manager CCI and/or Storage Navigator interfaces for Shadow Image. The command-line detail is provided for rebuilding the Solaris Volume Manager metaset on the backup server.
Document Body:
Using Shadow Image for Backup with StorEdge 99x0 & Solaris Volume Manager
When doing a snapshot using Shadow Image on a logical volume, the entire content of the physical disks is cloned. This includes the configuration section (called the private region in VERITAS, or the metadb in Solaris[TM] Volume Manager (SVM)) and the data section (also called the public region). However, this private region (or metadb) holds disk identification parameters, so the cloned disks and the original disks have the same ID. This is not a major issue if the cloned data is to be accessed on a different host, but it can be a difficult issue to solve if you want to access the cloned data on the same host.
Cloned Data On a Different Host
Accessing the replicated data on a secondary host is equivalent to importing the logical group (or diskset) that contains the logical volumes you want to access. However, because the disks are cloned, the volume manager on the secondary host will believe this diskset is already imported on the primary host. This information is stored in the diskset metadatabases on the disks. It is necessary to clean up this information on every cloned disk, making it possible to take ownership of the diskset, and access its metadevices.
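In practice, this cleanup amounts to purging the stale diskset records on the backup host with the metaset purge options described later in this document, for example (a sketch; the set name myds is illustrative):
root@Back1 # metaset -s myds -P
After the purge, the diskset can be recreated from the cloned disks and ownership taken, as shown in the example below.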
Definitions/Setup
Data Server: Sun[TM] Cluster 3.1 node using Solaris Logical Volume Manager
Backup Server: a host running backup software (e.g., VERITAS NetBackup or Enterprise Backup)
Storage: Sun StorEdge 9980
ShadowImage is used to clone the LUNs for backup. The data to be accessed is configured as a metadevice using SLVM patched to the latest level. In this example, the primary and secondary volumes (P-Vols and S-Vols) are accessed from two different hosts, a Data Server and a Backup Server. In this situation, the Data Server has access only to the P-Vols, and the Backup Server sees only the S-Vols.
This constraint forces you to reconstruct the metaset and metadevices on the secondary site before accessing the data. There is no possibility of importing or exporting the metaset from one host to the other (taking and releasing ownership of a metaset implies that every disk is visible to both hosts).
ShadowImage is a track-for-track asynchronous copy technology at the logical device level. It has no concept of file systems or applications, so the data must be properly prepared on the P-VOL by the host to ensure data integrity.
A typical implementation would be as follows.
1. Create the pair (a hedged CCI sketch of the full pair lifecycle appears at the end of this procedure).
2. Quiesce the file system or place the database in hot backup mode (e.g., SAP or Oracle backup mode).
3. Flush and lock the file system cache with lockfs -w, or perform an unforced umount of the volume (this requires a database shutdown).
4. Split the pair.
5. Unlock the file system with lockfs -u, or remount it.
6. Take the database out of hot backup mode (SAP or Oracle backup mode).
7. Create the metaset configuration on the Backup Server and take ownership. If this configuration already exists and the Backup Server does not own the diskset, purge it and recreate the metadevice and metavolume structure using the md.tab file (the configuration from the primary production host), as shown in the example below.
============================================================
Example:
Create a metaset on the secondary host:
root@Back1 # devfsadm
root@Back1 # metaset -s myds -a -h Back1
Populate the new metaset with cloned disks:
root@Back1 # metaset -s myds -a \
c3t500060E8000000000000ED160000020Ad0 \
c3t500060E8000000000000ED160000020Bd0 \
c3t500060E8000000000000ED160000020Cd0 \
c3t500060E8000000000000ED160000020Dd0
Create the new configuration for the secondary metaset.
Start by obtaining the metadevice configuration of the primary host:
root@Node1 # metastat -s myds -p
myds/d101 -p myds/d100 -o 1 -b 10485760
myds/d100 1 4 c3t500060E8000000000000ED160000020Ad0s0 \
c3t500060E8000000000000ED160000020Bd0s0 \
c3t500060E8000000000000ED160000020Cd0s0 \
c3t500060E8000000000000ED160000020Dd0s0 -i 2048b
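One way to capture this output and move it to the Backup Server (a sketch; the temporary file path and the use of scp are assumptions, not part of the original procedure):
root@Node1 # metastat -s myds -p > /var/tmp/myds.md.tab
root@Node1 # scp /var/tmp/myds.md.tab Back1:/var/tmp/myds.md.tab
Edit the copied file so that the device names match the S-Vol devices visible on the Backup Server before merging it into /etc/lvm/md.tab.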
On the secondary host (Backup Server), create a metadevice configuration file called /etc/lvm/md.tab containing the previous output with the correct secondary LUNs.
The order of appearance must be respected:
root@Back1 # cat /etc/lvm/md.tab
myds/d101 -p myds/d100 -o 1 -b 10485760
myds/d100 1 4 c3t500060E8000000000000ED160000020Ad0s0 \
c3t500060E8000000000000ED160000020Bd0s0 \
c3t500060E8000000000000ED160000020Cd0s0 \
c3t500060E8000000000000ED160000020Dd0s0 -i 2048b
Apply the metadevice configuration file to the replicated host:
root@Back1 # metainit -s myds -a
myds/d100: Concat/Stripe is setup
myds/d101: Soft Partition is setup
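As an optional sanity check (not part of the original procedure), the rebuilt metadevices can be listed before any file system work:
root@Back1 # metastat -s myds
This should list the concat/stripe d100 and the soft partition d101.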
If you encounter the following errors, follow the steps described below them.
root@back1 # metainit -s myds -a
metainit: bakpsbs1: myds: must be owner of the set for this command
root@back1 # metaset -s myds -t
metaset: back1: myds: there are no existing databases
This addresses the stale-metaset problem shown above.
It is caused by a bug that is fixed by patch 113026-13 for Solaris 9.
This patch adds new options to metaset:
-P          metaset -s <setname> -P
-C purge    metaset -s <setname> -C purge
These are used to clear metasets with stale or no DB replicas.
How to use the command
----------------------
In a non-cluster environment -
On each node within the configuration run the command:
       metaset -s <setname> -P
In a Sun Cluster 3.x environment -
If the disk set is in the CCR (i.e. seen in the scstat -D output), on
each node within the configuration run the command:
        metaset -s <setname> -P
The node on which the command is run will be removed from the CCR for
that set. On the last node the set will be completely removed from the CCR.
If the diskset is not in the CCR (i.e. not seen in the scstat -D output),
run the command:
       metaset -s <setname> -C purge
This will cause the command to have no interaction with the Sun Cluster framework.
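Applied to this (non-cluster) Backup Server, the recovery sequence would look roughly as follows (a sketch reusing the example names; the disks are the same cloned S-Vol devices added earlier):
root@Back1 # metaset -s myds -P
root@Back1 # metaset -s myds -a -h Back1
root@Back1 # metaset -s myds -a c3t500060E8000000000000ED160000020Ad0 \
c3t500060E8000000000000ED160000020Bd0 \
c3t500060E8000000000000ED160000020Cd0 \
c3t500060E8000000000000ED160000020Dd0
root@Back1 # metaset -s myds -t
root@Back1 # metainit -s myds -a
The metaset -s myds -t step is only needed if the host does not already own the set after the disks are added.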
============================================================
8. Verify the file system on the S-VOL.
root@Back1 # fsck /dev/md/myds/rdsk/d101
9. Mount and verify the integrity of the database on the S-VOL, if possible.
root@Back1 # mount /dev/md/myds/dsk/d101 /mnt/LAB
10. Back up the S-VOL.
11. Unmount the S-VOL.
12. Resync the S-VOL and repeat from step 2.
NOTE: The SLVM configuration must be recreated using the md.tab entries, which may also require a metaset purge. Currently, configurations using SVM cannot avoid the procedure outlined in step 7.
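For reference, here is a hedged CCI sketch of the pair lifecycle behind steps 1, 4, and 12. It assumes a ShadowImage device group (called mydsgrp here, an illustrative name) has already been defined in the horcm.conf files and that the HORCM instances managing the P-Vols and S-Vols are running; the timeout values are also assumptions.
Create the pair and wait for the initial copy to complete (step 1):
root@Node1 # paircreate -g mydsgrp -vl
root@Node1 # pairevtwait -g mydsgrp -s pair -t 3600
Quiesce the application and lock the file system on the Data Server (steps 2-3), then split the pair (step 4):
root@Node1 # pairsplit -g mydsgrp
Unlock the file system, take the database out of hot backup mode, and back up the S-Vol from the Backup Server (steps 5-11), then resynchronize for the next cycle (step 12):
root@Node1 # pairresync -g mydsgrp
root@Node1 # pairevtwait -g mydsgrp -s pair -t 3600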


This article is from a ChinaUnix blog; the original is available at: http://blog.chinaunix.net/u/18167/showart_105283.html