Solstice Disk Suite Complete Usage Manual (English version)

Installing DiskSuite

The DiskSuite packages have moved around a bit with each release of Solaris. For Solaris 2.6 and Solaris 7, the DiskSuite packages are located on the Easy Access CD. With Solaris 8, DiskSuite moved to the "Solaris 8 Software" cdrom number two, in the EA directory. Starting with Solaris 9, DiskSuite is included with the operating system.
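
On an already-installed system you can check whether the DiskSuite packages are present before reaching for the media. A minimal sketch using pkginfo (the grep pattern simply matches the SUNWmd* package names listed in step 3 below; the exact output lines shown are illustrative):

  # pkginfo | grep SUNWmd
  system      SUNWmdr        Solstice DiskSuite Drivers
  system      SUNWmdu        Solstice DiskSuite Commands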

At the time of this writing, Solaris 8 is the most commonly deployed version of Solaris, so we'll use that as the basis for this example. The steps are basically identical for the other releases.

1. After completing the installation of the Solaris 8 operating system, insert the Solaris 8 software cdrom number two into the cdrom drive. If volume management is enabled, it will automatically mount at /cdrom/sol_8_401_sparc_2 (the exact path may differ depending on the precise release iteration of Solaris 8):



  # df -k
  Filesystem            kbytes    used   avail capacity  Mounted on
  /dev/dsk/c0t0d0s0    6607349  826881 5714395    13%    /
  /proc                      0       0       0     0%    /proc
  fd                         0       0       0     0%    /dev/fd
  mnttab                     0       0       0     0%    /etc/mnttab
  /dev/dsk/c0t0d0s4    1016863    8106  947746     1%    /var
  swap                 1443064       8 1443056     1%    /var/run
  swap                 1443080      24 1443056     1%    /tmp
  /vol/dev/dsk/c0t6d0/sol_8_401_sparc_2
                        239718  239718       0   100%    /cdrom/sol_8_401_sparc_2


2. Change to the directory containing the DiskSuite packages:

  # cd /cdrom/sol_8_401_sparc_2/Solaris_8/EA/products/DiskSuite_4.2.1/sparc/Packages


3. Add the required packages (we're taking everything except the Japanese-specific package):

  # pkgadd -d .

  The following packages are available:
    1  SUNWmdg      Solstice DiskSuite Tool
                    (sparc) 4.2.1,REV=1999.11.04.18.29
    2  SUNWmdja     Solstice DiskSuite Japanese localization
                    (sparc) 4.2.1,REV=1999.12.09.15.37
    3  SUNWmdnr     Solstice DiskSuite Log Daemon Configuration Files
                    (sparc) 4.2.1,REV=1999.11.04.18.29
    4  SUNWmdnu     Solstice DiskSuite Log Daemon
                    (sparc) 4.2.1,REV=1999.11.04.18.29
    5  SUNWmdr      Solstice DiskSuite Drivers
                    (sparc) 4.2.1,REV=1999.12.03.10.00
    6  SUNWmdu      Solstice DiskSuite Commands
                    (sparc) 4.2.1,REV=1999.11.04.18.29
    7  SUNWmdx      Solstice DiskSuite Drivers(64-bit)
                    (sparc) 4.2.1,REV=1999.11.04.18.29

  Select package(s) you wish to process (or 'all' to process
  all packages). (default: all) [?,??,q]: 1 3 4 5 6 7

  Processing package instance <SUNWmdg> from </cdrom/sol_8_401_sparc_2/Solaris_8/EA/products/DiskSuite_4.2.1/sparc/Packages>
  .
  .
  .
  postinstall: configure driver

                  (This may take a while.)

  Installation of <SUNWmdx> was successful.

  The following packages are available:
    1  SUNWmdg      Solstice DiskSuite Tool
                    (sparc) 4.2.1,REV=1999.11.04.18.29
    2  SUNWmdja     Solstice DiskSuite Japanese localization
                    (sparc) 4.2.1,REV=1999.12.09.15.37
    3  SUNWmdnr     Solstice DiskSuite Log Daemon Configuration Files
                    (sparc) 4.2.1,REV=1999.11.04.18.29
    4  SUNWmdnu     Solstice DiskSuite Log Daemon
                    (sparc) 4.2.1,REV=1999.11.04.18.29
    5  SUNWmdr      Solstice DiskSuite Drivers
                    (sparc) 4.2.1,REV=1999.12.03.10.00
    6  SUNWmdu      Solstice DiskSuite Commands
                    (sparc) 4.2.1,REV=1999.11.04.18.29
    7  SUNWmdx      Solstice DiskSuite Drivers(64-bit)
                    (sparc) 4.2.1,REV=1999.11.04.18.29

  Select package(s) you wish to process (or 'all' to process
  all packages). (default: all) [?,??,q]: q

  *** IMPORTANT NOTICE ***
          This machine must now be rebooted in order to ensure
          sane operation.  Execute
                 shutdown -y -i6 -g0
          and wait for the "Console Login:" prompt.
  # eject cdrom
  # shutdown -y -i6 -g0


4. Once the system reboots, apply any DiskSuite patches. At the time of this writing, the latest recommended DiskSuite patch available from sunsolve.sun.com is 106627-18 (DiskSuite 4.2) or 108693-13 (DiskSuite 4.2.1). Note that the patch installation instructions require a reboot after the patch is installed.
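
A minimal sketch of the patch step, assuming the patch has already been downloaded from sunsolve.sun.com and unpacked under /var/tmp (the path and patch revision are illustrative):

  # patchadd /var/tmp/108693-13
  # shutdown -y -i6 -g0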


Mirroring the operating system

In the steps below, I'm using DiskSuite to mirror the active root disk (c0t0d0) to a mirror (c0t1d0). I'm assuming that slices five and six of each disk have a couple of cylinders free for DiskSuite's state database replicas.

Introduction

First, we start with a filesystem layout that looks as follows:

  Filesystem            kbytes    used   avail capacity  Mounted on
  /dev/dsk/c0t0d0s0    6607349  826881 5714395    13%    /
  /proc                      0       0       0     0%    /proc
  fd                         0       0       0     0%    /dev/fd
  mnttab                     0       0       0     0%    /etc/mnttab
  /dev/dsk/c0t0d0s4    1016863    8106  947746     1%    /var
  swap                 1443064       8 1443056     1%    /var/run
  swap                 1443080      24 1443056     1%    /tmp


We're going to be mirroring from c0t0d0 to c0t1d0. When the operating system was installed, we created unassigned slices five, six, and seven of roughly 10 MB each. We will use slices five and six for the DiskSuite state database replicas. The output from the "format" command is as follows:

  # format
  Searching for disks...done


  AVAILABLE DISK SELECTIONS:
         0. c0t0d0 <SEAGATE-ST19171W-0024 cyl 5266 alt 2 hd 20 sec 168>
            /pci@1f,4000/scsi@3/sd@0,0
         1. c0t1d0 <SEAGATE-ST19171W-0024 cyl 5266 alt 2 hd 20 sec 168>
            /pci@1f,4000/scsi@3/sd@1,0
  Specify disk (enter its number): 0

  selecting c0t0d0
  [disk formatted]
  ...
  partition> p
  Current partition table (original):
  Total disk cylinders available: 5266 + 2 (reserved cylinders)

  Part      Tag    Flag     Cylinders        Size            Blocks
    0       root    wm       0 - 3994        6.40GB    (3995/0/0) 13423200
    1       swap    wu    3995 - 4619        1.00GB    (625/0/0)   2100000
    2     backup    wm       0 - 5265        8.44GB    (5266/0/0) 17693760
    3 unassigned    wu       0               0         (0/0/0)           0
    4        var    wm    4620 - 5244        1.00GB    (625/0/0)   2100000
    5 unassigned    wm    5245 - 5251       11.48MB    (7/0/0)       23520
    6 unassigned    wm    5252 - 5258       11.48MB    (7/0/0)       23520
    7 unassigned    wm    5259 - 5265       11.48MB    (7/0/0)       23520


DiskSuite Mirroring

Note that much of the process of mirroring the root disk has been automated with the sdsinstall script. With the exception of the creation of device aliases, all of the work done in the following steps can be accomplished with a single command:

  # ./sdsinstall -p c0t0d0 -s c0t1d0 -m s5 -m s6


1. Duplicate the primary disk's partition table onto the mirror, so that the partition tables of both disks are identical:

  # prtvtoc /dev/rdsk/c0t0d0s2 | fmthard -s - /dev/rdsk/c0t1d0s2


2. Add the state database replicas. For redundancy, each disk gets two state database replicas:

  # metadb -a -f c0t0d0s5
  # metadb -a c0t0d0s6
  # metadb -a c0t1d0s5
  # metadb -a c0t1d0s6



Note that there appears to be a lot of confusion regarding the recommended number and location of state database replicas. According to the DiskSuite reference manual:

State database replicas contain configuration and status information for all metadevices and hot spares. Multiple copies (replicas) are maintained to provide redundancy. Multiple copies also prevent the database from being corrupted during a system crash (at most, only one copy of the database will be corrupted).

State database replicas are also used for mirror resync regions. Too few state database replicas relative to the number of mirrors may cause replica I/O to impact mirror performance.

At least three replicas are recommended, and DiskSuite allows a maximum of 50. The following guidelines apply:

For a system with only a single drive: put all 3 replicas in one slice.

For a system with two to four drives: put two replicas on each drive.

For a system with five or more drives: put one replica on each drive.

In general, it is best to distribute state database replicas across slices, drives, and controllers, to avoid single points-of-failure.
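
As an illustration of these guidelines, metadb can place more than one replica in a single slice via its -c option. A hedged sketch for a two-drive system that reserves only one small slice per disk (the choice of slice 7 is an assumption, not part of the layout used in this article):

  # metadb -a -f -c 2 c0t0d0s7
  # metadb -a -c 2 c0t1d0s7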

Each state database replica occupies 517 KB (1034 disk sectors) of disk storage by default. Replicas can be stored on: a dedicated disk partition, a partition which will be part of a metadevice, or a partition which will be part of a logging device.

Note - Replicas cannot be stored on the root (/), swap, or /usr slices, or on slices containing existing file systems or data.

Starting with DiskSuite 4.2.1, an optional /etc/system parameter exists which allows DiskSuite to boot with just 50% of the state database replicas online. For example, if one of the two boot disks were to fail, just two of the four state database replicas would be available. Without this /etc/system parameter (or with older versions of DiskSuite), the system would complain of "insufficient state database replicas", and manual intervention would be required on bootup. To enable the "50% boot" behaviour with DiskSuite 4.2.1, execute the following command:

  # echo "set md:mirrored_root_flag=1" >> /etc/system


3. Define the metadevices for the root slice. The "1 1" arguments create a concatenation of one stripe consisting of one slice; d10 and d20 are the submirrors, and d0 is the one-way mirror (the second submirror is attached in step 6):

  # metainit -f d10 1 1 c0t0d0s0
  # metainit -f d20 1 1 c0t1d0s0
  # metainit d0 -m d10


The metaroot command edits the /etc/vfstab and /etc/system files:

  # metaroot d0


Define the metadevices for c0t0d0s1 (swap):

  # metainit -f d11 1 1 c0t0d0s1
  # metainit -f d21 1 1 c0t1d0s1
  # metainit d1 -m d11


Define the metadevices for c0t0d0s4 (/var):

  # metainit -f d14 1 1 c0t0d0s4
  # metainit -f d24 1 1 c0t1d0s4
  # metainit d4 -m d14


4. Edit /etc/vfstab so that it references the DiskSuite metadevices instead of simple slices:

  #device           device          mount   FS      fsck    mount   mount
  #to mount         to fsck         point   type    pass    at boot options
  #
  fd               -                /dev/fd fd      -       no      -
  /proc            -                /proc   proc    -       no      -
  /dev/md/dsk/d1   -                -       swap    -       no      -
  /dev/md/dsk/d0   /dev/md/rdsk/d0  /       ufs     1       no      logging
  /dev/md/dsk/d4   /dev/md/rdsk/d4  /var    ufs     1       no      logging
  swap             -                /tmp    tmpfs   -       yes     -


5. Flush the filesystems and reboot the system:

  # lockfs -fa

  # sync;sync;sync;init 6


6. After the system reboots from the metadevices for /, /var, and swap, attach the second submirrors:

  # metattach d0 d20
  # metattach d1 d21
  # metattach d4 d24


The process of synchronizing the data to the mirror disk will take a while. You can monitor its progress via the command:

  # metastat | grep -i progress


7. Capture the DiskSuite configuration in the text file md.tab. With Solaris 2.6 and Solaris 7, this file resides in the directory /etc/opt/SUNWmd; more recent versions of Solaris place it in the /etc/lvm directory. We'll assume that we're running Solaris 8 here:

  # metastat -p | tee /etc/lvm/md.tab


8. In order for the system to be able to dump core in the event of a panic, the dump device needs to reference the DiskSuite metadevice:

  # dumpadm -d /dev/md/dsk/d1
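
To confirm the change, dumpadm can be run with no arguments; on this configuration the output should look roughly as follows (a sketch; the savecore directory follows the hostname):

  # dumpadm
        Dump content: kernel pages
         Dump device: /dev/md/dsk/d1 (swap)
  Savecore directory: /var/crash/pegasus
    Savecore enabled: yes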


9. If the primary boot disk should fail, make it easy to boot from the mirror. Some sites choose to alter the OBP "boot-device" variable; in this case, we choose to simply define the device aliases "sds-root" and "sds-mirror". In the event that the primary boot device ("disk" or "sds-root") should fail, the administrator simply needs to type "boot sds-mirror" at the OBP prompt.

Determine the device path to the boot devices for both the primary and mirror:

  # ls -l /dev/dsk/c0t0d0s0 /dev/dsk/c0t1d0s0
  lrwxrwxrwx   1 root     root          41 Oct 17 11:48 /dev/dsk/c0t0d0s0 -> ../..
  /devices/pci@1f,4000/scsi@3/sd@0,0:a
  lrwxrwxrwx   1 root     root          41 Oct 17 11:48 /dev/dsk/c0t1d0s0 -> ../..
  /devices/pci@1f,4000/scsi@3/sd@1,0:a


Use the device paths to define the sds-root and sds-mirror device aliases (note that we use the label "disk" instead of "sd" in the device alias path):

  # eeprom "nvramrc=devalias sds-root /pci@1f,4000/scsi@3/disk@0,0
  devalias sds-mirror /pci@1f,4000/scsi@3/disk@1,0"
  # eeprom "use-nvramrc?=true"


Test the process of booting from either sds-root or sds-mirror, as sketched below.
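
A minimal test pass, assuming console access to the OBP (bring the system down cleanly first):

  ok boot sds-root
  ...
  ok boot sds-mirror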

Once the above sequence of steps has been completed, the system will look as follows:

  # metadb
          flags           first blk       block count
       a m  p  luo        16              1034            /dev/dsk/c0t0d0s5
       a    p  luo        16              1034            /dev/dsk/c0t0d0s6
       a    p  luo        16              1034            /dev/dsk/c0t1d0s5
       a    p  luo        16              1034            /dev/dsk/c0t1d0s6

  # df -k
  Filesystem            kbytes    used   avail capacity  Mounted on
  /dev/md/dsk/d0       6607349  845208 5696068    13%    /
  /proc                      0       0       0     0%    /proc
  fd                         0       0       0     0%    /dev/fd
  mnttab                     0       0       0     0%    /etc/mnttab
  /dev/md/dsk/d4       1016863    8414  947438     1%    /var
  swap                 1443840       8 1443832     1%    /var/run
  swap                 1443848      16 1443832     1%    /tmp


Trans metadevices for logging

UFS filesystem logging was first supported in Solaris 7. Prior to that release, one could create trans metadevices with DiskSuite to achieve the same effect. For Solaris 7 and later, it's much easier to simply enable UFS logging by adding the word "logging" to the last field of the /etc/vfstab entry. The following section is included for those increasingly rare Solaris 2.6 installations.
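
For reference, a vfstab entry with UFS logging enabled looks like the /var line used earlier in this article:

  /dev/md/dsk/d4   /dev/md/rdsk/d4  /var    ufs     1       no      logging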

The following two steps assume that you have an unused slice 3 (<= 64 MB) available on each disk for logging.

1. Define the mirrored trans logging device (d3) on slice 3 of each disk:

  # metainit d13 1 1 c0t0d0s3
  # metainit d23 1 1 c0t1d0s3
  # metainit d3 -m d13
  # metattach d3 d23


2. Make /var use the trans metadevice for logging (d4 is the master device, d3 the logging device):

  # metainit -f d64 -t d4 d3


Edit vfstab as follows:

  /dev/md/dsk/d64 /dev/md/rdsk/d64 /var ufs 1 no -



Ensure that no volumes are syncing before running the following:

  # sync;sync;sync;init 6
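
To verify that nothing is still syncing, reuse the earlier progress check; no output means no resync is in flight:

  # metastat | grep -i progress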


DiskSuite examples

Here are some quick examples of a few other DiskSuite tasks:
creating a striped metadevice (a single stripe across two slices, with a 32 KB interlace):
  # metainit d10 1 2 c0t1d0s0 c0t2d0s0 -i 32k


creating a mirror of two slices:
  # metainit d60 1 1 c0t2d0s0
  # metainit d61 1 1 c0t3d0s0
  # metainit d6 -m d60
  # metattach d6 d61


creating a concatenation of 2 slices (two stripes of one slice each):
  # metainit d25 2 1 c0t1d0s0 1 c0t2d0s0


creating a concatenation of 4 slices:
  # metainit d25 4 1 c0t1d0s0 1 c0t2d0s0 1 c0t3d0s0 1 c0t4d0s0


creating a concatenation of 2 stripes (a four-way stripe followed by a two-way stripe, each with a 64 KB interlace):
  # metainit d0 2 4 c2t0d0s6 c2t1d0s6 c2t2d0s6 c2t3d0s6 -i 64k 2 c3t0d0s6 c3t1d0s6 -i 64k


creating a raid5 metadevice from three slices (the minimum for RAID5):
  # metainit d45 -r c2t3d0s0 c3t0d0s0 c4t0d0s0


Replacing a failed bootdisk

In the following example, the host has a failed bootdisk (c0t0d0). Fortunately, the system is using DiskSuite, with a mirror at c0t1d0. The following sequence of steps can be used to restore the system to full redundancy.

System fails to boot

When the system attempts to boot, it fails to find a valid boot device at the device alias "disk" (the default boot-device path), and falls back to booting from the network:

  screen not found.
  Can't open input device.
  Keyboard not present.  Using ttya for input and output.

  Sun Ultra 30 UPA/PCI (UltraSPARC-II 296MHz), No Keyboard
  OpenBoot 3.27, 512 MB memory installed, Serial #9377973.
  Ethernet address 8:0:20:8f:18:b5, Host ID: 808f18b5.



  Initializing Memory
  Timeout waiting for ARP/RARP packet
  Timeout waiting for ARP/RARP packet
  Timeout waiting for ARP/RARP packet
  ...


Boot from mirror

At this point, the administrator realizes that the boot disk has failed, and queries the device aliases to find the one corresponding to the DiskSuite mirror:

  ok devalias
  sds-mirror               /pci@1f,4000/scsi@3/disk@1,0
  sds-root                 /pci@1f,4000/scsi@3/disk@0,0
  net                      /pci@1f,4000/network@1,1
  disk                     /pci@1f,4000/scsi@3/disk@0,0
  cdrom                    /pci@1f,4000/scsi@3/disk@6,0:f
  ...


The administrator then boots the system from the mirror device "sds-mirror":

  ok boot sds-mirror


The system starts booting off of sds-mirror. However, because there are only two of the original four state database replicas available, a quorum is not achieved. The system requires manual intervention to remove the two failed state database replicas:

(As described in the mirroring section, DiskSuite 4.2.1 with "set md:mirrored_root_flag=1" in /etc/system would boot with just 50% of the replicas online; without that parameter, or with older versions of DiskSuite, the bootup below requires manual intervention.)

  Boot device: /pci@1f,4000/scsi@3/disk@1,0  File and args:
  SunOS Release 5.8 Version Generic_108528-07 64-bit
  Copyright 1983-2001 Sun Microsystems, Inc.  All rights reserved.
  WARNING: md: d10: /dev/dsk/c0t0d0s0 needs maintenance
  WARNING: forceload of misc/md_trans failed
  WARNING: forceload of misc/md_raid failed
  WARNING: forceload of misc/md_hotspares failed
  configuring IPv4 interfaces: hme0.
  Hostname: pegasus
  metainit: pegasus: stale databases

  Insufficient metadevice database replicas located.

  Use metadb to delete databases which are broken.
  Ignore any "Read-only file system" error messages.
  Reboot the system when finished to reload the metadevice database.
  After reboot, repair any broken database replicas which were deleted.

  Type control-d to proceed with normal startup,
  (or give root password for system maintenance): ******

  single-user privilege assigned to /dev/console.
  Entering System Maintenance Mode

  Oct 17 19:11:29 su: 'su root' succeeded for root on /dev/console
  Sun Microsystems Inc.   SunOS 5.8       Generic February 2000

  # metadb -i
          flags           first blk       block count
      M     p             unknown         unknown         /dev/dsk/c0t0d0s5
      M     p             unknown         unknown         /dev/dsk/c0t0d0s6
       a m  p  lu         16              1034            /dev/dsk/c0t1d0s5
       a    p  l          16              1034            /dev/dsk/c0t1d0s6
  o - replica active prior to last mddb configuration change
  u - replica is up to date
  l - locator for this replica was read successfully
  c - replica's location was in /etc/lvm/mddb.cf
  p - replica's location was patched in kernel
  m - replica is master, this is replica selected as input
  W - replica has device write errors
  a - replica is active, commits are occurring to this replica
  M - replica had problem with master blocks
  D - replica had problem with data blocks
  F - replica had format problems
  S - replica is too small to hold current data base
  R - replica had device read errors


  # metadb -d c0t0d0s5 c0t0d0s6
  metadb: pegasus: /etc/lvm/mddb.cf.new: Read-only file system

  # metadb -i
          flags           first blk       block count
       a m  p  lu         16              1034            /dev/dsk/c0t1d0s5
       a    p  l          16              1034            /dev/dsk/c0t1d0s6
  o - replica active prior to last mddb configuration change
  u - replica is up to date
  l - locator for this replica was read successfully
  c - replica's location was in /etc/lvm/mddb.cf
  p - replica's location was patched in kernel
  m - replica is master, this is replica selected as input
  W - replica has device write errors
  a - replica is active, commits are occurring to this replica
  M - replica had problem with master blocks
  D - replica had problem with data blocks
  F - replica had format problems
  S - replica is too small to hold current data base
  R - replica had device read errors

  # reboot -- sds-mirror


Check extent of failures

Once the reboot is complete, the administrator then logs into the system and checks the status of the DiskSuite metadevices. Not only have the state database replicas failed, but all of the DiskSuite metadevices previously located on device c0t0d0 need to be replaced. Clearly the disk has completely failed.

  pegasus console login: root
  Password:  ******
  Oct 17 19:14:03 pegasus login: ROOT LOGIN /dev/console
  Last login: Thu Oct 17 19:02:42 from rambler.wakefie
  Sun Microsystems Inc.   SunOS 5.8       Generic February 2000

  # metastat
  d0: Mirror
      Submirror 0: d10
        State: Needs maintenance
      Submirror 1: d20
        State: Okay
      Pass: 1
      Read option: roundrobin (default)
      Write option: parallel (default)
      Size: 13423200 blocks

  d10: Submirror of d0
      State: Needs maintenance
      Invoke: metareplace d0 c0t0d0s0 <new device>
      Size: 13423200 blocks
      Stripe 0:
          Device              Start Block  Dbase State        Hot Spare
          c0t0d0s0                   0     No    Maintenance


  d20: Submirror of d0
      State: Okay
      Size: 13423200 blocks
      Stripe 0:
          Device              Start Block  Dbase State        Hot Spare
          c0t1d0s0                   0     No    Okay


  d1: Mirror
      Submirror 0: d11
        State: Needs maintenance
      Submirror 1: d21
        State: Okay
      Pass: 1
      Read option: roundrobin (default)
      Write option: parallel (default)
      Size: 2100000 blocks

  d11: Submirror of d1
      State: Needs maintenance
      Invoke: metareplace d1 c0t0d0s1 <new device>
      Size: 2100000 blocks
      Stripe 0:
          Device              Start Block  Dbase State        Hot Spare
          c0t0d0s1                   0     No    Maintenance


  d21: Submirror of d1
      State: Okay
      Size: 2100000 blocks
      Stripe 0:
          Device              Start Block  Dbase State        Hot Spare
          c0t1d0s1                   0     No    Okay


  d4: Mirror
      Submirror 0: d14
        State: Needs maintenance
      Submirror 1: d24
        State: Okay
      Pass: 1
      Read option: roundrobin (default)
      Write option: parallel (default)
      Size: 2100000 blocks

  d14: Submirror of d4
      State: Needs maintenance
      Invoke: metareplace d4 c0t0d0s4 <new device>
      Size: 2100000 blocks
      Stripe 0:
          Device              Start Block  Dbase State        Hot Spare
          c0t0d0s4                   0     No    Maintenance


  d24: Submirror of d4
      State: Okay
      Size: 2100000 blocks
      Stripe 0:
          Device              Start Block  Dbase State        Hot Spare
          c0t1d0s4                   0     No    Okay



Replace failed disk and restore redundancy

The administrator replaces the failed disk with a new disk of the same geometry. Depending on the system model, the disk replacement may require that the system be powered down. The replacement disk is then partitioned identically to the mirror, a boot block is installed, and state database replicas are added back onto the replacement disk. Finally, the metareplace command copies the data from the mirror to the replacement disk, restoring redundancy to the system.

  # prtvtoc /dev/rdsk/c0t1d0s2 | fmthard -s - /dev/rdsk/c0t0d0s2
  fmthard:  New volume table of contents now in place.

  # installboot /usr/platform/sun4u/lib/fs/ufs/bootblk /dev/rdsk/c0t0d0s0

  # metadb -f -a /dev/dsk/c0t0d0s5

  # metadb -f -a /dev/dsk/c0t0d0s6

  # metadb -i
          flags           first blk       block count
       a        u         16              1034            /dev/dsk/c0t0d0s5
       a        u         16              1034            /dev/dsk/c0t0d0s6
       a m  p  luo        16              1034            /dev/dsk/c0t1d0s5
       a    p  luo        16              1034            /dev/dsk/c0t1d0s6
  o - replica active prior to last mddb configuration change
  u - replica is up to date
  l - locator for this replica was read successfully
  c - replica's location was in /etc/lvm/mddb.cf
  p - replica's location was patched in kernel
  m - replica is master, this is replica selected as input
  W - replica has device write errors
  a - replica is active, commits are occurring to this replica
  M - replica had problem with master blocks
  D - replica had problem with data blocks
  F - replica had format problems
  S - replica is too small to hold current data base
  R - replica had device read errors

  # metareplace -e d0 c0t0d0s0
  d0: device c0t0d0s0 is enabled

  # metareplace -e d1 c0t0d0s1
  d1: device c0t0d0s1 is enabled

  # metareplace -e d4 c0t0d0s4
  d4: device c0t0d0s4 is enabled


Once the resync process is complete (monitor it with "metastat | grep -i progress", as before), operating system redundancy has been restored.


Performing maintenance when booted from cdrom

Introduction
If the system has to be booted from a cdrom or from the network ("boot cdrom" or "boot net") in order to perform maintenance, the operator needs to adjust for the existence of a mirrored operating system. Because these alternate boot environments do not include the drivers necessary for DiskSuite, they cannot be used to operate on state database replicas and DiskSuite metadevices. This raises subtle issues, addressed below.

The administrator is often under pressure while performing this type of maintenance. Because simple mistakes at this stage can render the system unusable, it is important that the process be well documented and tested before it is used in production. Fortunately, this process is simpler than the equivalent for Veritas Volume Manager, because there are no "de-encapsulation" issues to address.

Booting from cdrom with DiskSuite mirrored devices

In the example below, the server pegasus has two internal disks (c0t0d0 and c0t1d0) under DiskSuite control. The operating system is mirrored between the two devices, with slices five and six on each disk employed for state database replicas. Assume that the administrator has forgotten the root password on this server, and needs to boot from cdrom in order to edit the shadow file.

1. Insert the Solaris operating system CD into the cdrom drive and boot from it into single-user mode:

  ok boot cdrom -s

  Initializing Memory
  Rebooting with command: boot cdrom -s
  Boot device: /pci@1f,4000/scsi@3/disk@6,0:f  File and args: -s
  SunOS Release 5.8 Version Generic_108528-07 64-bit
  Copyright 1983-2001 Sun Microsystems, Inc.  All rights reserved.
  Configuring /dev and /devices
  Using RPC Bootparams for network configuration information.
  Skipping interface hme0

  INIT: SINGLE USER MODE
  #


2. Fsck and mount the root disk's "/" partition in order to edit the /etc/shadow file:

  # fsck -y /dev/rdsk/c0t0d0s0

  # mount /dev/dsk/c0t0d0s0 /a


3. Remove the encrypted password from the /a/etc/shadow file:

  # TERM=vt100; export TERM

  # vi /a/etc/shadow


For example, if the entry for the root user looks like the following:

  root:NqfAn3tWOy2Ro:6445::::::


Change it so that it looks as follows:

  root::6445::::::


4. Comment out the rootdev entry in the /a/etc/system file:

  # vi /a/etc/system


For example, change the line:

  rootdev:/pseudo/md@0:0,0,blk


to

  * rootdev:/pseudo/md@0:0,0,blk


5. Update the /a/etc/vfstab file so that it references simple disk slices instead of DiskSuite metadevices. Note that you only have to change the entries that correspond to operating system slices.

For example, one would change the following /a/etc/vfstab file:

  #device             device             mount   FS      fsck    mount   mount
  #to mount           to fsck            point   type    pass    at boot options
  #
  fd                  -                  /dev/fd fd      -       no      -
  /proc               -                  /proc   proc    -       no      -
  /dev/md/dsk/d1      -                  -       swap    -       no      -
  /dev/md/dsk/d0      /dev/md/rdsk/d0    /       ufs     1       no      logging
  /dev/md/dsk/d4      /dev/md/rdsk/d4    /var    ufs     1       no      logging
  swap                -                  /tmp    tmpfs   -       yes     -


to:
  #device             device             mount   FS      fsck    mount   mount
  #to mount           to fsck            point   type    pass    at boot options
  #
  fd                  -                  /dev/fd fd      -       no      -
  /proc               -                  /proc   proc    -       no      -
  /dev/dsk/c0t0d0s1   -                  -       swap    -       no      -
  /dev/dsk/c0t0d0s0   /dev/rdsk/c0t0d0s0 /       ufs     1       no      logging
  /dev/dsk/c0t0d0s4   /dev/rdsk/c0t0d0s4 /var    ufs     1       no      logging
  swap                -                  /tmp    tmpfs   -       yes     -


6. Unmount the root filesystem, fsck it, and return to the ok prompt (press Stop-A, or send a break if you are on a serial console):

  # cd /; umount /a; fsck -y /dev/rdsk/c0t0d0s0
  (Stop-A)
  ok


7. Boot from c0t0d0 into single-user mode. It is important to boot only to single-user mode, so that DiskSuite does not start automatically:

  ok boot -sw


8. When prompted for the root password, just press the ENTER key. Once at the shell prompt, clear the metadevices that referenced the filesystems you updated in the vfstab file above:

  # metaclear -f -r d0 d1 d4


Now would be an appropriate time to create a new password for the root account, via the "passwd root" command.
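
For example (the prompts shown are illustrative):

  # passwd root
  New Password:
  Re-enter new Password:
  passwd: password successfully changed for root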

9. Exit from single-user mode and continue the boot process. Since the metadevices were cleared, the operating system is now running on plain slices; repeat the relevant steps from the mirroring section above to bring the system back under DiskSuite control.

  # exit
