? Display help about menu
?? Display help about the menuing system
q Exit from menus
Select an operation to perform: 17
Exclude Devices
Menu: VolumeManager/Disk/ExcludeDevices
This operation might lead to some devices being suppressed from VxVM's view
or prevent them from being multipathed by vxdmp (This operation can be
reversed using the vxdiskadm command).
Do you want to continue ? [y,n,q,?] (default: y) [Enter]
Volume Manager Device Operations
Menu: VolumeManager/Disk/ExcludeDevices
1 Suppress all paths through a controller from VxVM's view
2 Suppress a path from VxVM's view
3 Suppress disks from VxVM's view by specifying a VID:PID combination
4 Suppress all but one paths to a disk
5 Prevent multipathing of all disks on a controller by VxVM
6 Prevent multipathing of a disk by VxVM
7 Prevent multipathing of disks by specifying a VID:PID combination
8 List currently suppressed/non-multipathed devices
? Display help about menu
?? Display help about the menuing system
q Exit from menus
Select an operation to perform: 5
Exclude controllers from DMP
Menu: VolumeManager/Disk/ExcludeDevices/CTLR-DMP
Use this operation to exclude all disks on a controller from being multipathed
by vxdmp.
As a result of this operation, all disks having a path through the specified
controller will be claimed in the OTHER_DISKS category and hence, not
multipathed by vxdmp. This operation can be reversed using the vxdiskadm
command.
You can specify a controller name at the prompt. A controller name is of
the form c#, example c3, c11 etc. Enter 'all' to exclude all paths on all
the controllers on the host. To see the list of controllers on the system,
type 'list'.
Enter a controller name [,all,list,list-exclude,q,?] list
The following controllers were found on the system :
c0 c1 c2
NOTE: c0 here corresponds to the local disks, so do not enter c0 below; c1 and c2 correspond to the LUNs on the disk array.
Hit RETURN to continue. [Enter]
Exclude controllers from DMP
Menu: VolumeManager/Disk/ExcludeDevices/CTLR-DMP
Use this operation to exclude all disks on a controller from being multipathed
by vxdmp.
As a result of this operation, all disks having a path through the specified
controller will be claimed in the OTHER_DISKS category and hence, not
multipathed by vxdmp. This operation can be reversed using the vxdiskadm
command.
You can specify a controller name at the prompt. A controller name is of
the form c#, example c3, c11 etc. Enter 'all' to exclude all paths on all
the controllers on the host. To see the list of controllers on the system,
type 'list'.
Enter a controller name [,all,list,list-exclude,q,?] list-exclude
Devices excluded from VxVM:
--------------------------
Paths : None
Controllers : None
VID:PID : None
Devices excluded from multipathing by vxdmp:
-------------------------------------------
Paths : None
VID:PID : None
Pathgroups : None
----------
Hit RETURN to continue. [Enter]
Exclude controllers from DMP
Menu: VolumeManager/Disk/ExcludeDevices/CTLR-DMP
Use this operation to exclude all disks on a controller from being multipathed
by vxdmp.
As a result of this operation, all disks having a path through the specified
controller will be claimed in the OTHER_DISKS category and hence, not
multipathed by vxdmp. This operation can be reversed using the vxdiskadm
command.
You can specify a controller name at the prompt. A controller name is of
the form c#, example c3, c11 etc. Enter 'all' to exclude all paths on all
the controllers on the host. To see the list of controllers on the system,
type 'list'.
Enter a controller name [,all,list,list-exclude,q,?] c1
All disks on the following enclosures will be excluded from DMP ( ie
claimed in the OTHER_DISKS category and hence not multipathed by vxdmp) as a
result of this operation :
RDAC0
Continue operation? [y,n,q,?] (default: y) [Enter]
Do you wish to exclude more controllers ? [y,n,q,?] (default: n) y
Exclude controllers from DMP
Menu: VolumeManager/Disk/ExcludeDevices/CTLR-DMP
Use this operation to exclude all disks on a controller from being multipathed
by vxdmp.
As a result of this operation, all disks having a path through the specified
controller will be claimed in the OTHER_DISKS category and hence, not
multipathed by vxdmp. This operation can be reversed using the vxdiskadm
command.
You can specify a controller name at the prompt. A controller name is of
the form c#, example c3, c11 etc. Enter 'all' to exclude all paths on all
the controllers on the host. To see the list of controllers on the system,
type 'list'.
Enter a controller name [,all,list,list-exclude,q,?] c2
All disks on the following enclosures will be excluded from DMP ( ie
claimed in the OTHER_DISKS category and hence not multipathed by vxdmp) as a
result of this operation :
RDAC0
Continue operation? [y,n,q,?] (default: y) [Enter]
Do you wish to exclude more controllers ? [y,n,q,?] (default: n) [Enter]
Volume Manager Device Operations
Menu: VolumeManager/Disk/ExcludeDevices
1 Suppress all paths through a controller from VxVM's view
2 Suppress a path from VxVM's view
3 Suppress disks from VxVM's view by specifying a VID:PID combination
4 Suppress all but one paths to a disk
5 Prevent multipathing of all disks on a controller by VxVM
6 Prevent multipathing of a disk by VxVM
7 Prevent multipathing of disks by specifying a VID:PID combination
8 List currently suppressed/non-multipathed devices
? Display help about menu
?? Display help about the menuing system
q Exit from menus
Select an operation to perform: 8
Devices hidden from VxVM / not multipathed by vxdmp
Menu: VolumeManager/Disk/ExcludeDevices/listexclude
The following is the list of devices currently hidden from VxVM or not
multipathed by vxdmp:
Devices excluded from VxVM:
--------------------------
Paths : None
Controllers : None
VID:PID : None
Devices excluded from multipathing by vxdmp:
-------------------------------------------
Paths : c1t5d0 c1t5d1 c2t5d0 c2t5d1
VID:PID : None
Pathgroups : None
----------
Hit RETURN to continue. [Enter]
Volume Manager Device Operations
Menu: VolumeManager/Disk/ExcludeDevices
1 Suppress all paths through a controller from VxVM's view
2 Suppress a path from VxVM's view
3 Suppress disks from VxVM's view by specifying a VID:PID combination
4 Suppress all but one paths to a disk
5 Prevent multipathing of all disks on a controller by VxVM
6 Prevent multipathing of a disk by VxVM
7 Prevent multipathing of disks by specifying a VID:PID combination
8 List currently suppressed/non-multipathed devices
? Display help about menu
?? Display help about the menuing system
q Exit from menus
Select an operation to perform: q
Volume Manager Support Operations
Menu: VolumeManager/Disk
1 Add or initialize one or more disks
2 Encapsulate one or more disks
3 Remove a disk
4 Remove a disk for replacement
5 Replace a failed or removed disk
6 Mirror volumes on a disk
7 Move volumes from a disk
8 Enable access to (import) a disk group
9 Remove access to (deport) a disk group
10 Enable (online) a disk device
11 Disable (offline) a disk device
12 Mark a disk as a spare for a disk group
13 Turn off the spare flag on a disk
14 Unrelocate subdisks back to a disk
15 Exclude a disk from hot-relocation use
16 Make a disk available for hot-relocation use
17 Prevent multipathing/Suppress devices from VxVM's view
Hit RETURN to continue. [Enter]
18 Allow multipathing/Unsuppress devices from VxVM's view
19 List currently suppressed/non-multipathed devices
20 Change the disk naming scheme
21 Get the newly connected/zoned disks in VxVM view
list List disk information
? Display help about menu
?? Display help about the menuing system
q Exit from menus
Select an operation to perform: q
Please wait while the device suppression/unsuppression operations take effect.
Goodbye.
root@a1 #
root@a1 # scshutdown -y -g0
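The exclusions can also be double-checked from the command line (before or after the reboot); a quick sketch using standard VxVM utilities, where the exact output depends on the array:
root@a1 # vxdmpadm listctlr all    # controllers and their state as DMP now sees them
root@a1 # vxdisk list              # paths excluded from DMP should claim as OTHER_DISKS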
1.3.2 Create the happydg volumes
Notes:
1. This procedure only needs to be run on one node.
2. The commands below can be saved to a file named create_volume.sh; run chmod 755 create_volume.sh to make it executable, then run it with sh create_volume.sh. Expect the run to take several hours, depending on disk capacity.
3. Every device name such as c1t5d1 in the script below must be replaced with the actual disk array device names, which can be listed with the format command. In the example output below, the four array devices we need are c1t5d0, c1t5d1, c2t5d0 and c2t5d1; note that c0t0d0 and c0t1d0 are local disks of a1 (the host where format was run), not array disks.
root@a1 # format
Searching for disks...done
AVAILABLE DISK SELECTIONS:
0. c0t0d0
/pci@1f,4000/scsi@3/sd@0,0
1. c0t1d0
/pci@1f,4000/scsi@3/sd@1,0
2. c1t5d0
/pseudo/rdnexus@1/rdriver@5,0
3. c1t5d1
/pseudo/rdnexus@1/rdriver@5,1
4. c2t5d0
/pseudo/rdnexus@2/rdriver@5,0
5. c2t5d1
/pseudo/rdnexus@2/rdriver@5,1
Specify disk (enter its number): ^D
root@a1 #
4. The disk names happydg01 and happydg02 in the script below correspond to LUN1 as built on the A1000 array; the sizes given to front and back must be adjusted to match the actual capacity of LUN1.
5. The script creates four volumes: front, back, log-alam and happydg-stat, of which happydg-stat is reserved for system use. Space is divided as follows: the total is the capacity of LUN1 (here 6 x 34G = 204G raw, configured as RAID 5 plus a hot spare). happydg-stat and log-alam get small fixed slices (100m and 4096m in the script below), and the remainder is split evenly between front and back. It is recommended to round front and back down to multiples of 1024m to avoid errors from the actual space falling short; a worked sketch follows this list.
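A worked version of this split, as a sketch (the LUN size below is a hypothetical example; substitute the real usable capacity of LUN1 in megabytes):
# All sizes in megabytes; expr keeps this Bourne-shell portable.
LUN_MB=188416                 # hypothetical usable size of LUN1
STAT_MB=100                   # happydg-stat
LOG_MB=4096                   # log-alam
HALF=`expr \( $LUN_MB - $STAT_MB - $LOG_MB \) / 2`
FRONT_MB=`expr $HALF / 1024 \* 1024`   # round down to a multiple of 1024
echo "front = back = ${FRONT_MB}m"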
1. Write the create_volume.sh script:
# Initialize the two array devices (one on each controller path) for VxVM use
/usr/lib/vxvm/bin/vxdisksetup -i c1t5d1
/usr/lib/vxvm/bin/vxdisksetup -i c2t5d1
# Create the happydg disk group on the first disk, then add the second
vxdg init happydg happydg01=c1t5d1
vxdg -g happydg adddisk happydg02=c2t5d1
# Create the four volumes on happydg01
vxassist -g happydg -U fsgen make happydg-stat 100m layout=nolog happydg01
vxassist -g happydg -U fsgen make log-alam 4096m layout=nolog happydg01
vxassist -g happydg -U fsgen make front 92180m layout=nolog happydg01
vxassist -g happydg -U fsgen make back 92180m layout=nolog happydg01
# Mirror each volume onto happydg02; this resync is the slow part of the run
vxassist -g happydg mirror happydg-stat layout=nolog happydg02
vxassist -g happydg mirror log-alam layout=nolog happydg02
vxassist -g happydg mirror front layout=nolog happydg02
vxassist -g happydg mirror back layout=nolog happydg02
2. Run the script:
In the directory containing the script, run sh create_volume.sh
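When the script completes, verify the layout and prepare the filesystems that the HAStoragePlus resources registered below will mount. The following is a sketch of the usual steps under our assumptions: UFS on each volume, and the mount points /var/frontsave, /var/backsave and /var/other matching the resource properties used later.
root@a1 # vxprint -g happydg -ht    # confirm each volume has two plexes (mirrors)
root@a1 # newfs /dev/vx/rdsk/happydg/front
root@a1 # newfs /dev/vx/rdsk/happydg/back
root@a1 # newfs /dev/vx/rdsk/happydg/log-alam
root@a1 # mkdir -p /var/frontsave /var/backsave /var/other    # repeat on b1
Then add entries like the following to /etc/vfstab on both nodes ("mount at boot" set to no, since the cluster mounts them):
/dev/vx/dsk/happydg/front  /dev/vx/rdsk/happydg/front  /var/frontsave  ufs  2  no  -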
1.3.3 Register the happydg disk group with Sun Cluster device management
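The scsetup dialog below drives this registration; the underlying command it generates should be roughly equivalent to the following one-liner (a sketch, assuming the node names a1 and b1):
root@a1 # scconf -a -D type=vxvm,name=happydg,nodelist=a1:b1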
root@a1 #
root@a1 # scsetup
*** Main Menu ***
Please select from one of the following options:
1) Quorum
2) Resource groups
3) Cluster interconnect
4) Device groups and volumes
5) Private hostnames
6) New nodes
7) Other cluster properties
?) Help with menu options
q) Quit
Option: 4
*** Device Groups Menu ***
Please select from one of the following options:
1) Register a VxVM disk group as a device group
2) Synchronize volume information for a VxVM device group
3) Unregister a VxVM device group
4) Add a node to a VxVM device group
5) Remove a node from a VxVM device group
6) Change key properties of a device group
?) Help
q) Return to the Main Menu
Option: 1
>>> Register a VxVM Disk Group as a Device Group <<<
...
>>> Create a Resource Group <<<
...
Option: 3
The type of resource you selected is not yet registered. Each
resource type must be registered with the cluster before any
resources of the selected type can be added to a resource group.
Registration of a resource type simply updates the cluster-wide
configuration with the resource type data copied to an individual
node at the time that the resource type was installed. However, it is
important that the same resource type, or data service, software has
been installed on each node in the cluster.
Is the software for this service installed on each node (yes/no) [yes]? [Enter]
Is it okay to register this resource type now (yes/no) [yes]? [Enter]
scrgadm -a -t SUNW.HAStoragePlus
Command completed successfully.
Hit ENTER to continue: [Enter]
What is the name of the resource you want to add? storg-front
Please wait - looking up resource properties .........................
Some resource types support the setting of certain extension
properties. Please check the documentation for your data service to
determine whether or not you need to set any extension properties for
the resource you are adding.
Any extension properties you would like to set (yes/no) [yes]? [Enter]
Here are the extension properties for this resource:
Property Name Default Setting
============= ===============
GlobalDevicePaths
FilesystemMountPoints
AffinityOn True
FilesystemCheckCommand
RunBeforeStartMethod
RunAfterStartMethod
RunBeforeStopMethod
RunAfterStopMethod
Please enter the list of properties you want to set:
(Type Ctrl-D to finish OR "?" for help)
Property name: GlobalDevicePaths
Property description: The list of HA global device paths
Property value: /dev/vx/dsk/happydg/front
Property name: FilesystemMountPoints
Property description: The list of file system mountpoints
Property value: /var/frontsave
Property name: ^D
Here is the list of extension properties you entered:
GlobalDevicePaths=/dev/vx/dsk/happydg/front
FilesystemMountPoints=/var/frontsave
Is it correct (yes/no) [yes]? [Enter]
Is it okay to proceed with the update (yes/no) [yes]? [Enter]
scrgadm -a -j storg-front -g happyrg -t SUNW.HAStoragePlus -x GlobalDevicePaths=/dev/vx/dsk/happydg/front -x FilesystemMountPoints=/var/frontsave
Command completed successfully.
Hit ENTER to continue: [Enter]
Do you want to add any additional data service resources (yes/no) [no]?
Do you want to bring this resource group online now (yes/no) [yes]?
scswitch -Z -g happyrg
Command completed successfully.
Hit ENTER to continue: [Enter]
*** Resource Group Menu ***
Please select from one of the following options:
1) Create a resource group
2) Add a network resource to a resource group
3) Add a data service resource to a resource group
?) Help
q) Return to the previous Menu
Option: 3
>>> Add a Data Service Resource to a Resource Group <<<
...
FilesystemMountPoints
AffinityOn True
FilesystemCheckCommand
RunBeforeStartMethod
RunAfterStartMethod
RunBeforeStopMethod
RunAfterStopMethod
Please enter the list of properties you want to set:
(Type Ctrl-D to finish OR "?" for help)
Property name: GlobalDevicePaths
Property description: The list of HA global device paths
Property value: /dev/vx/dsk/happydg/back
Property name: FilesystemMountPoints
Property description: The list of file system mountpoints
Property value: /var/backsave
Property name: ^D
Here is the list of extension properties you entered:
GlobalDevicePaths=/dev/vx/dsk/happydg/back
FilesystemMountPoints=/var/backsave
Is it correct (yes/no) [yes]? [Enter]
Is it okay to proceed with the update (yes/no) [yes]? [Enter]
scrgadm -a -j storg-back -g happyrg -t SUNW.HAStoragePlus -x GlobalDevicePaths=/dev/vx/dsk/happydg/back -x FilesystemMountPoints=/var/backsave
Command completed successfully.
Hit ENTER to continue: [Enter]
Do you want to enable this resource (yes/no) [yes]? [Enter]
Is it okay to proceed with the update (yes/no) [yes]? [Enter]
scswitch -e -j storg-back
scswitch -e -M -j storg-back
Commands completed successfully.
Hit ENTER to continue: [Enter]
*** Resource Group Menu ***
Please select from one of the following options:
1) Create a resource group
2) Add a network resource to a resource group
3) Add a data service resource to a resource group
?) Help
q) Return to the previous Menu
Option: 3
>>> Add a Data Service Resource to a Resource Group <<<
...
FilesystemMountPoints
AffinityOn True
FilesystemCheckCommand
RunBeforeStartMethod
RunAfterStartMethod
RunBeforeStopMethod
RunAfterStopMethod
Please enter the list of properties you want to set:
(Type Ctrl-D to finish OR "?" for help)
Property name: GlobalDevicePaths
Property description: The list of HA global device paths
Property value: /dev/vx/dsk/happydg/log-alam
Property name: FilesystemMountPoints
Property description: The list of file system mountpoints
Property value: /var/other
Property name: ^D
Here is the list of extension properties you entered:
GlobalDevicePaths=/dev/vx/dsk/happydg/log-alam
FilesystemMountPoints=/var/other
Is it correct (yes/no) [yes]? [Enter]
Is it okay to proceed with the update (yes/no) [yes]? [Enter]
scrgadm -a -j storg-other -g happyrg -t SUNW.HAStoragePlus -x GlobalDevicePaths=/dev/vx/dsk/happydg/log-alam -x FilesystemMountPoints=/var/other
Command completed successfully.
Hit ENTER to continue: [Enter]
Do you want to enable this resource (yes/no) [yes]? [Enter]
Is it okay to proceed with the update (yes/no) [yes]? [Enter]
scswitch -e -j storg-other
scswitch -e -M -j storg-other
Commands completed successfully.
Hit ENTER to continue: [Enter]
*** Resource Group Menu ***
Please select from one of the following options:
1) Create a resource group
2) Add a network resource to a resource group
3) Add a data service resource to a resource group
?) Help
q) Return to the previous Menu
Option: q
*** Main Menu ***
Please select from one of the following options:
1) Quorum
2) Resource groups
3) Cluster interconnect
4) Device groups and volumes
5) Private hostnames
6) New nodes
7) Other cluster properties
?) Help with menu options
q) Quit
Option: q
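With the three storage resources (storg-front, storg-back, storg-other) registered and enabled, their state can be spot-checked before moving on; the full scstat -g output for this cluster appears in the verification section below.
root@a1 # scstat -g | more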
Configure the Sun Cluster public network
root@A # pnmset -p
current configuration is:
nafo0 qfe1
nafo1 qfe3
root@A # pnmset -c nafo0 -o add qfe2
root@A # pnmset -p
current configuration is:
nafo0 qfe1 qfe2
nafo1 qfe3
Run the following command to reboot both nodes.
In this deployment the actual configuration was:
nafo0 qfe0
nafo1 qfe1 qfe2
nafo2 qfe3
root@a1 # scshutdown -y -g0
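Before the reboot, each NAFO group can be checked for health. Assuming the PNM utilities that ship with this Sun Cluster release, a sketch:
root@a1 # pnmstat -l    # one line per NAFO group: status and currently active adapter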
Install the happy monitoring scripts
Upload sc30_script.tar in binary mode to /opt on both a1 and b1, then run:
(1) cd /opt
(2) tar xvf sc30_script.tar
Run the following command to reboot both nodes:
root@a1 # scshutdown -y -g0
Verify the configuration
1. Check that the nodes can join the cluster and that switchover works.
a. On a1, run:
root@a1 # scstat | more    (check that the node states are normal)
root@a1 # df -k            (check that all volumes of the shared array are present)
root@a1 # scswitch -S -h A (evacuate node A and check that the happy application keeps running normally)
b. On b1, run:
root@b1 # scstat | more    (check that the node states are normal)
root@b1 # df -k            (check that all volumes of the shared array are present)
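After the scswitch -S evacuation test, the resource group can be switched back explicitly; a sketch using the standard scswitch switchover syntax, with the node name as it appears in the scstat output:
root@b1 # scswitch -z -g happyrg -h A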
Example:
root@A # scstat -g
-- Resource Groups and Resources --
Group Name Resources
---------- ---------
Resources: happyrg cg1 cg2 storg-front storg-back storg-other
-- Resource Groups --
Group Name Node Name State
---------- --------- -----
Group: happyrg A Online
Group: happyrg B Offline
-- Resources --
Resource Name Node Name State Status Message
------------- --------- ----- --------------
Resource: cg1 A Online Online - LogicalHostname online.
Resource: cg1 B Offline Offline
Resource: cg2 A Online Online - LogicalHostname online.
Resource: cg2 B Offline Offline
Resource: storg-front A Online Online
Resource: storg-front B Offline Offline
Resource: storg-back A Online Online
Resource: storg-back B Offline Offline
Resource: storg-other A Online Online
Resource: storg-other B Offline Offline
root@a1 #
(The output above shows that node A currently holds all of the resources.)
2. Check the IP addresses (the virtual IPs are visible only on the node that currently holds the resources):
root@A # ifconfig -a
lo0: flags=1000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4> mtu 8232 index 1
        inet 127.0.0.1 netmask ff000000
lo0:1: flags=1008849<UP,LOOPBACK,RUNNING,MULTICAST,PRIVATE,IPv4> mtu 8232 index 1
        inet 172.16.193.1 netmask ffffffff
qfe0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
        inet 10.243.80.176 netmask ffffffe0 broadcast 10.243.80.191
        ether 0:3:ba:12:8b:2c
qfe1: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 3
        inet 10.243.80.129 netmask fffffff8 broadcast 10.243.80.135
        ether 0:3:ba:12:8b:2c
qfe1:1: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 3
        inet 10.243.80.131 netmask fffffff8 broadcast 10.243.80.135
qfe3: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 4
        inet 10.243.80.145 netmask fffffff0 broadcast 10.243.80.159
        ether 0:3:ba:12:8b:2c
qfe3:1: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 4
        inet 10.243.80.147 netmask fffffff0 broadcast 10.243.80.159
hme1: flags=1008843<UP,BROADCAST,RUNNING,MULTICAST,PRIVATE,IPv4> mtu 1500 index 5
        inet 172.16.1.1 netmask ffffff80 broadcast 172.16.1.127
        ether 0:3:ba:12:8b:2c
hme1:2: flags=1008843<UP,BROADCAST,RUNNING,MULTICAST,PRIVATE,IPv4> mtu 1500 index 5
        inet 172.16.194.6 netmask fffffffc broadcast 172.16.194.7
hme0: flags=1008843<UP,BROADCAST,RUNNING,MULTICAST,PRIVATE,IPv4> mtu 1500 index 6
        inet 172.16.0.129 netmask ffffff80 broadcast 172.16.0.255
        ether 0:3:ba:12:8b:2c
Source: ChinaUnix blog. Original post: http://blog.chinaunix.net/u2/65250/showart_973330.html