Installing Veritas Cluster Server 4.0 (repost)

#1 Posted 2006-11-30 08:37
Hardware overview:
    1. Node1: Netra 20 (2 x UltraSPARC-III+, 2048 MB RAM, 2 x 72 GB hard disks)
    2. Node2: Netra 20 (2 x UltraSPARC-III+, 2048 MB RAM, 2 x 72 GB hard disks)
    3. Shared storage: D1000 (3 x 36 GB hard disks)
    I. Install the operating system
    Install Solaris 8, the 2/04 release (February 2004). During installation, select English as the primary language.
   
    II. Install the EIS-CD
    Install the 2/04 release of the EIS-CD, which sets up the environment variables needed for a cluster.
   
    III. Install patches
    To avoid a problem of abnormally high reported CPU usage, install patch 117000-05, which can be downloaded from Sun's official website. Unpacking the download creates a 117000-05 directory. Install the patch with:
    patchadd 117000-05
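
    To confirm the patch is in place, the installed patch list can be checked (showrev is standard on Solaris 8):
    showrev -p | grep 117000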
   
    IV. Attach the shared storage
    In this environment we use a Sun D1000 array as the shared storage. It is a SCSI-attached array; a fibre-attached array would require a different procedure.
    1. Power on the array.
    2. Connect Node1 to the array with a SCSI cable.
    3. Power on Node1.
    4. Bring Node1 to the ok prompt (watch Node1's boot on the console, and press Ctrl+Break as soon as the boot messages appear).
    5. {0} ok probe-scsi-all
    6. {0} ok boot -r
    7. After the reboot, log in to the OS and run the format command to confirm that the array has been recognized by this system.
    8. Power off Node1, power on Node2, and repeat steps 4-7 to confirm the array is also recognized by that machine.
    9. For both machines to access the shared storage at the same time, the SCSI initiator ID of one of them must be changed. Power on Node1 and bring it to the ok prompt (at this point Node1, Node2, and the array are all powered on; Node2 can already see the array with format, and Node1 is at the ok prompt).
    10. Set Node1's SCSI initiator ID to 5 (the default is 7):
    {0} ok setenv scsi-initiator-id 5
    11. The screen output from step 5 gives the device path of this SCSI controller, /pci@8,700000/scsi@2,1, which is used in the next step.
    12. Set the SCSI initiator ID to 5 on that controller. (Note that in the input on line 2, " scsi-initiator-id" has a space immediately after the opening double quote!)
    {0} ok nvedit
    0: probe-all
    1: cd /pci@8,700000/scsi@2,1
    2: 5 " scsi-initiator-id" integer-property
    3: device-end
    4: install-console
    5: banner
    13. Save the configuration from step 12:
    {0} ok nvstore
    14. Set the environment variables:
    {0} ok setenv use-nvramrc? true
    use-nvramrc? = true
    {0} ok setenv auto-boot? true
    auto-boot? = true
    15. Reboot Node1:
    {0} ok reset-all
    16. Run the format command on both Node1 and Node2 to confirm that both successfully see the shared storage.
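
    If anything looks wrong in step 16, the NVRAM settings can be re-checked from the ok prompt (both are standard OBP commands):
    {0} ok printenv scsi-initiator-id
    {0} ok printenv nvramrc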
   
    V. Install VCS
    1. Set up the .rhosts file. Do this on both machines; Node1 is shown as the example.
    Create a .rhosts file under / to enable remote login, so that VCS can conveniently be installed on both machines at the same time.
    root@uulab-s22 # echo "+" > /.rhosts
    root@uulab-s22 # more /.rhosts
    +
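
    A quick way to confirm that remote access works before starting the installer (assuming Node2's hostname is uulab-s23, as used elsewhere in this setup):
    root@uulab-s22 # rsh uulab-s23 uname -n
    uulab-s23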
   
    2. Configure an additional NIC and IP address
    The heartbeat NIC in a cluster should ideally be a dedicated network device, so on both machines we configure a second NIC and IP address reserved for the heartbeat. Do this on both machines; Node1 is shown as the example.
    root@uulab-s22 # echo "uulab-p22" > /etc/hostname.qfe0
    root@uulab-s22 # touch /etc/notrouter
    root@uulab-s22 # vi /etc/hosts
    Add the following lines:
    192.168.0.6 uulab-p22
    192.168.0.8 uulab-p23
    root@uulab-s22 # vi /etc/netmasks
    Add the following line:
    192.168.0.0 255.255.255.0
    root@uulab-s22 # sync
    root@uulab-s22 # reboot
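
    After the reboot, it is worth confirming that the heartbeat interface is up and the peer answers (assuming Node2 has been configured the same way):
    root@uulab-s22 # ifconfig qfe0
    root@uulab-s22 # ping uulab-p23
    uulab-p23 is alive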
   
    3. Run the installer
    Installing VCS 4.0 has been simplified to a single command, which installs Veritas Volume Manager 4.0, Veritas File System 4.0, Veritas Cluster Server 4.0, and some related software all at once. The following commands only need to be run on one node.
    root@uulab-s22 # cd /opt/sf_ha.4.0.sol/storage_foundation
    root@uulab-s22 # ./installsf
    The installation is just a matter of making selections and following the prompts; it is fairly simple overall, so the details are omitted here.
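
    Once the installer finishes, the installed Veritas packages can be listed as a quick sanity check (pkginfo is standard Solaris):
    root@uulab-s22 # pkginfo | grep -i vrts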
   
    VI. Create the disk group
    After VCS is installed, it will ask you to reboot all nodes with the following command:
    shutdown -y -i6 -g0

    After the reboot, create the disk group. The following commands only need to be run on one node.
    Use the format command to confirm that the shared disks to be added to the disk group are c4t0d0, c4t8d0, and c4t9d0:
    root@uulab-s22 # format
    Searching for disks...done
   
   
    AVAILABLE DISK SELECTIONS:
    0. c1t0d0
    /pci@8,600000/SUNW,qlc@4/fp@0,0/ssd@w2100000c50569190,0
    1. c1t1d0
    /pci@8,600000/SUNW,qlc@4/fp@0,0/ssd@w2100000c5056c1a7,0
    2. c4t0d0
    /pci@8,700000/scsi@2,1/sd@0,0
    3. c4t8d0
    /pci@8,700000/scsi@2,1/sd@8,0
    4. c4t9d0
    /pci@8,700000/scsi@2,1/sd@9,0
    Specify disk (enter its number):
   
    root@uulab-s22 # vxdisksetup -i c4t0d0
    If this command fails with the following error, see Solution 1 in the appendix.
    VxVM vxdisksetup ERROR V-5-2-3535 c4t0d0s2: Invalid dmpnodename for disk device c4t0d0.
    If it fails with the following error instead, see Solution 2 in the appendix.
    VxVM vxdisksetup ERROR V-5-2-1813 c4t0d0: Disk is part of ipasdg disk group, use -f option to force setup.
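    (Solution 2 is not reproduced in the appendix below. Going by the hint in the error message itself, the fix is presumably a forced re-setup along the lines shown here, but only once you are sure the disk's old disk-group contents are disposable:)
    root@uulab-s22 # vxdisksetup -i -f c4t0d0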
   
    root@uulab-s22 # vxdisksetup -i c4t8d0
    root@uulab-s22 # vxdisksetup -i c4t9d0
   
    root@uulab-s22 # vxdg init hlrdg hlrdg-01=c4t0d0
    root@uulab-s22 # vxdg -g hlrdg adddisk hlrdg-02=c4t8d0
    root@uulab-s22 # vxdg -g hlrdg adddisk hlrdg-03=c4t9d0
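
    At this point the new disk group and its member disks can be verified with the standard VxVM listing commands:
    root@uulab-s22 # vxdg list
    root@uulab-s22 # vxdisk -g hlrdg list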
   
    VII. Create the volumes
    root@uulab-s22 # vxassist -g hlrdg -b make oradata_vol 15g layout=nostripe,nolog nmirror=2 &
    root@uulab-s22 # vxassist -g hlrdg -b make oraredo_vol 5g layout=nostripe,nolog nmirror=2 &
    root@uulab-s22 # vxassist -g hlrdg -b make oraarch_vol 8g layout=nostripe,nolog nmirror=2 &
    root@uulab-s22 # vxassist -g hlrdg -b make hlr_vol 4g layout=nostripe,nolog nmirror=2 &
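
    Because each volume is created with -b, the mirror synchronization runs in the background; its progress can be watched, and the finished volumes listed, with:
    root@uulab-s22 # vxtask list
    root@uulab-s22 # vxprint -g hlrdg -v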
   
    VIII. Create the file systems with VxFS
    root@uulab-s22 # mkfs -F vxfs -o bsize=8192,largefiles /dev/vx/rdsk/hlrdg/oradata_vol
    root@uulab-s22 # mkfs -F vxfs -o bsize=8192,largefiles /dev/vx/rdsk/hlrdg/oraredo_vol
    root@uulab-s22 # mkfs -F vxfs -o bsize=8192,largefiles /dev/vx/rdsk/hlrdg/oraarch_vol
    root@uulab-s22 # mkfs -F vxfs -o bsize=8192,largefiles /dev/vx/rdsk/hlrdg/hlr_vol
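
    Each new file system can be sanity-checked with fstyp (standard Solaris), which should report vxfs:
    root@uulab-s22 # fstyp /dev/vx/rdsk/hlrdg/oradata_vol
    vxfs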
   
    IX. Configure VCS
    In this environment we created four file systems: oradata_vol, oraredo_vol, oraarch_vol, and hlr_vol. A file system must be mounted on a directory before it can be used, so first create the corresponding directories. Do this on both machines; Node1 is shown as the example. (This assumes the dba group and the oracle user already exist on the system.)
    root@uulab-s22 # mkdir -p /opt/oracle/data
    root@uulab-s22 # mkdir -p /opt/oracle/redo
    root@uulab-s22 # mkdir -p /opt/oracle/arch
    root@uulab-s22 # mkdir -p /opt/hlr
   
    root@uulab-s22 # chown oracle:dba /dev/vx/rdsk/hlrdg/oradata_vol
    root@uulab-s22 # chown oracle:dba /dev/vx/rdsk/hlrdg/oraredo_vol
    root@uulab-s22 # chown oracle:dba /dev/vx/rdsk/hlrdg/oraarch_vol
   
    root@uulab-s22 # chown oracle:dba /opt/oracle/data
    root@uulab-s22 # chown oracle:dba /opt/oracle/redo
    root@uulab-s22 # chown oracle:dba /opt/oracle/arch
   
    Edit /etc/VRTSvcs/conf/config/main.cf, the VCS configuration file. It can be modified with the command-line tools (hagrp, hares, and so on) or directly with any text editor (such as vi); here we use vi. (The original post marked the newly added lines in bold; that formatting has not survived the repost.)

    After editing, the file looks like this:
    include "types.cf"
   
    cluster vcs_hlr_cluster (
    UserNames = { admin = hijBidIfjEjjHrjDig }
    ClusterAddress = "10.7.1.7" // the virtual IP of the cluster
    Administrators = { admin }
    CounterInterval = 5
    )
   
    system uulab-s22 (
    )
   
    system uulab-s23 (
    )
   
    group ClusterService (
    SystemList = { uulab-s22 = 0, uulab-s23 = 1 }
    UserStrGlobal = "LocalCluster@https://10.7.1.7:8443;LocalCluster@https://10.7.1.7:8443;"
    AutoStartList = { uulab-s22, uulab-s23 }
    OnlineRetryLimit = 3
    OnlineRetryInterval = 120
    )
   
    DiskGroup hlrdg (
    DiskGroup = hlrdg
    MonitorReservation = 1
    )
   
    IP webip (
    Device = eri0
    Address = "10.7.1.7"
    NetMask = "255.255.0.0"
    )
   
    Mount arch_mnt (
    MountPoint = "/opt/oracle/arch"
    BlockDevice = "/dev/vx/dsk/hlrdg/oraarch_vol"
    FSType = vxfs
    FsckOpt = "-y"
    )
   
    Mount data_mnt (
    MountPoint = "/opt/oracle/data"
    BlockDevice = "/dev/vx/dsk/hlrdg/oradata_vol"
    FSType = vxfs
    FsckOpt = "-y"
    )
   
    Mount hlr_mnt (
    MountPoint = "/opt/hlr"
    BlockDevice = "/dev/vx/dsk/hlrdg/hlr_vol"
    FSType = vxfs
    FsckOpt = "-y"
    )
   
    Mount redo_mnt (
    MountPoint = "/opt/oracle/redo"
    BlockDevice = "/dev/vx/dsk/hlrdg/oraredo_vol"
    FSType = vxfs
    FsckOpt = "-y"
    )
   
    NIC csgnic (
    Device = eri0
    )
   
    VRTSWebApp VCSweb (
    Critical = 0
    AppName = vcs
    InstallDir = "/opt/VRTSweb/VERITAS"
    TimeForOnline = 5
    RestartLimit = 3
    )
   
    VCSweb requires webip
    arch_mnt requires hlr_mnt
    data_mnt requires redo_mnt
    hlr_mnt requires hlrdg
    redo_mnt requires arch_mnt
    webip requires csgnic
   
    When you have finished editing, check the syntax with:
    root@uulab-s22 # hacf -verify /etc/VRTSvcs/conf/config
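
    Once both nodes are back up with VCS running (see the next section), the resource dependency tree and the overall cluster state can be inspected with the standard VCS commands:
    root@uulab-s22 # hares -dep
    root@uulab-s22 # hastatus -sum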
   
    X. Test VCS
    After the configuration is done, reboot both machines at the same time. Once they are back up, check that VCS is running properly with the following command:
    root@uulab-s22 # hares -display -group ClusterService
    Under normal conditions the output looks like this:
    #Resource Attribute System Value
    VCSweb Group global ClusterService
    VCSweb Type global VRTSWebApp
    VCSweb AutoStart global 1
    VCSweb Critical global 0
    VCSweb Enabled global 1
    VCSweb LastOnline global uulab-s22
    VCSweb MonitorOnly global 0
    VCSweb ResourceOwner global unknown
    VCSweb TriggerEvent global 0
    VCSweb ArgListValues uulab-s22 vcs /opt/VRTSweb/VERITAS 5
    VCSweb ArgListValues uulab-s23 vcs /opt/VRTSweb/VERITAS 5
    VCSweb ConfidenceLevel uulab-s22 100
    VCSweb ConfidenceLevel uulab-s23 0
    VCSweb Flags uulab-s22
    VCSweb Flags uulab-s23
    VCSweb IState uulab-s22 not waiting
    VCSweb IState uulab-s23 not waiting
    VCSweb Probed uulab-s22 1
    VCSweb Probed uulab-s23 1
    VCSweb Start uulab-s22 1
    VCSweb Start uulab-s23 0
    VCSweb State uulab-s22 ONLINE
    VCSweb State uulab-s23 OFFLINE
    VCSweb AppName global vcs
    VCSweb ComputeStats global 0
    VCSweb InstallDir global /opt/VRTSweb/VERITAS
    VCSweb ResourceInfo global State Valid Msg TS
    VCSweb RestartLimit global 3
    VCSweb TimeForOnline global 5
    VCSweb MonitorTimeStats uulab-s22 Avg 0 TS
    VCSweb MonitorTimeStats uulab-s23 Avg 0 TS
    #
    …………
   
    You can also use the df command to check that all the file systems are mounted on Node1, while none of them are visible on Node2.
   
    Run a switchover test:
    root@uulab-s22 # hagrp -switch ClusterService -to uulab-s23
    Under normal conditions it takes roughly 5 seconds for all the resources, including the IP address and the file-system mount points, to move over to Node2.
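
    The result can be confirmed, and the group switched back afterwards, with hagrp (hagrp -state shows on which system the group is online):
    root@uulab-s22 # hagrp -state ClusterService
    root@uulab-s22 # hagrp -switch ClusterService -to uulab-s22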
   
   
    XI. Appendix
    Solution 1:
    root@uulab-s22 # vxdiskadm
   
   
    Volume Manager Support Operations
    Menu: VolumeManager/Disk
   
    1 Add or initialize one or more disks
    2 Encapsulate one or more disks
    3 Remove a disk
    4 Remove a disk for replacement
    5 Replace a failed or removed disk
    6 Mirror volumes on a disk
    7 Move volumes from a disk
    8 Enable access to (import) a disk group
    9 Remove access to (deport) a disk group
    10 Enable (online) a disk device
    11 Disable (offline) a disk device
    12 Mark a disk as a spare for a disk group
    13 Turn off the spare flag on a disk
    14 Unrelocate subdisks back to a disk
    15 Exclude a disk from hot-relocation use
    16 Make a disk available for hot-relocation use
    17 Prevent multipathing/Suppress devices from VxVM's view
    18 Allow multipathing/Unsuppress devices from VxVM's view
    19 List currently suppressed/non-multipathed devices
    20 Change the disk naming scheme
    21 Get the newly connected/zoned disks in VxVM view
    22 Change/Display the default disk layouts
    23 Mark a disk as allocator-reserved for a disk group
    24 Turn off the allocator-reserved flag on a disk
    list List disk information
   
   
    ? Display help about menu
    ?? Display help about the menuing system
    q Exit from menus
   
    Select an operation to perform: 17
   
   
    Exclude Devices
    Menu: VolumeManager/Disk/ExcludeDevices
    VxVM INFO V-5-2-1239
    This operation might lead to some devices being suppressed from VxVM's view
    or prevent them from being multipathed by vxdmp (This operation can be
    reversed using the vxdiskadm command).
   
    Do you want to continue ? [y,n,q,?] (default: y) y
   
    Volume Manager Device Operations
    Menu: VolumeManager/Disk/ExcludeDevices
   
    1 Suppress all paths through a controller from VxVM's view
    2 Suppress a path from VxVM's view
    3 Suppress disks from VxVM's view by specifying a VID:PID combination
    4 Suppress all but one paths to a disk
    5 Prevent multipathing of all disks on a controller by VxVM
    6 Prevent multipathing of a disk by VxVM
    7 Prevent multipathing of disks by specifying a VID:PID combination
    8 List currently suppressed/non-multipathed devices
   
    ? Display help about menu
    ?? Display help about the menuing system
    q Exit from menus
   
    Select an operation to perform: 5
   
    Exclude controllers from DMP
    Menu: VolumeManager/Disk/ExcludeDevices/CTLR-DMP
    Use this operation to exclude all disks on a controller from being multipathed
    by vxdmp.
   
    As a result of this operation, all disks having a path through the specified
    controller will be claimed in the OTHER_DISKS category and hence, not
    multipathed by vxdmp. This operation can be reversed using the vxdiskadm
    command.
    VxVM INFO V-5-2-1263
    You can specify a controller name at the prompt. A controller name is of
    the form c#, example c3, c11 etc. Enter 'all' to exclude all paths on all
    the controllers on the host. To see the list of controllers on the system,
    type 'list'.
   
    Enter a controller name [,all,list,list-exclude,q,?] c4
    VxVM INFO V-5-2-1129
    All disks on the following enclosures will be excluded from DMP ( ie
    claimed in the OTHER_DISKS category and hence not multipathed by vxdmp) as a
    result of this operation :
   
   
    Disk OTHER_DISKS
   
   
    Continue operation? [y,n,q,?] (default: y) y
   
    Do you wish to exclude more controllers ? [y,n,q,?] (default: n) n
   
    Volume Manager Device Operations
    Menu: VolumeManager/Disk/ExcludeDevices
   
    1 Suppress all paths through a controller from VxVM's view
    2 Suppress a path from VxVM's view
    3 Suppress disks from VxVM's view by specifying a VID:PID combination
    4 Suppress all but one paths to a disk
    5 Prevent multipathing of all disks on a controller by VxVM
    6 Prevent multipathing of a disk by VxVM
    7 Prevent multipathing of disks by specifying a VID:PID combination
    8 List currently suppressed/non-multipathed devices
   
    ? Display help about menu
    ?? Display help about the menuing system
    q Exit from menus
   
    Select an operation to perform: q
   
    VxVM vxdiskadm NOTICE V-5-2-1187 Please wait while the device suppression/unsuppression operations take effect.
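
    After the controller has been excluded, refresh VxVM's device list and retry the setup that failed earlier (vxdctl enable makes VxVM rescan its devices):
    root@uulab-s22 # vxdctl enable
    root@uulab-s22 # vxdisksetup -i c4t0d0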


Reposted another good article. I plan to put together a roundup of Veritas Volume Manager, VCS, and NBU; hopefully it will help everyone with future installation and debugging.

#2 Posted 2006-11-30 10:28
First!

#3 Posted 2006-11-30 11:32
Impressive!

#4 Posted 2006-11-30 22:29
Bumping this, what a good post.

#5 Posted 2006-12-01 15:49
Thanks for your hard work, moderator.
What a great post!

#6 Posted 2006-12-24 19:56
Really a good post; haven't seen one like this in a long time...

#7 Posted 2007-07-03 15:09
cscscscscscs

#8 Posted 2007-07-03 15:11
Bump!

#9 Posted 2008-05-26 13:43
Does Node2 also need to be connected to the array with SCSI?
Does the heartbeat need its own physical NIC? If so, is the outward-facing NIC on Node1, i.e. Node1 has two NICs and Node2 only needs the one heartbeat NIC?
Or do Node1 and Node2 each have two NICs, one for the heartbeat and one for the external network, with a virtual IP on top?

#10 Posted 2008-05-26 23:01
This is good stuff.