Sun Cluster installation guide (ChinaUnix forum thread; thread starter: peanut)

#21 · Posted 2003-01-27 22:10

Sun Cluster installation guide

The latest revision of this SC 3.0 quick-start guide is 1.3.7.5. Follow the most recent revision; it gives you the best protection against problems that are already known but still unknown to you.

How to install SC 3.0 05/02 (update 3) on Solaris 8 with the EIS process:
Send all comments to Raffaele LaPietra. Please do not open any bugs or RFEs
against EIS for this document, as it is individually maintained.
This doc can be found on the EIS cd1 under sun/docs/MISC/
The latest revision of this document can be found at:
http://neato.east/suncluster/scinstall.html
Version: 1.3.7.5
Last Modified Wed Jan 22 07:14:00 EST 2003

Important note - Patches marked with a ## are not found on the 31Dec2002 EIS
                 cd (either a different rev or missing) and will need to
                 be downloaded separately and noted in the install report

I.  Paper Work -
  A. Please visit the Americas Cluster Install Process web page at:
     http://acts.ebay/suncluster/install.html . Note: This does not yet apply
     to EMEA or APAC. This document primarily covers step 10 of the process.
  B. Fill out the EIS checklists from <eis-cd1>/sun/docs/EISchecklists/pdf
    1. Fill out the appropriate server checklist for each node and the admin
       work station. This can be used to complete the EISdoc Tool documentation
  C. Get customer to order the license keys from the licensing center for
     VxVM and VxFS if either will be used. It is UNSUPPORTED by engineering to
     run a cluster on temporary licenses
  
II. Install admin station
  A. Install per the appropriate EIS checklist (WGS?)
    1. Don't forget to run setup-standard
  B. pkgadd SUNWccon
    1. if desired, pkgadd SUNWscman
      a. add patch <eis-cd1>/sun/patch/SunCluster/3.0/8
         111554-09   SC30 Man Pages
  C. Create a file /etc/clusters with one entry (samples below):
     <clustername> <node1> ... <node n>
  D. Create a file /etc/serialports with one entry for each node
    1. The entry is  - <host> <tc> <500x> ,  x=tc port number
    2. If a SunFire  - <host> <sc> <500x> ,  x=1-4 for domains A-D
    3. If an E10K, entry is      - <host> <ssp> 23
    4. If a SF15K or SF12K, entry is - <host> <sc> 23
  E. Add all entries (nodes+logical hosts) to /etc/hosts
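     A minimal sketch of these two admin-station files, assuming a
     hypothetical two-node cluster "sc-cluster" with nodes phys-node1 and
     phys-node2 behind a terminal concentrator tc1 on ports 2 and 3:

       # /etc/clusters
       sc-cluster phys-node1 phys-node2

       # /etc/serialports
       phys-node1 tc1 5002
       phys-node2 tc1 5003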

III. Install TC - only if using Sun's (optional) tc
  A. Add an entry to /etc/remote: (for serial port a)
     tc:dv=/dev/term/a:br#9600:el=^C^S^Q^U^D:ie=%$e=^D:
  B. Connect port 1 to admin station serial port a and tip tc
  C. Get into monitor mode by holding down test button while you power on
     the TC. When the green light blinks, release the test button and then
     press it again.
    1. set IP address - monitor:: addr
    2. check boot image is oper.52.enet - monitor:: image
    3. set to boot from self - monitor:: seq
    4. Power cycle the tc
    5. Quit your tip session (~.)
    6. telnet into the tc and select cli
    7. become su and go into admin mode
       annex: su
       passwd: <tc ip address>
       annex# admin
    8. configure serial ports :
       admin: set port=1-8 type dial_in imask_7bits Y
       admin: set port=2-8 mode slave ps_history_buffer 32767
       admin: quit
    9. (optional) - configure rotaries
       a. annex# edit config.annex
       b. Add the following lines
         %rotary
          <node1>:<tc port>@<tc ip addr>
           ...
          <node#>:<tc port>@<tc ip addr>
       c. ctrl-w     
   10. (optional) - configure default router
       a. annex# edit config.annex
      b. Add the following lines
        %gateway
         net default gateway <router-ip> metric 1 hardwired
      c. ctrl-w
      d. annex# admin set annex routed n
   11. Reboot the tc
       annex# boot
        bootfile: <return>
        warning: <return>
   12. exit your telnet session (ctrl + ] and quit)
  
IV. Install cluster nodes
  A. If using a tc:
    1. On admin station -  ccp <clustername>
    2. Choose cconsole
  B. from the ok prompt: setenv local-mac-address? false
  C. If using multi-initiator SCSI, follow infodoc 20704
  D. Install Solaris 8 2/02
    1. If using a tc/cconsole, you may want to select xterms, or select
       "other" and enter dtterm, as your terminal type
    2. Software package selection
      a. Entire Distribution + OEM is RECOMMENDED
      b. At least the END USER software is REQUIRED
        1. Entire Distribution + OEM is required for E10K
        2. If you use END USER, you may need to also add -
          a. Apache software packages (SUNWapchr and SUNWapchu) if you want to
            use SunPlex Manager
          b. RSMAPI software packages (SUNWrsm, SUNWrsmx, SUNWrsmo, SUNWrsmox
             and SUNWscrdt) if you want to use PCI-SCI adapters for the
             interconnect transport or the Remote Shared Memory Application
             Programming Interface
            1. Add patch <eis-cd1>/sun/patch/SunCluster/3.0/8
              112866-06## SC30 Reliable Datagram Transport
    3. Root Disk partitioning
      a. Combining / , /usr and /opt is RECOMMENDED
        1. root (/) should be at least 100mb greater than normal
        2. If you use a separate partition for usr, it should be at least 40mb
           greater than normal
      b. If you use a separate partition for var, it should be at least
         100mb + 40% of memory (in order to be able to capture a core dump
         or two)
      c. swap must be at least 750mb. Calculate the normal amount of swap
         needed by Solaris, add in any required by third party applications,
         and then add 512mb for SunCluster (a sample layout follows this list).
      d. Make and mount a 100mb filesystem called /globaldevices
      e. Leave 20MB for SDS metadbs or 2 cylinders for the VxVM private region
        and encapsulation area
        1. For VxVM leave slices 3 and 4 unassigned
        2. For SDS assign the 20MB to slice 7
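       For illustration only, a hypothetical layout for a 36gb root disk on a
       node with 2gb of RAM (1gb normal swap + 512mb for SunCluster), SDS
       case:

         s0  /                ~34gb   (remainder; / , /usr and /opt combined)
         s1  swap             1.5gb   (1gb normal + 512mb for SunCluster)
         s3  /globaldevices   100mb
         s7  (SDS metadbs)    20mb
         s2  backup           entire disk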
  E. After reboot, setup according to the proper EIS checklist. Notes follow.
  F. In the "Run setup-standard:" section of the checklist:
    1. Answer yes to the "allow remote logins for root" question
    2. Modify the root .profile
      a. at the end of  .profile
        1. Uncomment the "cluster node" display section
        2. To set your term type when using cconsole, add the following lines:
          a. if [ "`tty`" = "/dev/console" ]; then
               TERM=xterms;export TERM
             fi
          b. a TERM of "vt100" or "dtterm" may also work for you
      b. Per SunAlert 42797, if using VxVM, at the beginning of the .profile
        1. Change the lines which read:
           LD_LIBRARY_PATH=/usr/openwin/lib
           LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/openwin/lib
           to read
           LD_LIBRARY_PATH=/usr/lib:/usr/openwin/lib
           LD_LIBRARY_PATH=/usr/lib:$LD_LIBRARY_PATH:/usr/openwin/lib
    3. Activate the EIS environment
      a. log out of the console
      b. go back to the ccp and choose ctelnet. This will allow faster access
         for the rest of the install process. Use the cconsole windows to see
         any error messages logged to the console
  G. Install additional packages and patches:
    1. If you will be using VxVM or Sun Traffic Manager Software (MPXIO), you
       need to install the SUNWsan package so you can add the needed patches
      a. You can find the SUNWsan package in <eis-cd1>/sun/progs/SAN in the
         SFKpackages.tar.Z file
      b. If you will be using QLGC ISP200 Cards, or really installing a SAN,
         also install the packages SUNWcfpl and SUNWcfplx
    2. Install EIS patches
      a. cd <eis-cd1>/sun/patch/8
      b. unpack-patches
      c. cd /tmp/8
      d. install-all-patches
    3. Install the following additional patches (per Infodoc 49704)
      a. In <eis-cd1>/sun/patch/firmware
        109115-11   T3 Firmware
        109400-03   Hardware/Fcode: FC100/S HBA
        111853-01   Hardware/Fcode: PCI Single FC HBA
        112276-04   T3+ Firmware
      b. In <eis-cd1>/sun/patch/SAN/8
        1. cd <eis-cd1>/sun/patch/SAN/8
        2. unpack-patches
        3. cd /tmp/8
        4. ./install-patches
  H. Edit /etc/nsswitch.conf (sample entries below).
    1. Put files first for hosts, netmasks, group, services.
    2. Add [SUCCESS=return] after files for the hosts entry.
    3. If you plan on running oracle_server or oracle_listener on a node, also
       make sure files is first for passwd, publickey and project
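     A minimal sketch of the affected lines at this (pre-cluster) stage,
     assuming the site uses DNS for hosts and NIS elsewhere - adjust to the
     customer's name services:

       hosts:      files [SUCCESS=return] dns
       netmasks:   files nis
       group:      files nis
       services:   files nis
       passwd:     files nis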
  I. Add entries in /etc/hosts for each logical host, physical host, admin
     station and tc
  J. If using A3500 or A3500FC install and configure RM6.22.1:
    1. pkgadd rm6.22.1 from <eis-cd2>/sun-internal/progs/RM
      a. add patch <eis-cd1>/sun/patch/RM/6.22.1/
        112126-06   RM6.22.1 Solaris 8
    2. edit /etc/osa/rmparams. Change the line System_MaxLunsPerController=8
       to the # of LUNs needed.
      a. if you have more than one A3500FC on a loop
        1. Change Rdac_HotAddDisabled=PARTIAL to Rdac_HotAddDisabled=FALSE
        2. Add the other A3500FC targets to Rdac_HotAddIDs:4:5:y:z
    3. /usr/lib/osa/bin/genscsiconf
    4. edit /etc/raid/rdac_address. Distribute your LUNs over the controllers
    5. init 6
    6. upgrade firmware and create luns (thru the rm6 GUI)
    7. disable the parity check on 1 node of each pair attached to the storage
       (use the rm6 GUI)
    8. /usr/lib/osa/bin/rdac_disks
  K. If you are going to use VxFS 3.4
    1. pkgadd VRTSlic and VRTSvxfs (and VRTSfsdoc if desired). The packages are
      in <eiscd2>/sun-internal/progs/veritas-vm/3.2/ in the
      foundationproduct3.4sunw.tar.gz file
    2. add patch <eiscd1>/sun/patch/veritas-fs/3.4/8
      110435-07   VxFS 3.4 multiple fixes
      a. You may also want to add point patch 112375-01.zip. See SunAlert 43142
         for details
    3. add license with vxlicense -c
  L. Edit the /etc/system file and add any needed entries. Shared memory sample
    entries for popular databases are below.
    1. For all systems:
      a. Add the following:
       exclude: lofs
       set ip:ip_enable_group_ifs=0
       forceload: misc/obpsym
       set nopanicdebug = 1
       set lwp_default_stksize=0x8000
      b. Change or add the following:
       set rpcmod:svc_default_stksize=0x8000
      c. Note: if you installed VxFS, the set lwp_default_stksize and
       set rpcmod:svc_default_stksize entries will have been added with lower
       values. You should comment out those entries.
      d. If you are using a ce interface for your public network, add:
       set ce:ce_reclaim_pending=1
        1. You must also add the following line to your /etc/iu.ap file, in the
           section identified as entries added by the SUNWscr package:
           ...
           # Start of lines added by SUNWscr ...
           ce -1 0 clhbsndr
           # End of lines added by SUNWscr ...
        2. The ce driver version must be at least 1.115. To determine the ce
           driver version, run the following command.
           # modinfo | grep CE
    2. If you plan on using dynamic reconfiguration:
       set kernel_cage_enable=1
    3. For Oracle HA or OPS/RAC:
       set shmsys:shminfo_shmmax=0xffffffff (or 0xffffffffffffffff for 64 bit)
       set shmsys:shminfo_shmmin=1
       set shmsys:shminfo_shmmni=200
       set shmsys:shminfo_shmseg=200
       set semsys:seminfo_semmap=1024
       set semsys:seminfo_semmni=2048
       set semsys:seminfo_semmns=2048
       set semsys:seminfo_semmnu=2048
       set semsys:seminfo_semume=200
       set semsys:seminfo_semmsl=2048
       set semsys:seminfo_semopm=100
       set semsys:seminfo_semvmx=32767
       forceload: sys/shmsys
       forceload: sys/semsys
       forceload: sys/msgsys
    4. For Sybase:
       set shmsys:shminfo_shmmax=0xffffffff (or 0xffffffffffffffff for 64 bit)
       set shmsys:shminfo_shmseg=200
       set rlim_fd_cur=1024
    5. For Informix:
       set shmsys:shminfo_shmmax=4026531839 (3.86GB = max)
       set shmsys:shminfo_shmmni=50
       set shmsys:shminfo_shmseg=200
  M. Add the logging option to the / filesystem (and any others that you want)
     in the /etc/vfstab file, as in the sample below
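     For example, a root entry with logging enabled might look like this
     (c0t0d0 is a hypothetical boot disk):

       /dev/dsk/c0t0d0s0 /dev/rdsk/c0t0d0s0 / ufs 1 no logging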
  N. If you are using GBE as your private interconnect, you may have to create
     a /kernel/drv/ge.conf file. Please read InfoDoc 18395 for instructions.

V. Install SC on the nodes
  A. scinstall the first node (in the Tools subdirectory)
    1. Choose option 1) "Establish a new cluster using this machine as the
         first node"
    2. Do NOT have the scinstall program reboot for you
    3. Make a note of the did major number in /etc/name_to_major
    4. Add patches <eis-cd1>/sun/patch/SunCluster/3.0/8
      a. 110648-25   SC30 Core/Sys Admin
      b. 111488-07   SC30 mediator
      c. 111554-09   SC30 Man Pages
      d. 112108-06   SC30 SunPlex Manager
      e. 113505-01   SC30 Apache SSL Components
      f. If you are using PCI-SCI interconnects, add
         114271-01## SC30 SCI DR SUNWscivt
    5. touch /reconfigure
    6. init 6
  B. scinstall all remaining nodes
    1. make sure that the major number for did on the first node is not used by
       any of the remaining nodes. If it is, edit the /etc/name_to_major file
       and change whatever driver is conflicting to use another number (a quick
       check is sketched after this list).
    2. Choose option 2) "Add this machine as a node in an established cluster"
    3. Do NOT have the scinstall program reboot for you
    4. Add patches
      a. see step 5.A.4 above for list
    5. touch /reconfigure
    6. init 6
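     A quick way to compare the did major numbers across nodes, assuming
     hypothetical hostnames phys-node1/phys-node2 and that remote root logins
     are still enabled from the setup-standard step:

       # grep '^did ' /etc/name_to_major                (on phys-node1)
       # rsh phys-node2 grep '^did ' /etc/name_to_major

     The numbers must match; if another driver on a later node already holds
     that number, change that driver's entry before rebooting.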
  C. Run scstat -n to make sure all the nodes are in the cluster
  D. If a 2 node cluster, set up the quorum disk
    1. scdidadm -L : to identify quorum disk candidates. Pick a disk seen by
       both nodes
    2. On ONE node, run scsetup to assign the quorum disk
      a. Answer "Yes" to: Is it okay to reset "installmode"
  E. If a 3 or more node cluster, reset install mode
    1. On ONE node, run scsetup.
      a. Answer "no" to "Do you want to add any quorum disks?"
      b. Answer "Yes" to: Is it okay to reset "installmode"
  F. Configure/check NAFO groups on ALL nodes
    1. configure : pnmset
    2. To check : pnmstat -l
  G. Configure ntp.conf on ALL nodes
    1. cp /etc/inet/ntp.conf.cluster /etc/inet/ntp.conf
    2. Edit /etc/inet/ntp.conf and comment out the "peer clusternode<#>-priv"
       lines for each node number that doesn't exist (example below)
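     For a hypothetical 2-node cluster, the peer section would end up roughly
     as follows (the shipped ntp.conf.cluster marks one peer as prefer):

       peer clusternode1-priv prefer
       peer clusternode2-priv
       # peer clusternode3-priv
       ...
       # peer clusternode8-priv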
  H. Check the /etc/nsswitch.conf file for cluster entries
    1. The "hosts" entry should read : cluster files [SUCCESS=return](plus
       whatever else you need, ie dns nis nisplus)
    2. The "netmasks" entry should read : cluster files (plus whatever else
       you need, ie ldap, nis, nisplus)
  I. Add the private interconnect addresses to the /etc/hosts files. You can
   find them by running:
   grep ip_address /etc/cluster/ccr/infrastructure
   Each IP address in this list must be assigned a unique hostname that does
   not conflict with any other hostname in the domain.
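    For example, if the grep reported 172.16.193.1 and 172.16.193.2
    (hypothetical addresses - use whatever the grep actually shows), each
    node's /etc/hosts might gain:

      172.16.193.1   phys-node1-priv
      172.16.193.2   phys-node2-priv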
  J. Add the Diagnostic ToolKit package
    1. It is in <eiscd2>/sun-internal/progs/SunCluster/3.0/SUNWscdtk_GA.tar.gz

VI. If using VxVM, Install VxVM 3.2
  A. On ALL nodes at once run scvxinstall -i [-d path to software]
  B. pkgadd VRTSvmsa if you want the gui
  C. Add patches
    1. <eiscd1>/sun/patch/veritas-vm/3.2/8
       113201-02     VxVM 3.2 general patch
    2. If you added the vmsa gui,
       111904-06   VRTSvmsa 3.2 maintenance patch
  D. On ONE node: scshutdown -y -g0 and boot -r
  E. vxinstall
    1. Choose a Custom Installation
      a. Add licenses when asked
      b. For Oracle OPS be sure to add the Cluster Functionality license also
    2. Do NOT encapsulate the root drive (will do later in step XI)
    3. Initialize a small rootdg
      a. If you are going to use VxVM to mirror root (later), only add the
        rootmirror disk as a new disk to rootdg. Name the disk rootmir<node#>
      b. If you are going to use DiskSuite to mirror root (later), you will
        need to initialize another disk (or 2) for your rootdg on each node.
        You can use these disks for local storage, not shared.
    4. Leave all other disks alone
    5. Do NOT have vxinstall shutdown and reboot
    6. Reminor the devices in rootdg to be unique on each node. Set each node
     to have a base minor number of 100*<nodenumber>, e.g. node1=100,
     node2=200 :
     vxdg reminor rootdg <x>00
    7. On ONE node: scshutdown -y -g0
    8. boot all nodes
  F. If using VxVM 3.2, and you have multiple paths (controllers) from a
    single node to shared disks, and are not using MPXIO (STMS), prevent dmp
    on those controllers
    1. run vxdiskadm
    2. choose "revent multipathing/Suppress devices from VxVM's view"
    3. choose "revent multipathing of all disks on a controller by VxVM"
    4. suppress each controller, i.e. c2
  G. Create your disk groups. For each disk group:
    1. vxdiskadd c#t#d# c#t#d# ...
      a. When prompted, enter the new disk group name
    2. Register the disk group
      a. Do not register OPS CVM-shared (with cluster functionality) groups
      b. Run scsetup
        1. Select "Device groups and volumes"
        2. Select "Register a VxVM disk group as a device group"
        3. quit out
    3. scstat -D to determine the primary
    4. create your volumes, register and newfs them on the PRIMARY node. Note,
       vxfs operations may fail if not done on the primary.
      a. vxassist -g <dg> -U fsgen make <vol> <size> layout=mirror-stripe,log
        or use vmsa
      b. Synchronize your volumes with scsetup
        1. Select "Device groups and volumes"
        2. Select "Synchronize volume information for a VxVM device group"
      c. newfs your volumes:
        1. for ufs : newfs /dev/vx/rdsk/<dg>/<vol>
        2. for vxfs: mkfs -F vxfs -o largefiles /dev/vx/rdsk/<dg>/<vol>
  H. Mount your volumes
    1. mkdir /global/<mnt> ON ALL NODES
    2. add entries to vfstab ON ALL NODES THAT ARE DIRECTLY ATTACHED
      a. for ufs global:
       /dev/vx/dsk/<dg>/<vol> /dev/vx/rdsk/<dg>/<vol> /global/<mnt> ufs 2 yes global,logging
      b. for ufs with HAStoragePlus local file system:
       /dev/vx/dsk/<dg>/<vol> /dev/vx/rdsk/<dg>/<vol> /global/<mnt> ufs 2 no logging
      c. for vxfs global:
       /dev/vx/dsk/<dg>/<vol> /dev/vx/rdsk/<dg>/<vol> /global/<mnt> vxfs 2 yes global,log
       Note: the log option with vxfs may have a negative performance impact,
       but it must be used. Per the release notes, the qlog, delaylog, and
       tmplog vxfs mount options are unsupported, and logging is required.
      d. for vxfs with HAStoragePlus local file system:
       /dev/vx/dsk/<dg>/<vol> /dev/vx/rdsk/<dg>/<vol> /global/<mnt> vxfs 2 no log
    3. run sccheck on the nodes you modified to make sure all is OK
    4. mount the file system ON THE PRIMARY NODE. If you are using vxfs, the
      mount may fail if you are not on the primary
      a. scstat -D to determine the primary node of the disk group/set
      b. mount /global/<mnt>

VII. If using SDS on Solaris 8, install SDS 4.2.1
  A. pkgadd SUNWmdr SUNWmdu SUNWmdx ( + SUNWmdg if you want the gui)
  B. Add the patch <eiscd1>/sun/patch/sds/4.2.1
     108693-14   Solstice DiskSuite 4.2.1
  C. If needed, change /kernel/drv/md.conf file:
        nmd=<max number metadevices, up to 1024 for SDS, 8192 for SVM>
        md_nsets=<max number of disksets, up to 31>
     Do not set these higher than you need, as it will increase your boot time
  D. ON ONE NODE shutdown all nodes with : scshutdown -y -g 0
  E. boot all nodes
  F. ON ONE NODE run /usr/cluster/bin/scgdevs
  G. Edit the /etc/group file and make root a member of group 14 (sysadmin)
  H. Create the metadb(s)
    1. metadb -afc 3 c#t#d#s#
    2. metadb to check
  I. Create your disk sets ON ONE NODE
    1. metaset -s <set> -a -h <node1> <node2> ... <node#>
    2. Add the disks to your set
      a. metaset -s <set> -a /dev/did/rdsk/d# ...
      b. To check : metaset -s <set>
    3. Repartition the disks using format, prtvtoc and fmthard (sketch below).
      a. slice 7 should be 2mb and start at cylinder 0
      b. If using metatrans devices, slice 6 should be 1% of the disk or 64mb
           whichever is smaller
      c. slice 0 should be the rest of the disk
      d. slice 2 should be the entire disk
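     One way to clone a corrected layout across the disks of a set, assuming
     d4 and d5 are hypothetical did devices and d4's VTOC has already been
     fixed up with format:

       # prtvtoc /dev/did/rdsk/d4s2 > /tmp/vtoc.d4
       # fmthard -s /tmp/vtoc.d4 /dev/did/rdsk/d5s2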
  J. Create your metadevices
   1. edit /etc/lvm/md.tab and add your metadevices. Use the /dev/did/rdsk/d#s0
      devices
     a. example entry for md.tab:
       <set>/d101 1 1 /dev/did/rdsk/d4s0
       <set>/d102 1 1 /dev/did/rdsk/d5s0
       <set>/d100 -m <set>/d101 <set>/d102
   2. ON ONE NODE metainit -s <set> -a
   3. ON ONE NODE newfs /dev/md/<set>/rdsk/d#
  K. mount your file systems
   1. mkdir /global/<mnt> ON ALL NODES
   2. Add to the vfstab ON ALL NODES DIRECTLY ATTACHED TO THE STORAGE.
     a. for ufs global:
      /dev/md/<set>/dsk/d# /dev/md/<set>/rdsk/d# /global/<mnt> ufs 2 yes global,logging
     b. for ufs with HAStoragePlus local file system:
      /dev/md/<set>/dsk/d# /dev/md/<set>/rdsk/d# /global/<mnt> ufs 2 no logging
   3. mount /global/<mnt> ON ONE NODE
  L. If you have any disksets on exactly 2 arrays connected to 2 nodes, you
     must configure the dual string mediators on one node
    1. metaset -s <set> -a -m <node1> <node2>
    2. To check : medstat -s <set>

VIII. Configure ha-nfs (failover example)
  A. Install the data service software
    1. run scinstall on ALL nodes
    2. select "Add support for new data services to this cluster node"
    3. select "nfs"
    4. Add patch <eiscd1>/sun/patch/SunCluster/3.0/8
       111555-06   SC30 HA-NFS
  B. Add your failover hostname/ip to /etc/hosts on ALL nodes
  C. make the admin file system ON ONE NODE
    1. mkdir -p /global/<mnt>/<nfs-adm>/SUNW.nfs
    2. cd /global/<mnt>/<nfs-adm>/SUNW.nfs
    3. vi dfstab.<nfs-res>
      a. add entries to share /global/<mnt> (sample below)
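     A minimal dfstab.<nfs-res> sketch, assuming the exported data lives
     under a hypothetical /global/nfs/data:

       share -F nfs -o rw -d "HA NFS export" /global/nfs/data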
  D. Register the nfs resource on ONE node
     scrgadm -a -t SUNW.nfs
  E. Create the nfs resource group on ONE node
     scrgadm -a -g <nfs-rg> -h <node1,..> -y PATHPREFIX=/global/<mnt>/<nfs-adm>
  F. Add a failover hostname/ip resource to the nfs-rg on ONE node
     scrgadm -a -L -g <nfs-rg> -l <failover hostname> -n nafo<#>@<node1>,...
  G. Set up HAStoragePlus (optional)
    1. Register the HAStoragePlus resource type
      scrgadm -a -t SUNW.HAStoragePlus
    2. Create the SUNW.HAStoragePlus resource
      scrgadm -a -g <nfs-rg> -j <nfs-hastp-res> -t SUNW.HAStoragePlus \
      -x FilesystemMountPoints=/<mnt-point>,/<mnt-point> \
      -x AffinityOn=True
      a. Please note that the mount points listed must be in the same order
        that they are listed in the hanfs vfstab file
    3. Enable the HAStoragePlus resource with the nfs resource group.
      scswitch -Z -g <nfs-rg>
  H. Add the nfs resource to the nfs-rg on ONE node
    scrgadm -a -g <nfs-rg> -j <nfs-res> -t SUNW.nfs
        or if using HAStoragePlus use
    scrgadm -a -g <nfs-rg> -j <nfs-res> -t SUNW.nfs \
     -y Resource_Dependencies=<nfs-hastp-res>
  I. re-enable the modified nfs-rg on ONE node
    scswitch -Z -g <nfs-rg>
  J. Check the status
    scstat -g
  K. Test a manual switch of the resource group
    scswitch -z -h <node#> -g <nfs-rg>
    scstat -g
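  Pulling section VIII together, a hypothetical end-to-end sequence for a
  2-node failover service (the names nfs-rg, nfs-res, hafs1, nafo0 and
  /global/nfs are illustrative only):

    # scrgadm -a -t SUNW.nfs
    # scrgadm -a -g nfs-rg -h phys-node1,phys-node2 \
        -y PATHPREFIX=/global/nfs/admin
    # scrgadm -a -L -g nfs-rg -l hafs1 -n nafo0@phys-node1,nafo0@phys-node2
    # scrgadm -a -g nfs-rg -j nfs-res -t SUNW.nfs
    # scswitch -Z -g nfs-rg
    # scstat -g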

IX. Configure ha-apache (shared address/scalable example)
  A. Install the data service software using scinstall option 4 on ALL nodes
    1. add patch <eiscd1>/sun/patch/SunCluster/3.0/8
       111551-03   SC30 HA-Apache
  B. Add your shared hostname/ip to /etc/hosts on all nodes
  C. Create a conf directory on one node
    1. mkdir /global/<mnt>/conf
    2. cp /etc/apache/httpd.conf-example /global/<mnt>/conf/httpd.conf
    3. vi /global/<mnt>/conf/httpd.conf
      a. uncomment the ServerName line and change it to the shared hostname
      b. change the DocumentRoot to /global/<mnt>/htdocs
      c. change the ScriptAlias to /global/<mnt>/cgi-bin/ <- trailing slash
  D. Create the binary and data directory
    1. ON ONE NODE create directories for html and cgi files
      a. mkdir /global/<mnt>/htdocs
         mkdir /global/<mnt>/cgi-bin
      b. cp -rp /usr/apache/bin /global/apache
      c. cd /var/apache/htdocs
      d. cp -r ./* /global/<mnt>/htdocs
      e. cp test-cluster.cgi /global/<mnt>/cgi-bin
      f. chmod ugo+rw /global/<mnt>/cgi-bin/test-cluster.cgi
    2. Edit the apachectl file to work with a different httpd.conf file
      a. cd /global/apache/bin
      b. cp apachectl apachectl.orig
      c. vi apachectl
        1. search for the line which reads:
           HTTPD=/usr/apache/bin/httpd
        2. change it to read :
           HTTPD="/global/apache/bin/httpd -f /global/<mnt>/conf/httpd.conf"
  E. Register the Apache data service
    scrgadm -a -t SUNW.apache
  F. Create the shared address resource group
    scrgadm -a -g <shared-rg> -h <node1,...>
  G. Assign an ip + nafo to the shared resource group
    scrgadm -a -S -g <shared-rg> -l <shared hostname> -n <nafo#@node#,...>
  H. Create another resource group for apache dependent on shared-rg
    scrgadm -a -g <apache-rg> \
      -y Maximum_primaries=2 \
      -y Desired_primaries=2 \
      -y RG_dependencies=<shared-rg>
  I. Add a SUNW.apache resource to the apache-rg
    scrgadm -a -j <apache-res> -g <apache-rg> -t SUNW.apache \
      -x ConfDir_list=/global/<mnt>/conf \
      -x Bin_Dir=/global/apache/bin \
      -y Scalable=True \
      -y Network_resources_used=<shared hostname>
  J. Bring the shared address and apache resource groups online
    1. scswitch -Z -g <shared-rg>
    2. scswitch -Z -g <apache-rg>
  K. run scstat -g to check the state
  L. Test the shared connection by visiting
    http://<shared hostname>/cgi-bin/test-cluster.cgi
    You may need to refresh/reload several times to make sure you connect to
    different nodes

X. Configuring other data services
  A. Follow the documentation in one of 2 guides:
     - CLUSTDATASVC.pdf : Sun Cluster 3.0 Data Services Installation and
       Configuration Guide, 816-2024-10. It can be found on the Data Service
       CDROM at the following path:
      <cd>/components/SunCluster_Data_Service_Answer_Book_3.0/Docs/locale/C/PDF
     - CLUSTSUPP.pdf - Sun Cluster 3.0 5/02 Supplement, 816-3380-10. It can be
       found on the Update 3 CDROM at the following path:
       <cd>/SunCluster_3.0/Docs/locale/C/PDF
    1. For Oracle OPS, CLUSTDATASVC chapter 8 - in summary:
      a. If using VxVM with Cluster functionality:
        1. pkgadd -d . SUNWscucm SUNWudlm SUNWudlmr SUNWcvmr SUNWcvm
         2. add patches <eiscd1>/sun/patch/SunCluster/3.0/8
           111557-03   SC30 VXVM
           111857-05   SC30 OPS Core
           112442-05   SC30 OPS w/CVM
      b. If using HW raid:
        1. pkgadd -d . SUNWscucm SUNWudlm SUNWudlmr SUNWschwr
         2. add patches <eiscd1>/sun/patch/SunCluster/3.0/8
           111857-05   SC30 OPS Core
      c. create the dba group and the oracle user
      d. pkgadd the oracle udlm, ORCLudlm
      e. On one node, scshutdown -g0 -y and then boot
      f. Find which node is the Veritas master using : vxdctl -c mode
      g. Deport the raw volume disk group : vxdg deport <dg>
      h. Import the disk group shared on the master :
         vxdg -s import <dg>
         vxvol -g <dg> startall
      i. Make sure the disk group is seen as shared on both nodes: vxdg list
      j. install Oracle OPS
    2. For HA-Oracle, CLUSTSUPP chapter 5 page 78
      a. add patch <eiscd1>/sun/patch/SunCluster/3.0/8
         110651-11   SC30 HA-Oracle
    3. For iPlanet Web Server, CLUSTSUPP chapter 5 page 101
      a. add patch <eiscd1>/sun/patch/SunCluster/3.0/8
         111553-03   SC30 HA-IPlanet Web Server
    4. For iPlanet Directory Server on Solaris 8, CLUSTDATASVC chapter 4
      a. add patch <eiscd1>/sun/patch/SunCluster/3.0/8
         111556-06   SC30 HA-Netscape LDAP
    5. For iPlanet Directory Server on Solaris 9, CLUSTSUPP chapter 5 page 99
    6. For DNS, CLUSTDATASVC chapter 6
      a. add patch <eiscd1>/sun/patch/SunCluster/3.0/8
         111552-04   SC30 HA-DNS
    7. For HA SAP, CLUSTDATASVC chapter 9
      a. add patch <eiscd1>/sun/patch/SunCluster/3.0/8
         112270-04   SC30 HA-SAP
    8. For Scalable SAP, CLUSTSUPP Appendix B
    9. For Sybase ASE, Release Notes Supplement, 816-5209-10, page 83
      a. add patch <eiscd1>/sun/patch/SunCluster/3.0/8
         112566-03   SC30 HA-Sybase
    10. For HA BroadVision, CLUSTDATASVC chapter 11
      a. add patch <eiscd1>/sun/patch/SunCluster/3.0/8
         112567-01   SC30 HA-Broad Vision
    11. For HA NetBackup, CLUSTDATASVC chapter 12
      a. add patch <eiscd1>/sun/patch/SunCluster/3.0/8
         112562-02   SC30 HA-Netbackup
    12. For HA-Livecache, download software from edist.central. Docs in
        Sun Cluster 3.0 5/02 Release Notes Supplement, 2 Dec 2002, Appendix H
      a. add patch
         113498-02## SC30 HA-Livecache
    13. For HA-Siebel, download software from edist.central. Docs in
        SC_HA_for_Siebel.pdf
      a. add patch
          113499-01   SC30 HA-Siebel

XI. If using VxVM to encapsulate & mirror root
  A. Edit /etc/vfstab so that the /global/.devices/node@<nodeid> mounts from
     the physical c#t#d#s# instead of the /dev/did device. If you don't know
     the device, there should be a /globaldevices line commented out. DO NOT
     uncomment this line! Just use the physical device that it originally
     mounted as the device in the /global/.devices/node@<nodeid> line. E.g.:
     /dev/dsk/c0t0d0s3 /dev/rdsk/c0t0d0s3 /global/.devices/node@1 ufs 2 no global
  B. Shut down the cluster (run on one node; it shuts down all nodes):
     scshutdown -y -g 0
  C. Boot all nodes outside of the cluster
     boot -x
  D. Use vxdiskadm option 2 "Encapsulate one or more disks" to encapsulate the
    root drive.
    1. Name each root disk rootdisk<node#>
    2. init 0
    3. boot -x
      Systems will boot, encapsulate the root disk, and reboot (into the
      cluster)
  E. Mirror your root disk using vxdiskadm or use:
    /etc/vx/bin/vxmirror rootdisk<node#> rootmir<node#>

XII. If using DiskSuite to mirror root
  A. metainit -f d1 1 1 <c#t#d#s0 (rootdisk - don't use the did device)>
  B. metainit d2 1 1 <c#t#d#s0 (rootmir)>
  C. metainit d0 -m d1
  D. metaroot d0
  E. scshutdown -y -g 0
  F. boot all nodes
  G. metattach d0 d2
  H. Note - you should also mirror swap and /global/.devices/node@<#> (and any
     other partitions you used on the root disk). If you do, be sure to use
     the physical device for the metainit. Also be sure that the metadevice
     you choose for each /global/.devices/node@<#> is unique in the cluster,
     since they will be mounted globally. And don't forget to make your changes
     in the /etc/vfstab file and the /etc/lvm/md.tab file. A sketch for swap
     follows.
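     A minimal sketch for mirroring swap on one node, assuming hypothetical
     slices c0t0d0s1 (current swap) and c1t0d0s1 (mirror) and free metadevice
     names d10-d12:

       # metainit -f d11 1 1 c0t0d0s1     (existing swap slice)
       # metainit d12 1 1 c1t0d0s1        (mirror half)
       # metainit d10 -m d11              (one-way mirror)
         (point the vfstab swap entry at /dev/md/dsk/d10, then reboot)
       # metattach d10 d12                (attach the second half)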

XIII. Complete installation
  A. If using DiskSuite, set up the metacheck script
    1. create the metacheck.sh script. See the example in the Solstice
      DiskSuite 4.2.1 User's Guide on docs.sun.com
    2. enter a crontab entry for it, e.g.:  00 06 * * * /metacheck.sh
  B. If desired, install SunMC or test sunplex manager
  C. Follow the EIS Cluster Checklist for Install Verification and running
     checkup
  D. Use the EISdoc tool to document the cluster.
  
Revision History -
1.3.7.5 - 22 Jan 2003
        - Updated for new patch rev
1.3.7.4 - 17 Jan 2003
        - Updated for 31-Dec-2002 EIS cd
        - Added new PCI-SCI patch
1.3.7.3 - 07 Jan 2003
        - Updated for new patch rev
1.3.7.2 - 06 Jan 2003
        - Corrected Infodoc number to 49704
1.3.7.1 - 23 Dec 2002
        - Added/changed vxfs qlog, delaylog, tmplog notes
1.3.7.0 - 16 Dec 2002
        - Updated for 26-Nov-2002 EIS cd (2.0.10)
        - Added info about VxFS point patch

#22 · Posted 2003-01-27 23:30

Sun Cluster installation guide

Originally posted by "Xylon":

  B. pkgadd VRTSvmsa if you want the gui
  C. Add patches
    1. <eiscd1>/sun/patch/veritas-vm/3.2/8
       113201-02     VxVM 3.2 general patch
    2. If you added the vmsa gu..........

Heh, buddy, so you hang around here too. I know who you are now.

#23 · Posted 2003-02-04 12:21

Sun Cluster installation guide

I'd like a copy too, thanks!
jiangnanbuyicn@yahoo.com.cn

Oh dear, why is my writing so terse these days? I really ought to add a couple of lines: my gratitude to you flows like a mighty river, surging on without end.... haha

#24 · Posted 2003-02-05 19:35

Sun Cluster installation guide

me too.

many thanks.

mrcenter@sina.com

#25 · Posted 2003-02-22 12:52

Sun Cluster installation guide

You wouldn't happen to be connected with the UTSTAROM folks, would you? :)

#26 · Posted 2003-03-08 14:31

Sun Cluster installation guide

Thanks a lot!!!
I'd like a copy too: lei-sohu@sohu.com

#27 · Posted 2003-03-08 15:29

Sun Cluster installation guide

Could you send me a copy as well? Thanks in advance!
sunt@ccdns.com.cn

#28 · Posted 2003-03-08 21:04

Sun Cluster installation guide

I need a copy too. Please!
lekong@163.net

#29 · Posted 2003-03-08 22:08

Sun Cluster installation guide

Me too, thanks!~~
lon@mail2000.com.tw
msscisd (user deleted)
#30 · Posted 2003-03-09 10:22
Notice: the author has been banned or deleted; the content was automatically hidden.