Sun Cluster installation guide

Posted 2003-01-20 16:38
How to install SC 3.0 12/01 (update 2):

This doc can be found at http://neato.east/suncluster/scinstall.html
Version: 1.1.2
Last Modified Fri Feb 15 09:40:41 EST 2002

I.  Paper Work -
  A. For the Americas get it from http://supportforum.central/suncluster/
    1. Read the Sun Cluster 3.x Installation Process
    2. Fill out the Planning Documentation
    3. Get customer to order the license keys from the licensing center for
       VxVM (and VxFS if it will be used)
  B. For other geos, use EIS or your own local paperwork
  
II. Install admin station
  A. Install HW & OS & recommended patches
  B. pkgadd SUNWccon and if desired SUNWscman
    1. add patch 111554 if you added SUNWscman
  C. Add /opt/SUNWcluster/bin to path (and /usr/cluster/man to MANPATH)
  D. Add all hosts to /etc/hosts
  E. Create a file /etc/clusters with one entry:
     <clustername> <node1> ... <node n>
  F. Create a file /etc/serialports with one entry for each node
    1. The entry is  - <host> <tc> <500x> ,  x=tc port number
    2. If a SunFire  - <host> <sc> <500x> ,  x=1-4 for domains A-D
    3. If an E10K, entry is      - <host> <ssp> 23
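
     For example, for a hypothetical two-node cluster the two files on the
     admin station might look like this (the names sc-cluster, phys-node1,
     phys-node2 and cluster-tc are made up; substitute your own):

       # /etc/clusters
       sc-cluster phys-node1 phys-node2

       # /etc/serialports  (nodes cabled to tc ports 2 and 3)
       phys-node1 cluster-tc 5002
       phys-node2 cluster-tc 5003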

III. Install TC - only if using Sun's (optional) tc
  A. Add an entry to /etc/remote: (for serial port a)
     tc:dv=/dev/term/a:br#9600:el=^C^S^Q^U^D:ie=%$e=^D:
  B. Connect port 1 to admin station serial port a and tip tc
  C. Get into monitor mode by holding down test button while you power on
     the TC. When the green light blinks, release the test button and then
     press it again.
    1. set IP address - monitor:: addr
    2. check boot image is oper.52.enet - monitor:: image
    3. set to boot from self - monitor:: seq
    4. Power cycle the tc
    5. Quit your tip session (~.)
    6. telnet into the tc and select cli
    7. become su and go into admin mode
       annex: su
       passwd: <tc ip address>
       annex# admin
    8. configure serial ports :
       admin: set port=1-8 type dial_in imask_7bits Y
       admin: set port=2-8 mode slave
       admin: quit
       annex# boot
       bootfile: <return>
       warning: <return>
    9. exit your telnet session (ctrl + ] and quit)
  
IV. Install cluster nodes
  A. If using a tc:
    1. On admin station -  ccp <clustername>
    2. Choose cconsole
  B. If using multi-initiated SCSI, follow InfoDoc 20704
  C. from the ok prompt: setenv local-mac-address? false
  D. Install Solaris 8 10/01 or 2/02 (at least the END USER software)
    1. Make and mount a 100mb filesystem called /globaldevices
    2. For SDS, make a 2mb partition for the metadb's
       For VxVM leave 2 slices and 10mb free for encapsulation
    3. swap should be at least 750mb or 2x RAM, whichever is greater
    4. root (/) should be at least 100mb greater than normal
    5. If you use a separate partition for /var, it should be at least
       100mb + 40% of RAM (in order to be able to capture a core dump)
    6. If you use a separate partition for /usr, it should be at least 40mb
       greater than normal
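
     As a rough worked example of the sizing rules above, a 36gb boot disk in a
     node with 2gb of RAM, destined for VxVM, might be sliced something like
     this (sizes are illustrative only):

       s0  /               4gb     (normal / plus the extra 100mb)
       s1  swap            4gb     (2x RAM, and at least 750mb)
       s2  backup          entire disk
       s4  /globaldevices  100mb
       s3,s7  unassigned   ~10mb   (the 2 free slices + 10mb for encapsulation)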
  E. If using a tc:
    1. Use TERM type "xterms". (in .profile : TERM=xterms;export TERM)
    2. After reboot, edit the /etc/default/login file to allow remote root
    3. Go back to the ccp and choose ctelnet. This will allow faster access for
      the rest of the process. Use the console mode windows to see any error
      messages logged to the console
  F. Set up root's .profile
    1. for all: PATH=/usr/sbin:/usr/bin:/usr/cluster/bin
      a. for VxVM add to PATH: /opt/VRTSvmsa/bin if you plan to use the GUI
      b. for RM6 add to PATH: /usr/sbin/osa:/usr/lib/osa/bin
    2. for all: MANPATH=/usr/share/man:/usr/cluster/man
      a. for VxVM 3.2 add to MANPATH: /opt/VRTS/man
      b. for VxVM 3.0.4 add to MANPATH: /opt/VRTSvxvm/man
    3. for all: TERM=xterms
    4. for all: export PATH MANPATH TERM
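
    Put together, a root .profile for a node running VxVM 3.2 would contain
    roughly the following (a sketch; drop or adjust the VxVM and RM6 paths to
    match what you actually install):

      PATH=/usr/sbin:/usr/bin:/usr/cluster/bin:/opt/VRTSvmsa/bin
      MANPATH=/usr/share/man:/usr/cluster/man:/opt/VRTS/man
      TERM=xterms
      export PATH MANPATH TERM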
  G. Add entries in /etc/hosts for each logical host, physical host, the admin
     station and the tc
  H. Edit /etc/nsswitch.conf. Put "files" first for hosts, group and services
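
     For example, on a site that uses NIS the relevant /etc/nsswitch.conf lines
     would end up as follows (NIS is just an assumption here; keep whatever
     name service already follows "files"):

       hosts:    files nis
       group:    files nis
       services: files nis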
  I. Install the following patches:
    108528    kernel update
    108806    qfe driver
    108813    gigabit ethernet 3.0
    108901    rpcmod
    108981    hme driver
    108982    fctl/fp/fcp/usoc
    108983    fcip
    108984    qlc
    108987    patchadd/patchrm
    109115    T3 System Firmware Update
    109145    in.routed
    109189    ifp
    109234    Apache and NCA
    109400    Hardware/Fcode: FC100/S HBA
    109460    socal & sf
    109524    ssd driver
    109529    luxadm
    109657    isp
    109667    xntpd & ntpdate
    109885    glm
    109898    arp
    109900    network & network.sh
    109902    in.ndpd
    110380    ufssnapshots support, libadm patch
    110934    pkgadd
    111023    mntfs
    111095    mpxio/fctl/fp/fcp/usoc
    111096    mpxio/fcip
    111097    mpxio/qlc
    111293    libdevinfo.so.1
    111412    mpxio/scsi_vhci
    111413    mpxio/luxadm, liba5k and libg_fc
    111853    Hardware/Fcode: PCI Single FC HBA
  J. If using A3500 install RM6.22.1:
    1. pkgadd rm6.22.1
      a. add patch 112136
    2. edit /etc/osa/rmparams. Change the line System_MaxLunsPerController=8
       to the # of LUNs needed.
      a. if you have more than one A3500FC on a loop
        1. Change Rdac_HotAddDisabled=PARTIAL to Rdac_HotAddDisabled=FALSE
        2. Add the other A3500FC targets to Rdac_HotAddIDs:4:5:y:z
    3. /usr/lib/osa/bin/genscsiconf
    4. edit /etc/raid/rdac_address. Distribute your LUNs
       over the two controllers
    5. init 6
    6. upgrade firmware and create luns
    7. /usr/lib/osa/bin/rdac_disks
  K. If you are using VxFS 3.4
    1. pkgadd VRTSlic and VRTSvxfs (and VRTSfsdoc if desired)
    2. add license with vxlicense -c
  L. Edit the /etc/system file and add any needed entries
    1. For all systems:
       exclude: lofs
       set ip:ip_enable_group_ifs=0
       forceload: misc/obpsym
       set nopanicdebug = 1
    2. For Sybase:
       set shmsys:shminfo_shmmax=0xffffffff
       set shmsys:shminfo_shmseg=200
       set rlim_fd_cur=1024
    3. For Oracle HA or PDB:
       set shmsys:shminfo_shmmax=0xffffffff
       set shmsys:shminfo_shmmin=1
       set shmsys:shminfo_shmmni=200
       set shmsys:shminfo_shmseg=200
       set semsys:seminfo_semmap=1024
       set semsys:seminfo_semmni=2048
       set semsys:seminfo_semmns=2048
       set semsys:seminfo_semmnu=2048
       set semsys:seminfo_semume=200
       set semsys:seminfo_semmsl=2048
       forceload: sys/shmsys
       forceload: sys/semsys
       forceload: sys/msgsys
    4. For Informix:
       set shmsys:shminfo_shmmax=4026531839 (3.86GB = max)
       set shmsys:shminfo_shmmni=50
       set shmsys:shminfo_shmseg=200
  M. Add the logging option to the / filesystem (and any others that you want)
     in the /etc/vfstab file
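
     For example, the root entry in /etc/vfstab would change from "-" to
     "logging" in the mount-options column (device names are placeholders):

       /dev/dsk/c0t0d0s0 /dev/rdsk/c0t0d0s0 / ufs 1 no logging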

V. Install SC on the nodes - Note: If you plan on using VxFS, make sure that
   you have already installed the VxFS packages before you continue, or you will
   overwrite a setting in the /etc/system file when you do pkgadd. The setting
   is : set rpcmod:svc_default_stksize=0x6000
  A. scinstall the first node (in the Tools subdirectory)
    1. Choose option 1) "Establish a new cluster using this machine as the
         first node"
    2. Do NOT have the scinstall program reboot for you
    3. Make a note of the did number in /etc/name_to_major
    4. Add patches
      a. 110648 (rev -11 included in 12/01)
      b. 111488 (rev -01 included in 12/01)
      c. 111554 (rev -05 included in 12/01)
      d. 112108 (rev -02 included in 12/01)
    5. reboot
  B. scinstall all remaining nodes
    1. Choose option 2) "Add this machine as a node in an established cluster"
    2. Do NOT have the scinstall program reboot for you
    3. Make sure that the did number in /etc/name_to_major is the same as for
       the sponsoring node
    4. Add patches
      a. 110648 (rev -11 included in 12/01)
      b. 111488 (rev -01 included in 12/01)
      c. 111554 (rev -05 included in 12/01)
      d. 112108 (rev -02 included in 12/01)
    5. reboot
  C. Run scstat -n to make sure all the nodes are in the cluster
  D. If a 2 node cluster, set up the quorum disk
    1. scdidadm -L : to identify the quorum disk candidates. Pick a disk seen by both nodes
    2. On ONE node, run scsetup to assign the quorum disk
      a. Answer "Yes" to: Is it okay to reset "installmode"
  E. If a 3 or more node cluster, reset install mode
    1. On ONE node, run scsetup.
      a. Answer "no" to "Do you want to add any quorum disks?"
      b. Answer "Yes" to: Is it okay to reset "installmode"
  F. Configure/check NAFO groups on ALL nodes
    1. configure : pnmset
    2. To check : pnmstat -l
  G. Configure ntp.conf on ALL nodes
    1. Edit /etc/inet/ntp.conf and comment out the lines which read
       peer clusternode<#>-priv for each node # which doesn't exist
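
       On a two-node cluster, for instance, the private-peer section would end
       up looking something like this (a sketch of the shipped template, not an
       exact copy):

         peer clusternode1-priv prefer
         peer clusternode2-priv
         # peer clusternode3-priv
         # peer clusternode4-priv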

VI. Install VxVM 3.2, (or VxVM 3.0.4 if using Oracle OPS 8.1.6)
  A. check /etc/name_to_major and find the highest major number used.
    1. If the highest number is greater than 210, we need to set the
      environment variable SC_VXIO_MAJOR to the number 2 greater than that:
      e.g. : highest number in name_to_major was 256, use
      SC_VXIO_MAJOR=258;export SC_VXIO_MAJOR
  B. On ALL nodes at once run scvxinstall -i [-d path to software]
  C. pkgadd VRTSvmsa if you want the gui
  D. Add licenses with vxlicense -c
    1. For Oracle OPS be sure to add the Cluster Functionality license
  E. vxinstall
    1. Choose a Custom Installation
    2. Do NOT encapsulate the root drive (will do later in step XI)
    3. Only add the root mirror disk (as a new disk) to rootdg. Name the disk
       rootmir<node#>
    4. Leave all other disks alone
    5. Do NOT have vxinstall shutdown and reboot
    6. Reminor the devices in rootdg to be unique on each node. Set each node to
     have a base minor number of 100*<nodenumber>. E.G. node1=100, node2=200
      vxdg reminor rootdg <x>00
  F. Add patches
    1. for 3.2 add patch 111909
      a. If you added the gui, add patch 111904
    2. for 3.0.4 add patch 110263
      a. If you added the gui add patch 110032
    3. On ONE node: scshutdown -y -g0
    4. boot -r
  G. If using VxVM 3.2, and you have multiple paths (controllers) from a
    single node to shared disks, prevent dmp on those controllers
    1. run vxdiskadm
    2. choose "revent multipathing/Suppress devices from VxVM's view"
    3. choose "revent multipathing of all disks on a controller by VxVM"
    4. suppress each controller, i.e. c2
  H. Create your disk groups (on ONE node). For each disk group:
    1. vxdiskadd c#t#d# c#t#d# ...
    2. When prompted, enter the new disk group name
    3. Register the disk group
      a. Do not register OPS CVM-shared groups
      b. Run scsetup
        1. Select "Device groups and volumes"
        2. Select "Register a VxVM disk group as a device group"
    4. create your volumes, register and newfs them
      a. vxassist -g <dg> -U fsgen make <vol> <size> layout=mirror-stripe,log
        or use vmsa
      b. Synchronize your volumes
        1. Select "Device groups and volumes"
        2. Select "Synchronize volume information for a VxVM device group"
      c. newfs your volumes:
        1. for ufs : newfs /dev/vx/rdsk/<dg>/<vol>
        2. for vxfs: mkfs -F vxfs -o largefiles /dev/vx/rdsk/<dg>/<vol>
  I. Mount your volumes
    1. mkdir /global/<mnt> ON ALL NODES
    2. add entries to vfstab ON ALL NODES THAT ARE DIRECTLY ATTACHED
      a. for ufs:
      /dev/vx/dsk/<dg>/<vol> /dev/vx/rdsk/<dg>/<vol> /global/<mnt> ufs 2 yes global,logging
      b. for vxfs:
      /dev/vx/dsk/<dg>/<vol> /dev/vx/rdsk/<dg>/<vol> /global/<mnt> vxfs 2 yes global,log
    3. mount the file system ON ONE NODE:
       mount /global/<mnt>
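
    As a worked example of steps H and I, creating a disk group "webdg" with
    one mirrored 10gb ufs volume "vol01" mounted at /global/web (all names and
    devices here are made up):

      vxdiskadd c2t0d0 c3t0d0       # answer prompts; new disk group = webdg
      scsetup                       # register webdg as a device group
      vxassist -g webdg -U fsgen make vol01 10g layout=mirror-stripe,log
      scsetup                       # synchronize volume info for webdg
      newfs /dev/vx/rdsk/webdg/vol01
      mkdir /global/web             # on ALL nodes
      # vfstab entry on all directly attached nodes:
      #  /dev/vx/dsk/webdg/vol01 /dev/vx/rdsk/webdg/vol01 /global/web ufs 2 yes global,logging
      mount /global/web             # on ONE node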

VII. Install SDS 4.2.1 (if used)
  A. pkgadd SUNWmdr SUNWmdu SUNWmdx ( + SUNWmdg if you want the gui)
  B. patchadd 108693-xx
  C. If needed, change /kernel/drv/md.conf file:
        nmd=<max number metadevices, up to 8192>
        md_nsets=<max number of disksets, up to 32>
  D. ON ONE NODE shutdown all nodes with : scshutdown -y -g 0
  E. boot all nodes (with -r if you edited md.conf)
  F. ON ONE NODE run /usr/cluster/bin/scgdevs
  G. Edit the /etc/group file and make root a member of group 14 (sysadmin)
  H. Create the metadb(s)
    1. metadb -afc 3 c#t#d#s#
    2. metadb to check
  I. Create your disk sets ON ONE NODE
    1. metaset -s <set> -a -h <node1> <node2> ... <node#>
    2. format the disk using format, prtvtoc and fmthard
      a. slice 7 should be 2mb and start at cylinder 0
      b. If using metatrans devices, slice 6 should be 1% of the disk or 64mb
           whichever is smaller
      c. slice 0 should be the rest of the disk
      d. slice 2 should be the entire disk
    3. Add the disks to your set
      a. metaset -s <set> -a /dev/did/rdsk/d# ...
      b. To check : metaset -s <set>
  J. Create your metadevices
   1. edit /etc/lvm/md.tab and add your metadevices. Use the /dev/did/rdsk/d#s0
      devices
   2. ON ONE NODE metainit -s <set> -a
   3. ON ONE NODE newfs /dev/md/<set>/rdsk/d#
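
    For example, a simple mirrored metadevice in /etc/lvm/md.tab for a diskset
    "nfsset" built on two did devices could look like this (the set name, d
    numbers and did numbers are all made up):

      nfsset/d101 1 1 /dev/did/rdsk/d4s0
      nfsset/d102 1 1 /dev/did/rdsk/d7s0
      nfsset/d100 -m nfsset/d101
      # then, on ONE node:
      #   metainit -s nfsset -a
      #   metattach -s nfsset d100 d102
      #   newfs /dev/md/nfsset/rdsk/d100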
  K. mount your file systems
   1. mkdir /global/<mnt> ON ALL NODES
   2. Add to the vfstab ON ALL NODES DIRECTLY ATTACHED TO THE STORAGE. e.g. :
     /dev/md/<set>/dsk/d# /dev/md/<set>/rdsk/d# /global/<mnt> ufs 2 yes global,logging
   3. mount /global/<mnt> ON ONE NODE
  L. If you have any disksets on exactly 2 arrays connected to 2 nodes, you
     must configure the dual string mediators
    1. metaset -s <set> -a -m <node1> <node2>
    2. To check : medstat -s <set>


VIII. Configure ha-nfs (failover example)
  A. Install the data service software
    1. run scinstall on ALL nodes
    2. select "Add support for new data services to this cluster node"
    3. select "NFS"
    4. Add patch 111555
  B. Add your failover hostname/ip to /etc/hosts on ALL nodes
  C. make the admin file system ON ONE NODE
    1. mkdir -p /global/<mnt>/<nfs-adm>/SUNW.nfs
    2. cd /global/<mnt>/<nfs-adm>/SUNW.nfs
    3. vi dfstab.<nfs-res>
      a. add entries to share /global/<mnt>
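
      The dfstab.<nfs-res> file just contains ordinary share commands, for
      example (the options and description string are only illustrative):

        share -F nfs -o rw -d "HA-NFS export" /global/<mnt>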
  D. Register the nfs resource on ONE node
    scrgadm -a -t SUNW.nfs
  E. Create the nfs resource group on ONE node
    scrgadm -a -g <nfs-rg> -h <node1,..> -y PATHPREFIX=/global/<mnt>/<nfs-adm>
  F. Add a failover hostname/ip resource to the nfs-rg on ONE node
    scrgadm -a -L -g <nfs-rg> -l <failover hostname> -n nafo<#>@<node1>,...
  G. Add the nfs resource to the nfs-rg on ONE node
    scrgadm -a -g <nfs-rg> -j <nfs-res> -t SUNW.nfs
  H. Enable the nfs-rg on ONE node
    scswitch -Z -g <nfs-rg>
  I. Set up HAStorage (Optional)
    1. Register the SUNW.HAStorage resource type.
       scrgadm -a -t SUNW.HAStorage
    2. Create the SUNW.HAStorage resource
        scrgadm -a -g <nfs-rg> -j <nfs-hastor-res> -t SUNW.HAStorage \
         -x ServicePaths=<disk set or disk group> -x AffinityOn=True
     3. Enable the hastorage resource.
        scswitch -e -j <nfs-hastor-res>
     4. Make the nfs resource depend on the hastorage resource
        scrgadm -c -j <nfs-res> -y Resource_Dependencies=<nfs-hastor-res>
  J. Check the status
    scstat -g
  K. Test a manual switch of the resource group
     scswitch -z -h <node#> -g <nfs-rg>
    scstat -g

IX. Configure ha-apache (shared address/scalable example)
  A. Install the data service software using scinstall option 4 on ALL nodes
    1. add patch 111551
  B. Add your shared hostname/ip to /etc/hosts on all nodes
  C. Create a conf directory on ALL NODES
    1. mkdir /etc/apache/conf
    2. cp /etc/apache/httpd.conf-example /etc/apache/conf/httpd.conf
    3. vi /etc/apache/conf/httpd.conf
      a. uncomment ServerName line and change it to the shared hostname
      b. change the DocumentRoot to /global/<mnt>/htdocs
      c. change the ScriptAlias to /global/<mnt>/cgi-bin/ <- trailing slash
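
      After those edits the relevant httpd.conf lines would read roughly as
      follows, assuming a shared hostname of web-shared and a mount point of
      /global/web (both made up):

        ServerName web-shared
        DocumentRoot "/global/web/htdocs"
        ScriptAlias /cgi-bin/ "/global/web/cgi-bin/"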
  D. Create the data directory
    1. ON ONE NODE create directories for html and cgi files
      a. mkdir /global/<mnt>/htdocs
      b. mkdir /global/<mnt>/cgi-bin
      c. cd /var/apache/htdocs
      d. cp -r ./* /global/<mnt>/htdocs
      e. cp test-cluster.cgi /global/<mnt>/cgi-bin
      f. chmod ugo+rw /global/<mnt>/cgi-bin/test-cluster.cgi
  E. Register the Apache data service
    scrgadm -a -t SUNW.apache
  F. Create the shared address resource group
    scrgadm -a -g <shared-rg> -h <node1,...>
  G. Assign an ip + nafo to the shared resource group
    scrgadm -a -S -g <shared-rg> -l <shared hostname> -n <nafo#@node#,...>
  H. Create another resource group for apache, dependent on shared-rg
    scrgadm -a -g <apache-rg> \
      -y Maximum_primaries=2 \
      -y Desired_primaries=2 \
      -y RG_dependencies=<shared-rg>
  I. Add a SUNW.apache resource to the apache-rg
    scrgadm -a -j <apache-res> -g <apache-rg> -t SUNW.apache \
      -x ConfDir_list=/etc/apache/conf \
      -x Bin_Dir=/usr/apache/bin \
      -y Scalable=True \
      -y Network_resources_used=<shared hostname>
  J. Bring the shared address and apache resource groups online
    1. scswitch -Z -g <shared-rg>
    2. scswitch -Z -g <apache-rg>
  K. run scstat -g to check the state
  L. Test the shared connection by visiting
    http://<shared hostname>/cgi-bin/test-cluster.cgi
    You may need to refresh/reload several times to make sure you connect to
    another node

X. Configuring other data services
  A. Follow the documentation in 806-7071-xx Sun Cluster 3.0 Data Services
     Installation and Configuration Guide
    1. For HA-Oracle, chapter 2.
      a. add correct patch
        1. WARNING! - (Please see SunAlert 41144 for more info):
           If using Oracle 8i, you must use 110651-02
           If using Oracle 9i, you must use 112264-02 on top of 110651-02
    2. For iPlanet Web Server, chapter 3.
      a. add patch 111553
    3. For Netscape Directory Server, chapter 4
      a. add patch 111556
    4. For DNS, chapter 6
      a. add patch 111552
    5. For Oracle OPS, chapter 8 - in summary:
      a. If using VxVM with Cluster functionality:
        1. pkgadd -d . SUNWscucm SUNWudlm SUNWudlmr SUNWcvmr SUNWcvm
        2. patchadd 111557, 111857 and 112442
      b. If using HW raid:
         pkgadd -d . SUNWscucm SUNWudlm SUNWudlmr SUNWschwr
      c. create the dba group and the oracle user
      d. pkgadd the oracle udlm, ORCLudlm
      e. On one node, scshutdown -g0 -y and then boot
      f. Find which node is the cvm master using : vxdctl -c mode
       g. Deport the raw volume disk group : vxdg deport <dg>
       h. Import the disk group shared on the master :
          vxdg -s import <dg>
          vxvol startall <dg>
      i. Make sure the disk group is seen as shared on both nodes: vxdg list
      j. install Oracle OPS
    6. For HA SAP, chapter 9
      a. add patch 112270
    7. For Sybase ASE, chapter 10

XI. If using VxVM to encapsulate & mirror root
  A. Edit /etc/vfstab so that the /global/.devices/node@<nodeid> mounts from
     the physical c#t#d#s# instead of the /dev/did device. If you don't know
     the device, there should be a /globaldevices line commented out. DO NOT
     uncomment this line! Just use the physical device that it originally
      mounted as the device in the /global/.devices/node@<nodeid> line. E.G. :
     /dev/dsk/c0t0d0s3 /dev/rdsk/c0t0d0s3 /global/.devices/node@1 ufs 2 no global
  B. Shut down the cluster (on one node, will shut down both):
     scshutdown -y -g 0
  C. Boot all nodes outside of the cluster
     boot -x
  D. Use vxdiskadm option 2 to encapsulate the root drive.
    1. Name each root disk rootdisk<node#>
    2. init 0
    3. boot -x
      Systems will boot, encapsulate root disk, and reboot (into the cluster)
  E. Mirror your root disk
    1. /etc/vx/bin/vxrootmir rootmir<node#>
    2. /etc/vx/bin/vxmirror rootdisk<node#> rootmir<node#>

XII. If using DiskSuite to mirror root
  A. metainit -f d1 1 1 <c#t#d#s0 (rootdisk - don't use the did device)>
  B. metainit d2 1 1 <c#t#d#s0 (rootmir)>
  C. metainit d0 -m d1
  D. metaroot d0
  E. reboot
  F. metattach d0 d2
  G. Note - you should also mirror swap and /global/.devices/node@<#>. If you
     do, be sure to use the physical device for the metainit. Also be sure that
     the metadevice you choose for each /global/.devices/node@<#> is unique in
     the cluster, since they will be mounted globally
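
     For instance, mirroring /global/.devices/node@1 on node 1 might look like
     this, assuming its slice is c0t0d0s3, the mirror disk is c0t1d0s3, and
     d111/d112/d110 were picked as cluster-unique metadevice names (all of
     these are illustrative):

       metainit -f d111 1 1 c0t0d0s3
       metainit d112 1 1 c0t1d0s3
       metainit d110 -m d111
       # change the node@1 vfstab entry to use /dev/md/dsk/d110 and
       # /dev/md/rdsk/d110, reboot, then:
       metattach d110 d112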

XIII. Complete installation
  A. If using DiskSuite, set up the metacheck script
    1. create the metacheck.sh script. See  the example in the Solstice
      DiskSuite 4.2.1 User's Guide on docs.sun.com
    2. enter a crontab entry for it. e.g:  00 06 * * * /metacheck.sh
  B. If desired, install SunMC
  C. If using RM6, make sure that parity check is only enabled on 1 node of each
    pair attached to the storage (use the gui)
  D. Follow the Acceptance Test Sign-off Checklist
  E. Register/track the cluster at http://acts.ebay/cluster/


Revision History -
1.1.2 - 15 Feb 2002
      - Added 2 new patches
1.1.1 - 12 Feb 2002
      - Removed several patches obsoleted by 108528-13
1.1.0 - 21 Dec 2001
      - Initial release for SC 3.0 update 2

Posted 2003-01-23 13:17

Nice, dongdong!

Posted 2003-01-25 02:43

Hehe!! This is Sun's standard manual.

Posted 2003-01-26 09:41

Hi peanut, the link you gave, http://neta.easet/suncluster/scinstall.html,
won't open for me.

Posted 2003-01-26 10:24

Originally posted by "ID":
Hi peanut, the link you gave, http://neta.easet/suncluster/scinstall.html,
won't open for me.

That's on Sun's internal network; can you even reach it?

Posted 2003-01-26 10:25

Originally posted by "sunmarmot": Hehe!! This is Sun's standard manual.

Good eye, it's from the EIS CD.

Posted 2003-01-27 09:26

I have a trace file of the installation process.

Posted 2003-01-27 11:02

Good stuff, could you mail me a copy?
proc_hgc@yahoo.com.cn
thanks

Posted 2003-01-27 11:16

That is one big pile of patches.

Posted 2003-01-27 11:48

Thanks peanut.