[SCO UNIX] Installing and configuring the ReliantHA high-availability (dual-node) software on UnixWare

#1 Posted 2003-03-27 17:14
If there is any part of this you don't understand, you can ask me.

How do I install ReliantHA?

--------------------------------------------------------------------------------

Keywords
reliant HA reliantha 1.1 unixware7 unixware 7 2.1 install installation clustering cluster node shared disk failover fail over availability
Release
UnixWare 7 ReliantHA 1.1
SCO ReliantHA for UnixWare 2.1

Problem
How do I install ReliantHA?

Note
It is important that you read the NOTES section at the end of this Technical Article, which details system requirements for SCO ReliantHA, prior to proceeding with the installation.

Solution
Before installing the UnixWare 7 operating system, select an IP address for both the public and private interface on each node. Private network interface IP addresses must have a unique network component for network routing.
SCO ReliantHA assigns default private IP addresses to each node in the cluster. These are:


              Node  IP address

              SYSA  192.168.0.1
              SYSB  192.168.0.2
              SYSC  192.168.0.3
              SYSD  192.168.0.4


These addresses do not have to be unique on the Internet because they are only used on the physically-isolated private network.
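
As a hedged illustration only (the public addresses, host names, and interface layout below are hypothetical; the ReliantHA installation script later appends the private-network entries to /etc/hosts for you), the /etc/hosts entries for a two-node cluster might end up looking like this:


              # /etc/hosts (illustrative two-node excerpt)
              # public network - addresses assigned by your site
              10.0.0.11      sysa4     # public interface, node 1
              10.0.0.12      sysb4     # public interface, node 2
              # private ReliantHA network - default addresses
              192.168.0.1    SYSA      # private interface, node 1
              192.168.0.2    SYSB      # private interface, node 2
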
INSTALLING THE OPERATING SYSTEM:

The following tasks must be completed while the operating system is being installed. They should be undertaken in conjunction with the regular operating system installation procedures in the hardcopy UnixWare 7 Installation Guide.

Install UnixWare 7 on each node in the cluster. Perform the following steps during the installation:

1. If you are configuring serial port devices in addition to the two that are configured by default, you must configure the serial device driver software. To do so, run the dcu(1M) utility. (See "Using the Device Configuration Utility (DCU)" in the UnixWare 7 online documentation set.) This can be done as part of the installation after the HBA drivers are installed, or from the command line after the operating system is installed. For example, to configure the COM3 serial port, select the following dcu menu items:


              Software Device Drivers
              Communication Cards
              async (serial port driver)
              F5 = New
              Enter Unit 3; IRQ 5; IOStart 3e8; IOend 3ef
              F10 = Apply and Return
              Return to DCU Main Menu
              Apply Changes and Exit DCU


Additional serial ports (COM3 and COM4) are configured in this way, with the data specified in "Configuring serial ports."
If dcu is run from the command line, the kernel must be rebuilt using idbuild(1M).
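
A minimal command-line sketch of that sequence (the paths shown are those typical of UnixWare 7, and the reboot is shown with a generic shutdown invocation; adapt both to your system):


                 # run the DCU interactively, make the serial port changes, then exit
                 /sbin/dcu
                 # rebuild the kernel so the new driver configuration is included
                 /etc/conf/bin/idbuild -B
                 # the rebuilt kernel takes effect at the next reboot
                 shutdown -i6 -g0 -y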

2. Ensure that all public and private network adapters are visible to the Network Configuration Utility, netcfg(1M).

3. Ensure that network protocols are only configured for public network interfaces.

Note: Do not configure NetWare on the private network interface. If the NetWare packages are selected during the UnixWare 7 installation, NetWare is configured on all network interfaces. It must be deconfigured for the private network interface.
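
A quick, hedged sanity check for steps 2 and 3 (interface names differ from system to system) is to list the configured interfaces and confirm that only the public ones carry an IP address at this point:


                 # list interfaces and configured protocols; private adapters
                 # should not yet have an IP address assigned
                 ifconfig -a
                 netstat -in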

TESTING PUBLIC NETWORKS AND SHARED RESOURCES:

After the operating system and related software have been installed, test the shared resources to isolate any problems. Perform the following steps to test the configuration:

1. Test that the public networks are functioning correctly. Enter:


                 /usr/sbin/ping node


This should return:

                 node is alive


(Defer testing of the private network until after SCO ReliantHA is installed.)
2. Use the Hardware Setup menus for each SCSI HBA in the cluster to perform the following:

a. Scan the bus and ensure that all targets are there.

b. Verify that data can be transferred to each target.

Note: Access the Hardware Setup menu at boot time, by pressing:


                 <Ctrl>A for Adaptec adapters

                 <Alt>Q for Qlogic adapters


Refer to your hardware vendor's documentation for more details.
3. Boot each node.

4. For each shared disk:

a. Create disk partitions. See diskadd(1M).

b. Verify that data can be transferred to the disk. See dd(1M). (A sample sequence for steps a and b is sketched after step 5 below.)

5. Test that each serial cable in the cluster is connected correctly by using the cat and echo commands.

For example, if the two nodes are connected via their /dev/term/00 ports, enter the following on the first node:


              cat > /dev/term/00


On the second node, enter:

              echo hello > /dev/term/00


You should see hello appear on the first node.
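
For step 4 above, a minimal sketch, assuming a hypothetical shared disk named c1b0t1d0 (substitute the controller/target/device numbers reported on your system, and only write test data to a scratch slice that holds nothing of value):


              # step 4a: create partitions and slices on the shared disk
              diskadd c1b0t1d0
              # step 4b: write a small amount of data to a scratch slice and read it back
              dd if=/etc/hosts of=/dev/rdsk/c1b0t1d0s1 count=1
              dd if=/dev/rdsk/c1b0t1d0s1 count=1 | od -c | head
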
LICENSING SCO ReliantHA:

You must have a valid license as described by your Certificate of License and Authenticity (COLA) to install and use SCO ReliantHA.

Note: There is no evaluation license for ReliantHA. ReliantHA can be installed and run only if a valid COLA has been purchased.

During installation, the system checks for the license status of ReliantHA and displays a license status message. If a valid license is not found then you can either continue with the installation, or exit the process.

If you choose to continue, then the License Manager is executed to add the ReliantHA license. After a valid license is entered, the installation proceeds.

Note: You may enter a valid ReliantHA license at any time prior to the installation. The installation process will report that ReliantHA is licensed and will not ask for any license details.

Sometimes when you start ReliantHA the following warning is displayed:


     WARNING: Could not determine license status of ReliantHA on this system.


If you have previously licensed ReliantHA on this system and are in possession of a valid license, you can safely ignore this message.
Otherwise, you should immediately stop using ReliantHA and obtain a valid license from your SCO software supplier.

If you are in possession of a valid Certificate of License and Authenticity (COLA) for ReliantHA on the node where the message occurred, you may ignore this message. However, if you do not have a valid license you should comply with the message immediately, and cease use of ReliantHA until you have obtained and entered a valid license.

INSTALLING THE ReliantHA SOFTWARE SET:

Follow these steps:

1. Log in as root.

2. Insert the CD-ROM into the CD-ROM drive.

3. Mount the CD-ROM:


                 mount -F cdfs -r /dev/cdrom/* /mnt


4. Start the installation by invoking pkgadd(1M):

                 pkgadd -d /mnt


5. pkgadd displays the names of one or more sets:

              The following sets are available:

              1  NSlive1    Netscape LiveWire 1.01 for SCO UnixWare 7
                            (i386) 1.01
              2  NSproxy25  Netscape Proxy Server 2.5 for SCO UnixWare 7
                            (i386) 2.5
              3  ReliantHA  ReliantHA Host Monitoring Software
                            (IA32) 1.1.0
              4  TTA        Tarantella for SCO UnixWare 7
                            (i386) 1.0
              5  afps       SCO Advanced File and Print Server
                            (i386) 4.0.1
              6  arcserve   Data Management Services
                            (IA32) 7

              Select package(s) you wish to process (or 'all' to process
              all packages). (default: all) [?,??,quit]: 3


Enter the number of the package that you want to install. In this example, 3 was entered to install the ReliantHA set.
pkgadd now installs the software. The ReliantHA package set installs the gab, HAsupport, llt, msw, RelHAdoc and reliant packages:


              PROCESSING:

              Set: ReliantHA Host Monitoring Software (ReliantHA) from </mnt>.

              ReliantHA Host Monitoring Software (IA32) 1.1.0

              Using </> as the package base directory.

              VERITAS Software

              ## Processing package information.

              ## Processing system information.

              Installing ReliantHA Host Monitoring Software as <ReliantHA>

              ## Executing preinstall script.


6. License ReliantHA.

              Checking ReliantHA license status on this system.

              No ReliantHA license found (or licensing policy daemon not
              responding).

              ReliantHA must be licensed before it will install and run.
              You will need to enter this license for installation to proceed.

              Do you wish to continue and license ReliantHA [y or n] ? y
              We will now run the License Manager so that you may enter
              the ReliantHA license details as described by your
              Certificate of License and Authenticity (COLA).


Enter the license details into the License Manager. The license status is then checked:

              Checking ReliantHA license status on this system.

              ReliantHA is licensed to run on this system.


7. Near the end of the installation, the following message appears:

              ==============================================================
              |                             RELIANT                        |
              |                                                            |
              |Welcome to the initialization script for the Reliant system |
              |You must first answer some questions about this system      |
              |The /etc/hosts and /etc/networks files will then be updated |
              |and the files .hosts, hvhosts, and  hvenv.host will be      |
              |created.                                                    |
              ==============================================================


At this point in the installation procedure, you are required to respond to various prompts as they appear on the screen. The following shows an example session when installing two nodes, with the software installing on SYSA:

              Creating hvhosts file...Done

              What is the size of the cluster? Enter number of nodes
              [ 2..4 / quit]?  2

              What is the private network IP address number of SYSA
              [ 192.168.0.1 ]

              What is the private network IP address number of SYSB
              [ 192.168.0.2 ]
              SYSA's IP address is: 192.168.0.1
              SYSB's IP address is: 192.168.0.2
              Is this correct? [y/n/quit]? y

              What is this system [SYSA / SYSB / SYSC / SYSD or quit]?:SYSA

              What is the private network device name [e.g. /dev/msw_0]?:

              Editing /usr/opt/reliant/etc/mswStart

              Done................

              Creating .hosts file for commd...Done.

              Appending hvhosts to the /etc/hosts file...Done.

              Appending hvnetworks to the /etc/networks file...Done.

              Creating the rcvmHelp.out file for the rcvm GUI...Done.

              Building the environment now...Done.

              Modifying /etc/confnet.d/inet/interface to configure private
              network IP... Done.

              initrc script completed.
              Done.
              Registering the SCOadmin Reliant object
              Done registering the SCOadmin Reliant object

              Installation of ReliantHA Host Monitoring Software (reliant) was
              successful.

              ## Executing set postinstall script.

              Processing of packages for set <ReliantHA> is completed.
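
As a quick, hedged post-install check, pkginfo(1) can confirm that the packages named earlier in this step were installed (the package abbreviations are those listed by pkgadd above):


              # each package of the ReliantHA set should be reported as completely installed
              pkginfo -l gab HAsupport llt msw RelHAdoc reliant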


Note: The public and private networks should not have the same subnet addresses. Furthermore, it is recommended that the private subnet be in the range 192.168.0 to 192.168.255. The host component should be in the range 1 to 254. As an example, the private address of SYSA could be 192.168.0.1.
CONFIGURING THE PRIVATE NETWORK ADAPTERS:

Use netcfg(1M) to configure the private network adapters. You should choose the ReliantHA Private Network protocol from the list of available protocols.

Note: You should not add any other protocol to the ReliantHA private network.

CONFIGURING THE MAC SWITCH DRIVER (MSW):

For each node in the cluster:

1. The MSW devices must be configured with the mkmswtab(1Mha) command.

In the following example, the MSW device /dev/msw_0 is created using the Ethernet devices /dev/net1 and /dev/net2, and the serial port devices /dev/term/00h, /dev/term/01h and /dev/term/02h:


              /sbin/mkmswtab -i /dev/msw_0 /dev/net1 /dev/net2 /dev/term/00h \
              /dev/term/01h /dev/term/02h


Note: Serial port device names must refer to devices that use hardware flow control (ending in "h") and must not appear as the first device.
This creates the /etc/mswtab and /etc/sdltab files shown below:


                 #cat /etc/mswtab

                 #MSW tab - control file for msw device driver
                 #
                 #This file automatically generated by mkmswtab
                 #
                 /dev/msw_0 1 00:AA:00:BD:6F:AA

                         /dev/net1    0       0
                         /dev/net2    0       0
                         /dev/sdl_0   0       1
                         /dev/sdl_1   0       1
                         /dev/sdl_2   0       1


                 #cat /etc/sdltab

                 #SDL tab - control file for sdl device driver
                 #
                 #This file automatically generated by sdltab
                 #
                 /dev/sdl_0
                 /dev/term/00h
                 /dev/sdl_1
                 /dev/term/01h
                 /dev/sdl_2
                 /dev/term/02h


Note: For non-MDI compliant device drivers, special instructions might be required to set the MAC address to be the same on all interfaces. Contact your Technical Support provider for details.
2. Shut down the node with hvshut(1Mha) and reboot.

Testing the MSW installation:

After MSW is installed and running on all nodes, check the state of all the MSW devices with /sbin/mswconfig -l.

All pieces of the MSW configuration for each device should be marked as ONLINE. If the pieces are not marked as ONLINE, review the configuration steps and check the hardware setup.

For example:


              #mswconfig -l

              Listing MAC Switch interface /dev/msw_0
                 MAC Address 00:AA:00:BD:6F:AA, 1 Interface, Status OFFLINE
                 Heartbeat ENABLED, Interval 560 msec, Misses 6
                 Interface 0: /dev/net1 unit 0 ONLINE, max peers 3, current
                   peers 3
                       Peer MACs:
                               00:AA:00:A8:07:0D - ONLINE
                               00:AA:00:A8:09:10 - ONLINE
                               00:AA:00:A8:6F:34 - ONLINE
                       Bound SAP info:
                               muxid 0x15, SAP 0xf00d, dlpistate 0x3, Heartbeat
                               muxid 0x14, SAP 0x0000, dlpistate 0x0
                               muxid 0x13, SAP 0x0000, dlpistate 0x0
                               muxid 0x12, SAP 0x0000, dlpistate 0x0
                 Interface 1: /dev/net2 unit 0 ONLINE, max peers 3, current
                   peers 3
                       Peer MACs:
                              00:AA:00:A8:07:0D - ONLINE
                              00:AA:00:A8:09:10 - ONLINE
                              00:AA:00:A8:6F:34 - ONLINE
                       Bound SAP info:
                              muxid 0x15, SAP 0xf00d, dlpistate 0x3, Heartbeat
                              muxid 0x14, SAP 0x0000, dlpistate 0x0
                              muxid 0x13, SAP 0x0000, dlpistate 0x0
                              muxid 0x12, SAP 0x0000, dlpistate 0x0
                 Interface 2: /dev/sdl_0 unit 0 (SLOW) ONLINE, max peers 1,
                   current peers 1
                       Peer MACs:
                              00:AA:00:A8:07:0D - ONLINE
                       Bound SAP info:
                              muxid 0x15, SAP 0xf00d, dlpistate 0x3, Heartbeat
                              muxid 0x14, SAP 0x0000, dlpistate 0x0
                              muxid 0x13, SAP 0x0000, dlpistate 0x0
                              muxid 0x12, SAP 0x0000, dlpistate 0x0
                 Interface 3: /dev/sdl_1 unit 0 (SLOW) ONLINE, max peers 1,
                   current peers 1
                       Peer MACs:
                              00:AA:00:A8:09:10 - ONLINE
                       Bound SAP info:
                              muxid 0x15, SAP 0xf00d, dlpistate 0x3, Heartbeat
                              muxid 0x14, SAP 0x0000, dlpistate 0x0
                              muxid 0x13, SAP 0x0000, dlpistate 0x0
                              muxid 0x12, SAP 0x0000, dlpistate 0x0
                 Interface 4: /dev/sdl_2 unit 0 (SLOW) ONLINE, max peers 1,
                   current peers 1
                       Peer MACs:
                              00:AA:00:A8:6F:34 - ONLINE
                       Bound SAP info:
                              muxid 0x15, SAP 0xf00d, dlpistate 0x3, Heartbeat
                              muxid 0x14, SAP 0x0000, dlpistate 0x0
                              muxid 0x13, SAP 0x0000, dlpistate 0x0
                              muxid 0x12, SAP 0x0000, dlpistate 0x0



CONFIGURING THE LOW LATENCY TRANSPORT (LLT):
1. Enable root rsh (remote shell, see rsh(1tcp)) between nodes. (This step is only required to run mkcluster(1Mha). These modifications can be reversed after mkcluster is executed to restore security to the cluster.)

For each node, enable rsh as follows:

a. Uncomment the appropriate line for the rsh service in the /etc/inetd.conf file. See the rshd(1Mtcp), inetd(1Mtcp) and inetd.conf(4tcp) manual pages for details. (A typical entry is sketched after step d below.)

b. Obtain the process ID (pid) of the inetd process with the command:


                 ps -ef | grep inetd


c. Issue the following command to have inetd reread its configuration file:

                 kill -HUP pid


d. Grant access to other nodes:

                 echo "+" >;>; /etc/hosts.equiv
                 echo "+" >;>; /.rhosts


2. Run mkcluster.
mkcluster requires a network device name that must be the same on each node. It also requires a list of nodes in the cluster. For example, to configure an MSW device /dev/msw_0 on four nodes (sysa4, sysb4, sysc4 and sysd4), enter the following command:


                 /sbin/mkcluster -i /dev/msw_0 sysa4 sysb4 sysc4 sysd4


Note: Use public network names in the mkcluster command. The names must be listed in the order corresponding to the SYSA, SYSB, SYSC, and SYSD names designated by ReliantHA.
mkcluster uses rsh to probe the MAC address from each node and create /etc/clustertab. This file is then distributed to each node. You only need to run mkcluster on one node.

3. Verify the clustertab file on all nodes using the following command:


                 cat /etc/clustertab


The following example from the /etc/clustertab file shows a four-node cluster using the MAC switch driver:

                 #NodeID   NodeName  Device       Physical Address

                 1         sysa4    /dev/msw_0    00:AA:00:A5:7F:61
                 2         sysb4    /dev/msw_0    00:AA:00:A5:1E:9C
                 3         sysc4    /dev/msw_0    00:AA:00:A5:20:07
                 4         sysd4    /dev/msw_0    00:AA:00:A5:30:61


4. Shut down each node with hvshut(1Mha) and reboot.
Verifying the LLT installation:

Check that the LLT is functioning correctly with the lltstat -a command:


             #lltstat -a

             LLT node information:

                     max nodes: 8, max ports: 32, this node: 1
                     MTU: 1500, SAP: 0xcafe
                     Node  State
                      * 1  NODE_OPEN
                        2  NODE_CONNWAIT
                        3  NODE_CONNWAIT
                        4  NODE_CONNWAIT

             LLT port information:

                     - No ports active



CONFIGURING ReliantHA:
You can now configure:


               - Large message queues

               - Remote shell

               - The environment


Large message queues:
For each node in the cluster, reconfigure the kernel for large message queues:

1. Check the values of the MSGSSZ and MSGMNB tunable parameters using the following commands:


                 /etc/conf/bin/idtune -g MSGSSZ
                 /etc/conf/bin/idtune -g MSGMNB


2. If the value of MSGSSZ is less than 524288, set the value with the following command:

                 /etc/conf/bin/idtune -f MSGSSZ 524288


3. If the value of MSGMNB is less than 65536, set the value with the following command:

                 /etc/conf/bin/idtune -f MSGMNB 65536


4. Rebuild the kernel, using the following command:

                 /etc/conf/bin/idbuild -B -K


5. Shut down the node with hvshut(1Mha) and reboot.
Enabling remote shell (rsh):

ReliantHA uses rsh(1tcp) to determine that all nodes are consistent (done via hvstart). If a node does not have rsh enabled, then you can either:

- enable rsh

- OR -

- have ReliantHA skip the node check by setting the RELIANT_IGNORE_RSH parameter to "YES" in the /usr/opt/reliant/etc/hvenv file
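
If you take the second option, a minimal sketch of the change, assuming hvenv uses ordinary shell-style assignments (match the style of the existing entries in that file):


                 # in /usr/opt/reliant/etc/hvenv - skip the rsh consistency check at hvstart
                 RELIANT_IGNORE_RSH="YES"; export RELIANT_IGNORE_RSH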

Setting the environment:

Ensure that the ReliantHA environment variables are in your profile:

1. If your root account already has a .profile or .login file, add the following line to the end of the file:


                 . /usr/opt/reliant/etc/hvenv


If your root account does not have any of these files, then enter the following command:

                 cp /usr/opt/reliant/etc/hvenv /.profile


2. Log out and then log in again.
3. To validate the change, enter:


                 echo $RELIANT_HOST_NAME


This should return the ReliantHA system name (SYSA, SYSB, SYSC or SYSD).
TESTING THE ReliantHA SYSTEM:

You should now test the private networks to make sure that connectivity exists between nodes, and also test that each node is ONLINE.

Testing the private networks:

To test the private network, enter the following command on each node of the cluster to verify connectivity with each of the other nodes:


                 /usr/sbin/ping node


Each invocation of the command should return:

                 node is alive


Testing the status of nodes:
1. Start the base monitor on all nodes with:


                 hvstart


2. After approximately one minute, check the status of all nodes with:

                 hvdisp -a


All configured nodes should now be displayed as ONLINE.
After testing to ensure that all nodes are ONLINE, see Chapter 3, "Creating Reliant Configurations" of the SCO ReliantHA User's Guide for information on how to create configurations that you want to install.


Notes
System requirements are noted below:
Certain functionality requirements must be met by some of the hardware in the system if particular features of ReliantHA are to be used. Refer to "Functionality requirements for hardware" for further details.

For details of hardware known to work with ReliantHA, refer to "ReliantHA-compatible hardware." This provides a list of hardware that has been certified for use with ReliantHA.

I. SOFTWARE REQUIREMENTS:

To install and use SCO ReliantHA, you must have the following software:


                 SCO ReliantHA UnixWare High Availability Software

                 UnixWare 7 operating system

                 If NFS failover is required, then the Online Data Manager
                 (ODM) is needed

                 C++ UDK on at least one node


II. HARDWARE REQUIREMENTS:
SCO recommends that your system meet or exceed the following requirements for ReliantHA:

Servers: Two or more UnixWare 7 servers (maximum of four).

CD-ROM drive: An appropriate CD-ROM drive on each server, or one accessible to all servers.

Shared disk drives: Optional, for shared storage.

Ethernet network adapters: At least two Ethernet network adapters per server to be used to build public and ReliantHA private networks. Additional Ethernet network adapters can be configured using the MAC Switch Driver (MSW) to provide network fault tolerance.

SCSI adapters: One SCSI adapter for the operating system disk, and one SCSI adapter for each SCSI chain of shared disks.

Cables (Null-modem serial cables - optional):


                        2 nodes require 1 cable
                        3 nodes require 3 cables
                        4 nodes require 6 cables


In general, a fully connected cluster of n nodes needs n(n-1)/2 null-modem cables, which is what the table above shows. An additional multi-ported SCSI cable is required for each chain of shared disks.
Disk space: For UnixWare 7 on each server, at least 10MB of free space in each of the / and /usr filesystems.

Serial ports: Optional. (n - 1) serial ports per ReliantHA node where 'n' is the number of nodes in the cluster.

RAM: At least 64MB on each ReliantHA server.

III. FUNCTIONALITY REQUIREMENTS FOR HARDWARE:

In order to use some features of ReliantHA, certain functionality is required of the hardware:


              -  Servers

              -  Host bus adapters

              -  Network adapters

              -  Shared disks


NOTE: All equipment (HBAs, disks, RAID, CD-ROM and tape drives, for example) connected to the shared SCSI bus must be able to function in a multi-initiator and multi-target way. Check with the equipment's manufacturer to find out if the equipment complies.
Server features:

Servers must be PCI bus-based if shared disks are to be used.

Host bus adapter features:

- Single-ended is restricted to systems that use short cables.

- Wide provides up to 16 SCSI IDs and data transfer of up to 20MB/s, allowing more disks and faster transfer.

- Ultra provides data transfer of up to 40MB/s for high-speed shared disks.

- Differential is recommended for shared disks and long-cable systems; most RAID implementations require it.

NOTE: All HBAs should be tolerant to third-party reset. They should permit multiple initiators to access multiple targets simultaneously. Target mode is not required.

Network adapter features:

- Programmable primary MAC addressing allows more than one adapter to be used on a private network and is required for adapters on private networks.

- Programmable multicast MAC addressing enables IP failover on public networks and is required for adapters on public networks.

Shared disk features:

SCSI hard disk drives for use in a JBOD setup within a cluster environment must support multiple initiators on the bus. The disk must be able to keep track of separate synchronization and speed negotiations from different initiators, and respond to subsequent data transfer requests from them correctly using those negotiations. Note that some disk drives only accept requests using the results of the last successful negotiation.

IV. ReliantHA-COMPATIBLE HARDWARE

The following sets of hardware have been tested for use in two-node and four-node cluster environments. This data is not exhaustive; it is a list of what is currently known to work.

Host bus adapters for shared storage:


              Manufacturer  Model  Bus  Details  ReliantHA support

              Adaptec       2940   PCI   S, N     Shared disks
              Adaptec      2940W   PCI   S, W     Shared disks
              Adaptec     2940UW   PCI   S, W, U  Shared disks
              Adaptec      2944W   PCI   D, W     Shared disks, RAID units
              Qlogic       1020    PCI   D, W, U  Shared disks, RAID units


Legend:

              D = Differential
              N = Narrow
              S = Single-ended
              U = Ultra
              W = Wide


Network adapters:

              Manufacturer  Model  Bus  Details  ReliantHA support

              3Com         3C900   PCI   10Mbit/s  Private and public networks
              Compaq   NetFlex/3   PCI   10Mbit/s  Private and public networks
              Compaq   NetFlex/3  EISA   10Mbit/s  Private and public networks
              Digital      DE500   PCI  100Mbit/s  Private networks
              Intel     Pro100/B   PCI  10/100Mbit/s Private (100Mbit/s) and public
                                                     (10Mbit/s) networks


RAID systems as shared storage:

              Manufacturer  Model  Bus  Details  ReliantHA support

              Data General  2000D  SCSI-2  Clariion  Adaptec 2944W, Qlogic 1020
              Digital       SC4600 SCSI-2  StorageWorks  Adaptec 2944W


See Also
The online SCO ReliantHA User's Guide

#2 Posted 2003-03-28 10:22

Question 1: How do I edit my own application files -- the ones kept under /usr/opt/reliant/build? I wrote mine by following the examples, but as soon as a switchover happens it reports BUSY!
Question 2: Instead of the serial port, can I just use two network cards -- that is, three NICs in total, with two of them used for monitoring?

#3 Posted 2003-03-30 10:23

I think that should work as well.

#4 Posted 2003-04-02 09:34

I don't think the same kind of network should be used for both; otherwise false failure detection is likely.

#5 Posted 2003-06-06 11:07

Yes, that works; no problem.

#6 Posted 2003-08-15 21:42

Moderator Answer, I really admire this article. Could you possibly burn me a copy of the clustering software? I would be very grateful.

#7 Posted 2003-08-16 15:29

Sigh, it looks like that is not going to happen.

#8 Posted 2003-08-16 15:38

Is there anyone kind enough to help? I would like to set up clustered (HA) environments under both Solaris and UnixWare for x86. I already have the hardware, but no access to the software. I am willing to pay for a copy of each just to make this happen.