Anyone with a Metalink account, please help download two articles

#1  Posted on 2009-06-04 19:53
I'm planning to install RAC 11g on Solaris 10 soon and intend to configure IPMP for the server NICs. From http://dbastreet.com/docs/ChecklistSolaris11g.pdf I found a few Oracle articles about IPMP, but without a Metalink account I can't read them. Could someone with a Metalink account please download them and email them to me, or paste them in a reply? Thanks in advance.
I need these two articles:


https://metalink.oracle.com/meta ... T&p_id=368464.1
https://metalink.oracle.com/meta ... T&p_id=283107.1


My email: lijt@bupticet.com

#2  Posted on 2009-06-04 22:38
I've already found the documents, see:
http://www.itpub.net/viewthread. ... age%3D1#pid13695652

By the way, what's the relationship between ITPUB and CU? Are the accounts shared?

#3  Posted on 2009-06-05 09:03
Subject:  How to Setup IPMP as Cluster Interconnect
Doc ID:  368464.1    Type:  HOWTO
Modified Date:  14-JAN-2009    Status:  MODERATED

In this Document
  Goal
  Solution
  References

This document is being delivered to you via Oracle Support's Rapid Visibility (RaV) process, and therefore has not been subject to an independent technical review.

Applies to:
Oracle Server - Enterprise Edition - Version: 10.1.0.2 to 11.1.0.7
Sun Solaris SPARC (64-bit)
Solaris Operating System (SPARC 64-bit)


Updated on 14-Jan-2009
Goal
The goal is to show how IPMP can be used as cluster interconnect in a RAC environment.
Solution

To start with the IPMP setup, use Note 283107.1.
The IP addresses have been changed to show how it works:

- Physical IP : 192.168.0.99
- Test IP for ce0 : 192.168.0.65
- Test IP for ce1 : 192.168.0.66

oifcfg requires a specific interface name when configuring the private interconnect. This cannot be done with IPMP, because there are always two interfaces and the physical IP is switched to whichever interface is currently active.

The recommended solution is not to configure any private interface.

The following steps need to be done to use IPMP for the cluster interconnect (example commands follow this list):
1. If the private interface has already been configured, delete the interface with 'oifcfg delif'.
2. Set the CLUSTER_INTERCONNECTS parameter in the spfile/init.ora to the physical IP which is swapped by IPMP.
3. Set CLUSTER_INTERCONNECTS for your ASM instances as well.
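
A sketch of these steps (the instance names ORCL1/+ASM1 and the interface name ce0 are examples for illustration, not taken from this note; adjust to your environment):

  # check what is currently registered, then remove the private interface
  $ oifcfg getif
  $ oifcfg delif -global ce0

  -- in SQL*Plus, point each database and ASM instance at the IPMP physical IP
  SQL> alter system set cluster_interconnects='192.168.0.99' scope=spfile sid='ORCL1';
  SQL> alter system set cluster_interconnects='192.168.0.99' scope=spfile sid='+ASM1';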

ATTENTION:

Oracle Clusterware must also use the same physical interface, otherwise a failed interface will only be recognized by the instances, and an instance will be evicted after 10 minutes (this mechanism is called IMR). Oracle Clusterware uses the private hostname for communication, so the private hostname in /etc/hosts must be set to the physical IP (192.168.0.99) that is switched from one interface to the other. The same private hostname must also be used in the Oracle Clusterware configuration during installation.
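
For illustration only (the private hostname racnode1-priv is a made-up example, not from this note), the /etc/hosts entry on node 1 would then look like:

  192.168.0.99    racnode1-priv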


References
Note 283107.1 - Configuring Solaris IP Multipathing (IPMP) for the Oracle 10g VIP
Keywords
RAC ; SOLARIS ; CLUSTER_INTERCONNECTS ;

#4  Posted on 2009-06-05 09:06
Subject:  Configuring Solaris IP Multipathing (IPMP) for the Oracle 10g VIP
Doc ID:  283107.1    Type:  BULLETIN
Modified Date:  20-MAR-2009    Status:  PUBLISHED


PURPOSE
-------
To prevent the public LAN from becoming a single point of failure, Oracle highly recommends
configuring a redundant set of public network interface cards (NIC's) on each cluster node.
On Sun Solaris platforms, our recommendation is to use Solaris IP Multipathing (IPMP)
to achieve redundancy, and to configure the Oracle 10g Virtual IP (VIP) on the redundant set of NIC's
assigned to the same IPMP group.


What is the difference between VIP and IPMP ?
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
   IPMP can fail over an address to another interface on the same node, but cannot fail over to another node.
   The Oracle VIP can fail over to another interface on the same node or to another host in the cluster.


This note will go over the basic configuration steps required to configure IPMP, plus the steps
to configure the Oracle 10g VIP over the redundant set of NIC's.

Note: Sun Trunking, an interface teaming functionality provided by Sun, may also be used to achieve
      redundancy for the public interfaces.


SCOPE & APPLICATION
-------------------
This article is intended for experienced DBAs and Support Engineers.


0. PRE-CONFIGURATION NOTES
------------------------------------------------------------------------------
If you are using Oracle 10g Release 1, you need to apply either Oracle patch set 10.1.0.4
or patch #3714210 (on top of 10.1.0.3) for the VIP to be able to take advantage of
IPMP on the public network.  Please also note that this note does not cover redundancy for the
private interconnect network, which we also recommend making redundant using 3rd party
technology such as Solaris IPMP.


1. HARDWARE CONFIGURATION
------------------------------------------------------------------------------
In order to make the public network redundant, a minimum of two NIC's need to be installed and
cabled correctly on each cluster node.  In a standard IPMP configuration, one of the NIC's will
be used as the primary link, through which all communication goes.  Upon failure of the
primary link, IPMP will automatically fail the physical & virtual (Oracle VIP) IP addresses over
to the standby NIC.


         +----------------+                         +----------------+
         |     Server     |                         |     Server     |
         +--+----------+--+                         +--+----------+--+
           ce0        ce1                             ce0        ce1
            |(primary) |(standby)    ==========>       |(failed)  |(primary)
            |          |                               |          |
          (vip)        |                               |        (vip)
            |          |                               |          |


In the example above, a server has two public NIC's named ce0 and ce1, each configured and
cabled correctly.


2. SERVER FIRMWARE CONFIGURATION
------------------------------------------------------------------------------
In order to avoid MAC address conflicts between the primary and standby NIC's, a unique
ethernet MAC address must be assigned to each network interface (NIC) on the server.  On
Solaris, this can be done by setting the "local-mac-address?" PROM variable to TRUE (the
default value is FALSE) on each cluster node.
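
One way to set this from a running Solaris system is the eeprom utility (a sketch; the variable can also be set from the OBP ok prompt, and a reboot is typically required for the change to take effect):

  # eeprom "local-mac-address?=true"
  # eeprom "local-mac-address?"
  local-mac-address?=true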


3. NETWORK CONFIGURATION
------------------------------------------------------------------------------
In order to configure the VIP over Solaris IPMP, a minimum of four public IP addresses must be
prepared for each server within the cluster.

     - One physical IP address bound to the primary interface (the static IP address of the server)
     - One unused IP address, which will be configured by Oracle as the VIP for client access
     - One test IP address bound to each interface, used by IPMP for failure detection (both primary and standby)

All four public IP addresses need to reside on the same network subnet.  The following IP addresses
will be used in the example below.

     - Physical IP :     146.56.77.30
     - Test IP for ce0 : 146.56.77.31
     - Test IP for ce1 : 146.56.77.32
     - Oracle VIP :      146.56.78.1


4. SOLARIS IP MULTIPATHING (IPMP) CONFIGURATION
------------------------------------------------------------------------------
All NIC's that are to be used by the VIP must be assigned to the same IPMP group.  This is to
ensure that IPMP will automatically relocate the VIP whenever the primary group member (NIC)
experiences a failure.  The following is an example configuration for two NIC's (ce0 and ce1),
configured in the same IPMP group used for Oracle client connection.  In this example both
NIC's belong to the same IPMP group "orapub".


  /etc/hostname.ce0 configuration (Primary NIC, where physical IP 146.56.77.30 is configured on)
  146.56.77.30 netmask + broadcast + group orapub up addif 146.56.77.31 deprecated -failover netmask + broadcast + up

  /etc/hostname.ce1 configuration (Standby NIC)
  146.56.77.32 netmask + broadcast + deprecated group orapub -failover standby up


With the above configuration, the "ifconfig -a" output should look like the following (NOTE: the ether value is only visible when ifconfig is run as root):


root@jpsun1580[ / ]%>
lo0: flags=1000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4> mtu 8232 index 1
        inet 127.0.0.1 netmask ff000000
ce0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 3
        inet 146.56.77.30 netmask fffffc00 broadcast 146.56.79.255
        groupname orapub
        ether 8:0:20:ee:c5:74
ce0:1: flags=9040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,NOFAILOVER> mtu 1500 index 3
        inet 146.56.77.31 netmask fffffc00 broadcast 146.56.79.255
ce1: flags=69040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,NOFAILOVER,STANDBY,INACTIVE> mtu 1500 index 4
        inet 146.56.77.32 netmask fffffc00 broadcast 146.56.79.255
        groupname orapub
        ether 8:0:20:ee:c5:75
ce2: flags=1008843<UP,BROADCAST,RUNNING,MULTICAST,PRIVATE,IPv4> mtu 1500 index 5
        inet 172.16.1.1 netmask ffffff80 broadcast 172.16.1.127
        ether 8:0:20:ee:c5:77
ce3: flags=1008843<UP,BROADCAST,RUNNING,MULTICAST,PRIVATE,IPv4> mtu 1500 index 6
        inet 172.16.0.129 netmask ffffff80 broadcast 172.16.0.255
        ether 8:0:20:ee:c5:76
clprivnet0: flags=1009843<UP,BROADCAST,RUNNING,MULTICAST,MULTI_BCAST,PRIVATE,IPv4> mtu 1500 index 7
        inet 172.16.193.1 netmask ffffff00 broadcast 172.16.193.255
        ether 0:0:0:0:0:1
root@jpsun1580[ / ]%>


Note that the physical IP address 146.56.77.30 is configured on the primary interface ce0,
while the test IP addresses for the two NIC's (marked as NOFAILOVER) are configured on ce0:1
and ce1 respectively.  Also note that both ce0 and ce1 belong to the same IPMP group "orapub",
which means that the physical IP address 146.56.77.30 will automatically relocate to an
available NIC (ce1) whenever the current NIC (ce0) experiences a failure.  The three entries
ce2, ce3 and clprivnet0 are private network paths used by Sun Cluster and RAC for internode
cluster communications.

5. ORACLE VIRTUAL IP CONFIGURATION
------------------------------------------------------------------------------
Having configured IPMP correctly, the Oracle VIP can now take advantage of IPMP for public
network redundancy. The VIP should now be configured to use all NIC's assigned to the same
public IPMP group.  By doing this Oracle will automatically choose the primary NIC within the
group to configure the VIP, and IPMP will be able to fail over the VIP within the IPMP group
upon a single NIC failure.


   o New 10g RAC installation
   ^^^^^^^^^^^^^^^^^^^^^^^^^^
      At the second screen in VIPCA (VIP Configuration Assistant, 1 of 2), select all NIC's
      within the IPMP group on which the VIP should run.


   o Existing 10g RAC installation
   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      For existing 10g RAC installations, use srvctl to modify the VIP to use all the NIC's
      within the same IPMP group.  The following example configures the VIP for jpsun1580
      to use the two NIC's specified on the command line.

        # srvctl stop nodeapps -n jpsun1580
        # srvctl modify nodeapps -n jpsun1580 -o /u01/app/oracle/product/10gdb -A 146.56.78.1/255.255.252.0/ce0\|ce1
        # srvctl start nodeapps -n jpsun1580
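
The resulting VIP definition can be checked afterwards; a sketch (the -a flag of srvctl config nodeapps displays the VIP configuration):

        # srvctl config nodeapps -n jpsun1580 -a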



6. VIP + IPMP BASIC BEHAVIOR (SINGLE FAILURES AND TOTAL FAILURES)
------------------------------------------------------------------------------
Once started, the VIP should run on the primary member of the IPMP group.
In the following example, the VIP 146.56.78.1 is configured on top of ce0, as a logical interface named ce0:2.
The physical IP address 146.56.77.30 is also configured on ce0.


  ce0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 3
          inet 146.56.77.30 netmask fffffc00 broadcast 146.56.79.255
          groupname orapub
          ether 8:0:20:ee:c5:74
  ce0:1: flags=9040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,NOFAILOVER> mtu 1500 index 3
          inet 146.56.77.31 netmask fffffc00 broadcast 146.56.79.255
  ce0:2: flags=9040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4> mtu 1500 index 3
          inet 146.56.78.1 netmask fffffc00 broadcast 146.56.79.255
  ce1: flags=69040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,NOFAILOVER,STANDBY,INACTIVE> mtu 1500 index 4
          inet 146.56.77.32 netmask fffffc00 broadcast 146.56.79.255
          groupname orapub
          ether 8:0:20:ee:c5:75


Upon failure of the primary interface (ce0), IPMP will automatically relocate the physical &
virtual IP addresses to the next available NIC within the same IPMP group.  In the following
example, the physical IP and the VIP have both automatically relocated to ce1:1 and ce1:2.  
Note that the test IP addresses on both NIC's do not relocate, as they are used exclusively by
IPMP for failure detection purposes.


  ce0: flags=1000843<BROADCAST,MULTICAST,IPv4,NOFAILOVER,FAILED> mtu 1500 index 3
          inet 0.0.0.0 netmask 0
          groupname orapub
          ether 8:0:20:ee:c5:74
  ce0:1: flags=9040843<UP,BROADCAST,MULTICAST,DEPRECATED,IPv4,NOFAILOVER,FAILED> mtu 1500 index 3
          inet 146.56.77.31 netmask fffffc00 broadcast 146.56.79.255
  ce1: flags=69040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,> mtu 1500 index 4
          inet 146.56.77.32 netmask fffffc00 broadcast 146.56.79.255
          groupname orapub
          ether 8:0:20:ee:c5:75
  ce1:1: flags=29040843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,STANDBY> mtu 1500 index 4
          inet 146.56.77.30 netmask fffffc00 broadcast 146.56.79.255
  ce1:2: flags=29040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,STANDBY> mtu 1500 index 4
          inet 146.56.78.1 netmask fffffc00 broadcast 146.56.79.255


Once the failure on ce0 is repaired, IPMP will automatically fail back the physical and Oracle
virtual IP addresses to the original primary interface (ce0).   All intra-node VIP
failovers/failbacks are handled by IPMP and not by Oracle.


  ce0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 3
          inet 146.56.77.30 netmask fffffc00 broadcast 146.56.79.255
          groupname orapub
          ether 8:0:20:ee:c5:74
  ce0:1: flags=9040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,NOFAILOVER> mtu 1500 index 3
          inet 146.56.77.31 netmask fffffc00 broadcast 146.56.79.255
  ce0:2: flags=9040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4> mtu 1500 index 3
          inet 146.56.78.1 netmask fffffc00 broadcast 146.56.79.255
  ce1: flags=69040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,NOFAILOVER,STANDBY,INACTIVE> mtu 1500 index 4
          inet 146.56.77.32 netmask fffffc00 broadcast 146.56.79.255
          groupname orapub
          ether 8:0:20:ee:c5:75



Upon failure of all public NIC's (total failure), Oracle CRS will relocate the VIP to the next
available node within the cluster.
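
A single-NIC failure can also be exercised without pulling a cable by temporarily detaching the primary NIC with the Solaris if_mpadm utility (a sketch; the behavior should match the failover and failback shown above):

  # if_mpadm -d ce0      <- detach ce0 from the IPMP group; the addresses move to ce1
  # if_mpadm -r ce0      <- reattach ce0; the addresses fail back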



RELATED DOCUMENTS
-----------------
For details on Solaris IPMP, please refer to the following Solaris documentation available at Sun.com:
  o Solaris IP Multipathing Data Sheet
  o Solaris 9 9/04 System Administrator Collection >> System Administration Guide: IP Services
  o Solaris 10 System Administrator Collection >> System Administration Guide: IP Services

For information on how to configure IPMP for the RAC Cluster Interconnect, please refer to Note 368464.1, "How to Setup IPMP as Cluster Interconnect".


#6  Posted on 2009-06-05 13:17
Originally posted by silverlijt on 2009-6-4 22:38:
I've already found the documents, see:
http://www.itpub.net/viewthread.php?tid=1172868&pid=13695652&page=1&extra=page%3D1#pid13695652

By the way, what's the relationship between ITPUB and CU? Are the accounts shared?

Yep, they're shared now.

#7  Posted on 2009-06-05 13:18
Originally posted by silverlijt on 2009-6-4 22:38:
I've already found the documents, see:
http://www.itpub.net/viewthread. ... age%3D1#pid13695652

By the way, what's the relationship between ITPUB and CU? Are the accounts shared?

It's the general trend.

#8  Posted on 2009-06-05 13:20
It seems that an ITPUB account can be used on CU, but an account registered on CU can't be used on ITPUB.

#9  Posted on 2009-06-05 13:26

Reply to lijiangt's post #8

You could try adding a suffix (_cu) at the end.

Not sure what the final integration will look like.

#10  Posted on 2009-06-05 13:50
Subject:  Configuring Linux for the Oracle 10g VIP or private interconnect using bonding driver
Doc ID:  298891.1    Type:  BULLETIN
Modified Date:  02-APR-2008    Status:  PUBLISHED


PURPOSE
-------
To prevent the public LAN from becoming a single point of failure, Oracle highly recommends
configuring a redundant set of public network interface cards (NIC's) on each cluster node.
On Linux platforms, network redundancy can be achieved using NIC teaming (configuring multiple
interfaces into a team with vendor tools, or using the Linux kernel bonding module).

This note will go over the two possible choices for achieving redundancy on Linux.

As inter-node IP address failover is achieved by using the Oracle managed VIP, 3rd party
clusterware based inter-node IP address failover technologies should not be configured on the
same set of NIC's that are used by the Oracle VIP.  Only intra-node IP address failover
functionalities should be used in conjunction with the Oracle VIP.


SCOPE & APPLICATION
-------------------
This article is intended for experienced DBAs and Support Engineers.


1. NIC TEAMING BY CONFIGURING MULTIPLE INTERFACES TO A TEAM
-----------------------------------------------------------
Various hardware vendors provide network interface drivers and utilities to achieve NIC teaming.
Please consult your hardware vendor for details on how to configure your system for NIC teaming.


2. NIC TEAMING USING THE LINUX KERNEL BONDING MODULE
----------------------------------------------------
The Linux kernel includes a bonding module that can be used to achieve software level NIC
teaming. The kernel bonding module can be used to team multiple physical interfaces to a single
logical interface, which is used to achieve fault tolerance and load balancing.  The bonding
driver is available as part of the Linux kernel version 2.4.12 or newer versions. Since the
bonding module is delivered as part of the Linux kernel, it can be configured independently
from the interface driver vendor (different interfaces can constitute a single logical
interface).

The configuration steps are different among Linux distributions.  This note will go over the
steps required to configure the bonding module in RedHat Enterprise Linux 3.0.

In the following example, two physical interfaces (eth0 and eth1) will be bonded together to a
single logical interface (bond0), and the VIP will run on top of the single logical interface.

A sample network configuration is as follows:

  Default Gateway:
    192.168.1.254

  Netmask:
    255.255.255.0

  Interface configuration before bonding:
    eth0: IP Address 192.168.1.1
    eth1: IP Address 192.168.1.2

After configuring the bonding driver, a logical interface named bondX (where X is a number
starting from zero) will be created, representing the team of interfaces.

  Interface configuration after bonding:
    bond0: IP Address 192.168.1.10


2-1 CONFIGURING THE BONDING DRIVER
----------------------------------
Since the bonding driver is delivered as a kernel module, the following lines need to be added
to /etc/modules.conf as root.

alias bond0 bonding
options bond0 miimon=100

For details on the "options" parameter, please refer to the documents referred to in section
2.8. In the above configuration, the MII link monitoring interval is set to 100ms.  MII is used
to monitor the interface link status, and this is a typical configuration for mission critical
systems that require fast failure detection.

   Note: MII is an abbreviation of "Media Independent Interface".  Many popular fast ethernet
         adapters use MII to autonegotiate the link speed and duplex mode.

By default, the bonding driver will transmit outgoing packets in a round-robin fashion using
each "slave" interface.  The above example uses this default behavior.  For details on changing
this behavior, please also refer to the documents referred to in section 2.8.
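
For example, if round-robin transmission is not desired, the bonding mode can be set explicitly in /etc/modules.conf (a sketch, not part of the original note; mode=1 selects active-backup on the 2.4 kernel bonding driver):

alias bond0 bonding
options bond0 miimon=100 mode=1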

If you want to use multiple bonding interfaces, you should modify /etc/modules.conf as in the example below.

alias bond0 bonding
alias bond1 bonding
options bond0 miimon=100 max_bonds=2
options bond1 miimon=100 max_bonds=2

(In this example we have 2 bonding interfaces.)

The "max_bonds" parameter defines how many bonding interfaces we are
going to have.
For details on the "max_bonds" parameter, please refer to the documents
referred to in section 2.8.

2-2. CONFIGURING THE bond0 INTERFACE
------------------------------------
On RHEL 3.0, network interface parameters are configured in configuration files named
"ifcfg-<interface name>", found in the /etc/sysconfig/network-scripts directory.  In order to
enable the bonding driver, a configuration file "ifcfg-bond0" needs to be created with
appropriate parameters.  As root, create the file "/etc/sysconfig/network-scripts/ifcfg-bond0"
as shown below.

DEVICE=bond0
IPADDR=192.168.1.10
NETMASK=255.255.255.0
NETWORK=192.168.1.0
BROADCAST=192.168.1.255
ONBOOT=yes
BOOTPROTO=none
USERCTL=no
    (Please change the IP address, netmask, and broadcast to match your network configuration)


2-3. CHANGING THE CONFIGURATION FOR THE EXISTING INTERFACES
-----------------------------------------------------------
As root, please change the configuration file "/etc/sysconfig/network-scripts/ifcfg-eth0" as
shown below:

DEVICE=eth0
USERCTL=no
ONBOOT=yes
MASTER=bond0
SLAVE=yes
BOOTPROTO=none

Please also change the configuration file "/etc/sysconfig/network-scripts/ifcfg-eth1" as shown
below:

DEVICE=eth1
USERCTL=no
ONBOOT=yes
MASTER=bond0
SLAVE=yes
BOOTPROTO=none

These steps are necessary to associate the bond0 interface to its slave interfaces (eth0 and eth1).


2-4. RESTART THE NETWORK
------------------------
Execute the following commands as root for the changes to take effect.

# service network stop
# service network start

If the configuration is correct, the above commands should both return  [ OK ].


2-5. CONFIRMING THE NEW CONFIGURATION
-------------------------------------
The following messages should appear in your syslog (/var/log/messages).

Jan 28 16:00:09 rac01 kernel: bonding: MII link monitoring set to 100 ms
Jan 28 16:00:09 rac01 kernel: ip_tables: (C) 2000-2002 Netfilter core team
Jan 28 16:00:11 rac01 ifup: Enslaving eth0 to bond0
Jan 28 16:00:11 rac01 kernel: bonding: bond0: enslaving eth0 as a backup interface with a down link.
Jan 28 16:00:11 rac01 kernel: e1000: eth0: e1000_watchdog: NIC Link is Up 1000 Mbps Full Duplex
Jan 28 16:00:11 rac01 kernel: bonding: bond0: link status definitely up for interface eth0.
Jan 28 16:00:11 rac01 kernel: bonding: bond0: making interface eth0 the new active one.
Jan 28 16:00:11 rac01 ifup: ENslaving eth1 to bond0
Jan 28 16:00:11 rac01 kernel: bonding: bond0: enslaving eth1 as a backup interface with a down link.
Jan 28 16:00:11 rac01 kernel: e1000: eth1: e1000_watchdog: NIC Link is Up 1000 Mbps Full Duplex
Jan 28 16:00:11 rac01 kernel: bonding: bond0: link status definitely up for interface eth1.
Jan 28 16:00:11 rac01 network: Bringing up interface bond0:  succeeded

The "ifconfig -a" command should return the following output.

bond0     Link encap:Ethernet  HWaddr 00:0C:29C:83:E8  
          inet addr:192.168.1.10  Bcast:192.168.1.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MASTER MULTICAST  MTU:1500  Metric:1
          RX packets:27 errors:0 dropped:0 overruns:0 frame:0
          TX packets:1 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:3462 (3.3 Kb)  TX bytes:42 (42.0 b)

eth0      Link encap:Ethernet  HWaddr 00:0C:29C:83:E8  
          inet addr:192.168.1.10  Bcast:192.168.1.255  Mask:255.255.255.0
          UP BROADCAST RUNNING SLAVE MULTICAST  MTU:1500  Metric:1
          RX packets:13 errors:0 dropped:0 overruns:0 frame:0
          TX packets:1 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:1701 (1.6 Kb)  TX bytes:42 (42.0 b)
          Interrupt:10 Base address:0x1424

eth1      Link encap:Ethernet  HWaddr 00:0C:29C:83:E8  
          inet addr:192.168.1.10  Bcast:192.168.1.255  Mask:255.255.255.0
          UP BROADCAST RUNNING SLAVE MULTICAST  MTU:1500  Metric:1
          RX packets:14 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:1761 (1.7 Kb)  TX bytes:0 (0.0 b)
          Interrupt:11 Base address:0x14a4

(Note that other interfaces will also appear in a typical RAC installation)
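
The bonding driver also exposes its status through the proc filesystem; as a quick sanity check (the exact output format varies with the kernel version, so this is only a sketch), the following should list the bonding mode, the MII status, and both slave interfaces (eth0 and eth1):

# cat /proc/net/bonding/bond0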


2-6. CONSIDERATIONS FOR CRS INSTALLATION
-----------------------------------------
During CRS installation, choose "Public" for the "bond0" interface on the "Specify Network
Interface Usage" OUI page.  If the "eth0" and "eth1" interfaces appear in OUI, then make sure
to choose "Do not use" for their types.

2-6a. CONFIGURING BONDING DEVICES AFTER CRS INSTALLATION
-----------------------------------------
You can change your interconnect/public interface configuration using the oifcfg command.
Please refer to Note 283684.1; a sketch is shown below.
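
A hedged sketch of the kind of change Note 283684.1 describes (the subnet 192.168.1.0 and the interface names are taken from the example in this note; verify the exact steps against that note):

# oifcfg getif
# oifcfg delif -global eth0
# oifcfg setif -global bond0/192.168.1.0:public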

2-7. VIPCA CONFIGURATION
------------------------
The single interface name (i.e. "bond0") representing the redundant set of NIC's is the
interface that should be specified on the second screen of VIPCA (VIP Configuration Assistant,
1 of 2). Make sure not to select any of the underlying non-redundant NIC names in VIPCA, as
they should not be used by Oracle in a NIC teaming configuration.


2-8. OPTIONS FOR THE BONDING DRIVER
-----------------------------------
Various advanced interface, driver and switch configurations are available for achieving a
highly available network configuration. Please refer to the "Linux Ethernet Bonding Driver
mini-howto" for more details.

http://www.kernel.org/pub/linux/ ... working/bonding.txt


RELATED DOCUMENTS
-----------------
Linux Ethernet Bonding Driver mini-howto:
http://www.kernel.org/pub/linux/ ... working/bonding.txt

Red Hat Enterprise Linux 3: Reference Guide -> Appendix A. General Parameters and Modules -> A.3 Ethernet Parameters:
http://www.redhat.com/docs/manua ... dules-ethernet.html

Note 283684.1 - How to Change Interconnect/Public Interface IP Subnet in a 10g Cluster
Note 291962.1 - Setting Up Bonding on SLES 9
Note 291958.1 - Setting Up Bonding in Suse SLES8