Chinaunix
[Repost] Oracle RAC on Sun Cluster 3.1: with this you won't need to call Sun support

Posted 2005-05-13 13:53
This looked quite good, so I'm reposting it here; I don't know who the original author is.
Software requirements:
Solaris 9
Sun Cluster 3.1
Veritas VxVM 3.5 (requires a VxVM license and a Cluster Volume Manager (CVM) license)
Oracle 9i Enterprise Edition
Sun StorEdge 3310 array driver

Hardware requirements:
2 x Sun 480 servers
2 x Sun QFE quad-port network cards
2 x Sun SCSI cards (X6758A)
2 x Sun StorEdge 3310 arrays

1. Install the hardware


2. Configure the StorEdge 3310 arrays

Partition both arrays identically, as follows:

c2t5d0 quorum disk 512 MB
c2t5d1 ora-rac-dg log disk 256 MB
c2t5d2 ora-rac-dg data disk 140 GB
c2t5d3 ora-arc-dg log disk 256 MB
c2t5d4 ora-arc-dg data disk (remaining space)
c3t5d0 quorum disk 512 MB
c3t5d1 ora-rac-dg log disk 256 MB
c3t5d2 ora-rac-dg data disk 140 GB
c3t5d3 ora-arc-dg log disk 256 MB
c3t5d4 ora-arc-dg data disk (remaining space)

3. Install the operating system

Using a Sun 480 with 36 GB disks as an example, partition as follows:
# format
Searching for disks...done


AVAILABLE DISK SELECTIONS:
0. c1t0d0 <SUN36G cyl 24620 alt 2 hd 27 sec 107>
/pci@9,600000/SUNW,qlc@2/fp@0,0/ssd@w2100000c50695b66,0
1. c1t1d0 <SUN36G cyl 24620 alt 2 hd 27 sec 107>
/pci@9,600000/SUNW,qlc@2/fp@0,0/ssd@w2100000c50695de5,0
Specify disk (enter its number): 0
selecting c1t0d0
[disk formatted]
Warning: Current Disk has mounted partitions.

format> p
partition> p
Current partition table (original):
Total disk cylinders available: 24620 + 2 (reserved cylinders)

Part Tag Flag Cylinders Size Blocks
0 root wm 5808 - 13067 10.00GB (7260/0/0) 20974140
1 var wm 13068 - 14519 2.00GB (1452/0/0) 4194828
2 backup wm 0 - 24619 33.92GB (24620/0/0) 71127180
3 swap wu 0 - 5807 8.00GB (5808/0/0) 16779312
4 unassigned wm 0 0 (0/0/0) 0
5 unassigned wm 14520 - 21779 10.00GB (7260/0/0) 20974140
6 unassigned wm 21780 - 22142 512.06MB (363/0/0) 1048707
7 unassigned wm 0 0 (0/0/0) 0
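As a sanity check on the table above: each slice's block count is cylinders x heads x sectors per track (27 heads, 107 sectors on this disk), and 512-byte blocks convert to GB at 2,097,152 blocks per GB. A small sketch of the arithmetic (my own helper, not part of format):

```shell
# Verify slice sizes from the format(1M) partition table above.
# blocks = cylinders * heads * sectors_per_track
heads=27
sectors=107
slice_blocks() {
  # $1 = number of cylinders in the slice
  echo $(( $1 * heads * sectors ))
}
root_blocks=$(slice_blocks 7260)   # slice 0: cylinders 5808-13067
echo "root slice: $root_blocks blocks"   # prints: root slice: 20974140 blocks (10.00 GB)
```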


4. Patch the operating system (using the latest EIS patch CD)

# cd /cdrom/eis-cd/sun/install
# ./setup-standard.sh

# cd /cdrom/eis-cd/sun/patch/9
# ../../install/bin/unpack_patches

# cd /tmp/9
# eject cdrom
# ./install_all_patches


5. Install the StorEdge 3310 driver and patch
# ls
112697-04 SUNWqus SUNWqusu SUNWqusux SUNWqusx
# pkgadd -d . all
(answer yes to all prompts)
# patchadd 112697-04


Add the following lines to /kernel/drv/sd.conf:
name="sd" class="scsi" target=5 lun=0;
name="sd" class="scsi" target=5 lun=1;
name="sd" class="scsi" target=5 lun=2;
name="sd" class="scsi" target=5 lun=3;
name="sd" class="scsi" target=5 lun=4;
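The five entries differ only in the LUN number, so if you prefer not to type them out, they can be generated (a helper of my own, not part of the driver package):

```shell
# Emit sd.conf entries for SCSI target 5, LUNs 0-4, matching the lines above.
entries=$(for lun in 0 1 2 3 4; do
  printf 'name="sd" class="scsi" target=5 lun=%d;\n' "$lun"
done)
printf '%s\n' "$entries"
```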

# reboot

After the reboot, use the format command to check that the disk array configuration is correct.
# format
Searching for disks...done

c2t5d0: configured with capacity of 510.00MB
c2t5d1: configured with capacity of 254.00MB
c2t5d2: configured with capacity of 136.71GB
c2t5d3: configured with capacity of 254.00MB
c2t5d4: configured with capacity of 66.62GB
c3t5d0: configured with capacity of 510.00MB
c3t5d1: configured with capacity of 254.00MB
c3t5d2: configured with capacity of 136.71GB
c3t5d3: configured with capacity of 254.00MB
c3t5d4: configured with capacity of 66.62GB


AVAILABLE DISK SELECTIONS:
0. c1t0d0 <SUN36G cyl 24620 alt 2 hd 27 sec 107>
/pci@9,600000/SUNW,qlc@2/fp@0,0/ssd@w2100000c5069619e,0
1. c1t1d0 <SUN36G cyl 24620 alt 2 hd 27 sec 107>
/pci@9,600000/SUNW,qlc@2/fp@0,0/ssd@w2100000c50695ef2,0
2. c2t5d0 <SUN-StorEdge3310-0325 cyl 510 alt 2 hd 64 sec 32>
/pci@8,700000/pci@3/scsi@4/sd@5,0
3. c2t5d1 <SUN-StorEdge3310-0325 cyl 254 alt 2 hd 64 sec 32>
/pci@8,700000/pci@3/scsi@4/sd@5,1
4. c2t5d2 <SUN-StorEdge3310-0325 cyl 35273 alt 2 hd 127 sec 64>
/pci@8,700000/pci@3/scsi@4/sd@5,2
5. c2t5d3 <SUN-StorEdge3310-0325 cyl 254 alt 2 hd 64 sec 32>
/pci@8,700000/pci@3/scsi@4/sd@5,3
6. c2t5d4 <SUN-StorEdge3310-0325 cyl 34112 alt 2 hd 64 sec 64>
/pci@8,700000/pci@3/scsi@4/sd@5,4
7. c3t5d0 <SUN-StorEdge3310-0325 cyl 510 alt 2 hd 64 sec 32>
/pci@8,700000/pci@3/scsi@5/sd@5,0
8. c3t5d1 <SUN-StorEdge3310-0325 cyl 254 alt 2 hd 64 sec 32>
/pci@8,700000/pci@3/scsi@5/sd@5,1
9. c3t5d2 <SUN-StorEdge3310-0325 cyl 35273 alt 2 hd 127 sec 64>
/pci@8,700000/pci@3/scsi@5/sd@5,2
10. c3t5d3 <SUN-StorEdge3310-0325 cyl 254 alt 2 hd 64 sec 32>
/pci@8,700000/pci@3/scsi@5/sd@5,3
11. c3t5d4 <SUN-StorEdge3310-0325 cyl 34112 alt 2 hd 64 sec 64>
/pci@8,700000/pci@3/scsi@5/sd@5,4
Specify disk (enter its number): 0
selecting c1t0d0
[disk formatted]
Warning: Current Disk has mounted partitions.

format> q


6. Configure /etc/hosts and /.profile
rac1:
root@rac1 # cat /etc/hosts
#
# Internet host table
#
127.0.0.1 localhost
192.168.0.201 rac1 loghost a.b
192.168.0.202 rac2

Add the following to /.profile:

PATH=$PATH:/usr/cluster/bin:/etc/vx/bin
export PATH
MANPATH=$MANPATH:/usr/cluster/man:/usr/share/man:/opt/VRTS/man
export MANPATH

rac2:
root@rac2 # cat /etc/hosts
#
# Internet host table
#
127.0.0.1 localhost
192.168.0.202 rac2 loghost a.b
192.168.0.201 rac1
Add the following to /.profile:

PATH=$PATH:/usr/cluster/bin:/etc/vx/bin
export PATH
MANPATH=$MANPATH:/usr/cluster/man:/usr/share/man:/opt/VRTS/man
export MANPATH


7. Install Sun Cluster 3.1 and apply its patch

Set up the primary node:

# cd /opt/suncluster_3_1/SunCluster_3.1/Sol_9/Tools
# ./scinstall
*** Main Menu ***

Please select from one of the following (*) options:

* 1) Establish a new cluster using this machine as the first node
* 2) Add this machine as a node in an established cluster
3) Configure a cluster to be JumpStarted from this install server
4) Add support for new data services to this cluster node
5) Print release information for this cluster node

* ?) Help with menu options
* q) Quit

Option:1

*** Establishing a New Cluster ***


This option is used to establish a new cluster using this machine as
the first node in that cluster.

Once the cluster framework software is installed, you will be asked
for the name of the cluster. Then, sccheck(1M) is run to test this
machine for basic Sun Cluster pre-configuration requirements.

After sccheck(1M) passes, you will be asked for the names of the
other nodes which will initially be joining that cluster. In
addition, you will be asked to provide certain cluster transport
configuration information.

Press Ctrl-d at any time to return to the Main Menu.


Do you want to continue (yes/no) [yes]?
>>> Software Package Installation <<<

Installation of the Sun Cluster framework software packages will take
a few minutes to complete.

Is it okay to continue (yes/no) [yes]?
** Installing SunCluster 3.1 framework **
SUNWscr.....done
SUNWscu.....done
SUNWscnm....done
SUNWscdev...done
SUNWscgds...done
SUNWscman...done
SUNWscsal...done
SUNWscsam...done
SUNWscvm....done
SUNWmdm.....done
SUNWscva....done
SUNWscvr....done
SUNWscvw....done
SUNWfsc.....done
SUNWfscvw...done
SUNWjsc.....done
SUNWjscman..done
SUNWjscvw...done
SUNWkscvw...done
SUNWcsc.....done
SUNWcscvw...done
SUNWhscvw...done


Hit ENTER to continue:

>>> Cluster Name <<<

Each cluster has a name assigned to it. The name can be made up of
any characters other than whitespace. It may be up to 256 characters
in length. And, you may want to assign a cluster name which will be
the same as one of the failover logical host names in the cluster.
Create each cluster name to be unique within the namespace of your
enterprise.

What is the name of the cluster you want to establish? TestRAC

>>> Check <<<

This step runs sccheck(1M) to verify that certain basic hardware and
software pre-configuration requirements have been met. If sccheck(1M)
detects potential problems with configuring this machine as a cluster
node, a list of warnings is printed.
Hit ENTER to continue:

Running sccheck ... done
All sccheck tests passed.


Hit ENTER to continue:


>>> Cluster Nodes <<<

This release of Sun Cluster supports a total of up to 16 nodes.

Please list the names of the other nodes planned for the initial
cluster configuration. List one node name per line. When finished,
type Control-D:

Node name:rac2
Node name (Ctrl-D to finish): ^D
This is the complete list of nodes:

rac1
rac2

Is it correct (yes/no) [yes]?

>>> Authenticating Requests to Add Nodes <<<

Once the first node establishes itself as a single node cluster,
other nodes attempting to add themselves to the cluster configuration
must be found on the list of nodes you just provided. The list can be
modified once the cluster has been established using scconf(1M) or
other tools.

By default, nodes are not securely authenticated as they attempt to
add themselves to the cluster configuration. This is generally
considered adequate, since nodes which are not physically connected
to the private cluster interconnect will never be able to actually
join the cluster. However, DES authentication is available. If DES
authentication is selected, you must configure all necessary
encryption keys before any node will be allowed to join the cluster
(see keyserv(1M), publickey(4)).

Do you need to use DES authentication (yes/no) [no]?
>>> Network Address for the Cluster Transport <<<

The private cluster transport uses a default network address of
172.16.0.0. But, if this network address is already in use elsewhere
within your enterprise, you may need to select another address from
the range of recommended private addresses (see RFC 1597 for
details).

If you do select another network address, please bear in mind that
the Sun Clustering software requires that the rightmost two octets
always be zero.

The default netmask is 255.255.0.0; you may select another netmask,
as long as it minimally masks all bits given in the network address
and does not contain any "holes".

Is it okay to accept the default network address (yes/no) [yes]?
Is it okay to accept the default netmask (yes/no) [yes]?
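An aside on the "no holes" rule scinstall mentions above: a valid netmask must have all its 1-bits contiguous from the top. A small sketch of that check (my own helper, not part of scinstall):

```shell
# Return success iff a dotted-quad netmask has contiguous 1-bits (no "holes").
mask_ok() {
  IFS=. read -r a b c d <<EOF
$1
EOF
  m=$(( (a << 24) | (b << 16) | (c << 8) | d ))
  inv=$(( ~m & 0xFFFFFFFF ))
  # For a contiguous mask the bitwise inverse is 2^k - 1, so inv AND (inv+1) is 0.
  [ $(( inv & (inv + 1) )) -eq 0 ]
}

mask_ok 255.255.0.0 && echo "255.255.0.0 ok"        # the Sun Cluster default
mask_ok 255.0.255.0 || echo "255.0.255.0 has holes"
```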
>>> Point-to-Point Cables <<<

The two nodes of a two-node cluster may use a directly-connected
interconnect. That is, no cluster transport junctions are configured.
However, when there are greater than two nodes, this interactive form
of scinstall assumes that there will be exactly two cluster transport
junctions.

Does this two-node cluster use transport junctions (yes/no) [yes]?
>>> Cluster Transport Junctions <<<

All cluster transport adapters in this cluster must be cabled to a
transport junction, or "switch". And, each adapter on a given node
must be cabled to a different junction. Interactive scinstall
requires that you identify two switches for use in the cluster and
the two transport adapters on each node to which they are cabled.


What is the name of the first junction in the cluster [switch1]? sw1

What is the name of the second junction in the cluster [switch2]? sw2

>>> Cluster Transport Adapters and Cables <<<

You must configure at least two transport adapters on each node which
serve as connection points to the private cluster transport. More
than two connection points are allowed, but this interactive form of
scinstall assumes exactly two.

Note that interactive scinstall does not allow you to specify any
special transport adapter properties settings. If your adapters have
special properties which must be set, you may need to use
non-interactive scinstall by specifying a complete set of command
line options. For more information, please refer to the man pages for
your adapters in the scconf_transp_adap family of man pages (e.g.,
scconf_transp_adap_hme(1M)).


Hit ENTER to continue:
Select the first cluster transport adapter to use:

1) qfe0
2) qfe1
3) qfe2
4) qfe3
5) ce1
6) Other
Option: 1
Adapter "qfe0" is an ethernet adapter.

Searching for any unexpected network traffic on "qfe0" ... done
Verification completed. No traffic was detected over a 10 second
sample period.


All transport adapters support the "dlpi" transport type. Ethernet
adapters are supported only with the "dlpi" transport; however, other
adapter types may support other types of transport. For more
information on which transports are supported with which adapters,
please refer to the scconf_transp_adap family of man pages
(scconf_transp_adap_hme(1M), ...).

The "dlpi" transport type will be set for this cluster.

Name of the junction to which "qfe0" is connected [sw1]?
Each adapter is cabled to a particular port on a transport junction.
And, each port is assigned a name. You may explicitly assign a name
to each port. Or, for ethernet switches, you may allow scinstall to
assign a default name for you. The default port name assignment sets
the name to the node number of the node hosting the transport adapter
at the other end of the cable.

For more information regarding port naming requirements, refer to the
scconf_transp_jct family of man pages (e.g.,
scconf_transp_jct_dolphinswitch(1M)).

Use the default port name for the "qfe0" connection (yes/no) [yes]?
---------------------------------

Select the second cluster transport adapter to use:

1) qfe1
2) qfe2
3) qfe3
4) ce1
5) Other

Option: 2

Adapter "qfe2" is an ethernet adapter.

Searching for any unexpected network traffic on "qfe2" ... done
Verification completed. No traffic was detected over a 10 second
sample period.


Name of the junction to which "qfe2" is connected [sw2]?

Use the default port name for the "qfe2" connection (yes/no) [yes]?
>>> Global Devices File System <<<

Each node in the cluster must have a local file system mounted on
/global/.devices/node@<nodeID> before it can successfully participate
as a cluster member. Since the "nodeID" is not assigned until
scinstall is run, scinstall will set this up for you. However, in
order to do this, you must supply the name of either an
already-mounted file system or raw disk partition at this time. This
file system or partition should be at least 512 MB in size.

If an already-mounted file system is used, the file system must be
empty. If a raw disk partition is used, a new file system will be
created for you.

The default is to use /globaldevices.

Is it okay to use this default (yes/no) [yes]?

>>> Automatic Reboot <<<

Once scinstall has successfully installed and initialized the Sun
Cluster software for this machine, it will be necessary to reboot.
After the reboot, this machine will be established as the first node
in the new cluster.

Do you want scinstall to reboot for you (yes/no) [yes]?no

You will need to manually reboot this node in "cluster mode" after
scinstall successfully completes.


Hit ENTER to continue:
>>> Confirmation <<<

Your responses indicate the following options to scinstall:

scinstall -ik \
-C testRAC \
-F \
-T node=rac1,node=rac2,authtype=sys \
-A trtype=dlpi,name=qfe0 -A trtype=dlpi,name=qfe2 \
-B type=switch,name=sw1 -B type=switch,name=sw2 \
-m endpoint=:qfe0,endpoint=sw1 \
-m endpoint=:qfe2,endpoint=sw2

Are these the options you want to use (yes/no) [yes]?
Do you want to continue with the install (yes/no) [yes]?

Checking device to use for global devices file system ... done

Initializing cluster name to "testRAC" ... done
Initializing authentication options ... done
Initializing configuration for adapter "qfe0" ... done
Initializing configuration for adapter "qfe2" ... done
Initializing configuration for junction "sw1" ... done
Initializing configuration for junction "sw2" ... done
Initializing configuration for cable ... done
Initializing configuration for cable ... done


Setting the node ID for "rac1" ... done (id=1)

Setting the major number for the "did" driver ... done
"did" driver major number set to 300

Checking for global devices global file system ... done
Updating vfstab ... done

Verifying that NTP is configured ... done
Installing a default NTP configuration ... done
Please complete the NTP configuration after scinstall has finished.

Verifying that "cluster" is set for "hosts" in nsswitch.conf ... done
Adding the "cluster" switch to "hosts" in nsswitch.conf ... done

Verifying that "cluster" is set for "netmasks" in nsswitch.conf ... done
Adding the "cluster" switch to "netmasks" in nsswitch.conf ... done

Verifying that power management is NOT configured ... done
Unconfiguring power management ... done
/etc/power.conf has been renamed to /etc/power.conf.080403110548
Power management is incompatible with the HA goals of the cluster.
Please do not attempt to re-configure power management.

Ensure that the EEPROM parameter "local-mac-address?" is set to "true" ... done
The "local-mac-address?" parameter setting has been changed to "true".

Ensure network routing is disabled ... done
Network routing has been disabled on this node by creating /etc/notrouter.
Having a cluster node act as a router is not supported by Sun Cluster.
Please do not re-enable network routing.

Hit ENTER to continue:

*** Main Menu ***

Please select from one of the following (*) options:

1) Establish a new cluster using this machine as the first node
2) Add this machine as a node in an established cluster
3) Configure a cluster to be JumpStarted from this install server
* 4) Add support for new data services to this cluster node
* 5) Print release information for this cluster node

* ?) Help with menu options
* q) Quit
Option: q
Log file - /var/cluster/logs/install/scinstall.log.464


Apply the Sun Cluster 3.1 patch:

# cd /opt//SunCluster3.1Patch/9
# patchadd 113801-03

Checking installed patches...
Verifying sufficient filesystem capacity (dry run method)...
Installing patch packages...

Patch number 113801-03 has been successfully installed.
See /var/sadm/patch/113801-03/log for details

Patch packages installed:
SUNWscdev
SUNWscr
SUNWscu
SUNWscvw
#reboot


After the primary node has booted, log in to the second node and install there:

# cd /opt//suncluster_3_1/SunCluster_3.1/Sol_9/Tools
# ./scinstall

*** Main Menu ***

Please select from one of the following (*) options:

* 1) Establish a new cluster using this machine as the first node
* 2) Add this machine as a node in an established cluster
3) Configure a cluster to be JumpStarted from this install server
4) Add support for new data services to this cluster node
5) Print release information for this cluster node

* ?) Help with menu options
* q) Quit

Option:2

*** Adding a Node to an Established Cluster ***


This option is used to add this machine as a node in an already
established cluster. If this is an initial cluster install, there may
only be a single node which has established itself in the new
cluster.

Once the cluster framework software is installed, you will be asked
to provide both the name of the cluster and the name of one of the
nodes already in the cluster. Then, sccheck(1M) is run to test this
machine for basic Sun Cluster pre-configuration requirements.

After sccheck(1M) passes, you may be asked to provide certain cluster
transport configuration information.

Press Ctrl-d at any time to return to the Main Menu.


Do you want to continue (yes/no) [yes]?

>>> Software Package Installation <<<

Installation of the Sun Cluster framework software packages will take
a few minutes to complete.

Is it okay to continue (yes/no) [yes]?


** Installing SunCluster 3.1 framework **
SUNWscr.....done
SUNWscu.....done
SUNWscnm....done
SUNWscdev...done
SUNWscgds...done
SUNWscman...done
SUNWscsal...done
SUNWscsam...done
SUNWscvm....done
SUNWmdm.....done
SUNWscva....done
SUNWscvr....done
SUNWscvw....done
SUNWfsc.....done
SUNWfscvw...done
SUNWjsc.....done
SUNWjscman..done
SUNWjscvw...done
SUNWkscvw...done
SUNWcsc.....done
SUNWcscvw...done
SUNWhscvw...done


Hit ENTER to continue:
>>> Sponsoring Node <<<

For any machine to join a cluster, it must identify a node in that
cluster willing to "sponsor" its membership in the cluster. When
configuring a new cluster, this "sponsor" node is typically the first
node used to build the new cluster. However, if the cluster is
already established, the "sponsoring" node can be any node in that
cluster.

Already established clusters can keep a list of hosts which are able
to configure themselves as new cluster members. This machine should
be in the join list of any cluster which it tries to join. If the
list does not include this machine, you may need to add it using
scconf(1M) or other tools.

And, if the target cluster uses DES to authenticate new machines
attempting to configure themselves as new cluster members, the
necessary encryption keys must be configured before any attempt to
join.

What is the name of the sponsoring node? rac1

>>> Cluster Name <<<

Each cluster has a name assigned to it. When adding a node to the
cluster, you must identify the name of the cluster you are attempting
to join. A sanity check is performed to verify that the "sponsoring"
node is a member of that cluster.

What is the name of the cluster you want to join? testRAC

Attempting to contact "rac1" ... done

Cluster name "testRAC" is correct.

Hit ENTER to continue:
>>> Check <<<

This step runs sccheck(1M) to verify that certain basic hardware and
software pre-configuration requirements have been met. If sccheck(1M)
detects potential problems with configuring this machine as a cluster
node, a list of warnings is printed.

Hit ENTER to continue:

Running sccheck ... done
All sccheck tests passed.


Hit ENTER to continue:
>>> Autodiscovery of Cluster Transport <<<

If you are using ethernet adapters as your cluster transport
adapters, autodiscovery is the best method for configuring the
cluster transport.

Do you want to use autodiscovery (yes/no) [yes]?


Probing .......

The following connections were discovered:

rac1:qfe0 sw1 rac2:qfe0
rac1:qfe2 sw2 rac2:qfe2

Is it okay to add these connections to the configuration (yes/no) [yes]?
>>> Global Devices File System <<<

Each node in the cluster must have a local file system mounted on
/global/.devices/node@<nodeID> before it can successfully participate
as a cluster member. Since the "nodeID" is not assigned until
scinstall is run, scinstall will set this up for you. However, in
order to do this, you must supply the name of either an
already-mounted file system or raw disk partition at this time. This
file system or partition should be at least 512 MB in size.

If an already-mounted file system is used, the file system must be
empty. If a raw disk partition is used, a new file system will be
created for you.

The default is to use /globaldevices.

Is it okay to use this default (yes/no) [yes]?

>>> Automatic Reboot <<<

Once scinstall has successfully installed and initialized the Sun
Cluster software for this machine, it will be necessary to reboot.
The reboot will cause this machine to join the cluster for the first
time.

Do you want scinstall to reboot for you (yes/no) [yes]?

>>> Confirmation <<<

Your responses indicate the following options to scinstall:

scinstall -ik \
-C testRAC \
-N rac1 \
-A trtype=dlpi,name=qfe0 -A trtype=dlpi,name=qfe2 \
-m endpoint=:qfe0,endpoint=sw1 \
-m endpoint=:qfe2,endpoint=sw2

Are these the options you want to use (yes/no) [yes]?
Do you want to continue with the install (yes/no) [yes]?
Checking device to use for global devices file system ... done

Adding node "rac2" to the cluster configuration ... done
Adding adapter "qfe0" to the cluster configuration ... done
Adding adapter "qfe2" to the cluster configuration ... done
Adding cable to the cluster configuration ... done
Adding cable to the cluster configuration ... done

Copying the config from "rac1" ... done


Setting the node ID for "rac2" ... done (id=2)

Setting the major number for the "did" driver ...
Obtaining the major number for the "did" driver from "rac1" ... done
"did" driver major number set to 300

Checking for global devices global file system ... done
Updating vfstab ... done

Verifying that NTP is configured ... done
Installing a default NTP configuration ... done
Please complete the NTP configuration after scinstall has finished.

Verifying that "cluster" is set for "hosts" in nsswitch.conf ... done
Adding the "cluster" switch to "hosts" in nsswitch.conf ... done

Verifying that "cluster" is set for "netmasks" in nsswitch.conf ... done
Adding the "cluster" switch to "netmasks" in nsswitch.conf ... done

Verifying that power management is NOT configured ... done
Unconfiguring power management ... done
/etc/power.conf has been renamed to /etc/power.conf.080403112852
Power management is incompatible with the HA goals of the cluster.
Please do not attempt to re-configure power management.

Ensure that the EEPROM parameter "local-mac-address?" is set to "true" ... done
The "local-mac-address?" parameter setting has been changed to "true".

Ensure network routing is disabled ... done
Network routing has been disabled on this node by creating /etc/notrouter.
Having a cluster node act as a router is not supported by Sun Cluster.
Please do not re-enable network routing.

Hit ENTER to continue:

*** Main Menu ***

Please select from one of the following (*) options:

1) Establish a new cluster using this machine as the first node
2) Add this machine as a node in an established cluster
3) Configure a cluster to be JumpStarted from this install server
* 4) Add support for new data services to this cluster node
* 5) Print release information for this cluster node

* ?) Help with menu options
* q) Quit

Option: q



Log file - /var/cluster/logs/install/scinstall.log.457

root@rac2 #cd /opt//SunCluster3.1Patch/9
root@rac2 # ls
113801-03 115059-02 README a b
root@rac2 # patchadd 113801-03

Checking installed patches...
Verifying sufficient filesystem capacity (dry run method)...
Installing patch packages...

Patch number 113801-03 has been successfully installed.
See /var/sadm/patch/113801-03/log for details

Patch packages installed:
SUNWscdev
SUNWscr
SUNWscu
SUNWscvw
#reboot


8. Post-install Sun Cluster 3.1 configuration (clear install mode, add the quorum disk, configure NTP)

List the DID device names known to Sun Cluster:

root@rac1 # scdidadm -L
1 rac1:/dev/rdsk/c0t0d0 /dev/did/rdsk/d1
2 rac1:/dev/rdsk/c1t0d0 /dev/did/rdsk/d2
3 rac1:/dev/rdsk/c1t1d0 /dev/did/rdsk/d3
4 rac1:/dev/rdsk/c2t5d0 /dev/did/rdsk/d4
4 rac2:/dev/rdsk/c2t5d0 /dev/did/rdsk/d4
5 rac1:/dev/rdsk/c2t5d1 /dev/did/rdsk/d5
5 rac2:/dev/rdsk/c2t5d1 /dev/did/rdsk/d5
6 rac1:/dev/rdsk/c2t5d2 /dev/did/rdsk/d6
6 rac2:/dev/rdsk/c2t5d2 /dev/did/rdsk/d6
7 rac1:/dev/rdsk/c2t5d3 /dev/did/rdsk/d7
7 rac2:/dev/rdsk/c2t5d3 /dev/did/rdsk/d7
8 rac1:/dev/rdsk/c2t5d4 /dev/did/rdsk/d8
8 rac2:/dev/rdsk/c2t5d4 /dev/did/rdsk/d8
9 rac1:/dev/rdsk/c3t5d0 /dev/did/rdsk/d9
9 rac2:/dev/rdsk/c3t5d0 /dev/did/rdsk/d9
10 rac1:/dev/rdsk/c3t5d1 /dev/did/rdsk/d10
10 rac2:/dev/rdsk/c3t5d1 /dev/did/rdsk/d10
11 rac1:/dev/rdsk/c3t5d2 /dev/did/rdsk/d11
11 rac2:/dev/rdsk/c3t5d2 /dev/did/rdsk/d11
12 rac1:/dev/rdsk/c3t5d3 /dev/did/rdsk/d12
12 rac2:/dev/rdsk/c3t5d3 /dev/did/rdsk/d12
13 rac1:/dev/rdsk/c3t5d4 /dev/did/rdsk/d13
13 rac2:/dev/rdsk/c3t5d4 /dev/did/rdsk/d13
14 rac2:/dev/rdsk/c0t0d0 /dev/did/rdsk/d14
15 rac2:/dev/rdsk/c1t1d0 /dev/did/rdsk/d15
16 rac2:/dev/rdsk/c1t0d0 /dev/did/rdsk/d16
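In the listing above, the shared 3310 LUNs are exactly the DID instances that appear on both nodes (d4 through d13), and only such dual-hosted devices can serve as quorum disks. A sketch that picks them out of scdidadm -L output (my own awk filter, not a cluster command), fed with an excerpt of the listing:

```shell
# From `scdidadm -L` output, print DID devices visible from more than one node.
# Sample input is an excerpt of the listing above.
scdidadm_sample='1 rac1:/dev/rdsk/c0t0d0 /dev/did/rdsk/d1
4 rac1:/dev/rdsk/c2t5d0 /dev/did/rdsk/d4
4 rac2:/dev/rdsk/c2t5d0 /dev/did/rdsk/d4
14 rac2:/dev/rdsk/c0t0d0 /dev/did/rdsk/d14'
shared=$(printf '%s\n' "$scdidadm_sample" |
  awk '{count[$1]++; dev[$1]=$3} END {for (i in count) if (count[i] > 1) print dev[i]}')
echo "$shared"   # prints /dev/did/rdsk/d4
```

On the live cluster you would pipe `scdidadm -L` itself through the same awk instead of the sample text.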


Choose d4 as the quorum disk.

rac1# scsetup

>>> Initial Cluster Setup <<<

This program has detected that the cluster "installmode" attribute is
still enabled. As such, certain initial cluster setup steps will be
performed at this time. This includes adding any necessary quorum
devices, then resetting both the quorum vote counts and the
"installmode" property.

Please do not proceed if any additional nodes have yet to join the
cluster.

Is it okay to continue (yes/no) [yes]

Do you want to add any quorum disks (yes/no) [yes]?

Dual-ported SCSI-2 disks may be used as quorum devices in two-node
clusters. However, clusters with more than two nodes require that
SCSI-3 PGR disks be used for all disks with more than two
node-to-disk paths. You can use a disk containing user data or one
that is a member of a device group as a quorum device.

Each quorum disk must be connected to at least two nodes. Please
refer to the Sun Cluster documentation for more information on
supported quorum device topologies.

Which global device do you want to use (d<N>)? d4
Is it okay to proceed with the update (yes/no) [yes]?

scconf -a -q globaldev=d4

Command completed successfully.


Hit ENTER to continue:
Do you want to add another quorum disk (yes/no)? no
Once the "installmode" property has been reset, this program will
skip "Initial Cluster Setup" each time it is run again in the future.
However, quorum devices can always be added to the cluster using the
regular menu options. Resetting this property fully activates quorum
settings and is necessary for the normal and safe operation of the
cluster.

Is it okay to reset "installmode" (yes/no) [yes]?

scconf -c -q reset
scconf -a -T node=.

Cluster initialization is complete.


Type ENTER to proceed to the main menu:

*** Main Menu ***

Please select from one of the following options:

1) Quorum
2) Resource groups
3) Cluster interconnect
4) Device groups and volumes
5) Private hostnames
6) New nodes
7) Other cluster properties

a) Help with menu options
q) Quit

Option:q
root@rac1 #

Configure NTP:

root@rac1 # cd /etc/inet
root@rac1 # ls
datemsk.ndpd mipagent.conf.fa-sample ntp.server
hosts mipagent.conf.ha-sample protocols
ike netmasks secret
inetd.conf networks services
ipnodes ntp.client slp.conf.example
ipsecinit.sample ntp.cluster sock2path
mipagent.conf-sample ntp.conf.cluster
root@rac1 # vi ntp.conf.cluster
For a two-node cluster, keep only the following two lines:
peer clusternode1-priv prefer
peer clusternode2-priv

root@rac2 # cd /etc/inet
root@rac2 # ls
datemsk.ndpd mipagent.conf.fa-sample ntp.server
hosts mipagent.conf.ha-sample protocols
ike netmasks secret
inetd.conf networks services
ipnodes ntp.client slp.conf.example
ipsecinit.sample ntp.cluster sock2path
mipagent.conf-sample ntp.conf.cluster
root@rac2 # vi ntp.conf.cluster
For a two-node cluster, keep only the following two lines:
peer clusternode1-priv prefer
peer clusternode2-priv

9. Verify the cluster state after installation

# scdidadm -L    (DID device mappings on both nodes)

# scstat -q      (quorum status and vote counts)

# scconf -p      (full cluster configuration)

10. Reboot the machines

root@rac1 # scshutdown -y -g 30    (shuts down all cluster nodes)

On rac1:
ok boot

After rac1 has booted, on rac2:
ok boot


11. Install Veritas VxVM 3.5

VxVM must be installed on both nodes.

root@rac1# scvxinstall
Do you want Volume Manager to encapsulate root [no]? yes

Where is the Volume Manager cdrom? /opt//storagesolutions3.5mp1cd1/volum/_manager/pkgs
Cannot find Volume Manager packages at /opt//storagesolutions3.5mp1cd1/volum/_manager/pkgs.

Where is the Volume Manager cdrom? /opt//storagesolutions3.5mp1cd1/volume_manager/pkgs

Disabling DMP.
Installing packages from /opt//storagesolutions3.5mp1cd1/volume_manager/pkgs.
Installing VRTSvlic.
Installing VRTSvxvm.
Installing VRTSvmman.
Obtaining the clusterwide vxio number...
Using 315 as the vxio major number.
Volume Manager installation is complete.

Please enter a Volume Manager license key:RRPH-PKC3-DRPC-DCUP-PPRP-PPO6-P6

Installing Volume Manager license.
Verifying encapsulation requirements.

The Volume Manager root disk encapsulation step will begin in 20 seconds.
Type Ctrl-C to abort ....................
Arranging for Volume Manager encapsulation of the root disk.
The vxconfigd daemon has been started and is in disabled mode...
Reinitialized the volboot file...
Created the rootdg...
Added the rootdisk to the rootdg...
The setup to encapsulate rootdisk is complete...
Updating /global/.devices entry in /etc/vfstab.

This node will be re-booted in 20 seconds.
Type Ctrl-C to abort ............

After rac1 has booted, install Veritas VxVM 3.5 on rac2:
root@rac2 # scvxinstall

Do you want Volume Manager to encapsulate root [no]? yes
Where is the Volume Manager cdrom? /opt//storagesolutions3.5mp1cd1/volume_manager/pkgs

Disabling DMP.
Installing packages from /opt//storagesolutions3.5mp1cd1/volume_manager/pkgs.
Installing VRTSvlic.
Installing VRTSvxvm.
Installing VRTSvmman.
Obtaining the clusterwide vxio number...
Using 315 as the vxio major number.
Volume Manager installation is complete.

Please enter a Volume Manager license key: RRPH-PKC3-DRPC-DCUP-PPRP-PPO6-P6

Installing Volume Manager license.
Verifying encapsulation requirements.

The Volume Manager root disk encapsulation step will begin in 20 seconds.
Type Ctrl-C to abort ....................
Arranging for Volume Manager encapsulation of the root disk.
The vxconfigd daemon has been started and is in disabled mode...
Reinitialized the volboot file...
Created the rootdg...
Added the rootdisk to the rootdg...
The setup to encapsulate rootdisk is complete...
Updating /global/.devices entry in /etc/vfstab.

This node will be re-booted in 20 seconds.
Type Ctrl-C to abort ....................


12. On rac1 and rac2, create the dba group and the oracle user; the oracle user's primary group is dba.
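The original gives no commands for this step; a sketch of what it typically looks like on Solaris 9, run as root on both nodes (the GID/UID, home directory, and shell below are my own illustrative assumptions, not from the original):

```shell
# Create the dba group and the oracle user with dba as its primary group.
# GID/UID 100, /export/home/oracle, and /bin/ksh are assumptions; adjust to taste.
groupadd -g 100 dba
useradd -u 100 -g dba -d /export/home/oracle -m -s /bin/ksh oracle
passwd oracle    # set the password interactively
```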


13. Install the Sun Cluster 3.1 support packages for Oracle RAC

root@rac1 # cd /opt//suncluster_3_1/SunCluster_3.1/Sol_9/Packages

root@rac1 # pkgadd -d . SUNWudlm

Processing package instance <SUNWudlm>; from </opt//suncluster_3_1/SunCluster_
3.1/Sol_9/Packages>;

Sun Cluster Support for Oracle Parallel Server UDLM, (opt)
(sparc) 3.1.0,REV=2003.03.25.13.14
Copyright 2003 Sun Microsystems, Inc. All rights reserved.
Use is subject to license terms.
Using </opt>; as the package base directory.
## Processing package information.
## Processing system information.
## Verifying package dependencies.
## Verifying disk space requirements.
## Checking for conflicts with packages already installed.
## Checking for setuid/setgid programs.

This package contains scripts which will be executed with super-user
permission during the process of installing this package.

Do you want to continue with the installation of <SUNWudlm>; [y,n,?]y

Installing Sun Cluster Support for Oracle Parallel Server UDLM, (opt) as <SUNWud
lm>;

## Installing part 1 of 1.
204 blocks

Installation of <SUNWudlm>; was successful.
root@rac1 # pkgadd -d . SUNWscucm SUNWudlmr SUNWcvmr SUNWcvm

Processing package instance <SUNWscucm>; from </opt//suncluster_3_1/SunCluster
_3.1/Sol_9/Packages>;

Sun Cluster UCMM reconfiguration interface
(sparc) 3.1.0,REV=2003.03.25.13.14
Copyright 2003 Sun Microsystems, Inc. All rights reserved.
Use is subject to license terms.
Using </>; as the package base directory.
## Processing package information.
## Processing system information.
8 package pathnames are already properly installed.
## Verifying package dependencies.
## Verifying disk space requirements.
## Checking for conflicts with packages already installed.
## Checking for setuid/setgid programs.

This package contains scripts which will be executed with super-user
permission during the process of installing this package.

Do you want to continue with the installation of <SUNWscucm>; [y,n,?] y
Installing Sun Cluster UCMM reconfiguration interface as <SUNWscucm>;

## Installing part 1 of 1.
520 blocks
## Executing postinstall script.

Installation of <SUNWscucm>; was successful.

Processing package instance <SUNWudlmr>; from </opt//suncluster_3_1/SunCluster
_3.1/Sol_9/Packages>;

Sun Cluster Support for Oracle Parallel Server UDLM, (root)
(sparc) 3.1.0,REV=2003.03.25.13.14
Copyright 2003 Sun Microsystems, Inc. All rights reserved.
Use is subject to license terms.
Using </>; as the package base directory.
## Processing package information.
## Processing system information.
13 package pathnames are already properly installed.
## Verifying package dependencies.
## Verifying disk space requirements.
## Checking for conflicts with packages already installed.
## Checking for setuid/setgid programs.

Installing Sun Cluster Support for Oracle Parallel Server UDLM, (root) as <SUNWu
dlmr>;

## Installing part 1 of 1.
/usr/cluster/lib/ucmm/reconf.d/rc2.d/05_udlm <symbolic link>;
/usr/cluster/lib/ucmm/reconf.d/rc4.d/05_udlm <symbolic link>;
/usr/cluster/lib/ucmm/reconf.d/rc5.d/05_udlm <symbolic link>;
/usr/cluster/lib/ucmm/reconf.d/rc6.d/05_udlm <symbolic link>;
/usr/cluster/lib/ucmm/reconf.d/rc7.d/05_udlm <symbolic link>;
/usr/cluster/lib/ucmm/reconf.d/rcA.d/05_udlm <symbolic link>;
/usr/cluster/lib/ucmm/reconf.d/rcK.d/05_udlm <symbolic link>;
/usr/cluster/lib/ucmm/reconf.d/rcS.d/05_udlm <symbolic link>;
[ verifying class <none>; ]

Installation of <SUNWudlmr>; was successful.

Processing package instance <SUNWcvmr>; from </opt//suncluster_3_1/SunCluster_
3.1/Sol_9/Packages>;

Sun Cluster Support for Veritas CVM, (root)
(sparc) 3.1.0,REV=2003.03.25.13.14
Copyright 2003 Sun Microsystems, Inc. All rights reserved.
Use is subject to license terms.
Using </>; as the package base directory.
## Processing package information.
## Processing system information.
15 package pathnames are already properly installed.
## Verifying package dependencies.
## Verifying disk space requirements.
## Checking for conflicts with packages already installed.
## Checking for setuid/setgid programs.

Installing Sun Cluster Support for Veritas CVM, (root) as <SUNWcvmr>;

## Installing part 1 of 1.
/usr/cluster/lib/ucmm/reconf.d/rc1.d/05_cvm <symbolic link>;
/usr/cluster/lib/ucmm/reconf.d/rc10.d/05_cvm <symbolic link>;
/usr/cluster/lib/ucmm/reconf.d/rc2.d/05_cvm <symbolic link>;
/usr/cluster/lib/ucmm/reconf.d/rc3.d/05_cvm <symbolic link>;
/usr/cluster/lib/ucmm/reconf.d/rc8.d/05_cvm <symbolic link>;
/usr/cluster/lib/ucmm/reconf.d/rc9.d/05_cvm <symbolic link>;
/usr/cluster/lib/ucmm/reconf.d/rcA.d/05_cvm <symbolic link>;
/usr/cluster/lib/ucmm/reconf.d/rcK.d/05_cvm <symbolic link>;
/usr/cluster/lib/ucmm/reconf.d/rcR.d/05_cvm <symbolic link>;
/usr/cluster/lib/ucmm/reconf.d/rcS.d/10_cvm <symbolic link>;
[ verifying class <none>; ]

Installation of <SUNWcvmr>; was successful.

Processing package instance <SUNWcvm>; from </opt//suncluster_3_1/SunCluster_3
.1/Sol_9/Packages>;

Sun Cluster Support for Veritas CVM, (opt)
(sparc) 3.1.0,REV=2003.03.25.13.14
Copyright 2003 Sun Microsystems, Inc. All rights reserved.
Use is subject to license terms.
Using </opt>; as the package base directory.
## Processing package information.
## Processing system information.
## Verifying package dependencies.
## Verifying disk space requirements.
## Checking for conflicts with packages already installed.
## Checking for setuid/setgid programs.

This package contains scripts which will be executed with super-user
permission during the process of installing this package.

Do you want to continue with the installation of <SUNWcvm>; [y,n,?]y

Installing Sun Cluster Support for Veritas CVM, (opt) as <SUNWcvm>;

## Installing part 1 of 1.
21 blocks

Installation of <SUNWcvm>; was successful.



14. Install the Oracle 9i RAC distributed lock manager package (ORCLudlm)

root@rac1 #cd /opt//racpatch
root@rac1 # pkgadd -d . ORCLudlm

Processing package instance <ORCLudlm>; from </opt//racpatch>;

Oracle UNIX Distributed Lock Manager
(sparc) Dev Release 02/02/02, 3.3.4.5
Copyright (C) Oracle Corporation 1993, 1994, 1995, 1996, 1997

This software/documentation contains proprietary information of Oracle
Corporation; it is provided under a license agreement containing
restrictions on use and disclosure and is also protected by copyright
law. Reverse engineering of the software is prohibited.

If this software/documentation is delivered to a U.S. Government Agency
of the Department of Defense, then it is delivered with Restricted Rights
and the following legend is applicable:

RESTRICTED RIGHTS LEGEND:
Use, duplication, or disclosure by the Government is subject to restrictions
as set forth in subparagraph (c)(1)(ii) of DFARS 252.227-7013, Rights in
Technical Data and Computer Software (October 1988).

If this software/documentation is delivered to a U.S. Government Agency
not within the Department of Defense, then it is delivered with
"Restricted Rights," as defined in FAR 52.227-14, Rights in Data -
General, including Alternate III (June 1987).

Oracle Corporation, 500 Oracle Parkway, Redwood City, CA 94065.

The information in this document is subject to change without notice.
If you find any problems in the documentation, please report them to us in
writing. Oracle Corporation does not warrant that this document is error free.

Oracle, CASE*Dictionary, Pro*Ada, Pro*COBOL, Pro*FORTRAN, Pro*Pascal,
Pro*PL/I, SQL*Connect, SQL*Forms, SQL*Loader, SQL*Net, and
SQL*Plus are registered trademarks of Oracle Corporation. CASE*Designer,
CASE*Method, Oracle7, Oracle Parallel Server, PL/SQL, Pro*C/C++,
SQL*Module, Oracle Server Manager and Trusted Oracle7 are trademarks of
Oracle Corporation.

All trade names referenced are the service mark, trademark, or registered
trademark of the respective manufacturer.
Installation of ORCLudlm on Solaris 2.9

You will now be prompted for the name of the group which will be used by Oracle.

- You will need to create this group before attempting to bringup pdb

- Oracle install will ask you for this information as well. Be sure
to give the same response for the group name.


Please enter the group which should be able to act as the DBA of the database
(dba): [?]
Sun Cluster release: 3.0
- no udlm_shmem_addr_file.txt file found
- /opt/SUNWcluster/lib/udlm_shmem_addr_file.txt will be created
- with 0x12000000 as the shmem attach address for udlm
Package classes: none sol_2.8 sc30 cpusaf
Using </opt>; as the package base directory.
## Processing package information.
## Processing system information.
3 package pathnames are already properly installed.
## Verifying package dependencies.
## Verifying disk space requirements.
## Checking for conflicts with packages already installed.
## Checking for setuid/setgid programs.

This package contains scripts which will be executed with super-user
permission during the process of installing this package.

Do you want to continue with the installation of <ORCLudlm>; [y,n,?]y

Installing Oracle UNIX Distributed Lock Manager as <ORCLudlm>;

## Installing part 1 of 1.
/opt/ORCLcluster/lib/libskgxn2.so
/opt/SUNWcluster/TEMPLATE.conf
[ verifying class <none>; ]
/opt/SUNWcluster/bin/dlmdump
/opt/SUNWcluster/bin/dlmstat
/opt/SUNWcluster/bin/dlmtctl
/opt/SUNWcluster/bin/lkdbx
/opt/SUNWcluster/bin/lkmgr
/opt/SUNWcluster/bin/lktest
/opt/SUNWcluster/lib/sparcv9/libudlm.so
/opt/SUNWcluster/lib/sparcv9 <implied directory>;
/opt/SUNWcluster/lib/sparcv9/libudlmsvr.so
[ verifying class <sol_2.8>; ]
/opt/SUNWcluster/lib/libcdb.so <symbolic link>;
/opt/SUNWcluster/lib/libcdb.so.1 <symbolic link>;
/opt/SUNWcluster/lib/libcluster.so <symbolic link>;
/opt/SUNWcluster/lib/libcluster.so.1 <symbolic link>;
/opt/SUNWcluster/lib/libclustm.so <symbolic link>;
/opt/SUNWcluster/lib/libclustm.so.1 <symbolic link>;
/opt/SUNWcluster/lib/libhaops.so <symbolic link>;
/opt/SUNWcluster/lib/libhaops.so.1 <symbolic link>;
/opt/SUNWcluster/lib/libudlmlib.so <symbolic link>;
/opt/SUNWcluster/lib/libudlmlib.so.1 <symbolic link>;
/opt/SUNWcluster/lib/sparcv9/libcdb.so <symbolic link>;
/opt/SUNWcluster/lib/sparcv9/libcdb.so.1 <symbolic link>;
/opt/SUNWcluster/lib/sparcv9/libcluster.so <symbolic link>;
/opt/SUNWcluster/lib/sparcv9/libcluster.so.1 <symbolic link>;
/opt/SUNWcluster/lib/sparcv9/libclustm.so <symbolic link>;
/opt/SUNWcluster/lib/sparcv9/libclustm.so.1 <symbolic link>;
/opt/SUNWcluster/lib/sparcv9/libhaops.so <symbolic link>;
/opt/SUNWcluster/lib/sparcv9/libhaops.so.1 <symbolic link>;
/opt/SUNWcluster/lib/sparcv9/libudlmlib.so <symbolic link>;
/opt/SUNWcluster/lib/sparcv9/libudlmlib.so.1 <symbolic link>;
[ verifying class <sc30>; ]
/opt/SUNWcluster/lib/udlm_shmem_addr_file.txt
[ verifying class <cpusaf>; ]
## Executing postinstall script.

/etc/opt/SUNWcluster/conf/udlm.conf NOT found.
It will be created with the values in the
/etc/opt/SUNWcluster/conf/udlm.conf.template file.

/opt/SUNWcluster/TEMPLATE.conf

Installation of <ORCLudlm>; was successful.

Now install the same set of Sun Cluster support packages on rac2:
root@rac2 # pkgadd -d . SUNWudlm

Processing package instance <SUNWudlm>; from </opt//suncluster_3_1/SunCluster_
3.1/Sol_9/Packages>;

Sun Cluster Support for Oracle Parallel Server UDLM, (opt)
(sparc) 3.1.0,REV=2003.03.25.13.14
Copyright 2003 Sun Microsystems, Inc. All rights reserved.
Use is subject to license terms.
Using </opt>; as the package base directory.
## Processing package information.
## Processing system information.
## Verifying package dependencies.
## Verifying disk space requirements.
## Checking for conflicts with packages already installed.
## Checking for setuid/setgid programs.

This package contains scripts which will be executed with super-user
permission during the process of installing this package.

Do you want to continue with the installation of <SUNWudlm>; [y,n,?] y

Installing Sun Cluster Support for Oracle Parallel Server UDLM, (opt) as <SUNWud
lm>;

## Installing part 1 of 1.
204 blocks

Installation of <SUNWudlm>; was successful.
root@rac2 # pkgadd -d . SUNWscucm SUNWudlmr SUNWcvmr SUNWcvm

Processing package instance <SUNWscucm>; from </opt//suncluster_3_1/SunCluster
_3.1/Sol_9/Packages>;

Sun Cluster UCMM reconfiguration interface
(sparc) 3.1.0,REV=2003.03.25.13.14
Copyright 2003 Sun Microsystems, Inc. All rights reserved.
Use is subject to license terms.
Using </>; as the package base directory.
## Processing package information.
## Processing system information.
8 package pathnames are already properly installed.
## Verifying package dependencies.
## Verifying disk space requirements.
## Checking for conflicts with packages already installed.
## Checking for setuid/setgid programs.

This package contains scripts which will be executed with super-user
permission during the process of installing this package.

Do you want to continue with the installation of <SUNWscucm>; [y,n,?] y

Installing Sun Cluster UCMM reconfiguration interface as <SUNWscucm>;

## Installing part 1 of 1.
520 blocks
## Executing postinstall script.

Installation of <SUNWscucm>; was successful.

Processing package instance <SUNWudlmr>; from </opt//suncluster_3_1/SunCluster
_3.1/Sol_9/Packages>;

Sun Cluster Support for Oracle Parallel Server UDLM, (root)
(sparc) 3.1.0,REV=2003.03.25.13.14
Copyright 2003 Sun Microsystems, Inc. All rights reserved.
Use is subject to license terms.
Using </>; as the package base directory.
## Processing package information.
## Processing system information.
13 package pathnames are already properly installed.
## Verifying package dependencies.
## Verifying disk space requirements.
## Checking for conflicts with packages already installed.
## Checking for setuid/setgid programs.

Installing Sun Cluster Support for Oracle Parallel Server UDLM, (root) as <SUNWu
dlmr>;

## Installing part 1 of 1.
/usr/cluster/lib/ucmm/reconf.d/rc2.d/05_udlm <symbolic link>;
/usr/cluster/lib/ucmm/reconf.d/rc4.d/05_udlm <symbolic link>;
/usr/cluster/lib/ucmm/reconf.d/rc5.d/05_udlm <symbolic link>;
/usr/cluster/lib/ucmm/reconf.d/rc6.d/05_udlm <symbolic link>;
/usr/cluster/lib/ucmm/reconf.d/rc7.d/05_udlm <symbolic link>;
/usr/cluster/lib/ucmm/reconf.d/rcA.d/05_udlm <symbolic link>;
/usr/cluster/lib/ucmm/reconf.d/rcK.d/05_udlm <symbolic link>;
/usr/cluster/lib/ucmm/reconf.d/rcS.d/05_udlm <symbolic link>;
[ verifying class <none>; ]

Installation of <SUNWudlmr>; was successful.

Processing package instance <SUNWcvmr>; from </opt//suncluster_3_1/SunCluster_
3.1/Sol_9/Packages>;

Sun Cluster Support for Veritas CVM, (root)
(sparc) 3.1.0,REV=2003.03.25.13.14
Copyright 2003 Sun Microsystems, Inc. All rights reserved.
Use is subject to license terms.
Using </>; as the package base directory.
## Processing package information.
## Processing system information.
15 package pathnames are already properly installed.
## Verifying package dependencies.
## Verifying disk space requirements.
## Checking for conflicts with packages already installed.
## Checking for setuid/setgid programs.

Installing Sun Cluster Support for Veritas CVM, (root) as <SUNWcvmr>;

## Installing part 1 of 1.
/usr/cluster/lib/ucmm/reconf.d/rc1.d/05_cvm <symbolic link>;
/usr/cluster/lib/ucmm/reconf.d/rc10.d/05_cvm <symbolic link>;
/usr/cluster/lib/ucmm/reconf.d/rc2.d/05_cvm <symbolic link>;
/usr/cluster/lib/ucmm/reconf.d/rc3.d/05_cvm <symbolic link>;
/usr/cluster/lib/ucmm/reconf.d/rc8.d/05_cvm <symbolic link>;
/usr/cluster/lib/ucmm/reconf.d/rc9.d/05_cvm <symbolic link>;
/usr/cluster/lib/ucmm/reconf.d/rcA.d/05_cvm <symbolic link>;
/usr/cluster/lib/ucmm/reconf.d/rcK.d/05_cvm <symbolic link>;
/usr/cluster/lib/ucmm/reconf.d/rcR.d/05_cvm <symbolic link>;
/usr/cluster/lib/ucmm/reconf.d/rcS.d/10_cvm <symbolic link>;
[ verifying class <none>; ]

Installation of <SUNWcvmr>; was successful.

Processing package instance <SUNWcvm>; from </opt//suncluster_3_1/SunCluster_3
.1/Sol_9/Packages>;

Sun Cluster Support for Veritas CVM, (opt)
(sparc) 3.1.0,REV=2003.03.25.13.14
Copyright 2003 Sun Microsystems, Inc. All rights reserved.
Use is subject to license terms.
Using </opt>; as the package base directory.
## Processing package information.
## Processing system information.
## Verifying package dependencies.
## Verifying disk space requirements.
## Checking for conflicts with packages already installed.
## Checking for setuid/setgid programs.

This package contains scripts which will be executed with super-user
permission during the process of installing this package.

Do you want to continue with the installation of <SUNWcvm>; [y,n,?] y

Installing Sun Cluster Support for Veritas CVM, (opt) as <SUNWcvm>;

## Installing part 1 of 1.
21 blocks

Installation of <SUNWcvm>; was successful.

Then install the ORCLudlm package on rac2 as well:
root@rac2 # cd /opt//racpatch
root@rac2 # pkgadd -d . ORCLudlm

Processing package instance <ORCLudlm>; from </opt//racpatch>;

Oracle UNIX Distributed Lock Manager
(sparc) Dev Release 02/02/02, 3.3.4.5
Copyright (C) Oracle Corporation 1993, 1994, 1995, 1996, 1997

This software/documentation contains proprietary information of Oracle
Corporation; it is provided under a license agreement containing
restrictions on use and disclosure and is also protected by copyright
law. Reverse engineering of the software is prohibited.

If this software/documentation is delivered to a U.S. Government Agency
of the Department of Defense, then it is delivered with Restricted Rights
and the following legend is applicable:

RESTRICTED RIGHTS LEGEND:
Use, duplication, or disclosure by the Government is subject to restrictions
as set forth in subparagraph (c)(1)(ii) of DFARS 252.227-7013, Rights in
Technical Data and Computer Software (October 1988).

If this software/documentation is delivered to a U.S. Government Agency
not within the Department of Defense, then it is delivered with
"Restricted Rights," as defined in FAR 52.227-14, Rights in Data -
General, including Alternate III (June 1987).

Oracle Corporation, 500 Oracle Parkway, Redwood City, CA 94065.

The information in this document is subject to change without notice.
If you find any problems in the documentation, please report them to us in
writing. Oracle Corporation does not warrant that this document is error free.

Oracle, CASE*Dictionary, Pro*Ada, Pro*COBOL, Pro*FORTRAN, Pro*Pascal,
Pro*PL/I, SQL*Connect, SQL*Forms, SQL*Loader, SQL*Net, and
SQL*Plus are registered trademarks of Oracle Corporation. CASE*Designer,
CASE*Method, Oracle7, Oracle Parallel Server, PL/SQL, Pro*C/C++,
SQL*Module, Oracle Server Manager and Trusted Oracle7 are trademarks of
Oracle Corporation.

All trade names referenced are the service mark, trademark, or registered
trademark of the respective manufacturer.
Installation of ORCLudlm on Solaris 2.9

You will now be prompted for the name of the group which will be used by Oracle.

- You will need to create this group before attempting to bringup pdb

- Oracle install will ask you for this information as well. Be sure
to give the same response for the group name.


Please enter the group which should be able to act as the DBA of the database
(dba): [?]
Sun Cluster release: 3.0
- no udlm_shmem_addr_file.txt file found
- /opt/SUNWcluster/lib/udlm_shmem_addr_file.txt will be created
- with 0x12000000 as the shmem attach address for udlm
Package classes: none sol_2.8 sc30 cpusaf
Using </opt>; as the package base directory.
## Processing package information.
## Processing system information.
3 package pathnames are already properly installed.
## Verifying package dependencies.
## Verifying disk space requirements.
## Checking for conflicts with packages already installed.
## Checking for setuid/setgid programs.

This package contains scripts which will be executed with super-user
permission during the process of installing this package.

Do you want to continue with the installation of <ORCLudlm>; [y,n,?] y

Installing Oracle UNIX Distributed Lock Manager as <ORCLudlm>;

## Installing part 1 of 1.
/opt/ORCLcluster/lib/libskgxn2.so
/opt/SUNWcluster/TEMPLATE.conf
[ verifying class <none>; ]
/opt/SUNWcluster/bin/dlmdump
/opt/SUNWcluster/bin/dlmstat
/opt/SUNWcluster/bin/dlmtctl
/opt/SUNWcluster/bin/lkdbx
/opt/SUNWcluster/bin/lkmgr
/opt/SUNWcluster/bin/lktest
/opt/SUNWcluster/lib/sparcv9/libudlm.so
/opt/SUNWcluster/lib/sparcv9 <implied directory>;
/opt/SUNWcluster/lib/sparcv9/libudlmsvr.so
[ verifying class <sol_2.8>; ]
/opt/SUNWcluster/lib/libcdb.so <symbolic link>;
/opt/SUNWcluster/lib/libcdb.so.1 <symbolic link>;
/opt/SUNWcluster/lib/libcluster.so <symbolic link>;
/opt/SUNWcluster/lib/libcluster.so.1 <symbolic link>;
/opt/SUNWcluster/lib/libclustm.so <symbolic link>;
/opt/SUNWcluster/lib/libclustm.so.1 <symbolic link>;
/opt/SUNWcluster/lib/libhaops.so <symbolic link>;
/opt/SUNWcluster/lib/libhaops.so.1 <symbolic link>;
/opt/SUNWcluster/lib/libudlmlib.so <symbolic link>;
/opt/SUNWcluster/lib/libudlmlib.so.1 <symbolic link>;
/opt/SUNWcluster/lib/sparcv9/libcdb.so <symbolic link>;
/opt/SUNWcluster/lib/sparcv9/libcdb.so.1 <symbolic link>;
/opt/SUNWcluster/lib/sparcv9/libcluster.so <symbolic link>;
/opt/SUNWcluster/lib/sparcv9/libcluster.so.1 <symbolic link>;
/opt/SUNWcluster/lib/sparcv9/libclustm.so <symbolic link>;
/opt/SUNWcluster/lib/sparcv9/libclustm.so.1 <symbolic link>;
/opt/SUNWcluster/lib/sparcv9/libhaops.so <symbolic link>;
/opt/SUNWcluster/lib/sparcv9/libhaops.so.1 <symbolic link>;
/opt/SUNWcluster/lib/sparcv9/libudlmlib.so <symbolic link>;
/opt/SUNWcluster/lib/sparcv9/libudlmlib.so.1 <symbolic link>;
[ verifying class <sc30>; ]
/opt/SUNWcluster/lib/udlm_shmem_addr_file.txt
[ verifying class <cpusaf>; ]
## Executing postinstall script.

/etc/opt/SUNWcluster/conf/udlm.conf NOT found.
It will be created with the values in the
/etc/opt/SUNWcluster/conf/udlm.conf.template file.

/opt/SUNWcluster/TEMPLATE.conf

Installation of <ORCLudlm>; was successful.
root@rac2 #
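With ORCLudlm on both nodes, it is worth a quick check that the postinstall step produced the expected files — standard Solaris pkginfo plus the paths shown in the install log above:

```shell
# Verify ORCLudlm on each node
pkginfo -l ORCLudlm                          # expect STATUS: completely installed
ls -l /etc/opt/SUNWcluster/conf/udlm.conf    # generated from udlm.conf.template
ls -l /opt/SUNWcluster/lib/udlm_shmem_addr_file.txt
```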


15. Install the Veritas Cluster Volume Manager (CVM) license

root@rac1 # /opt/VRTSvlic/bin/vxlicinst -k RRP9-UDDP-CRPP-8LBP-PPPP-PP3Z-PP

VERITAS License Manager vxlicinst utility version 3.00.007d
Copyright (C) VERITAS Software Corp 2002. All Rights reserved.
Number of days left for Demo = 60

License key successfully installed for VERITAS Volume Manager


root@rac2 # /opt/VRTSvlic/bin/vxlicinst -k RRP9-UDDP-CRPP-8LBP-PPPP-PP3Z-PP

VERITAS License Manager vxlicinst utility version 3.00.007d
Copyright (C) VERITAS Software Corp 2002. All Rights reserved.
Number of days left for Demo = 60

License key successfully installed for VERITAS Volume Manager
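To confirm the key was registered, the report utility shipped in the same VRTSvlic package can be run on each node (output details vary by release):

```shell
# List all installed VERITAS licenses; look for the Volume Manager / CVM features
/opt/VRTSvlic/bin/vxlicrep
```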


16. Tune the Solaris kernel parameters for the Oracle installation

Add the following lines to /etc/system on both nodes:

set shmsys:shminfo_shmmax=4294967296
set shmsys:shminfo_shmmin=200
set shmsys:shminfo_shmmni=200
set shmsys:shminfo_shmseg=200
set semsys:seminfo_semmap=1024
set semsys:seminfo_semmns=2048
set semsys:seminfo_semmni=2048
set semsys:seminfo_semmsl=2048
set semsys:seminfo_semmnu=2048
set semsys:seminfo_semume=200
set semsys:seminfo_semopm=100
set semsys:seminfo_semvmx=32767
forceload: sys/shmsys
forceload: sys/semsys
forceload: sys/msgsys
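As a sanity check on the values above: shminfo_shmmax=4294967296 is exactly 4 GiB, and it must be at least as large as the biggest Oracle SGA planned for either node. A portable shell sketch of that arithmetic (the 2 GiB SGA figure is an illustrative assumption):

```shell
# shmmax from /etc/system: 4 * 1024^3 bytes = 4 GiB
shmmax=$((4 * 1024 * 1024 * 1024))
echo "shmmax = $shmmax bytes"      # prints: shmmax = 4294967296 bytes

# Rule of thumb: shmmax must cover the largest planned Oracle SGA.
sga=$((2 * 1024 * 1024 * 1024))    # example: a 2 GiB SGA (assumption)
if [ "$shmmax" -ge "$sga" ]; then
  echo "shmmax covers the example SGA"
fi
```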


17. Reboot the machines

root@rac1 # scshutdown -y -g 30

On rac1:
ok boot

After rac1 is back up, on rac2:
ok boot
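Once both nodes are back up, cluster membership and quorum can be verified with the Sun Cluster 3.1 status commands (the exact output depends on the configuration):

```shell
# Check cluster health after the reboot
scstat -n     # node status: rac1 and rac2 should both be Online
scstat -q     # quorum: device and node vote counts
scdidadm -L   # DID device mapping as seen from both nodes
```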

#2 — posted 2005-05-13 15:09

We didn't even use Sun Cluster: just two 420R servers plus an A1000 array, following the Oracle 10g RAC installation and configuration guide. Install the OS and patches, install Oracle CRS, install the database, and you're done. Oracle CRS takes over the roles of the cluster framework, the volume manager, and so on.

#3 — posted 2005-05-13 15:09

This has been posted before.

#4 — posted 2005-05-13 16:19

Thanks to the OP for the effort.

#5 — posted 2005-05-13 17:53

I have only installed 9i RAC on Sun Cluster 3.1 u3 with a 3510 array; instead of building volumes we used raw devices.

#6 — posted 2006-12-08 15:16

Thanks to the OP for the effort.