Who has installed Oracle RAC for Linux?

#1 | Posted 2004-10-22 10:35
??

#2 | Posted 2004-10-22 10:36


[root@rac1 root]# find /lib/modules -name "hangcheck-timer.o"
/lib/modules/2.4.9-e.49/kernel/drivers/char/hangcheck-timer.o
/lib/modules/2.4.9-e.49enterprise/kernel/drivers/char/hangcheck-timer.o
/lib/modules/2.4.9-e.10/kernel/drivers/addon/hangcheck/hangcheck-timer.o
[root@rac1 root]# /sbin/insmod hangcheck-timer hangcheck_tick=30 hangcheck_margin=180
Using /lib/modules/2.4.9-e.49/kernel/drivers/char/hangcheck-timer.o
[root@rac1 root]# grep Hangcheck /var/log/messages |tail -1
Oct 28 20:22:10 rac1 kernel: Hangcheck: Using TSC.

#3 | Posted 2004-10-22 10:38


[oracle@rac1 oracle]$ ln -s /dev/raw/raw1 /var/opt/oracle/oradata/orcl/CMQuorumFile
ln: `/var/opt/oracle/oradata/orcl/CMQuorumFile': File exists
[oracle@rac1 oracle]$ ll /var/opt/oracle/oradata/orcl/CMQuorumFile
lrwxrwxrwx    1 oracle   oinstall       13 Oct 24 21:20 /var/opt/oracle/oradata/orcl/CMQuorumFile -> /dev/raw/raw1

#4 | Posted 2004-10-22 10:39


[root@rac1 root]# . ~oracle/.bash_profile
[root@rac1 root]# $ORACLE_HOME/oracm/bin/ocmstart.sh
oracm </dev/null 2>&1 >/opt/oracle/product/9.2.0/oracm/log/cm.out &
[root@rac1 root]# ps -ef |grep oracm
root      1889  1802  0 20:28 pts/0    00:00:00 grep oracm
[root@rac1 root]#

#5 | Posted 2004-10-22 10:41


This is the installation document Oracle sent me:
Install Red Hat Linux Advanced Server 2.1 (Pensacola)
Disk space permitting, select an 'Advanced Server' installation type (as opposed to Custom). This ensures most Oracle-required packages are installed.
Disk Druid was used to partition the single 40Gb IDE disk into a root (/) partition of 10Gb and swap partition of 2Gb. The remaining free disk space was left for future partitioning.
[root@arachnid root]# df -kl
Filesystem 1k-blocks Used Available Use% Mounted on
/dev/hda1 10080488 1217936 8350484 13% /
none 509876 0 509876 0% /dev/shm

Lilo was selected as the boot loader, to be installed to the master boot record (/dev/hda).
During Firewall Configuration, the 'No firewall' option was selected.
Select your preferred Window Manager (Gnome, KDE or both) as well as the Software Development option during Package Group Selection. Selecting the Gnome and Software Development options (without individual package selection) results in an install size of approximately 1.2Gb.
Upon completion and reboot, uni-processor servers should select the appropriate kernel from which to boot. The default kernel, called linux, is the Symmetric Multi-Processing (SMP) kernel and will hang on reboot on uni-processor hardware, so select linux-up (uni-processor) instead. Modify the default kernel in /etc/lilo.conf or /etc/grub.conf later.
2. Install other required packages
Ensure the binutils package [binutils-2.11.90.0.8-12.i386.rpm] is installed - this is required for /usr/bin/ar, as, ld, nm, size, etc. utilities used by Oracle later e.g.:
[root@arachnid /]# rpm -qa | grep -i binutils
binutils-2.11.90.0.8-12
[root@arachnid /]#

Install any other required packages, such as pdksh, wu-ftp, Netscape, xpdf, zip, unzip, etc.
Those attempting to install Red Hat Linux Advanced Server 2.1 on similar hardware (e.g. Dell OptiPlex GX260, Compaq Evo D510, or any machine with an Intel Extreme graphics card using an i810-family chipset: i810, i810-dc100, i810e, i815, i830M, 845G, i810_audio device) and/or an Intel Pro/1000 MT network interface card should be aware that Errata 2.4.9-e.9 (or higher) and/or an upgrade of XFree86 from 4.1.0 to 4.3.0 is required to correctly discover the network card and overcome X/Motif display issues. A BIOS upgrade and/or modification of Onboard Video Buffer settings may also be required to realise optimal graphics performance.
Note: a fully working X-Windows environment is required to install Oracle, whether in silent or interactive modes.
XFree86 version 4.3.0 is available from http://www.xfree86.org or mirror site (65Mb).
Red Hat Advanced Server 2.1 Errata are available from Red Hat (http://www.redhat.com), including Red Hat Network (http://rhn.redhat.com).
Red Hat are unlikely to release an XFree86 upgrade for Red Hat Linux Advanced Server 2.1, although one should be available in Red Hat Linux Advanced Server 3.0 (once Production).
@Oracle employees - also refer to the following url for driver issues:
@http://patchset.au.oracle.com/images/LNXIX86/AdvancedServer2.1/GX260/readme
After initial installation (2.4.9-e.3), attempting to configure networking using neat [redhat-config-network-0.9.10-2.1.noarch.rpm] may core dump. An upgrade to version redhat-config-network-1.0.3-1 should prove stable.
[root@arachnid rpms]# rpm -Uvh redhat-config-network-1.0.3-1.i386.rpm
Preparing... ################################### [100%]
1:redhat-config-network ################################### [100%]

For ease of (re)installation, the contents of the Advanced Server distribution's RPMS directories (cdroms 1-3) may be copied to local disk (e.g. /rpms).
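For example (a minimal sketch; the /rpms target and mount point are those used elsewhere in this article, and the RedHat/RPMS path is the standard layout of Red Hat distribution cdroms):
[root@arachnid /]# mkdir /rpms
[root@arachnid /]# mount -t iso9660 /dev/cdrom /mnt/cdrom
[root@arachnid /]# cp /mnt/cdrom/RedHat/RPMS/*.rpm /rpms
[root@arachnid /]# umount /mnt/cdrom    # repeat mount/copy/umount for cdroms 2 and 3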
3. Apply latest Errata (Operating System/Kernel patches)
Here, the default kernel version (2.4.9-e.3) was upgraded to Errata e.16 (2.4.9-e.16) immediately after initial installation. Note that an installation of the new kernel [# rpm -ivh kernel-...] is different from an upgrade [# rpm -Uvh kernel-...]. Installation adds the new, higher-version kernel but retains the original kernel for failback, whereas an upgrade replaces the original kernel.
If you upgrade to a higher kernel and use the lilo boot manager, modify /etc/lilo.conf to reflect the upgraded /boot kernel file names, remembering to run /sbin/lilo afterwards. Grub automatically updates its configuration file (/etc/grub.conf).
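A minimal sketch of installing a new errata kernel alongside the existing one under lilo (the rpm filename and lilo label below are illustrative only):
[root@arachnid rpms]# rpm -ivh kernel-2.4.9-e.16.i686.rpm
[/etc/lilo.conf]
image=/boot/vmlinuz-2.4.9-e.16
    label=linux-e16
    root=/dev/hda1
    read-only
[root@arachnid rpms]# /sbin/lilo    # rewrite the boot record from the updated lilo.conf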
A complete list of Oracle/Red Hat supported kernel versions is available in How To Check the Supportability of RedHat AS. Applying the latest supported kernel is strongly recommended.
Reboot the server and boot to the new kernel. Configure any devices newly discovered by kudzu.
Uname should look something like:
[root@arachnid /]# uname -a
Linux arachnid 2.4.9-e.16 #1 Fri Mar 21 05:55:06 PST 2003 i686 unknown
4. Make the server network accessible
Configure networking on the server. Where possible, obtain and use fixed IP addressing. Although it is possible to use DHCP addressing, any change in IP address after installing and configuring OCFS and the Cluster Manager (OCM) may cause issues later. To help prevent such issues, specify a non-fully qualified domain name when selecting the hostname for the server e.g.:
[root@arachnid /]# hostname
arachnid
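On Red Hat systems, a fixed address is typically defined in the interface's configuration file; a minimal sketch (the device name and addresses below are illustrative assumptions):
[/etc/sysconfig/network-scripts/ifcfg-eth0]
DEVICE=eth0
BOOTPROTO=static
IPADDR=192.168.1.10
NETMASK=255.255.255.0
ONBOOT=yes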

At least for the duration of initial installation/configuration, it may be helpful to gain remote access to the server. Enable xinetd services (telnet, wu-ftp, rsh, etc.) as required by modifying their corresponding files in /etc/xinetd.d, then restart xinetd e.g.:
[/etc/xinetd.d/telnet]
# default: on
# description: The telnet server serves telnet sessions; it uses \
# unencrypted username/password pairs for authentication.
service telnet
{
flags = REUSE
socket_type = stream
wait = no
user = root
server = /usr/sbin/in.telnetd
log_on_failure += USERID
disable = no # modify from yes to no
}
[root@arachnid /]# service xinetd restart
Stopping xinetd: [ OK ]
Starting xinetd: [ OK ]
[root@arachnid /]#

Note: Remote shell (rsh) is required to be enabled before installing Oracle Cluster Manager.
5. Configure kernel and user limits
Having met the minimum Oracle requirements (see Installation guide), configure the kernel and user limits appropriately according to available resources.
The following are the contents of the core initialisation/configuration files used to tune the kernel and increase user limits. Consult your SA or Red Hat before implementing any changes you are unfamiliar with.
[/etc/sysctl.conf]
# Disables packet forwarding
net.ipv4.ip_forward = 0
# Enables source route verification
net.ipv4.conf.default.rp_filter = 1
# Disables the magic-sysrq key
kernel.sysrq = 0
net.core.rmem_default = 65535
net.core.rmem_max = 65535
net.core.wmem_default = 65535
net.core.wmem_max = 65535
fs.file-max = 65535
fs.aio-max-size = 65535
kernel.sem = 250 35000 100 128
kernel.shmmin = 1
kernel.shmall = 2097152
kernel.shmmni = 4096
kernel.shmmax = 522106880
#vm.freepages = 1242 2484 3726  # only use with pre-e.12 kernel (ie. workaround for kswapd issue)

[/etc/security/limits.conf]
...
oracle soft nofile 60000
oracle hard nofile 65535
oracle soft nproc 60000
oracle hard nproc 65535
Changes to the above files (except limits.conf) take effect upon server reboot. Linux provides dynamic kernel tuning via the /proc filesystem. Most kernel parameters can be changed immediately (dynamically) by echo'ing new values to the desired parameter e.g.:
[root@arachnid /proc/sys/kernel]# cat shmmax
522106880
[root@arachnid /proc/sys/kernel]# echo 4294967295 > shmmax
[root@arachnid /proc/sys/kernel]# cat shmmax
4294967295
[root@arachnid /proc/sys/kernel]#
Note: dynamic changes made via /proc are volatile i.e. lost on reboot.
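The sysctl utility offers the same facility and can also re-apply /etc/sysctl.conf on demand e.g.:
[root@arachnid /]# sysctl -w kernel.shmmax=4294967295
kernel.shmmax = 4294967295
[root@arachnid /]# sysctl -p    # re-applies all values from /etc/sysctl.conf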
A complete description of the above parameters and recommended values are available from Red Hat, and are discussed in detail throughout the material cited in the Reference section.

6. Configure I/O Fencing
In configurations of two or more RAC nodes, an I/O fencing model is required to detect when a node dies or becomes unresponsive. This helps prevent data corruption, i.e. a node in an unknown state continuing to write to the shared disk. Two I/O fencing models are discussed: watchdog and hangcheck-timer.

Note: Neither watchdog nor hangcheck-timer configuration is required for a single node configuration. However, for the purpose of emulating a configuration of two or more nodes, either watchdog (for 9.2.0.1.0) or hangcheck-timer (for 9.2.0.2.0+) can be implemented on a single node.
Watchdog:
In 9.2.0.1.0 (the 9.2.0 base release), Oracle originally recommended the softdog module (also known as watchdog) as the I/O fencing model. However, due to performance and stability issues when using watchdog with the /dev/watchdog device, Oracle has since recommended using /dev/null as the watchdog device file.
To use the /dev/watchdog device, perform the following steps:
Check whether the watchdog device exists i.e.:

[root@arachnid /]# ls -l /dev/watchdog
crw------- 1 oracle root 10, 130 Sep 24 2001 /dev/watchdog

If it does not exist, issue the following commands as root:

[root@arachnid /]# mknod /dev/watchdog c 10 130
[root@arachnid /]# chmod 600 /dev/watchdog
[root@arachnid /]# chown oracle /dev/watchdog
To use /dev/null with watchdog, modify the $ORACLE_HOME/oracm/admin/ocmargs.ora file as follows:
watchdogd -g dba -d /dev/null
oracm
norestart 1800
To implement watchdog, modify the /etc/rc.local file to install the softdog module at boot time i.e.:
[/etc/rc.local]
#!/bin/sh
touch /var/lock/subsys/local
/sbin/insmod softdog soft_margin=60
Hangcheck Timer:
From 9.2.0.2.0 (9.2.0 Patch 1) onward, Oracle recommends a new I/O fencing model, the hangcheck-timer module, in lieu of watchdog. Oracle Cluster Manager configuration changes are required if you have already implemented RAC using 9.2.0.1.0 and then upgrade to 9.2.0.2.0 or higher. The reason for the I/O fencing model change and the hangcheck-timer configuration requirements are discussed in the Oracle Server 9.2.0.2.0 (and onwards) patchset readme.

To configure the hangcheck-timer (recommended), refer to the 9.2.0.2.0 or higher patchset readme for specific instructions.
To use the hangcheck timer, modify the /etc/rc.local file to install the hangcheck-timer module at boot time i.e.:
[/etc/rc.local]
#!/bin/sh
touch /var/lock/subsys/local
/sbin/insmod hangcheck-timer hangcheck_tick=30 hangcheck_margin=180
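Whether loaded at boot or by hand, confirm the module is active before starting Oracle Cluster Manager; the second command should report a line such as "Hangcheck: Using TSC." e.g.:
[root@arachnid /]# lsmod | grep hangcheck
[root@arachnid /]# grep Hangcheck /var/log/messages | tail -1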

7. Create partitions and filesystem for Oracle software ($ORACLE_HOME)
If not already performed during step 1, use /sbin/fdisk to create a partition to install Oracle software and binaries. In our example, an extended partition (/dev/hda3) of 26Gb is created, in which a logical partition (/dev/hda5) of 10Gb is created e.g.:
[root@arachnid kernel]# fdisk /dev/hda
The number of cylinders for this disk is set to 4865.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
(e.g., DOS FDISK, OS/2 FDISK)
Command (m for help): n
Command action
l logical (5 or over)
p primary partition (1-4)
l
First cylinder (1531-4865, default 1531):
Using default value 1531
Last cylinder or +size or +sizeM or +sizeK (1531-4865, default 4865): +10000m
Command (m for help): p
Disk /dev/hda: 255 heads, 63 sectors, 4865 cylinders
Units = cylinders of 16065 * 512 bytes
Device Boot Start End Blocks Id System
/dev/hda1 * 1 1275 10241406 83 Linux
/dev/hda2 1276 1530 2048287+ 82 Linux swap
/dev/hda3 1531 4865 26788387+ 5 Extended
/dev/hda5 1531 2805 10241406 83 Linux
Command (m for help): w
After writing all changes, reboot the server to ensure the new partition table entries are read.
Create a filesystem on top of the partition(s) e.g.:
[root@arachnid /]# mkfs.ext2 -j /dev/hda5

Note: specifying the -j (journalling) flag when running mkfs.ext2 will create a journalled filesystem of type ext3.
Create a mount point upon which to attach the filesystem e.g.:
[root@arachnid /]# mkdir /u01;chmod 777 /u01

Optionally change ownership of the mount to the oracle user e.g.:
[root@arachnid /]# chown oracle:dba /u01

Mount the filesystem as root e.g.:
[root@arachnid /]# mount -t ext3 /dev/hda5 /u01

To automount the file system upon reboot, update /etc/fstab e.g.:
[/etc/fstab]
LABEL=/ / ext3 defaults 1 1
none /dev/pts devpts gid=5,mode=620 0 0
none /proc proc defaults 0 0
none /dev/shm tmpfs defaults 0 0
/dev/hda2 swap swap defaults 0 0
/dev/cdrom /mnt/cdrom iso9660 noauto,owner,kudzu,ro 0 0
/dev/fd0 /mnt/floppy auto noauto,owner,kudzu 0 0
/dev/hda5 /u01 ext3 defaults 1 1
8. Create the Unix dba group
If it does not already exist, create the Unix dba group e.g.:
[root@arachnid /]# groupadd dba -g 501
[root@arachnid /]# grep dba /etc/group
dba:x:501:
9. Create the Oracle user
If it does not already exist, create the Oracle software owner/user e.g.:
[root@arachnid /]# useradd oracle -u 501 -g dba -d /home/oracle \
-s /bin/bash
[root@arachnid /]# grep oracle /etc/passwd
oracle:x:501:501::/home/oracle:/bin/bash
[root@arachnid /]# passwd oracle
Changing password for user oracle
New password: <password>
Retype new password: <password>
passwd: all authentication tokens updated successfully
10. Configure Oracle user environments
For a single instance, single database configuration, the Oracle environment can be appended to the oracle user's existing login script e.g. [/home/oracle/.bash_profile], so that the Oracle environment is defined and database accessible immediately upon oracle user login.
In this single node RAC configuration, the intention is to emulate two nodes, therefore two separate environment definition files are created - one defining the environment for instance A, the other for instance B.
Copy and modify the following files (V920A, V920B) to suit your environment.
The relevant file is source'd (.) by the oracle user as required, depending on which instance database access is required from e.g.:
[root@arachnid /]# su - oracle
[oracle@arachnid oracle]$ . V920A
[oracle@V920A@arachnid /u01/app/oracle/product/9.2.0]$ echo $ORACLE_SID
V920A
[oracle@V920A@arachnid /u01/app/oracle/product/9.2.0]$
[root@arachnid /]# su - oracle
[oracle@arachnid oracle]$ . V920B
[oracle@V920B@arachnid /u01/app/oracle/product/9.2.0]$ echo $ORACLE_SID
V920B
[oracle@V920B@arachnid /u01/app/oracle/product/9.2.0]$
[oracle@V920A@arachnid /home/oracle]$ ls -l
total 8
-rwxr-xr-x 1 oracle dba 932 Jun 10 18:09 V920A
-rwxr-xr-x 1 oracle dba 933 Jun 10 18:10 V920B
  
--- Start sample Oracle environment script [/home/oracle/V920A] ---
# .bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi
# User specific environment and startup programs
PATH=$PATH:$HOME/bin
export PATH
unset USERNAME
#oracle
ORACLE_SID=V920A;export ORACLE_SID
ORACLE_BASE=/u01/app/oracle;export ORACLE_BASE
ORACLE_HOME=$ORACLE_BASE/product/9.2.0;export ORACLE_HOME
LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib;export LD_LIBRARY_PATH
TNS_ADMIN=$ORACLE_HOME/network/admin;export TNS_ADMIN
CLASSPATH=$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib:$ORACLE_HOME/network/jlib:$ORACLE_HOME/assistants/dbca/jlib:$ORACLE_HOME/assistants/dbma/jlib:$ORACLE_HOME/owm/jlib:$ORACLE_HOME/jdbc/lib/classes12.zip
export CLASSPATH
PS1='[\u@$ORACLE_SID@\h $PWD]$ ';export PS1
PATH=$ORACLE_HOME/bin:$PATH;export PATH
alias ll='ls -l --color'
alias cdo='cd $ORACLE_HOME'
alias sql='sqlplus "/ as sysdba"'
alias scott='sqlplus scott/tiger'
umask 022
cd $ORACLE_HOME
--- End sample Oracle environment script [/home/oracle/V920A] ---
--- Start sample Oracle environment script [/home/oracle/V920B] ---
# .bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi
# User specific environment and startup programs
PATH=$PATH:$HOME/bin
export PATH
unset USERNAME
#oracle
ORACLE_SID=V920B;export ORACLE_SID
ORACLE_BASE=/u01/app/oracle;export ORACLE_BASE
ORACLE_HOME=$ORACLE_BASE/product/9.2.0;export ORACLE_HOME
LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib;export LD_LIBRARY_PATH
TNS_ADMIN=$ORACLE_HOME/network/admin;export TNS_ADMIN
CLASSPATH=$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib:$ORACLE_HOME/network/jlib:$ORACLE_HOME/assistants/dbca/jlib:$ORACLE_HOME/assistants/dbma/jlib:$ORACLE_HOME/owm/jlib:$ORACLE_HOME/jdbc/lib/classes12.zip
export CLASSPATH
PS1='[\u@$ORACLE_SID@\h $PWD]$ ';export PS1
PATH=$ORACLE_HOME/bin:$PATH;export PATH
alias ll='ls -l --color'
alias cdo='cd $ORACLE_HOME'
alias sql='sqlplus "/ as sysdba"'
alias scott='sqlplus scott/tiger'
umask 022
cd $ORACLE_HOME
--- End sample Oracle environment script [/home/oracle/V920B] ---
[Part 2: Downloads]
11. Download Oracle Cluster File System (OCFS)
Oracle Cluster File System (OCFS) presents a consistent file system image across the servers in a cluster. OCFS allows administrators to take advantage of a filesystem for Oracle database files (data files, control files, and archive logs) and configuration files. This eases administration of Oracle9i Real Application Clusters (RAC).
When installing RAC on a cluster of two or more nodes, OCFS provides an alternative to raw devices. When using a single node setup for RAC, the filesystem cache consistency issues that would otherwise plague a multi-node standard filesystem configuration do not apply. In other words, standard filesystems such as ext2, ext3, etc. may be used to store Oracle datafiles in a single node RAC configuration. To fully appreciate and emulate a multi-node RAC configuration, the steps involved in configuring OCFS volumes are provided.
Download the version of OCFS appropriate for your system. OCFS is readily available for download from http://oss.oracle.com, Oracle's Linux Open Source Projects development website. OCFS is offered under the GNU General Public Licence (GPL).
At time of writing, the latest available version was 2.4.9-e-1.0.8-4 suitable for kernel versions 2.4.9-e.12 and higher.
In this case, the minimum required files are:
·        ocfs-2.4.9-e-1.0.8-4.i686.rpm
·        ocfs-support-1.0.8-4.i686.rpm
·        ocfs-tools-1.0.8-4.i686.rpm
Since publication of this article, OCFS 1.0.9 has been made available from MetaLink (Patch 3034004); the latest revisions will be available from http://oss.oracle.com in the near future.
12. Download latest Oracle Server 9.2.0 Patchset
Download the latest Oracle Server 9.2.0 Patchset. At the time of writing, the latest available patchset is 9.2.0.3.0. The patchset (226Mb) not only contains core Oracle Server patches, but also Oracle Cluster Manager patches. The patch is readily available for download from MetaLink > Patches as Patch Number 2761332.
Read the readme, then re-read the readme.
Note: although this article exists solely for evaluative purposes, significant changes to Oracle Cluster Manager configuration have been made from 9.2.0.2.0 onward. Applying the latest OCM/Oracle Server patchset is recommended; however, the base release of Oracle Cluster Manager and Oracle Server (9.2.0.1.0) has been tested and works. If using the 9.2.0 base release (9.2.0.1.0), expect regular instance failure accompanied by the following error in /var/log/messages, as Oracle is only capable of O_DIRECT writes from the 9.2.0.2.0 patchset onward.
Jul 15 13:02:13 arachnid kernel: (2914) TRACE: ocfs_file_write(1271) non O_DIRECT write, fileopencount=1  
Unzip the latest patchset to a temporary directory e.g.:
[root@arachnid /]# mkdir -p /u01/app/oracle/patches/92030
[root@arachnid /]# mv p2761332_92030_LINUX32.zip /u01/app/oracle/patches/92030
[root@arachnid /]# cd /u01/app/oracle/patches/92030
[root@arachnid /]# unzip p2761332_92030_LINUX32.zip
[Part 3: Oracle Cluster File System]
13. Create additional Oracle Cluster File System (OCFS) partitions
In preparation for installing OCFS to store the database files, create at least two partitions using /sbin/fdisk.
The Oracle Cluster Manager quorum disk should reside on a dedicated partition. The quorum file itself need only be around 1Mb in size; however, be aware that OCFS volumes require space for volume structures, so the minimum partition size should be 50Mb. The number of files to reside in an OCFS partition and the number of nodes accessing it dictate the minimum size of that partition.
The size of the OCFS partition used to store database files should exceed the total size of the database files, allowing ample room for growth. In our case, 5Gb was allocated for a single database only.
Following is sample fdisk output:
[root@arachnid /]# fdisk /dev/hda
The number of cylinders for this disk is set to 4865.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
(e.g., DOS FDISK, OS/2 FDISK)
Command (m for help): n
Command action
l logical (5 or over)
p primary partition (1-4)
l
First cylinder (2806-4865, default 2806):
Using default value 2806
Last cylinder or +size or +sizeM or +sizeK (2806-4865, default 4865): +10m
Command (m for help): n
Command action
l logical (5 or over)
p primary partition (1-4)
l
First cylinder (2808-4865, default 2808):
Using default value 2808
Last cylinder or +size or +sizeM or +sizeK (2808-4865, default 4865): +5000m
Command (m for help): p
Disk /dev/hda: 255 heads, 63 sectors, 4865 cylinders
Units = cylinders of 16065 * 512 bytes
Device Boot Start End Blocks Id System
/dev/hda1 * 1 1275 10241406 83 Linux
/dev/hda2 1276 1530 2048287+ 82 Linux swap
/dev/hda3 1531 4865 26788387+ 5 Extended
/dev/hda5 1531 2805 10241406 83 Linux
/dev/hda6 2806 2807 16033+ 83 Linux
/dev/hda7 2808 3445 5124703+ 83 Linux
Command (m for help):

Create mount points upon which to attach the filesystems e.g.:
[root@arachnid /]# mkdir /quorum;chmod 777 /quorum
[root@arachnid /]# mkdir /cfs01;chmod 777 /cfs01

Logical partition /dev/hda6 (of 10Mb) will become /quorum (ocfs). Logical partition /dev/hda7 (of 5Gb) will become /cfs01 (ocfs).
After writing all changes, reboot the server to ensure the new partition table entries are read. Verify with /sbin/fdisk or cat /proc/partitions e.g.:
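[root@arachnid /]# cat /proc/partitions
[root@arachnid /]# /sbin/fdisk -l /dev/hda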
14. Install the Oracle Cluster File System (OCFS) software
Install the appropriate OCFS packages for your kernel as root e.g.:
[root@arachnid /rpms]# rpm -ivh ocfs-2.4.9-e-1.0.8-4.i686.rpm ocfs-support-1.0.8-4.i686.rpm ocfs-tools-1.0.8-4.i686.rpm

A complete list of files installed as part of OCFS can be seen by querying the rpm database or packages e.g.:
[root@arachnid /rpms]# rpm -qa | grep -i ocfs
ocfs-support-1.0.8-4
ocfs-2.4.9-e-1.0.8-4
ocfs-tools-1.0.8-4
[root@arachnid /rpms]# rpm -ql ocfs-support-1.0.8-4
/etc/init.d/ocfs
/sbin/load_ocfs
/sbin/mkfs.ocfs
/sbin/ocfs_uid_gen
[root@arachnid /rpms]# rpm -ql ocfs-2.4.9-e-1.0.8-4
/lib/modules/2.4.9-e-ABI/ocfs
/lib/modules/2.4.9-e-ABI/ocfs/ocfs.o
[root@arachnid /rpms]# rpm -ql ocfs-tools-1.0.8-4
/usr/bin
/usr/bin/cdslctl
/usr/bin/debugocfs
/usr/bin/ocfstool
/usr/share
/usr/share/man
/usr/share/man/man1
/usr/share/man/man1/cdslctl.1.gz
/usr/share/man/man1/ocfstool.1.gz

Note: the OCFS installation automatically creates the necessary rc (init) scripts to start OCFS on server reboot i.e.:
[root@arachnid /]# find . -name '*ocfs*' -print
...
./etc/rc.d/init.d/ocfs
./etc/rc.d/rc3.d/S24ocfs
./etc/rc.d/rc4.d/S24ocfs
./etc/rc.d/rc5.d/S24ocfs
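The runlevel links can be confirmed with chkconfig e.g.:
[root@arachnid /]# chkconfig --list ocfs
ocfs            0:off   1:off   2:off   3:on    4:on    5:on    6:off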
15. Configuring Oracle Cluster File System (OCFS)
OCFS must first be configured before you create any OCFS volumes. Guidelines, limitations, and instructions for how to configure OCFS are described in the following documents available from http://oss.oracle.com:
·        Oracle Cluster File System Installation Notes Release 1.0 for Red Hat Linux Advanced Server 2.1 Part B10499-01
·        RHAS Best Practices (http://oss.oracle.com/projects/ocfs/dist/documentation/RHAS_best_practices.txt)
·        United Linux Best Practices (http://oss.oracle.com/projects/ocfs/dist/documentation/UL_best_practices.txt)
OCFS Installation Notes (Part B10499-01) covers the following topics:
1. Installing OCFS rpm files (performed in step 14.)
2. Using ocfstool to generate the /etc/ocfs.conf file.
3. Creating the /var/opt/oracle/soft_start.sh script to load the ocfs module and start Oracle Cluster Manager (a sketch follows this list).
4. Creating partitions using fdisk (performed in Step 13.)
5. Creating mount points for OCFS partitions (performed in step 13.).
6. Formatting OCFS partitions e.g.:
[root@arachnid /]# mkfs.ocfs -b 128 -C -g 501 -u 501 -L cfs01 -m /cfs01 -p 0775 /dev/hda7
Cleared volume header sectors
Cleared node config sectors
Cleared publish sectors
Cleared vote sectors
Cleared bitmap sectors
Cleared data block
Wrote volume header

7. Adding OCFS mounts to /etc/fstab e.g.:
LABEL=/ / ext3 defaults 1 1
none /dev/pts devpts gid=5,mode=620 0 0
none /proc proc defaults 0 0
none /dev/shm tmpfs defaults 0 0
/dev/hda2 swap swap defaults 0 0
/dev/hda5 /u01 ext3 defaults 0 0
/dev/hda6 /quorum ocfs _netdev 0 0
/dev/hda7 /cfs01 ocfs _netdev 0 0
/dev/cdrom /mnt/cdrom iso9660 noauto,owner,kudzu,ro 0 0
/dev/fd0 /mnt/floppy auto noauto,owner,kudzu 0 0
8. Tuning Red Hat Advanced Server for OCFS (performed in step 5.)
9. Swap partition configuration.
10. Network Adapter configuration.
11. OCFS limitations.
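A minimal sketch of the soft_start.sh script from item 3 above, assuming the ORACLE_HOME used throughout this article (the script body is illustrative; refer to the Installation Notes for the definitive version):
[/var/opt/oracle/soft_start.sh]
#!/bin/sh
# load the ocfs kernel module, then mount all OCFS volumes listed in /etc/fstab
/sbin/load_ocfs
mount -a -t ocfs
# start Oracle Cluster Manager
export ORACLE_HOME=/u01/app/oracle/product/9.2.0
$ORACLE_HOME/oracm/bin/ocmstart.sh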
Note: OCFS Installation Notes (Part B10499-01) assumes Oracle Cluster Manager and Oracle Server patchset 9.2.0.2.0 (at least) have already been applied.
[Part 4: Oracle Cluster Manager]
16. Install Oracle Cluster Manager (OCM) software
a. Once OCFS installation is complete, load, start and mount all OCFS partitions e.g.:
[root@arachnid /]# mount -a -t ocfs
[root@arachnid /]# cat /proc/mounts

b. In our case, the quorum device is /quorum and the quorum file will be /quorum/quorum. Initialise the quorum file in /quorum as follows before attempting to start OCM:
[root@arachnid /]# touch /quorum/quorum

c. Mount the Oracle Server cdrom e.g.:
[root@arachnid /]# mount -t iso9660 /dev/cdrom /mnt/cdrom
mount: block device /dev/cdrom is write-protected, mounting read-only

d. Run the Oracle Universal Installer (OUI) as the oracle user e.g.:
[oracle@V920A@arachnid /home/oracle]$ /mnt/cdrom/runInstaller&

e. Select the option to install the Oracle Cluster Manager software only - accept default values for watchdog timings. Exit the Installer once complete.
f. Perform the steps g., h., i., j. only if you wish to pre-patch Oracle Cluster Manager (OCM) beyond the base 9.2.0.1.0 version.
g. Once installed, re-run the Installer, pointing it to the 9.2.0.3.0 products.jar file. Apply the 9.2.0.3.0 Oracle Cluster Manager patch, making sure to refer to the readme.
h. If using kernel 2.4.9-e.16 or higher, the hangcheck-timer module will already exist as /lib/modules/2.4.9-e.16/kernel/drivers/char/hangcheck-timer.o. If using kernel version 2.4.9-e.3, e.8, e.9 or e.10, download and install the hangcheck-timer from MetaLink > Patches (Patch 2594820).
i. Remove all references or calls to watchdog (softdog) daemon from startup scripts, such as /etc/rc.local.
j. Implement the hangcheck timer by adding the following line to /etc/rc.local or /etc/rc.sysinit files:
/sbin/insmod hangcheck-timer hangcheck_tick=30 hangcheck_margin=180
17. Start the Oracle Cluster Manager (OCM)
Install (load) the hangcheck-timer module by running the following command as root:
[root@arachnid /]# /sbin/insmod hangcheck-timer hangcheck_tick=30 hangcheck_margin=180

Define the ORACLE_HOME environment variable for root i.e.:
[root@arachnid /]# export ORACLE_HOME=/u01/app/oracle/product/9.2.0

Start the Oracle Cluster Manager e.g.:
[root@arachnid /]# $ORACLE_HOME/oracm/bin/ocmstart.sh

Ensure the OCM processes start correctly i.e:
[root@arachnid /]# ps -ef | grep -i oracm
root 2875 1 0 17:49 pts/4 00:00:00 oracm
root 2877 2875 0 17:49 pts/4 00:00:00 oracm
root 2878 2877 0 17:49 pts/4 00:00:00 oracm
root 2879 2877 0 17:49 pts/4 00:00:00 oracm
root 2880 2877 0 17:49 pts/4 00:00:00 oracm
root 2882 2877 0 17:49 pts/4 00:00:00 oracm
root 2883 2877 0 17:49 pts/4 00:00:00 oracm
root 2884 2877 0 17:49 pts/4 00:00:00 oracm
root 2885 2877 0 17:49 pts/4 00:00:00 oracm
[Part 5: Oracle Database Server]
18. Install the Oracle Server software
Firstly, pre-define the intended Oracle environment (ORACLE_HOME, ORACLE_SID, etc.) to auto-populate OUI and DBCA field locations throughout the installation.
Start the Oracle Universal Installer as the oracle user e.g.
[oracle@V920A@arachnid /home/oracle]$ /mnt/cdrom/runInstaller&

To prevent cdrom eject issues later, invoke the installer from a directory other than the mount point (/mnt/cdrom) or any part of the mounted volume.
From the Welcome screen click Next.
The next screen should be the Cluster Node Selection screen - this screen will only appear if the Oracle Universal Installer detects the Cluster Manager is running (refer step 17). If the Cluster Manager is not running, correct this before performing this step, otherwise the Real Applications Clusters product will not appear in the list of installable products.
At the Cluster Node Selection screen, the non-fully qualified hostname (arachnid in this example) should already be listed. Because this is a single node installation only, click Next.
At the File Locations screen, confirm or enter the Source and Destination paths for the Oracle software, then click Next.
At the Available Products screen, select Oracle9i Database 9.2.0.1.0 Product, then click Next.
At the Installation Types screen, select the Enterprise Edition or Custom Installation Type. The Enterprise Edition Installation Type installs a pre-configured set of products, whereas the Custom Installation offers the ability to individually select which products to install. Click Next after making your selection. In this case, a Custom installation was performed.
Only the following products were selected from the Available Product Components screen:
Oracle9i Database 9.2.0.1.0
Enterprise Edition Options 9.2.0.1.0
  Oracle Advanced Security 9.2.0.1.0
  Oracle9i Real Application Clusters 9.2.0.1.0
  Oracle Partitioning 9.2.0.1.0
Oracle Net Services 9.2.0.1.0
  Oracle Net Listener 9.2.0.1.0
Oracle9i Development Kit 9.2.0.1.0
  Oracle C++ Call Interface 9.2.0.1.0
  Oracle Call Interface (OCI) 9.2.0.1.0
  Oracle Programmer 9.2.0.1.0
  Oracle XML Developer’s Kit 9.2.0.1.0
Oracle9i for UNIX Documentation 9.2.0.1.0
  Oracle JDBC/ODBC Interfaces 9.2.0.1.0

After product selection click Next.
At the Component Locations screen, enter the destination path for Components (OUI, JRE) that are not bound to a particular Oracle Home. In this case, ORACLE_BASE (/u01/app/oracle) was used.
At the Shared Configuration File Name screen, enter the OCFS or raw device name for the shared configuration file. This configuration file is used by the srvctl utility - the configuration/administration utility to manage Real Application Clusters instances and Listeners.
At the Privileged Operating System Groups screen, confirm or enter the Unix group(s) you defined in step 8 (dba in our case), then click Next. Users who are made members of this group are implicitly granted direct access and management of the Oracle database and software.
At the Create Database screen, select No, i.e. do not create a database, then click Next. If you downloaded the latest Oracle Server patchset, apply it first (outlined in later steps) before creating a database. Doing so saves time and eliminates the need to perform a database upgrade later.
At the Summary screen, review your product selections then click Install.
Perform the following actions when prompted to run $ORACLE_HOME/root.sh as root:
[root@arachnid /]# mkdir -p /var/opt/oracle
[root@arachnid /]# touch /var/opt/oracle/srvConfig.loc
[root@arachnid /]# /u01/app/oracle/product/9.2.0/root.sh

Note: the above actions prevent issues running root.sh script when no shared configuration file/destination is specified.
From the Oracle Net Configuration Assistant Welcome screen, select Perform typical configuration, then click Next.
Once complete, exit the Installer.
19. Apply the Oracle Server 9.2.0.3.0 patchset
This step is optional and is only required if you wish to pre-patch the Oracle Server software beyond the base 9.2.0.1.0 version.
Re-run the Installer ($ORACLE_HOME/bin/runInstaller&), pointing it to the 9.2.0.3.0 patchset products.jar file in the directory created in step 12.
At the Welcome screen, click Next.
At the Cluster Node Selection screen, ensure the local hostname is specified, then click Next.
At the File Locations screen, enter or browse to the 9.2.0.3.0 patchset products.jar in the source path window, then click Next.
At the Available Products screen, select Oracle9iR2 Patch Set 2 - 9.2.0.3.0 (Oracle9iR2 Patch Set 2), then click Next.
At the Summary screen, review your product selections then click Install.
Exit the Installer once complete.
20. Create and initialise the Server Configuration File
In two or more node configurations, the Server Configuration file should reside on OCFS or raw partitions. Standard file system is used in this example.
Check whether a /var/opt/oracle/srvConfig.loc, /etc/srvConfig.loc or $ORACLE_HOME/srvm/config/srvConfig.loc file already exists. If not, create it as root as follows. Allow at least 100Mb for the Server Configuration file itself.
[root@arachnid /]# mkdir -p /var/opt/oracle
[root@arachnid /]# touch /var/opt/oracle/srvConfig.loc
[root@arachnid /]# chown oracle:dba /var/opt/oracle/srvConfig.loc
[root@arachnid /]# chmod 755 /var/opt/oracle/srvConfig.loc

Add the srvconfig_loc parameter to the srvConfig.loc e.g.:
srvconfig_loc=/u01/app/oracle/product/9.2.0/dbs/srvConfig.dbf

If it does not already exist, create the Server Configuration file referenced by srvconfig_loc in the /var/opt/oracle/srvConfig.loc file e.g.:
[oracle@V920A@arachnid /u01/app/oracle/product/9.2.0/dbs]$ touch srvConfig.dbf
[oracle@V920A@arachnid /u01/app/oracle/product/9.2.0/dbs]$ ls -l
total 92
lrwxrwxrwx 1 oracle dba 30 Jun 25 17:47 initV920A.ora -> initV920.ora
lrwxrwxrwx 1 oracle dba 12 Jun 25 17:48 initV920B.ora -> initV920.ora
-rw-r--r-- 1 oracle dba 3372 Jun 26 17:23 initV920.ora
-rwxr-xr-x 1 oracle dba 0 Jun 26 17:58 srvConfig.dbf

Initialise the Server Configuration File once from either node e.g.:
[oracle@V920A@arachnid /u01/app/oracle/product/9.2.0/dbs]$ srvconfig -f -init
21. Start the Global Services Daemon
Start the Global Services Daemon (GSD) as the oracle user using the gsdctl utility e.g.:
[oracle@V920A@arachnid /home/oracle]$ $ORACLE_HOME/bin/gsdctl start

Ensure the gsd services are running e.g.:
[oracle@V920A@arachnid /home/oracle]$ ps -fu oracle
UID PID PPID C STIME TTY TIME CMD
...
oracle 14851 1881 0 14:33 pts/1 00:00:00 /bin/sh ./gsdctl start
oracle 14853 14851 0 14:33 pts/1 00:00:00 /u01/app/jre/1.1.8/bin/../bin/i6
oracle 14863 14853 0 14:33 pts/1 00:00:00 /u01/app/jre/1.1.8/bin/../bin/i6
oracle 14864 14863 0 14:33 pts/1 00:00:00 /u01/app/jre/1.1.8/bin/../bin/i6
oracle 14865 14863 0 14:33 pts/1 00:00:00 /u01/app/jre/1.1.8/bin/../bin/i6
oracle 14866 14863 0 14:33 pts/1 00:00:00 /u01/app/jre/1.1.8/bin/../bin/i6
oracle 14872 14863 0 14:33 pts/1 00:00:00 /u01/app/jre/1.1.8/bin/../bin/i6
oracle 15079 15025 0 14:39 pts/3 00:00:00 ps -fu oracle
...
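gsdctl can also report its status directly e.g.:
[oracle@V920A@arachnid /home/oracle]$ $ORACLE_HOME/bin/gsdctl stat
GSD is running on the local node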
22. Create a Standalone Database
Create a database manually or use the Database Configuration Assistant (?/bin/dbca). If using DBCA to create a database, due to known DBCA issues, select 'Oracle Single Instance Database' and not 'Oracle Clustered Database'. For greater control and future reuse, use DBCA to generate the database creation scripts. Doing so allows you to modify the scripts to increase the default values for MAXINSTANCES, MAXLOGFILES and MAXDATAFILES.
Note: if Oracle Cluster Manager and the Global Services Daemon are stopped before running DBCA, DBCA will still start, but you will only be presented with the option to create a standalone database.
Run dbca as the oracle user e.g.:
[oracle@V920A@arachnid /home/oracle]$ $ORACLE_HOME/bin/dbca&

At the Welcome screen, select Oracle single instance database, then click Next.
At Step 1 of 8: Operations, select Create a Database, then click Next.
At Step 2 of 8: Database Templates, select a database type then click Next. The Includes Datafiles column denotes whether a pre-configured seed database will be used, or whether to create a new database/files afresh. In this example, New Database was selected.
At Step 3 of 8: Database Identification, enter the Global Database Name (V920 in this example) and SID (V920A), then click Next.
At Step 4 of 8: Database Features, select required features from the Database Features tab, then click Next. In this case, Oracle UltraSearch and Example Schemas were not selected.
At Step 5 of 8: Database Connection Options, select either Dedicated or Shared Server (formerly MTS) Mode, then click Next. In this case, Dedicated Server was selected.
At Step 6 of 8: Initialization Parameter, select or modify the various instance parameters from the Memory, Character Sets, DB Sizing, File Locations and Archive tabs, then click Next. In this case the following configuration was specified:
Memory:
  Custom
    Shared Pool: 100 Mb
    Buffer cache: 26 Mb
    Java Pool: 100 Mb
    Large Pool: 10 Mb
    PGA: 24 Mb
Character Sets:
  Database Character Set: Use the default (WE8ISO8859P1)
  National Character Set: AL16UTF16
DB Sizing:
  Block Size: 8 Kb
  Sort Area Size: 524288 bytes
File Locations:
  Initialization Parameter Filename: /u01/app/oracle/product/9.2.0/dbs/initV920A.ora
  Create server parameter: Not selected
  Trace File Directories:
    For User Processes: /u01/admin/{DB_NAME}/udump
    For Background Process: /u01/admin/{DB_NAME}/bdump
    For Core Dumps: /u01/admin/{DB_NAME}/cdump
Archive:
  Archive Log Mode: Disabled

At Step 7 of 8: Database Storage, explode each of the database object types, specify their desired locations, then click Next. Ensure that all database files reside on the OCFS partition(s) you formatted earlier in step 15 (/cfs01 in this case).
Storage:
  Controlfile:
    control01.ctl: /cfs01/oradata/{DB_NAME}/
    control02.ctl: /cfs01/oradata/{DB_NAME}/
    control03.ctl: /cfs01/oradata/{DB_NAME}/
Tablespaces:
  Default values selected
Datafiles:
  /cfs01/oradata/{DB_NAME}/drsys01.dbf: Size 10 Mb
  /cfs01/oradata/{DB_NAME}/indx01.dbf: Size 10 Mb
  /cfs01/oradata/{DB_NAME}/system01.dbf: Size 250 Mb
  /cfs01/oradata/{DB_NAME}/temp01.dbf: Size 20 Mb
  /cfs01/oradata/{DB_NAME}/tools01.dbf: Size 10 Mb
  /cfs01/oradata/{DB_NAME}/undotbs1_01.dbf: Size 200 Mb
  /cfs01/oradata/{DB_NAME}/users01.dbf: Size 10 Mb
  /cfs01/oradata/{DB_NAME}/xdb01.dbf: Size 10 Mb
Redo Log Groups:
  Group 1: /cfs01/oradata/{DB_NAME}/redo01.log Size 1024 Kb
  Group 2: /cfs01/oradata/{DB_NAME}/redo02.log Size 1024 Kb
  Group 3: /cfs01/oradata/{DB_NAME}/redo03.log Size 1024 Kb

At step 8 of 8: Creation Operations, select either Create Database and/or Generate Database Creation Scripts to review and create a database at a later time, then click Finish. In this example, both Create Database and Generate Database Creation Scripts options were selected.
23. Convert the Standalone Database to a Clustered Database
The following steps are based on <Note:208375.1>.
a. Make a full database backup before you change anything.
b. Copy the existing $ORACLE_HOME/dbs/init<SID1>.ora to $ORACLE_HOME/dbs/init<db_name>.ora e.g.:
[oracle@V920A@arachnid /]$ cp $ORACLE_HOME/dbs/initV920A.ora $ORACLE_HOME/dbs/initV920.ora

c. Add the following cluster database parameters to $ORACLE_HOME/dbs/init<db_name>.ora e.g.:
[/u01/app/oracle/product/9.2.0/dbs/initV920.ora]
*.cluster_database = TRUE
*.cluster_database_instances = 4
V920A.instance_name = V920A
V920B.instance_name = V920B
V920A.instance_number = 1
V920B.instance_number = 2
*.service_names = "V920"
V920A.thread = 1
V920B.thread = 2
V920A.local_listener="(address=(protocol=tcp)(host=arachnid)(port=1521))"
V920A.remote_listener="(address=(protocol=tcp)(host=arachnid)(port=1522))"
V920B.local_listener="(address=(protocol=tcp)(host=arachnid)(port=1522))"
V920B.remote_listener="(address=(protocol=tcp)(host=arachnid)(port=1521))"
V920A.undo_tablespace=UNDOTBS1
V920B.undo_tablespace=UNDOTBS2
...

Note: Parameters prefixed with V920A. apply only to instance V920A. Those prefixed with V920B. apply only to instance V920B. Those prefixed with *. apply to all instances (V920A and V920B in this case).
d. Modify the original $ORACLE_HOME/dbs/init<SID1>.ora file to include the following line e.g.:
[/u01/app/oracle/product/9.2.0/dbs/initV920A.ora]
ifile=/u01/app/oracle/product/9.2.0/dbs/initV920.ora

In preparation to create the second database instance, create a second $ORACLE_HOME/dbs/init<SID2>.ora file that points to the $ORACLE_HOME/dbs/init<DB_NAME>.ora file e.g.:
[/u01/app/oracle/product/9.2.0/dbs/initV920B.ora]
ifile=/u01/app/oracle/product/9.2.0/dbs/initV920.ora

Alternatively, create only the $ORACLE_HOME/dbs/init<DB_NAME>.ora shared file containing the cluster parameters, then create two symbolic links in the $ORACLE_HOME/dbs directory, called init<SID1>.ora and init<SID2>.ora, pointing to the $ORACLE_HOME/dbs/init<DB_NAME>.ora file e.g.:
[oracle@V920A@arachnid /u01/app/oracle/product/9.2.0/dbs]$ ll
total 36
...
lrwxrwxrwx 1 oracle dba 30 Jun 25 17:47 initV920A.ora -> initV920.ora
lrwxrwxrwx 1 oracle dba 12 Jun 25 17:48 initV920B.ora -> initV920.ora
-rw-r--r-- 1 oracle dba 3368 Jun 25 15:11 initV920.ora

Restart Oracle Cluster Manager and Global Services Daemon if you stopped them previously.
Restart the first instance (V920A) for the new cluster parameters to take effect.
e. Open the database, then run $ORACLE_HOME/rdbms/admin/catclust.sql (formerly catparr.sql) as sys to create cluster-specific views e.g.:
SQL> @?/rdbms/admin/catclust
Note: as this creates the necessary database views, the script need only be run from one instance.
f. If you created a single instance database using DBCA/scripts without modifying MAXINSTANCES, MAXLOGFILES, etc., recreate the controlfile and modify these parameters accordingly. This step is discussed in <Note:1012929.6>.
g. From the first instance (V920A), mount the database then create additional redologs for the second instance thread e.g.:
SQL> alter database add logfile thread 2
2 group 4 ('/cfs01/oradata/V920/redo04.log') size 10240K,
3 group 5 ('/cfs01/oradata/V920/redo05.log') size 10240K,
4 group 6 ('/cfs01/oradata/V920/redo06.log') size 10240K;
Database altered.
SQL> alter database enable public thread 2;
Database altered.

h. Create a second Undo Tablespace from the first instance (V920A) e.g.:
SQL> create undo tablespace undotbs2 datafile
2 '/cfs01/oradata/V920/undotbs2_01.dbf' size 200m;
Tablespace created.

i. From a new telnet session, source (.) the second instance's environment script to set the ORACLE_SID to that of the second instance e.g.:
[oracle@V920A@arachnid /u01/app/oracle/product/9.2.0]$ cd
[oracle@V920A@arachnid /home/oracle]$ . V920B
[oracle@V920B@arachnid /u01/app/oracle/product/9.2.0]$

j. Start the second instance by connecting with SQL*Plus and issuing startup e.g.:
[oracle@V920B@arachnid /]$ sqlplus "/ as sysdba"
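SQL> startup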

k. From either instance, check that both redo threads are active i.e.:
SQL> select THREAD#,STATUS,ENABLED from gv$thread;
THREAD# STATUS ENABLED
---------- ------ --------
1 OPEN PUBLIC
2 OPEN PUBLIC
1 OPEN PUBLIC
2 OPEN PUBLIC
24. Create and start two Oracle Net Listeners
The Oracle Network Manager (netmgr) runs by default after an Oracle Server installation. If you did not create an Oracle Net Listener earlier, create two Listeners (one for each instance), making sure to use the TCP/IP ports specified by the LOCAL_LISTENER and REMOTE_LISTENER parameters in step 23c e.g.:
[/u01/app/oracle/product/9.2.0/network/admin/listener.ora]
1 =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS_LIST =
        (ADDRESS = (PROTOCOL = TCP)(HOST = arachnid)(PORT = 1521))
      )
    )
  )
2 =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS_LIST =
        (ADDRESS = (PROTOCOL = TCP)(HOST = arachnid)(PORT = 1522))
      )
    )
  )

Start both Listeners i.e.:
[oracle@V920A@arachnid /]$ lsnrctl start 1
LSNRCTL for Linux: Version 9.2.0.3.0 - Production on 27-JUN-2003 13:42:40
Copyright (c) 1991, 2002, Oracle Corporation. All rights reserved.
Starting /u01/app/oracle/product/9.2.0/bin/tnslsnr: please wait...
TNSLSNR for Linux: Version 9.2.0.3.0 - Production
System parameter file is /u01/app/oracle/product/9.2.0/network/admin/listener.ora
Log messages written to /u01/app/oracle/product/9.2.0/network/log/1.log
Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=arachnid.au.oracle.com)(PORT=1521)))
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=arachnid)(PORT=1521)))
STATUS of the LISTENER
------------------------
Alias 1
Version TNSLSNR for Linux: Version 9.2.0.3.0 - Production
Start Date 27-JUN-2003 13:42:40
Uptime 0 days 0 hr. 0 min. 0 sec
Trace Level off
Security OFF
SNMP OFF
Listener Parameter File /u01/app/oracle/product/9.2.0/network/admin/listener.ora
Listener Log File /u01/app/oracle/product/9.2.0/network/log/1.log
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=arachnid.au.oracle.com)(PORT=1521)))
The listener supports no services
The command completed successfully
[oracle@V920A@arachnid /]$
  
[oracle@V920A@arachnid /]$ lsnrctl start 2
LSNRCTL for Linux: Version 9.2.0.3.0 - Production on 27-JUN-2003 13:42:59
Copyright (c) 1991, 2002, Oracle Corporation. All rights reserved.
Starting /u01/app/oracle/product/9.2.0/bin/tnslsnr: please wait...
TNSLSNR for Linux: Version 9.2.0.3.0 - Production
System parameter file is /u01/app/oracle/product/9.2.0/network/admin/listener.ora
Log messages written to /u01/app/oracle/product/9.2.0/network/log/2.log
Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=arachnid.au.oracle.com)(PORT=1522)))
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=arachnid)(PORT=1522)))
STATUS of the LISTENER
------------------------
Alias 2
Version TNSLSNR for Linux: Version 9.2.0.3.0 - Production
Start Date 27-JUN-2003 13:42:59
Uptime 0 days 0 hr. 0 min. 0 sec
Trace Level off
Security OFF
SNMP OFF
Listener Parameter File /u01/app/oracle/product/9.2.0/network/admin/listener.ora
Listener Log File /u01/app/oracle/product/9.2.0/network/log/2.log
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=arachnid.au.oracle.com)(PORT=1522)))
The listener supports no services
The command completed successfully
[oracle@V920A@arachnid /]$

Because Automatic Service Registration is configured, each instance (if already started before the Listeners) will automatically register with both the local and remote Listeners after about one minute, thereby implementing server-side Listener load balancing i.e.:
[oracle@V920A@arachnid /]$ lsnrctl serv 1
LSNRCTL for Linux: Version 9.2.0.3.0 - Production on 27-JUN-2003 13:45:46
Copyright (c) 1991, 2002, Oracle Corporation. All rights reserved.
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=arachnid)(PORT=1521)))
Services Summary...
Service "V920.au.oracle.com" has 2 instance(s).
  Instance "V920A", status READY, has 1 handler(s) for this service...
    Handler(s):
      "DEDICATED" established:0 refused:0 state:ready
         LOCAL SERVER
  Instance "V920B", status READY, has 1 handler(s) for this service...
    Handler(s):
      "DEDICATED" established:0 refused:0 state:ready
         REMOTE SERVER
         (address=(protocol=tcp)(host=arachnid)(port=1522))
The command completed successfully
[oracle@V920A@arachnid /]$
  
[oracle@V920A@arachnid /]$ lsnrctl serv 2
LSNRCTL for Linux: Version 9.2.0.3.0 - Production on 27-JUN-2003 13:46:05
Copyright (c) 1991, 2002, Oracle Corporation. All rights reserved.
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=arachnid)(PORT=1522)))
Services Summary...
Service "V920.au.oracle.com" has 2 instance(s).
  Instance "V920A", status READY, has 1 handler(s) for this service...
    Handler(s):
      "DEDICATED" established:0 refused:0 state:ready
         REMOTE SERVER
         (address=(protocol=tcp)(host=arachnid)(port=1521))
  Instance "V920B", status READY, has 1 handler(s) for this service...
    Handler(s):
      "DEDICATED" established:0 refused:0 state:ready
         LOCAL SERVER
The command completed successfully
[oracle@V920A@arachnid /]$
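If an instance was started after the Listeners and has not yet registered, registration can be forced immediately from that instance (rather than waiting for the next PMON registration cycle) e.g.:
SQL> alter system register;
System altered.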

You should now have a 2 instance RAC database running on a single node using OCFS.

#6 | Posted 2004-10-22 10:58


Solved, haha!
ps -ef|grep oracm
root      2298     1  0 21:16 pts/0    00:00:00 /opt/oracle/product/9.2.0/oracm/
root      2300  2298  0 21:16 pts/0    00:00:00 /opt/oracle/product/9.2.0/oracm/
root      2301  2300  0 21:16 pts/0    00:00:00 /opt/oracle/product/9.2.0/oracm/
root      2302  2300  0 21:16 pts/0    00:00:00 /opt/oracle/product/9.2.0/oracm/
root      2303  2300  0 21:16 pts/0    00:00:00 /opt/oracle/product/9.2.0/oracm/
root      2304  2300  0 21:16 pts/0    00:00:00 /opt/oracle/product/9.2.0/oracm/
root      2305  2300  0 21:16 pts/0    00:00:00 /opt/oracle/product/9.2.0/oracm/
root      2306  2300  0 21:16 pts/0    00:00:00 /opt/oracle/product/9.2.0/oracm/
root      2307  2300  0 21:16 pts/0    00:00:00 /opt/oracle/product/9.2.0/oracm/