ChinaUnix forum thread, started by zdygk

[Beginner's Guide] step by step install Oracle RAC on AIX (repost)

#11 | posted 2003-06-20 17:03

Heh, it's from metalink.oracle.com.

#12 | posted 2003-06-23 13:37

Since you have installed RAC on HP-UX as well, could you write that up too and post it for us to learn from!!!

#13 | posted 2003-06-23 13:54

[quote]Originally posted by "zhoulm": Since you have installed RAC on HP-UX as well, could you write that up too and post it for us to learn from!!![/quote]

This is a Metalink document, though it is rather hard to find there. I have already posted the HP-UX one in the corresponding forum.

#14 | posted 2003-07-17 18:32

Is there one for Linux?

#15 | posted 2003-07-17 18:51

Step-By-Step Installation of RAC on Linux

Purpose
This document will provide the reader with step-by-step instructions on how to install a cluster, install Oracle Real Application Clusters (RAC) and start a cluster database on Linux. For additional explanation or information on any of these steps, please see the references listed at the end of this document.
1. Configuring the Cluster Hardware
1.1 Minimal Hardware List / System Requirements
1.1.1 Hardware
1.1.2 Software
1.2 Installing the Shared Disk Subsystem
1.3 Installing Cluster Interconnect and Public Network Hardware
1.4 Configuring the Interconnect
1.5 Viewing and Setting the Networking Parameters

2. Creating a Cluster
2.1 Cluster Software Installation
2.2 Configuring the Kernel for Real Application Cluster Support
3. Preparing for the Installation of RAC
3.1 Configure the Shared Disks and UNIX Preinstallation Tasks
3.1.1 UNIX Preinstallation Steps
3.1.2 Configure the shared disks
3.2 Using the Oracle Universal Installer for Real Application Clusters
3.3 Configure the Cluster Manager, Node Monitor and SRVCTL configuration file
3.4 Configuring the Listeners
3.5 Create a RAC Database using the Oracle Database Configuration Assistant
4. Administering Real Application Clusters Instances
5. References


1. Configuring the Cluster Hardware
1.1 Minimal Hardware List / System Requirements
Please check the RAC/Linux certification matrix for information on currently supported hardware/software.
1.1.1 Hardware
·        Requirements:
o        Refer to the RAC/Linux certification matrix for information on supported configurations.
1.1.2 Software
·        Certified platforms:
o        Certified distributions and configurations are documented here.
o        Make sure the make and rsh-server packages are installed; check with:
$ rpm -q rsh-server make
rsh-server-0.17-5
make-3.79.1-8
If these are not installed, use your favorite package manager to install them.
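For example, on a distribution that ships them as RPM packages, they could be installed straight from the installation media (the package file names below are illustrative and will differ per distribution):
# rpm -ivh make-3.79.1-8.i386.rpm rsh-server-0.17-5.i386.rpm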
1.2 Installing the Shared Disk Subsystem
This is highly dependent on the subsystem you have chosen; please refer to your hardware documentation for installation and configuration instructions on Linux. Additional drivers and patches might be required. In this article we assume that the shared disk subsystem is correctly installed and that the shared disks are visible to all nodes in the cluster. Real Application Clusters requires the use of raw devices on Linux, so you will need to partition your disks carefully into adequately sized partitions. Using LVM (Logical Volume Manager) makes the management of raw devices considerably more flexible.
In this article we will use LVM logical volumes. The only difference when using disk partitions instead is in the binding of the raw devices: you bind the partitions (ex: /sbin/raw /dev/raw/raw1 /dev/sda1) instead of the logical volumes (ex: /sbin/raw /dev/raw/raw1 /dev/oracle/db_name_raw_system_251m).
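As a sketch of how such logical volumes could be created with LVM (the physical device and volume group names are examples only; sizes follow the table in section 3.1.2):
# pvcreate /dev/sda1
# vgcreate oracle /dev/sda1
# lvcreate -L 411m -n db_name_raw_system_411m oracle
# lvcreate -L 26m -n db_name_raw_users_26m oracle
(repeat lvcreate for the remaining volumes listed in section 3.1.2)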
1.3 Installing Cluster Interconnect and Public Network Hardware
·        If not already installed, install host adapters in your cluster nodes. For the procedure on installing host adapters, see the documentation that shipped with your host adapters and node hardware.
1.4 Configuring the Interconnect
Each system will have at least one IP address for the public network and one for the private cluster interconnect. For the public network, get the addresses from your network administrator. For the private interconnect, use 1.1.1.1 and 1.1.1.2 for the first and second node. Make sure to add all addresses to /etc/hosts on all nodes.
  
ex:
9.25.120.143    rac1        #Oracle 9i Rac node 1 - public network
9.25.120.144    rac2        #Oracle 9i Rac node 2 - public network
1.1.1.1         int-rac1    #Oracle 9i Rac node 1 - interconnect
1.1.1.2         int-rac2    #Oracle 9i Rac node 2 - interconnect

Use your favorite tool to configure these adapters (ex: YaST2 on SuSE Linux). Make sure your public network is on the primary interface (eth0).
1.5 Viewing and Setting the Networking Parameters

Interprocess communication is important for RAC because Cache Fusion transfers buffers between instances using this mechanism, so networking parameters matter for RAC databases. The values in the following table are the defaults on most distributions and should be fine for most configurations.
Parameter                          Meaning                                              Value
/proc/sys/net/core/rmem_default    Default size in bytes of the socket receive buffer   65535
/proc/sys/net/core/rmem_max        Maximum socket receive buffer size in bytes          65535
/proc/sys/net/core/wmem_default    Default size in bytes of the socket send buffer      65535
/proc/sys/net/core/wmem_max        Maximum socket send buffer size in bytes             65535

You can see these settings with:
$ cat /proc/sys/net/core/rmem_default
Change them with:
$ echo 65535 > /proc/sys/net/core/rmem_default
This must be done each time the system boots. Some distributions already provide a mechanism for this during boot; on Red Hat, it can be configured in /etc/sysctl.conf (ex: net.core.rmem_default = 65535).
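For reference, an /etc/sysctl.conf fragment covering all four parameters from the table above would read:
net.core.rmem_default = 65535
net.core.rmem_max = 65535
net.core.wmem_default = 65535
net.core.wmem_max = 65535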

2. Creating a Cluster
2.1 Cluster Software Installation
On Linux, the cluster software required to run Real Application Clusters is included in the Oracle distribution. There is a difference between Oracle V9.0 and V9.2 as indicated in the relevant sections.
2.2 Configuring the Kernel for Real Application Cluster Support
When the kernel sources are installed, you can check support for
·        Raw devices:
    $ ls /usr/src/linux/drivers/char/raw.c
  
·        Watchdog support:
o        Make sure the watchdog device exists.
$ ls -l /dev/watchdog
crw-------    1 oracle   root      10, 130 Sep 24  2001 /dev/watchdog
o        If it does not exist, issue the following command as root:
# mknod /dev/watchdog c 10 130
o        Check for support in the kernel
$ grep WATCHDOG /usr/src/linux/.config
CONFIG_WATCHDOG=y
CONFIG_WATCHDOG_NOWAYOUT=y
CONFIG_SOFT_WATCHDOG=m
When watchdog support is not configured, refer to Linux/Watchdog Configuration Steps or contact your operating system support organization.
When the kernel is correctly configured for watchdog support as a loadable module, this module needs to be loaded; put the following command in your startup scripts as well. I would recommend grouping all actions to be performed as root at system startup in a file called startoracle_root.sh, with the following command as its first entry:
/sbin/insmod softdog soft_margin=60
Note: On some distributions, parameter settings can also be placed in /etc/sysctl.conf instead of in startoracle_root.sh, as described in 1.5 Viewing and Setting the Networking Parameters.
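A minimal skeleton for startoracle_root.sh as suggested above might start out like this; the kernel parameters and raw device bindings from later sections get appended to the same file:
#!/bin/sh
# startoracle_root.sh - root-only setup at system startup
# load the software watchdog module with a 60 second margin
/sbin/insmod softdog soft_margin=60
# kernel parameters (section 3.1.1) and raw device bindings and
# permissions (section 3.1.2) are appended in later steps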

3. Preparing for the Installation of RAC
The Real Application Clusters installation process includes four major tasks.
1.        Configure the shared disks and UNIX preinstallation tasks.
2.        Run the Oracle Universal Installer to install the Oracle9i Enterprise Edition and the Oracle9i Real Application Clusters software.
3.        Configure the cluster manager and node monitor (V9.0.1 only).
4.        Create and configure your database.
3.1 Configure the Shared Disks and UNIX Preinstallation Tasks
3.1.1 UNIX Preinstallation Steps
Add the Oracle USER
Depending on your Linux operating system vendor (ex: SuSE), this may have been preconfigured.
·        Make sure you have an osdba group defined in the /etc/group file on all nodes of your cluster. The osdba group name and group number, and the osoper group, that you designate during installation must be identical on all nodes of your Linux cluster that will be part of the Real Application Clusters database. The default UNIX group name for the osdba and osoper groups is dba. A typical entry would therefore look like the following:
  
dba::101:oracle
oinstall::102:root,oracle
·        Create an oracle account on each node so that the account:
o        Is a member of the osdba group
o        Is used only to install and update Oracle software
A typical command would look like the following:
# useradd -c "Oracle software owner" -G dba,oinstall -u 101 -m -d /export/home/oracle -s /bin/ksh oracle
  
or use a graphical interface like YaST2.
·        Create a mount point directory on each node to serve as the top of your Oracle software directory structure so that:
o        The name of the mount point on each node is identical to that on the initial node
o        The oracle account has read, write, and execute privileges
We used /oracle in our example.
$ mkdir /oracle
$ chown -R oracle.oinstall /oracle
$ chmod -R ug=rwx,o=rx /oracle
·        Set the correct permission and ownership for the watchdog device:
$ chmod 600 /dev/watchdog
$ chown oracle /dev/watchdog
·        Depending on your Linux distribution, make sure inetd or xinetd is started on all nodes and that the ftp, telnet, shell and login (or rsh) services are enabled (see /etc/inetd.conf or /etc/xinetd.conf and /etc/xinetd.d). For SuSE Linux, you can use YaST2 network configuration.
·        On the node from which you will run the Oracle Universal Installer, set up user equivalence by adding entries for all nodes in the cluster, including the local node, to the .rhosts file of the oracle account, or the /etc/hosts.equiv file.
  
Sample entries in /etc/hosts.equiv file:
rac1
rac2
int-rac1
int-rac2

  
·        As the oracle user, check user equivalence for the oracle account by performing a remote login (rlogin) to each node (public and private) in the cluster.
  
Note: If you are prompted for a password, you have not given the oracle account the same attributes on all nodes. You must correct this because the Oracle Universal Installer cannot use the rcp command to copy Oracle products to the remote node's directories without user equivalence.
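A quick way to exercise equivalence for all entries at once (node names from the example above; rsh, like the rcp used by the OUI, must succeed without a password prompt):
$ for n in rac1 rac2 int-rac1 int-rac2; do rsh $n hostname; done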
System Kernel Parameters
Verify operating system kernel parameters are set to appropriate levels:
Note: The parameters listed here are for a single instance with default parameter settings. You might have to tune some of these parameters for customized databases.
  
Kernel Parameter   Setting             Purpose
SHMMAX             Physical memory/2   Maximum allowable size of one shared memory segment; should be half the size of the physical memory.
SHMMIN             1                   Minimum allowable size of a single shared memory segment.
SEMMNI             1024                Maximum number of semaphore sets in the entire system.
SEMMSL             100                 Minimum recommended value. SEMMSL should be 10 plus the largest PROCESSES parameter of any Oracle database on the system.
SEMMNS             1024                Maximum semaphores on the system. This is a minimum recommended value; SEMMNS should be the sum of the PROCESSES parameters of all Oracle databases, plus the largest one counted twice, plus 10 per database.
SEMOPM             100                 Maximum number of operations per semop call.
SEMVMX             32767               Maximum value of a semaphore.
(swap space)       1 GB                Two to four times your system's physical memory size.
You will have to set the correct parameters during system startup, so include them in your startup script (startoracle_root.sh):
$ export SEMMSL=100
$ export SEMMNS=1024
$ export SEMOPM=100
$ export SEMMNI=100
$ echo $SEMMSL $SEMMNS $SEMOPM $SEMMNI > /proc/sys/kernel/sem
$ export SHMMAX=2147483648
$ echo $SHMMAX > /proc/sys/kernel/shmmax
Check these with:
$ cat /proc/sys/kernel/sem
$ cat /proc/sys/kernel/shmmax
You might want to increase the maximum number of file handles; include this in your startup script or use /etc/sysctl.conf:
$ echo 65536 > /proc/sys/fs/file-max
To allow your oracle processes to use these file handles, add the following to the oracle account's login script (ex: .profile):
$ ulimit -n 65536
Note: This will only allow you to set the soft limit as high as the hard limit. You might have to increase the hard limit at system level, which can be done by adding ulimit -Hn 65536 to /etc/initscript. You will have to reboot the system for this to take effect.
Sample /etc/initscript:
ulimit -Hn 65536
eval exec "$4"

Establish system environment variables
·        Set a local bin directory, such as /usr/local/bin or /opt/bin, in the user's PATH. You need execute permission on this directory.
·        Set the DISPLAY variable to point to the IP address or name, X server, and screen of the system from which you will run the OUI.
·        Set TMPDIR to a temporary directory path with at least 100 MB of free space to which the OUI has write permission.
Establish Oracle environment variables: set the following Oracle environment variables (ex: in .profile, depending on the shell used by oracle):
  
  
Environment Variable    Suggested Value
ORACLE_BASE             e.g. /oracle
ORACLE_HOME             e.g. /oracle/product/901
ORACLE_TERM             xterm
NLS_LANG                AMERICAN_AMERICA.UTF8, for example
ORA_NLS33               $ORACLE_HOME/ocommon/nls/admin/data
PATH                    Should contain $ORACLE_HOME/bin
LD_LIBRARY_PATH         Should contain $ORACLE_HOME/lib:$ORACLE_HOME/oracm/lib
CLASSPATH               $ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib:$ORACLE_HOME/network/jlib
THREADS_FLAG            native
·        Make sure you unset LANG, JRE_HOME and JAVA_HOME in your .profile (a combined .profile sketch follows this list).
·        Create the directory /var/opt/oracle and set ownership to the oracle user.
$ mkdir /var/opt/oracle
$ chown oracle.oinstall /var/opt/oracle
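Putting the items above together, a sketch of the oracle user's .profile using the values from the table (adjust paths to your installation):
export ORACLE_BASE=/oracle
export ORACLE_HOME=$ORACLE_BASE/product/901
export ORACLE_TERM=xterm
export NLS_LANG=AMERICAN_AMERICA.UTF8
export ORA_NLS33=$ORACLE_HOME/ocommon/nls/admin/data
export PATH=$ORACLE_HOME/bin:/usr/local/bin:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:$ORACLE_HOME/oracm/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib:$ORACLE_HOME/network/jlib
export THREADS_FLAG=native
ulimit -n 65536
unset LANG JRE_HOME JAVA_HOME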
3.1.2 Configure the shared disks
Real Application Clusters requires that all instances be able to access a set of unformatted devices on a shared disk subsystem. These shared disks are also referred to as raw devices. If your platform supports an Oracle-certified cluster file system, however, you can store the files that Real Application Clusters requires directly on the cluster file system.
The Oracle instances in Real Application Clusters write data onto the raw devices to update the control file, server parameter file, each datafile, and each redo log file. All instances in the cluster share these files. The Oracle-provided node monitor also needs a raw device shared by all nodes.
The Oracle instances in the RAC configuration write information to raw devices defined for:
·        The control file
·        The spfile.ora
·        Each datafile
·        Each ONLINE redo log file
·        Server Manager (SRVM) configuration information

It is therefore necessary to define raw devices for each of these categories of files. The Oracle Database Configuration Assistant (DBCA) will create a seed database expecting the following configuration (replace db_name with the actual name of your database):

Note: the logical volumes should be bigger than the Oracle datafiles they hold. In the following table, Datafile Size indicates the size used for the Oracle datafiles; in this article we created the logical volumes or disk partitions 1 MB bigger (as indicated by the sample filenames). The sizes given here match the defaults dbca uses in V9.2+, except for the redo logfiles; the default sizes are mostly the same for V9.0.1. You are strongly encouraged to adjust the sizes to your needs. After the database creation, you can allow the datafiles to auto-extend to the sizes you need.
  
Raw Volume                          Datafile Size V9.2   Sample File Name (for V9.2)                         Datafile Size V9.0
SYSTEM tablespace                   410 MB               /dev/oracle/db_name_raw_system_411m                 250 MB
USERS tablespace                    25 MB                /dev/oracle/db_name_raw_users_26m                   25 MB
TEMP tablespace                     40 MB                /dev/oracle/db_name_raw_temp_41m                    40 MB
UNDOTBS tablespace per instance     200 MB               /dev/oracle/db_name_raw_undotbsx_201m               200 MB
CWMLITE tablespace                  20 MB                /dev/oracle/db_name_raw_cwmlite_21m                 ---
EXAMPLE tablespace                  140 MB               /dev/oracle/db_name_raw_example_141m                ---
OEMREPO tablespace                  20 MB                /dev/oracle/db_name_raw_oemrepo_21m                 ---
INDX tablespace                     25 MB                /dev/oracle/db_name_raw_indx_26m                    25 MB
TOOLS tablespace                    10 MB                /dev/oracle/db_name_raw_tools_11m                   10 MB
DRSYS tablespace                    20 MB                /dev/oracle/db_name_raw_drsys_21m                   ---
ODM tablespace                      20 MB                /dev/oracle/db_name_raw_odm_21m                     ---
XDB tablespace                      40 MB                /dev/oracle/db_name_raw_xdb_41m                     ---
First control file                  110 MB               /dev/oracle/db_name_raw_controlfile1_110m           110 MB
Second control file                 110 MB               /dev/oracle/db_name_raw_controlfile2_110m           110 MB
Third control file                  110 MB               /dev/oracle/db_name_raw_controlfile3_110m           ---
Two ONLINE redo logs per instance   120 MB x 2           /dev/oracle/db_name_raw_rdo_thread_lognumber_121m   120 MB x 2
spfile.ora                          5 MB                 /dev/oracle/db_name_raw_spfile_5m                   5 MB
srvmconfig                          104 MB               /dev/oracle/srvctl_raw_104m                         104 MB
node monitor / quorum disk          1 MB                 /dev/oracle/nm_raw_5m                               1 MB

The node monitor / quorum disk device is not used by DBCA; it is listed here only to complete the overview of raw devices. It is used as the quorum disk in Oracle RAC V9.2+; its minimum size is (4 * number of nodes in the cluster) + 4 KB.
Notes:
·        Automatic Undo Management requires an undo tablespace per instance, so a two-node cluster requires a minimum of 2 such tablespaces as described above. By following the naming convention described in the table above, a logical volume name identifies the database, the volume type (the data contained in the raw volume), and the volume size.
·        In the sample names listed in the table, the string db_name should be replaced with the actual database name, thread is the thread number of the instance, and lognumber is the log number within a thread.
·        The sample commands in the remainder of this article are based on V9.2. The commands are the same for V9.0 but sizes and number of datafiles are different.
·        The srvmconfig file is used to store the configuration for the srvctl utility. Its location is specified in /var/opt/oracle/srvConfig.loc (the default on Linux) or /etc/srvConfig.loc.
·        The node monitor raw device (quorum disk) is used by the cluster software for checking the status of the different nodes in the cluster. Don't confuse this with the srvctl configuration file.

After creating the necessary partitions or logical volumes, you have to bind the raw devices. Check where your raw devices are located: look for /dev/raw1 or /dev/raw/raw1. These bindings have to be recreated at system reboot, so put the commands in your startup script (startoracle_root.sh). They must be executed as root on all nodes.
Note: Make sure you have sufficient raw device nodes. If not, create additional devices as root (xx represents a number):
# mknod /dev/rawxx c 162 xx
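To create all 23 device nodes used below in one pass, a loop such as this (assuming the /dev/rawxx naming used in this article) does the job:
# for i in `seq 1 23`; do mknod /dev/raw$i c 162 $i; done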

The bind commands look like:
/usr/sbin/raw /dev/raw1 /dev/oracle/db_name_raw_system_411m
/usr/sbin/raw /dev/raw2 /dev/oracle/db_name_raw_users_26m
/usr/sbin/raw /dev/raw3 /dev/oracle/db_name_raw_temp_41m
/usr/sbin/raw /dev/raw4 /dev/oracle/db_name_raw_undotbs1_201m
/usr/sbin/raw /dev/raw5 /dev/oracle/db_name_raw_undotbs2_201m
/usr/sbin/raw /dev/raw6 /dev/oracle/db_name_raw_indx_26m
/usr/sbin/raw /dev/raw7 /dev/oracle/db_name_raw_controlfile1_110m
/usr/sbin/raw /dev/raw8 /dev/oracle/db_name_raw_controlfile2_110m
/usr/sbin/raw /dev/raw9 /dev/oracle/db_name_raw_rdo_1_1_121m
/usr/sbin/raw /dev/raw10 /dev/oracle/db_name_raw_rdo_1_2_121m
/usr/sbin/raw /dev/raw11 /dev/oracle/db_name_raw_rdo_2_1_121m
/usr/sbin/raw /dev/raw12 /dev/oracle/db_name_raw_rdo_2_2_121m
/usr/sbin/raw /dev/raw13 /dev/oracle/db_name_raw_spfile_5m
/usr/sbin/raw /dev/raw14 /dev/oracle/nm_raw_5m
/usr/sbin/raw /dev/raw15 /dev/oracle/srvctl_raw
/usr/sbin/raw /dev/raw16 /dev/oracle/db_name_raw_tools_11m
/usr/sbin/raw /dev/raw17 /dev/oracle/db_name_raw_cwmlite_21m
/usr/sbin/raw /dev/raw18 /dev/oracle/db_name_raw_example_141m
/usr/sbin/raw /dev/raw19 /dev/oracle/db_name_raw_oemrepo_21m
/usr/sbin/raw /dev/raw20 /dev/oracle/db_name_raw_drsys_21m
/usr/sbin/raw /dev/raw21 /dev/oracle/db_name_raw_odm_21m
/usr/sbin/raw /dev/raw22 /dev/oracle/db_name_raw_xdb_41m
/usr/sbin/raw /dev/raw23 /dev/oracle/db_name_raw_controlfile3_110m
Also make sure to set the required permissions and ownership on all nodes. Add this to your startup script:
  
/bin/chmod 600 /dev/raw1
/bin/chmod 600 /dev/raw2
/bin/chmod 600 /dev/raw3
/bin/chmod 600 /dev/raw4
/bin/chmod 600 /dev/raw5
/bin/chmod 600 /dev/raw6
/bin/chmod 600 /dev/raw7
/bin/chmod 600 /dev/raw8
/bin/chmod 600 /dev/raw9
/bin/chmod 600 /dev/raw10
/bin/chmod 600 /dev/raw11
/bin/chmod 600 /dev/raw12
/bin/chmod 600 /dev/raw13
/bin/chmod 600 /dev/raw14
/bin/chmod 600 /dev/raw15
/bin/chmod 600 /dev/raw16
/bin/chmod 600 /dev/raw17
/bin/chmod 600 /dev/raw18
/bin/chmod 600 /dev/raw19
/bin/chmod 600 /dev/raw20
/bin/chmod 600 /dev/raw21
/bin/chmod 600 /dev/raw22
/bin/chmod 600 /dev/raw23
/bin/chown oracle.dba /dev/raw1
/bin/chown oracle.dba /dev/raw2
/bin/chown oracle.dba /dev/raw3
/bin/chown oracle.dba /dev/raw4
/bin/chown oracle.dba /dev/raw5
/bin/chown oracle.dba /dev/raw6
/bin/chown oracle.dba /dev/raw7
/bin/chown oracle.dba /dev/raw8
/bin/chown oracle.dba /dev/raw9
/bin/chown oracle.dba /dev/raw10
/bin/chown oracle.dba /dev/raw11
/bin/chown oracle.dba /dev/raw12
/bin/chown oracle.dba /dev/raw13
/bin/chown oracle.dba /dev/raw14
/bin/chown oracle.dba /dev/raw15
/bin/chown oracle.dba /dev/raw16
/bin/chown oracle.dba /dev/raw17
/bin/chown oracle.dba /dev/raw18
/bin/chown oracle.dba /dev/raw19
/bin/chown oracle.dba /dev/raw20
/bin/chown oracle.dba /dev/raw21
/bin/chown oracle.dba /dev/raw22
/bin/chown oracle.dba /dev/raw23
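The 46 lines above can equivalently be collapsed into a short loop in the startup script:
for i in `seq 1 23`
do
   /bin/chmod 600 /dev/raw$i
   /bin/chown oracle.dba /dev/raw$i
done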

You can verify the bindings with:
$ raw -qa
/dev/raw1:  bound to major 3, minor 7
/dev/raw2:  bound to major 3, minor 8
...
Optionally, you can create soft links to the raw devices in your $ORACLE_BASE/oradata/db_name directory. In this example, $ORACLE_BASE is /oracle.
Don't forget to substitute the string db_name with your actual database name, and do this as the oracle user on all nodes. Note, however, that these links add management work whenever you decide to modify the configuration.
$ mkdir /oracle/oradata
$ mkdir /oracle/oradata/db_name
ln -s /dev/raw1 /oracle/oradata/db_name/db_name_raw_system_411m
ln -s /dev/raw2 /oracle/oradata/db_name/db_name_raw_users_26m
ln -s /dev/raw3 /oracle/oradata/db_name/db_name_raw_temp_41m
ln -s /dev/raw4 /oracle/oradata/db_name/db_name_raw_undotbs1_201m
ln -s /dev/raw5 /oracle/oradata/db_name/db_name_raw_undotbs2_201m
ln -s /dev/raw6 /oracle/oradata/db_name/db_name_raw_indx_26m
ln -s /dev/raw7 /oracle/oradata/db_name/db_name_raw_controlfile1_110m
ln -s /dev/raw8 /oracle/oradata/db_name/db_name_raw_controlfile2_110m
ln -s /dev/raw9 /oracle/oradata/db_name/db_name_raw_rdo_1_1_121m
ln -s /dev/raw10 /oracle/oradata/db_name/db_name_raw_rdo_1_2_121m
ln -s /dev/raw11 /oracle/oradata/db_name/db_name_raw_rdo_2_1_121m
ln -s /dev/raw12 /oracle/oradata/db_name/db_name_raw_rdo_2_2_121m
ln -s /dev/raw13 /oracle/oradata/db_name/db_name_raw_spfile_5m
ln -s /dev/raw16 /oracle/oradata/db_name/db_name_raw_tools_11m
ln -s /dev/raw17 /oracle/oradata/db_name/db_name_raw_cwmlite_21m
ln -s /dev/raw18 /oracle/oradata/db_name/db_name_raw_example_141m
ln -s /dev/raw19 /oracle/oradata/db_name/db_name_raw_oemrepo_21m
ln -s /dev/raw20 /oracle/oradata/db_name/db_name_raw_drsys_21m
ln -s /dev/raw21 /oracle/oradata/db_name/db_name_raw_odm_21m
ln -s /dev/raw22 /oracle/oradata/db_name/db_name_raw_xdb_41m
ln -s /dev/raw23 /oracle/oradata/db_name/db_name_raw_controlfile3_110m
On the node from which you run the Oracle Universal Installer, create an ASCII file identifying the raw volume objects as shown above. The DBCA requires that these objects exist during installation and database creation. In the ASCII file, name the objects using the format:
database_object=raw_device_file_path or database_object=soft_link
Separate the database objects from the paths with equals (=) signs, as shown in the example below:
system=/oracle/oradata/db_name/db_name_raw_system_411m
users=/oracle/oradata/db_name/db_name_raw_users_26m
temp=/oracle/oradata/db_name/db_name_raw_temp_41m
undotbs1=/oracle/oradata/db_name/db_name_raw_undotbs1_201m
undotbs2=/oracle/oradata/db_name/db_name_raw_undotbs2_201m
control1=/oracle/oradata/db_name/db_name_raw_controlfile1_110m
control2=/oracle/oradata/db_name/db_name_raw_controlfile2_110m
redo1_1=/oracle/oradata/db_name/db_name_raw_rdo_1_1_121m
redo1_2=/oracle/oradata/db_name/db_name_raw_rdo_1_2_121m
redo2_1=/oracle/oradata/db_name/db_name_raw_rdo_2_1_121m
redo2_2=/oracle/oradata/db_name/db_name_raw_rdo_2_2_121m
spfile=/oracle/oradata/db_name/db_name_raw_spfile_5m
indx=/oracle/oradata/db_name/db_name_raw_indx_26m
tools=/oracle/oradata/db_name/db_name_raw_tools_11m
cwmlite=/oracle/oradata/db_name/db_name_raw_cwmlite_21m
example=/oracle/oradata/db_name/db_name_raw_example_141m
oemrepo=/oracle/oradata/db_name/db_name_raw_oemrepo_21m
drsys=/oracle/oradata/db_name/db_name_raw_drsys_21m
odm=/oracle/oradata/db_name/db_name_raw_odm_21m
xdb=/oracle/oradata/db_name/db_name_raw_xdb_41m
control3=/oracle/oradata/db_name/db_name_raw_controlfile3_110m

You must specify that Oracle should use this file to determine the raw device volume names by setting the following environment variable, where filename is the name of the ASCII file that contains the entries shown in the example above:
setenv DBCA_RAW_CONFIG filename (csh)
or
export DBCA_RAW_CONFIG=filename (sh/ksh/bash)
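Before running DBCA it is worth checking that every path in the mapping file actually exists. A small shell loop such as the following (the file name db_name_raw.conf is just an example) reports any missing entries:
while IFS='=' read object path
do
   [ -e "$path" ] || echo "missing: $object=$path"
done < /oracle/oradata/db_name/db_name_raw.conf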
  
3.2 Using the Oracle Universal Installer for Real Application Clusters
Follow these procedures to use the Oracle Universal Installer to install the Oracle Enterprise Edition and the Real Application Clusters software. Oracle9i is supplied on multiple CD-ROMs, and during the installation process it is necessary to switch between them; the OUI will manage the switching between CDs. For the latest supported configurations, refer to the RAC/Linux certification matrix mentioned in section 1.1.
To install the Oracle Software, perform the following:
·        Mount the first CD as root
·        Allow oracle access to the X server
$ xhost +
·        Login as the oracle user
·        Make sure your display is set
$ export DISPLAY=:0 (or setenv DISPLAY :0)
·        Run the installer: $ <cdrom_mount_point>/runInstaller
  
·        At the OUI Welcome screen, click Next.
·        A prompt will appear for the Inventory Location (if this is the first time that OUI has been run on this system). This is the base directory into which OUI will install files. The Oracle Inventory definition can be found in the file /etc/oraInst.loc. Click OK.
·        Enter oinstall as the UNIX group name of the user who controls the installation of the Oracle9i software. Click Next.
·        An instruction to run /tmp/orainstRoot.sh appears. Run this as root and click Continue.
·        The File Location window will appear. Do NOT change the Source field. The Destination field defaults to the ORACLE_HOME environment variable. Click Next.
For Oracle RAC V9.2+:
·        Check Oracle Cluster Manager. Click Next.
·        On the public node information screen, enter the public node names and click Next.
·        On the private node information screen, enter the interconnect node names. Click Next.
·        Accept the default value (60000) for the watchdog parameter. Click Next.
·        Enter the full name of the raw device you have created for the node monitor for the Quorum disk information. Click Next.
·        Press Install at the summary screen.
·        You will now briefly see a progress window, followed by the End of Installation screen. Click Exit and confirm by clicking Yes.
Note: Create the directory $ORACLE_HOME/oracm/log (as oracle) on the other nodes if it doesn't exist.
·        Start the cluster manager on all nodes as root
$ export ORACLE_HOME=/oracle/product/9.2.0
$ORACLE_HOME/oracm/bin/ocmstart.sh
·        Run the installer again.
·        At the OUI Welcome screen, click Next.
·        Cluster Node Selection Screen. Select nodes. Press Next.
·        At the file location screen, press Next.
·        Select the Products to install. In this example, select the Oracle9i Database then click Next.
·        Select the installation type. Choose the Enterprise Edition option. The selection on this screen refers to the installation operation, not the database configuration. Click Next.
·        Database Configuration. Select General Purpose. Click Next.
·        You are prompted for the pathname of the shared configuration file: enter the pathname of the raw device you used for the srvmconfig file (ex: /dev/raw/raw15) and click Next.
·        The installer now displays the Database Identification page. Enter the Global Database Name and Oracle System Identifier (SID). The Global Database Name is typically of the form name.domain, for example mydb.us.oracle.com, while the SID is used to uniquely identify an instance (DBCA should insert a suggested SID, equivalent to name1, where name was entered in the Database Name field). In the RAC case the SID specified will be used as a prefix for the instance numbers. For example, MYDB would become MYDB1 and MYDB2 for instances 1 and 2 respectively. Click Next.
·        Database Character Set. Select the desired option and click Next.
·        Summary screen. Click Install.
·        Install screen shows the progress.
·        Insert (umount + mount) CD 2 when asked and press OK.
·        Insert (umount + mount) CD 3 when asked and press OK.
·        You will get a popup indicating to run $ORACLE_HOME/root.sh as root.
Note:
·        Make sure the directory $ORACLE_HOME/rdbms/audit exists on all nodes.
·        Make sure the directory $ORACLE_HOME/rdbms/log exists on all nodes.
·        Make sure the directory $ORACLE_HOME/network/log exists on all nodes.
·        Now run $ORACLE_HOME/root.sh as root on all nodes and answer the questions (just press return). Then press OK.
·        The Configuration Tools window appears and starts the Cluster Configuration Assistant, Net Configuration Assistant, Database Configuration Assistant, Agent Configuration Assistant and the HTTP server. This might take a while to complete; check for error messages. Click Next.
·        Database Configuration Assistant: you will get a popup window for password management. Enter the appropriate passwords and press OK.
·        End of Installation screen: press Exit and confirm by clicking Yes.
·        The Enterprise Manager console appears. Click Add selected databases... and click OK.
·        You can perform the Enterprise Manager configuration now. Exit when finished.
For Oracle RAC V9.0.1:
·        Select the Products to install. In this example, select the Oracle9i Database then click Next.
·        Select the installation type. Choose the Custom option. The selection on this screen refers to the installation operation, not the database configuration. Click Next.
·        Select the product components you need. Make sure to include the Real Application Clusters option, then click Next. Depending on your choices, there might be additional screens beyond those mentioned here; we have chosen only the Real Application Clusters and partitioning options.
·        The next screen allows you to specify an alternate path for the OUI. Click Next.
·        You are prompted for the pathname of the shared configuration file: enter the pathname of the raw device you used for the srvmconfig file (ex: /dev/raw15) and click Next.
·        On the cluster nodes selection screen, select all nodes where the software should be installed. Specify the interconnect node names (ex: int-rac2). Click Next.
·        Privileged operating system groups. Click Next.
·        You get the summary screen. Once Install is selected, the OUI installs the Oracle RAC software onto the local node and then copies it to the other nodes selected earlier; this will take some time. You will need to switch to the second and third CD-ROM when asked by the installer (umount and mount as root). During the installation process, the OUI does not display messages indicating that components are being installed on other nodes; I/O activity may be the only indication that the process is continuing.
·        You will get a popup indicating to run $ORACLE_HOME/root.sh as root. Do so and answer the questions (just press return) on all nodes. Then press OK.
·        Now click Exit and confirm the exit by clicking Yes.
·        You can now umount your cdrom as root.
3.3 Configure the Cluster Manager, Node Monitor and SRVCTL configuration file
For Oracle RAC V9.0.1:
By selecting the Real Application Clusters option during the Oracle software installation, the cluster software has already been installed; we still need to configure the node monitor.
·        Create the file $ORACLE_HOME/oracm/admin/nmcfg.ora with the following contents:
CmHostName=int-rac1 (on the other node, you specify int-rac2...)
DefinedNodes=int-rac1 int-rac2
CmDiskFiles=/dev/raw/raw14 (the raw device bound to the node monitor raw device)
Note: Be careful; these parameters are case sensitive.
  
·        Start the cluster manager and the node monitor as root, and add these commands to your startup file (startoracle_root.sh):
$ export ORACLE_HOME=/oracle/product/9.0.1
$ORACLE_HOME/oracm/bin/ocmstart.sh
·        Initialize the raw device used by the srvctl utility (as the oracle user):
$ srvconfig -init
  
·        Start the global services daemon as the oracle user, and also put this command in a startup script run as the oracle user (startoracle.sh):
$ gsd
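A matching startoracle.sh for the oracle user could be as simple as the following sketch (the one-time srvconfig -init step is deliberately not repeated here):
#!/bin/sh
# startoracle.sh - run as the oracle user at system startup
ORACLE_HOME=/oracle/product/9.0.1
PATH=$ORACLE_HOME/bin:$PATH
export ORACLE_HOME PATH
# start the global services daemon in the background
gsd &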
For Oracle RAC V9.2+:
The installation has created the configuration file for you in $ORACLE_HOME/oracm/admin/cmcfg.ora. The file should look like:
HeartBeat=15000
ClusterName=Oracle Cluster Manager, version 9i
PollInterval=1000
MissCount=20
PrivateNodeNames=localhost
PublicNodeNames=bel712
ServicePort=9998
WatchdogSafetyMargin=5000
WatchdogTimerMargin=60000
CmDiskFile=/dev/raw/raw14
HostName=localhost


The global services daemon will already be started, but add the following to your startup script:
            $ gsdctl start
3.4 Configuring the Listeners
For Oracle RAC V9.2+:
The Oracle Universal Installer has already configured this for us; there is no need to do anything here unless you want to configure additional listeners.

For Oracle RAC V9.0.1:
Before creating our database, we will configure the listeners; this avoids errors during the database creation.
·        Make sure the LANG environment variable is not set:
$ unset LANG
  
·        Start the Network Configuration Assistant as the oracle user:
$ netca
§        On the welcome screen, select cluster configuration and click Next.
§        On the nodes screen, select all nodes and click Next.
§        On this screen, select listener configuration and click Next.
§        You get the listener configuration screen. Select add and click Next.
§        Accept the default name listener by clicking Next.
§        This is the protocol selection screen. TCP/IP should already be selected. Add IPC (click IPC, then >) and click Next.
§        Accept the default port number (1521) on the screen by clicking Next.
§        On the IPC configuration screen, enter your database name as key and click Next.
§        Do not configure anything else; exit netca unless you want to use the Oracle Intelligent Agent, in which case you will have to add the service names statically to allow automatic detection of your database by the agent. The instances will register themselves with the listener.
3.5 Create a RAC Database using the Oracle Database Configuration Assistant
For Oracle RAC V9.2.0.1:
Please install Patch 2417903, available via MetaLink; it resolves an issue with slow startup of your instances.
The Oracle Universal Installer has already created a database for you; there is no need to do this unless you want to create additional databases.
For Oracle RAC V9.0.1:
The Oracle Database Configuration Assistant (DBCA) will create a database for you (for an example of manual database creation, see Database Creation in Oracle9i RAC). The DBCA creates your database using the Optimal Flexible Architecture (OFA), meaning it creates your database files, including the default server parameter file, using standard file naming and file placement practices. The primary phases of DBCA processing are:
·        Verify that you correctly configured the shared disks for each tablespace (for non-cluster file system platforms)
·        Create the database
Due to an issue with 9.0.1, using a seed database might fail unless you install Patch Set 3 (V9.0.1.3) first. We will therefore create a new database here; this is slower than using a seed database, but works without installing the patch set.

Oracle Corporation recommends that you use the DBCA to create your database. This is because the DBCA preconfigured databases optimize your environment to take advantage of Oracle9i features such as the server parameter file and automatic undo management. The DBCA also enables you to
define arbitrary tablespaces as part of the database creation process. So even if you have datafile requirements that differ from those offered in one of the DBCA templates, use the DBCA. You can also execute user-specified scripts as part of the database creation process.
The DBCA and the Oracle Net Configuration Assistant also accurately configure your Real Application Clusters environment for various Oracle high availability features and cluster administration tools.
·        Start DBCA by executing the command dbca. The RAC Welcome page displays. Choose the Oracle Cluster Database option and select Next.
·        The Operations page is displayed. Choose the option Create a Database and click Next.
·        The Node Selection page appears. Select the nodes that you want to configure as part of the RAC database and click Next. If nodes are missing from the Node Selection then perform clusterware diagnostics by executing the $ORACLE_HOME/bin/lsnodes -v command and analyzing its output. Resolve the problem and then restart the DBCA.
·        The Database Templates page is displayed. The templates other than New Database include datafiles. Choose New Database and then click Next.
·        The Show Details button provides information on the database template selected.
·        DBCA now displays the Database Identification page. Enter the Global Database Name and Oracle System Identifier (SID). The Global Database Name is typically of the form name.domain, for example mydb.us.oracle.com, while the SID is used to uniquely identify an instance (DBCA should insert a suggested SID, equivalent to name1, where name was entered in the Database Name field). In the RAC case the SID specified will be used as a prefix for the instance numbers. For example, MYDB would become MYDB1 and MYDB2 for instances 1 and 2 respectively.
·        The Database Options page is displayed. Deselect all options (also click Additional / Standard database configuration) and accept the tablespaces to be dropped as well, then choose Next. Note: If you did not choose New Database from the Database Template page, you will not see this screen.
·        Select the dedicated server mode option from the Database Connection Options page. Note: If you did not choose New Database from the Database Template page, you will not see this screen. Click Next.
·        DBCA now displays the Initialization Parameters page, which comprises a number of tab fields. Modify the Memory settings if desired and then select the File Locations tab. The option Create persistent initialization parameter file is selected by default. The file name should point to the correct raw device link; otherwise you did not set DBCA_RAW_CONFIG correctly or the file contains errors. The button All Initialization Parameters... displays the Initialization Parameters dialog box, which presents values for all initialization parameters and indicates through the Included (Y/N) check box whether each is to be included in the spfile to be created. Instance-specific parameters have an instance value in the Instance column. Complete the entries in the All Initialization Parameters page (especially check and, if needed, correct the remote_listener entry; it should read LISTENER) and select Close. Note: There are a few exceptions to what can be altered via this screen. Ensure all entries in the Initialization Parameters page are complete. You might want to select an 8-bit character set in the DB Sizing section. Select Next.
·        DBCA now displays the Database Storage window. This page allows you to enter file names for each tablespace in your database; they should point to the correct raw device or soft link. Make sure to correct the datafile sizes to what you have planned, and remember that a datafile should be somewhat smaller than its raw device. Click Next.
·        The Database Creation Options page is displayed. Ensure that the option Create Database is checked and click Finish.
·        The DBCA Summary window is displayed. Review this information and then click OK.
·        Once the Summary screen is closed using the OK option, DBCA begins to create the database according to the values specified.
·        You will get the password management window, complete as desired and click Exit.
A new database now exists. It can be accessed via Oracle SQL*Plus or other applications designed to work with an Oracle RAC database. Make sure the startoracle_root.sh script is executed at system startup as root, and the startoracle.sh script as user oracle.
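One way to hook both scripts into the boot sequence, assuming the scripts live under /oracle and your distribution runs /etc/rc.d/rc.local at startup, would be:
# appended to /etc/rc.d/rc.local (runs as root at boot time)
/oracle/startoracle_root.sh
su - oracle -c /oracle/startoracle.sh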
  

4. Administering Real Application Clusters Instances
Oracle Corporation recommends that you use SRVCTL to administer your Real Application Clusters database environment. SRVCTL manages configuration
information that is used by several Oracle tools. For example, Oracle Enterprise Manager and the Intelligent Agent use the configuration information that
SRVCTL generates to discover and monitor nodes in your cluster. Before using SRVCTL, ensure that your Global Services Daemon (GSD) is running. To use SRVCTL, you must have already created the configuration information for the database that you want to administer.
You must have done this either by using the Oracle Database Configuration Assistant (DBCA), or by using the srvctl add command. If you have followed the instructions in this article, dbca will have added your database and instances.
For Oracle RAC V9.0.1:
$ srvctl config -p racdb1
racnode1 racinst1
racnode2 racinst2
$ srvctl config -p racdb1 -n racnode1
racnode1 racinst1
Examples of starting and stopping RAC follow:
$ srvctl start -p racdb1
Instance successfully started on node: racnode2
Listeners successfully started on node: racnode2
Instance successfully started on node: racnode1
Listeners successfully started on node: racnode1
$ srvctl stop -p racdb1
Instance successfully stopped on node: racnode2
Instance successfully stopped on node: racnode1
Listener successfully stopped on node: racnode2
Listener successfully stopped on node: racnode1
$ srvctl stop -p racdb1 -i racinst2 -s inst
Instance successfully stopped on node: racnode2
$ srvctl stop -p racdb1 -s inst
PRKO-2035 : Instance is already stopped on node: racnode2
Instance successfully stopped on node: racnode1
For Oracle RAC V9.2+:
$ srvctl config database -d racdb1
racnode1 racinst1 /oracle/product/9.2.0
racnode2 racinst2 /oracle/product/9.2.0
Examples of starting and stopping RAC follow:
$ srvctl start database -d racdb1
$ srvctl status database -d racdb1
Instance racinst1 is running on racnode1
Instance racinst2 is running on racnode2
$ srvctl stop database -d racdb1
$ srvctl status database -d racdb1
Instance racinst1 is not running on racnode1
Instance racinst2 is not running on racnode2
$ srvctl start instance -d racdb1 -i racinst1
$ srvctl status instance -d racdb1 -i racinst1
Instance racinst1 is running on racnode1
$ srvctl status database -d racdb1
Instance racinst1 is running on racnode1
Instance racinst2 is not running on racnode2
$ srvctl stop instance -d racdb1 -i racinst1
For further information on srvctl, see the Oracle9i Real Application Clusters Administration Release 1 (9.0.1) manual.

5. References
* Oracle9i Installation Guide for UNIX Systems: AIX-Based Systems, Compaq Tru64 UNIX, HP 9000 Series HP-UX, Linux Intel and Sun SPARC Solaris
* Oracle9i Installation Checklist for Linux Intel
* Oracle9i Quick Installation Procedure for Linux
* Oracle9i Real Application Clusters Installation and Configuration

#16 | posted 2007-10-07 17:18

Wow, this was written back in 2003. You're impressive!