##RedHat Linux AS3 - Oracle 9.2.0.4 RAC Complete Installation Guide## - A Small New Year Gift
Start OCM (Oracle Cluster Manager) on all RAC nodes
[oracle @orasrv1 oracle] $ cd $ORACLE_HOME/oracm/bin
[oracle @orasrv1 bin] $ su
[oracle @orasrv1 bin] # ./ocmstart.sh
After starting it, check the processes with ps -ef | grep oracm:
root 4389 1 0 15:14 ? 00:00:00 oracm
root 4391 4389 0 15:14 ? 00:00:00 oracm
root 4392 4391 0 15:14 ? 00:00:03 oracm
root 4393 4391 0 15:14 ? 00:00:00 oracm
root 4394 4391 0 15:14 ? 00:00:03 oracm
root 4395 4391 0 15:14 ? 00:00:00 oracm
root 4396 4391 0 15:14 ? 00:00:00 oracm
root 4397 4391 0 15:14 ? 00:00:00 oracm
root 4398 4391 0 15:14 ? 00:00:00 oracm
root 4401 4391 0 15:14 ? 00:00:01 oracm
root 4449 4391 0 15:14 ? 00:00:00 oracm
root 4491 4391 0 15:14 ? 00:00:00 oracm
root 9494 4391 0 17:48 ? 00:00:00 oracm
root 9514 4391 0 17:48 ? 00:00:01 oracm
root 9519 4391 0 17:48 ? 00:00:00 oracm
root 9520 4391 0 17:48 ? 00:00:00 oracm
root 9521 4391 0 17:48 ? 00:00:00 oracm
root 9522 4391 0 17:48 ? 00:00:00 oracm
root 9526 4391 0 17:49 ? 00:00:00 oracm
oracle 12000 11685 0 18:22 pts/4 00:00:00 grep oracm
Note: Sometimes the cluster manager cannot be started right away and reports the following errors:
ocmstart.sh: Error: Restart is too frequent
ocmstart.sh: Info: Check the system configuration and fix the problem.
ocmstart.sh: Info: After you fixed the problem, remove the timestamp file
ocmstart.sh: Info: "/opt/oracle/product/9.2.0/oracm/log/ocmstart.ts"
This happens because the cluster manager refuses to be restarted too frequently. The following steps remove the timestamp file so it can be restarted immediately:
[oracle @orasrv1 oracle] $ su
[root @orasrv1 oracle] # rm $ORACLE_HOME/oracm/log/*.ts
[root @orasrv1 oracle] # $ORACLE_HOME/oracm/bin/ocmstart.sh
5. Installing the Oracle9i 9.2.0.4.0 Database
To install the Oracle9i Real Application Clusters 9.2.0.4.0 software, insert Oracle9iR2 Disk 1 and launch runInstaller. These steps only need to be performed on one node, the node you are installing from.
[oracle @orasrv1 oracle] $ unset LANG
[oracle @orasrv1 oracle] $ /tmp/Disk1/runInstaller
- Welcome Screen: Click Next
- Cluster Node Selection: Select/Highlight all RAC nodes using the shift key and the left mouse button.
Click Next
Note: If not all RAC nodes are showing up, or if the Node Selection screen
does not appear, then the Oracle Cluster Manager (Node Monitor) oracm is probably not
running on all RAC nodes. See Starting and Stopping Oracle9i Cluster Manager for more information (a quick way to check every node is sketched right after this walkthrough).
- File Locations: Click Next
- Available Products: Select "Oracle9i Database 9.2.0.4.0" and click Next
- Installation Types: Select "Enterprise Edition" and click Next
- Database Configuration: Select "Software Only" and click Next
- Shared Configuration File Name:
Enter the name of an OCFS shared configuration file or the name of the raw device.
Select "/var/opt/oracle/oradata/orcl/SharedSrvctlConfigFile" and click Next
- Summary: Click Install.
When installation has completed, click Exit.
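A quick way to confirm oracm is running on every node from a single shell is a small rsh loop, assuming rsh user equivalence between the nodes is already configured (it is required for the RAC install anyway). orasrv1 is the node used in this guide's prompts and orasrv2 is an assumed name for the second node; substitute your own:
for node in orasrv1 orasrv2; do
    echo "=== $node ==="
    rsh $node "ps -ef | grep oracm | grep -v grep | wc -l"   # a non-zero count means oracm is up on that node
done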
I once hit an out-of-disk-space problem here: there was not enough free space under /opt/oracle, probably because of how the disk was partitioned at install time (/opt/oracle had ended up under the swap area). The fix was to mount /opt/oracle on sda5, the 50 GB extended partition created earlier, add the corresponding entry to /etc/fstab (vi /etc/fstab and add a line of the form /dev/sda5 /opt/oracle ext3 ...; a sample entry is shown after the commands below), and then move the existing files over:
mkdir -p /temp
cp -r /opt/oracle /temp               # back up the existing contents to /temp/oracle
mount /dev/sda5 /opt/oracle
cp -r /temp/oracle/* /opt/oracle/     # restore the backup onto the new partition
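For reference, a typical /etc/fstab entry for this mount might look like the line below (the mount options are an assumption; adjust to your environment):
/dev/sda5    /opt/oracle    ext3    defaults    1 2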
error: you do not have sufficient privileges to write to the specified path. In component Database Configuration Assistant 9.2.0.0, installation cannot continue for this component.
If you hit this error, set the ownership of /opt/ora9/oradata to oracle.dba.
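A likely fix, assuming the /opt/ora9 ORACLE_BASE layout used in this guide, is simply:
[root @orasrv1 root] # chown -R oracle.dba /opt/ora9/oradata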
Note: Pay close attention to the prompts during the installation and do not fall back on single-instance Oracle habits; some steps must be performed on all nodes together. Also make sure that inter-node communication works throughout the install, because the installer copies all of the Oracle9i files from the installing node to the other nodes over the network. The installation itself only needs to be run on one node, but the cluster manager must be running on every node; you can check its status on each node with ps -ef | grep oracm. A minimal connectivity check is sketched below.
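For example (orasrv2 is an assumed host name for the second node; replace it with yours):
[oracle @orasrv1 oracle] $ ping -c 3 orasrv2     # basic network reachability
[oracle @orasrv1 oracle] $ rsh orasrv2 date      # rsh user equivalence, used by the installer to copy files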
Install the Oracle9i patches
Patches 3119415 and 2617419 fix the ins_oemagent.mk error mentioned earlier:
[oracle @orasrv1 oracle] $ cd $ORACLE_HOME/bin
[oracle @orasrv1 oracle] $ cp p2617419_220_GENERIC.zip /tmp
[oracle @orasrv1 oracle] $ unzip p2617419_220_GENERIC.zip
[oracle @orasrv1 oracle] $ unzip p3119415_9204_LINUX.zip
[oracle @orasrv1 oracle] $ cd 3119415
[oracle @orasrv1 oracle] $ export PATH=$PATH:/opt/ora9/product/9.2/bin/OPatch
[oracle @orasrv1 oracle] $ export PATH=$PATH:/sbin
[oracle @orasrv1 oracle] $ which opatch
/opt/ora9/product/9.2/bin/OPatch/opatch
[oracle @orasrv1 oracle] $ opatch apply
[oracle @orasrv1 oracle] $ cd $ORACLE_BASE/oui/bin/linux
[oracle @orasrv1 oracle] $ ln -s libclntsh.so.9.0 libclntsh.so
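To confirm the patch really landed in the inventory, you can list it with OPatch (assuming opatch is still on the PATH as set above):
[oracle @orasrv1 oracle] $ opatch lsinventory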
Initialize the shared configuration file
After the installation finishes, create the configuration file:
su - root
[root @orasrv1 root] # mkdir -p /var/opt/oracle
[root @orasrv1 root] # touch /var/opt/oracle/srvConfig.loc
[root @orasrv1 root] # chown oracle.dba /var/opt/oracle/srvConfig.loc
[root @orasrv1 root] # chmod 755 /var/opt/oracle/srvConfig.loc
Add the srvconfig_loc parameter to srvConfig.loc as follows:
srvconfig_loc=/var/opt/oracle/oradata/orcl/SharedSrvctlConfigFile
Then create the srvConfig.dbf (shared configuration) file. If it is to live on a shared device, it must be created on that device, e.g. on an OCFS file system or on a raw partition, in which case the file name above will differ slightly.
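How you create it depends on the storage. On an OCFS file system you simply create an empty file owned by oracle; with raw devices (which is what this setup ends up using, as the symlink shown further below suggests) you point the same name at the raw partition instead. A sketch of both variants, with the paths from this guide and /dev/raw/raw2 as an example binding:
# OCFS variant: create an empty file on the shared file system (one node only)
touch /var/opt/oracle/oradata/orcl/SharedSrvctlConfigFile
chown oracle.dba /var/opt/oracle/oradata/orcl/SharedSrvctlConfigFile
# Raw-device variant: point the same name at a raw device via a symlink
ln -s /dev/raw/raw2 /var/opt/oracle/oradata/orcl/SharedSrvctlConfigFile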
Starting Oracle Global Services
Initialize the Shared Configuration File
Before attempting to initialize the Shared Configuration File, make sure that the Oracle Global Services daemon (gsd) is NOT running, by using the following command:
# su - oracle
[oracle @orasrv1 oracle] $ gsdctl stat
[root @orasrv1 root] # su - oracle
[oracle @orasrv1 oracle] $ srvconfig -init
NOTE: If you receive a PRKR-1025 error when attempting to run the srvconfig -init command, check that you have a valid entry for "srvconfig_loc" in your /var/opt/oracle/srvConfig.loc file and that the file is owned by "oracle". This entry gets created by root.sh.
If you receive a PRKR-1064 error when attempting to run the srvconfig -init command, check whether the /var/opt/oracle/oradata/orcl/SharedSrvctlConfigFile file is accessible from all RAC nodes.
In short, on any error verify that srvConfig.loc is owned by oracle and check SharedSrvctlConfigFile on every node:
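Checking the srvConfig.loc side is quick (paths as used in this guide):
[oracle @orasrv1 oracle] $ cat /var/opt/oracle/srvConfig.loc     # should contain the srvconfig_loc line added earlier
[oracle @orasrv1 oracle] $ ls -l /var/opt/oracle/srvConfig.loc   # should be owned by oracle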
[oracle @orasrv1 oracle] $ cd ~oracle/oradata/orcl
[oracle @orasrv1 oracle] $ ls -l SharedSrvctlConfigFile
lrwxrwxrwx 1 oracle dba 13 May 2 20:17 SharedSrvctlConfigFile -> /dev/raw/raw2
If you are using raw devices and the shared raw disk you set up is too small, enlarge it and try again.
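To see which block device a raw device is bound to and how large the underlying partition is, something like this helps (device names are examples only):
[root @orasrv1 root] # raw -qa                # list raw-to-block-device bindings
[root @orasrv1 root] # fdisk -l /dev/sdb      # check the size of the underlying disk/partition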
Start Oracle Global Services
After initializing the Shared Configuration File, you will need to manually start the Oracle Global Services daemon (gsd) to ensure that it works. At this point in the installation, the Global Services daemon should still be down. To confirm this, run the following command:
[oracle @orasrv1 oracle] $ gsdctl stat
GSD is not running on the local node
Let's manually start the Global Services daemon (gsd) by running the following command on all nodes in the RAC cluster:
[oracle @orasrv1 oracle] $ gsdctl start
Successfully started GSD on local node
If gsd fails to start on some node, troubleshoot it with the checks below; unless it is running on every node, dbca will report errors later.
Check Node Name and Node Number Mappings
In most cases, the Oracle Global Services daemon (gsd) should successfully start on all nodes in the RAC cluster. There are cases, however, where the node name and node number mappings in the cmcfg.ora file on node 2 are not correct. This does not happen very often, but it has happened to me on at least one occasion.
If the node name and node number mappings are not correct, the problem will not show up until you attempt to run the Database Configuration Assistant (dbca), the assistant we will use later to create our cluster database. The error reported by DBCA will say something to the effect of "gsd daemon has not been started on node 2".
To check that the node name and number mappings are correct on your cluster, run the following command on both your nodes:
Listing for node1:
[oracle @orasrv1 oracle] $ lsnodes -n
rac1als 0
rac2als 1
Listing for node2:
[oracle @orasrv2 oracle] $ lsnodes -n
rac2als 1
rac1als 0
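If the two listings disagree, the mappings come from the cmcfg.ora file that oracm reads at startup; the node order in its PublicNodeNames/PrivateNodeNames lists determines the node numbers and must be identical on every node. On a default layout it can be inspected with:
[oracle @orasrv1 oracle] $ cat $ORACLE_HOME/oracm/admin/cmcfg.ora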
Once everything starts without errors, add the following lines to /etc/rc.local so that the whole stack comes up automatically at boot:
. ~oracle/.bash_profile
rm -rf $ORACLE_HOME/oracm/log/*.ts
$ORACLE_HOME/oracm/bin/ocmstart.sh
su - oracle -c "gsdctl start"
su - oracle -c "lsnrctl start"
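After the next reboot, a quick sanity check (run as oracle) that everything actually came up might be:
ps -ef | grep oracm      # cluster manager processes
gsdctl stat              # Global Services daemon
lsnrctl status           # listener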
Create the Oracle Database
Before creating the database, check the ownership and permissions of every directory under /opt/ora9/ (admin, oui, product, including their subdirectories).
[root @orasrv1 root] # xhost +127.0.0.1
[oracle @orasrv1 oracle] $ unset LANG
[oracle @orasrv1 oracle] $ dbca
DBCA screens and the responses to use:
- Type of Database: Select "Oracle Cluster Database" and click "Next"
- Operations: Select "Create a database" and click "Next"
- Node Selection: Click the "Select All" button on the right. If not all of the nodes in your RAC cluster are showing up, or if the Node Selection screen does not appear, then the Oracle Cluster Manager (Node Manager) oracm is probably not running on all RAC nodes. For more information, see Starting and Stopping Oracle9i Cluster Manager under the "Installing Oracle9i Cluster Manager" section.
- Database Templates: Select "New Database" and click "Next"
- Database Identification: Global Database Name: orcl; SID Prefix: orcl
- Database Features: You can keep all database features selected for your new database (I typically do), or clear any of the boxes to skip installing that feature. Click "Next" when finished.
- Database Connection Options: Select "Dedicated Server Mode" and click "Next"
- Initialization Parameters: Click "Next"
- Database Storage: If you have followed this article and created all symbolic links, the datafiles for all tablespaces should match what DBCA shows. I do, however, change the initial size of each tablespace; to do this, navigate through the tree and change the value for the tablespaces you want. Select the appropriate files if you need to, then click "Next"
- Creation Options: Select the options you want to use to create your cluster database. When you are ready to start the database creation process, click "Finish"
- Summary: Click "OK"
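Once dbca finishes, one way to confirm that both instances are actually up is to ask srvctl and query gv$instance (the database name orcl comes from the steps above; the instance names are whatever DBCA generated, typically orcl1 and orcl2):
[oracle @orasrv1 oracle] $ srvctl status database -d orcl
[oracle @orasrv1 oracle] $ sqlplus "/ as sysdba"
SQL> select instance_name, host_name, status from gv$instance;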