Oracle RAC Cluster on RedHat Linux AS4

Posted on 2011-12-21 08:43
############## Installing an Oracle 10g RAC Cluster on RedHat Linux ##############
The essence of an Oracle cluster is that multiple servers access the same
Oracle database: the database stays reachable when one server goes down, and
the workload can be balanced across the nodes.
************************** Lab Setup **************************
Two Linux nodes, rac01 and rac02, configured as follows:
RAM: 1024 MB per node
Disk: one 30 GB SCSI disk per node, plus a 10 GB shared disk
NICs: two per node, eth0 (private) and eth1 (public)
OS: RedHat Enterprise Linux AS4 Update2
IP configuration:
rac01: eth0 10.10.10.100/255.255.255.0
       eth1 202.100.0.100/255.255.255.0, gateway 202.100.0.1
rac02: eth0 10.10.10.200/255.255.255.0
       eth1 202.100.0.200/255.255.255.0, gateway 202.100.0.1
Required software:
Oracle Clusterware: 10201_clusterware_linux32.zip
Oracle database software: 10201_database_linux32.zip
ocfs2-2.6.9-55.EL-1.2.9-1.el4.i686.rpm
ocfs2-2.6.9-55.ELsmp-1.2.9-1.el4.i686.rpm
ocfs2console-1.2.7-1.el4.i386.rpm
ocfs2-tools-1.2.7-1.el4.i386.rpm
###################################################################
Create the oracle user and its groups, and check that the nobody user exists:
the installer relies on nobody to run some unprivileged tasks after
installation, so create it manually if it is missing.
[Note: AB = run on both nodes; A = run on one node only]
AB
[root@rac01 ~]# groupadd -g 1000 oinstall
[root@rac01 ~]# groupadd -g 1001 dba
[root@rac01 ~]# id nobody
uid=99(nobody) gid=99(nobody) groups=99(nobody)
[root@rac01 ~]# useradd -u 1000 -g 1000 -G 1001 oracle
[root@rac02 ~]# passwd oracle
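The uid/gid layout above can be sanity-checked non-interactively. A minimal sketch; the passwd-style line fed in below is a stub, and on a real node you would pass in the output of `getent passwd oracle` instead:

```shell
# Check that an /etc/passwd-style entry has the uid/gid this walkthrough
# expects for oracle (uid 1000, primary gid 1000 = oinstall).
check_oracle_entry() {
    uid=$(printf '%s' "$1" | cut -d: -f3)
    gid=$(printf '%s' "$1" | cut -d: -f4)
    if [ "$uid" = "1000" ] && [ "$gid" = "1000" ]; then
        echo OK
    else
        echo MISMATCH
    fi
}
# Stub entry; on a real node: check_oracle_entry "$(getent passwd oracle)"
check_oracle_entry "oracle:x:1000:1000::/home/oracle:/bin/bash"
```

Running the same check on both nodes also catches uid/gid drift between rac01 and rac02, which would break file ownership on the shared OCFS2 volumes.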
###################################################################
Hostnames are resolved through the hosts file; add the following entries to
/etc/hosts on both nodes.
AB
[root@rac02 ~]# vi /etc/hosts
202.100.0.100   rac01 ### public IPs
202.100.0.200   rac02
202.100.0.10    vip01 ### virtual IPs
202.100.0.20    vip02
10.10.10.100    priv01 ### private IPs
10.10.10.200    priv02
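Typos in /etc/hosts are a common cause of RAC installer failures, so it is worth verifying that every name used later (public, VIP, private) appears exactly once. A small sketch that scans hosts-style text; it checks a here-doc copy, and on a real node you would read /etc/hosts instead:

```shell
# Each required RAC hostname should appear exactly once in the hosts data.
hosts_data='202.100.0.100   rac01
202.100.0.200   rac02
202.100.0.10    vip01
202.100.0.20    vip02
10.10.10.100    priv01
10.10.10.200    priv02'
for h in rac01 rac02 vip01 vip02 priv01 priv02; do
    n=$(printf '%s\n' "$hosts_data" | awk -v h="$h" '$2 == h' | wc -l)
    [ "$n" -eq 1 ] || echo "missing or duplicate entry: $h"
done
echo "hosts check finished"
```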
####################################################################
Configure SSH user equivalence.
Run the following as the oracle user on both nodes.
AB
[oracle@rac01 ~]$ mkdir .ssh
[oracle@rac01 ~]$ chmod 700 .ssh/
AB
[oracle@rac02 ~]$ ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/oracle/.ssh/id_rsa):    ### location for the key pair
Enter passphrase (empty for no passphrase):    ### private-key passphrase (press Enter to leave it empty)
Enter same passphrase again:
Your identification has been saved in /home/oracle/.ssh/id_rsa.
Your public key has been saved in /home/oracle/.ssh/id_rsa.pub.
The key fingerprint is:
0b:fe:7e:e1:cb:f7:6f:7c:bf:74:ce:01:c5:c6:4f:a2 oracle@rac02
AB
[oracle@rac02 ~]$ ssh-keygen -t dsa
Generating public/private dsa key pair.
Enter file in which to save the key (/home/oracle/.ssh/id_dsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/oracle/.ssh/id_dsa.
Your public key has been saved in /home/oracle/.ssh/id_dsa.pub.
The key fingerprint is:
63:59:50:c3:3e:ee:c2:c5:cc:85:33:1b:e3:ee:ed:6b oracle@rac02
[oracle@rac02 ~]$ cd .ssh
A
[oracle@rac01 .ssh]$ ssh rac01 cat ~/.ssh/id_rsa.pub >> authorized_keys
[oracle@rac01 .ssh]$ ssh rac01 cat ~/.ssh/id_dsa.pub >> authorized_keys
[oracle@rac01 .ssh]$ ssh rac02 cat ~/.ssh/id_rsa.pub >> authorized_keys
[oracle@rac01 .ssh]$ ssh rac02 cat ~/.ssh/id_dsa.pub >> authorized_keys
[oracle@rac01 .ssh]$ scp authorized_keys rac02:/home/oracle/.ssh/
AB
[oracle@rac02 .ssh]$ chmod 600 authorized_keys
Test from both nodes:
[oracle@rac01 .ssh]$ ssh rac01 date
Sun Aug  9 08:01:45 EDT 2009
[oracle@rac01 .ssh]$ ssh rac02 date
Sun Aug  9 08:01:56 EDT 2009
[oracle@rac01 ~]$ ssh rac02 date
Sun Aug  9 08:02:17 EDT 2009
[oracle@rac01 ~]$ ssh rac01 date
Sun Aug  9 08:02:18 EDT 2009
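The manual date checks above only cover part of the matrix; a fuller check loops over every alias from each node. A sketch that, by default, only prints the commands it would run (set RUN="" on a real node to execute them; including the priv names is an assumption, since at minimum the public node names must work). BatchMode makes ssh fail instead of hanging at a password prompt when equivalence is broken:

```shell
# With RUN unset this only echoes the ssh commands; set RUN="" to run them.
ssh_equiv_check() {
    for host in "$@"; do
        ${RUN:-echo} ssh -o BatchMode=yes "$host" date
    done
}
ssh_equiv_check rac01 rac02 priv01 priv02
```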
###############################################################################
Check on both nodes that the required packages are installed; install any that
are missing.
AB
[root@rac01 ~]# rpm -q gcc gcc-c++ glibc gnome-libs libstdc++ \
libstdc++-devel binutils compat-db openmotif21 control-center make
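`rpm -q` exits non-zero for any missing name, but with eleven packages it helps to list exactly which ones are absent. A sketch that diffs the required list against the installed names; the three-package installed list is a stub, and on a real node you would substitute the output of `rpm -qa --qf '%{NAME}\n'`:

```shell
required="gcc gcc-c++ glibc gnome-libs libstdc++ libstdc++-devel \
binutils compat-db openmotif21 control-center make"

missing_pkgs() {  # $1 = newline-separated installed package names
    for pkg in $required; do
        printf '%s\n' "$1" | grep -Fqx "$pkg" || echo "$pkg"
    done
}

# Stub list; on a real node: missing_pkgs "$(rpm -qa --qf '%{NAME}\n')"
missing_pkgs "$(printf '%s\n' gcc glibc make)"
```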
###################################################################
Set kernel parameters for Oracle.
AB
[root@rac02 ~]# vi /etc/sysctl.conf
kernel.sem=250  32000   100     128
kernel.shmmni=4096
kernel.shmall=2097152
kernel.shmmax=2147483648
net.ipv4.ip_local_port_range=1024 65000
net.core.rmem_default=1048576
net.core.rmem_max=1048576
net.core.wmem_default=262144
net.core.wmem_max=262144
[root@rac02 ~]# sysctl -p
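It is worth double-checking what these values mean: kernel.shmmax caps a single shared memory segment, and therefore the SGA, at 2 GB here. A sketch that parses the setting out of sysctl-style text; it reads a here-doc copy, and on a real node you would compare against `sysctl -n kernel.shmmax`:

```shell
# Extract kernel.shmmax from "key=value" text and report it in MB.
conf='kernel.shmmni=4096
kernel.shmall=2097152
kernel.shmmax=2147483648'
shmmax=$(printf '%s\n' "$conf" | awk -F= '$1 == "kernel.shmmax" { print $2 }')
echo "shmmax = $((shmmax / 1024 / 1024)) MB"
```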
###################################################################
Set shell resource limits for the oracle user.
AB
[root@rac01 ~]# vi /etc/security/limits.conf
oracle          soft    nproc   2047
oracle          hard    nproc   16384
oracle          soft    nofile  1024
oracle          hard    nofile  65536
AB
[root@rac02 ~]# vi /etc/pam.d/login
session         required        /lib/security/pam_limits.so
AB
[root@rac01 ~]# vi /etc/profile
if [ $USER = "oracle" ] ; then
        if [ $SHELL = "/bin/ksh" ] ; then
                ulimit -p 16384
                ulimit -n 65536
        else
                ulimit -u 16384 -n 65536
        fi
fi
[root@rac01 ~]# source /etc/profile
####################################################################
Install and configure OCFS2 (Oracle Cluster File System 2).
AB
[root@rac02 as4]# rpm -ivh ocfs2-tools-1.2.7-1.el4.i386.rpm ocfs2-2.6.9-55.ELsmp-1.2.9-1.el4.i686.rpm ocfs2console-1.2.7-1.el4.i386.rpm
####################################################################
On one node, partition the shared disk into two partitions: one of at least
3000 MB to hold the Oracle software, and one of at least 4000 MB for the
database files and recovery files. The result:
[root@rac02 as4]# fdisk -l /dev/sdb
Disk /dev/sdb: 10.7 GB, 10737418240 bytes
255 heads, 63 sectors/track, 1305 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1         501     4024251   83  Linux
/dev/sdb2             502        1305     6458130   83  Linux
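fdisk reports partition sizes in 1 KB blocks, so it is easy to misjudge whether they meet the minimums (3000 MB for software, 4000 MB for data). A quick conversion sketch using the block counts above:

```shell
# Convert fdisk 1K block counts to MB and compare with the minimums.
sdb1_blocks=4024251   # -> /orac/orahome (Oracle software, needs >= 3000 MB)
sdb2_blocks=6458130   # -> /orac/oradata (database files, needs >= 4000 MB)
echo "sdb1: $((sdb1_blocks / 1024)) MB"
echo "sdb2: $((sdb2_blocks / 1024)) MB"
```

Both partitions clear their respective minimums (roughly 3.9 GB and 6.3 GB).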
[After partitioning, be sure to reboot all nodes so both see the new partition table.]
####################################################################
A
[root@rac01 ~]# export DISPLAY=202.100.0.111:0.0
[root@rac01 ~]# ocfs2console
Select Tasks -> Format.
Select /dev/sdb1, enter the label orahome, and click OK.
Select Tasks -> Format again.
Select /dev/sdb2, enter the label oradata, and click OK.
Select Cluster -> Configure Nodes.
Click Add and enter the hostname and IP address of each of the two nodes.
Apply the configuration, then close the dialog.
Now /etc/ocfs2/cluster.conf should contain the following:
[root@rac02 ~]# cat /etc/ocfs2/cluster.conf
node:
        ip_port = 7777
        ip_address = 202.100.0.100
        number = 0
        name = rac01
        cluster = ocfs2
node:
        ip_port = 7777
        ip_address = 202.100.0.200
        number = 1
        name = rac02
        cluster = ocfs2
cluster:
        node_count = 2
        name = ocfs2
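A mismatch between node_count and the number of node: stanzas will keep the cluster from coming online, so a quick consistency check is worthwhile. A sketch run against a here-doc copy of the file; on a real node you would read /etc/ocfs2/cluster.conf:

```shell
# Verify that node_count matches the number of "node:" stanzas.
conf='node:
        name = rac01
node:
        name = rac02
cluster:
        node_count = 2'
stanzas=$(printf '%s\n' "$conf" | grep -c '^node:')
declared=$(printf '%s\n' "$conf" | awk '$1 == "node_count" { print $3 }')
if [ "$stanzas" -eq "$declared" ]; then
    echo "cluster.conf consistent ($stanzas nodes)"
else
    echo "mismatch: $stanzas stanzas vs node_count = $declared"
fi
```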
Select Cluster -> Propagate Configuration, enter the other node's root
password when prompted, and close the console.
###############################################################################
Configure o2cb so that the OCFS2 driver and cluster service start at boot.
AB
[root@rac02 ~]# /etc/init.d/o2cb configure
Configuring the O2CB driver.
This will configure the on-boot properties of the O2CB driver.
The following questions will determine whether the driver is loaded on
boot.  The current values will be shown in brackets ('[]').  Hitting
<ENTER> without typing an answer will keep that current value.  Ctrl-C
will abort.
Load O2CB driver on boot (y/n) [y]: y      ### load the driver automatically at boot
Cluster to start on boot (Enter "none" to clear) [ocfs2]:      ### default: the ocfs2 cluster
Specify heartbeat dead threshold (>=7) [31]:
Specify network idle timeout in ms (>=5000) [30000]:
Specify network keepalive delay in ms (>=1000) [2000]:
Specify network reconnect delay in ms (>=2000) [2000]:
Writing O2CB configuration: OK
O2CB cluster ocfs2 already online
If the o2cb status check reports the following, the service is running:
[root@rac01 ~]# /etc/init.d/o2cb status
Module "configfs": Loaded
Filesystem "configfs": Mounted
Module "ocfs2_nodemanager": Loaded
Module "ocfs2_dlm": Loaded
Module "ocfs2_dlmfs": Loaded
Filesystem "ocfs2_dlmfs": Mounted
Checking O2CB cluster ocfs2: Online
  Heartbeat dead threshold: 31
  Network idle timeout: 30000
  Network keepalive delay: 2000
  Network reconnect delay: 2000
Checking O2CB heartbeat: Not active
####################################################################
On both nodes, create the mount points and mount /dev/sdb1 and /dev/sdb2.
AB
[root@rac02 ~]# mkdir -p /orac/orahome
[root@rac02 ~]# mkdir -p /orac/oradata
[root@rac02 ~]# mount -t ocfs2 /dev/sdb1 /orac/orahome/
[root@rac02 ~]# mount -t ocfs2 -o datavolume,nointr /dev/sdb2 /orac/oradata/
####################################################################
Configure /etc/fstab so that /dev/sdb1 and /dev/sdb2 are mounted
automatically at boot.
[root@rac02 ~]# vi /etc/fstab
/dev/sdb1       /orac/orahome   ocfs2   _netdev 0 0
/dev/sdb2       /orac/oradata   ocfs2   _netdev,datavolume,nointr 0 0
Check from either node that both nodes have the shared partitions mounted:
[root@rac02 ~]# mounted.ocfs2 -f
Device                FS     Nodes
/dev/sdb1             ocfs2  rac02, rac01
/dev/sdb2             ocfs2  rac02, rac01
####################################################################
Install the Oracle Clusterware.
Create the directories with the required ownership and permissions.
AB
[root@rac01 ~]# mkdir /orac/crs
[root@rac01 ~]# chmod -R 775 /orac/crs/
[root@rac01 ~]# chown -R root:oinstall /orac/crs/
[root@rac01 ~]# chown -R oracle:oinstall /orac/orahome/
[root@rac01 ~]# chmod -R 775 /orac/orahome/
[root@rac01 ~]# chown -R oracle:oinstall /orac/oradata/
[root@rac01 ~]# chmod -R 775 /orac/oradata/
Unzip the Clusterware software.
A
[root@rac01 share]# unzip 10201_clusterware_linux32.zip
Switch to the oracle user, export the display, and launch the installer.
A
[root@rac01 share]# su - oracle
[oracle@rac01 ~]$ export DISPLAY=202.100.0.111:0.0
[oracle@rac01 ~]$ export LANG=""
[oracle@rac01 ~]$ /share/clusterware/runInstaller
Click Next on the welcome screen, and Next again on the following screen.
Set the Oracle Clusterware ORACLE_HOME to /orac/crs/10.2.0, then click Next.
After the product prerequisite checks, click Next.
Specify the cluster name and node information, then click Next.
Configure the network interfaces, then click Next.
For the OCR location, choose External Redundancy and enter
/orac/oradata/ocrdata, then click Next.
Specify the voting disk location: /orac/oradata/votedisk.
At the end of the installation, run the generated scripts on each node.
AB (the output differs slightly between the two nodes)
[root@rac01 proc]# /home/oracle/oraInventory/orainstRoot.sh
Changing permissions of /home/oracle/oraInventory to 770.
Changing groupname of /home/oracle/oraInventory to oinstall.
The execution of the script is complete
[root@rac01 proc]# /orac/crs/10.2.0/root.sh
WARNING: directory '/orac/crs' is not owned by root
Checking to see if Oracle CRS stack is already configured
/etc/oracle does not exist. Creating it now.
Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory '/orac/crs' is not owned by root
assigning default hostname rac01 for node 1.
assigning default hostname rac02 for node 2.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: rac01 priv01 rac01
node 2: rac02 priv02 rac02
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Now formatting voting device: /orac/oradata/votedisk
Format of 1 voting devices complete.
Startup will be queued to init within 90 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
        rac01
CSS is inactive on these nodes.
        rac02
Local node checking complete.
Run root.sh on remaining nodes to start CRS daemons.
[root@rac01 proc]# chown root /orac/crs/
[root@rac02 ~]# /home/oracle/oraInventory/orainstRoot.sh
Changing permissions of /home/oracle/oraInventory to 770.
Changing groupname of /home/oracle/oraInventory to oinstall.
The execution of the script is complete
[root@rac02 ~]# /orac/crs/10.2.0/root.sh
WARNING: directory '/orac/crs' is not owned by root
Checking to see if Oracle CRS stack is already configured
/etc/oracle does not exist. Creating it now.
Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory '/orac/crs' is not owned by root
clscfg: EXISTING configuration version 3 detected.
clscfg: version 3 is 10G Release 2.
assigning default hostname rac01 for node 1.
assigning default hostname rac02 for node 2.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: rac01 priv01 rac01
node 2: rac02 priv02 rac02
clscfg: Arguments check out successfully.
NO KEYS WERE WRITTEN. Supply -force parameter to override.
-force is destructive and will destroy any previous cluster
configuration.
Oracle Cluster Registry for cluster has already been initialized
Startup will be queued to init within 90 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
        rac01
        rac02
CSS is active on all nodes.
Waiting for the Oracle CRSD and EVMD to start
Waiting for the Oracle CRSD and EVMD to start
Waiting for the Oracle CRSD and EVMD to start
Waiting for the Oracle CRSD and EVMD to start
Oracle CRS stack installed and running under init(1M)
Running vipca(silent) for configuring nodeapps
Creating VIP application resource on (2) nodes...
Creating GSD application resource on (2) nodes...
Creating ONS application resource on (2) nodes...
Starting VIP application resource on (2) nodes...
Starting GSD application resource on (2) nodes...
Starting ONS application resource on (2) nodes...
Done.
When the scripts have finished on both nodes, click OK to continue.
Exit the installer; the Clusterware installation is complete.
####################################################################
Install the Oracle database software.
A
[oracle@rac02 ~]$ export DISPLAY=202.100.0.111:0
[oracle@rac02 ~]$ export LANG=""
[oracle@rac02 ~]$ /share/database/runInstaller
Click Next on the welcome screen.
Select the Enterprise Edition installation.
Enter the Oracle home: /orac/orahome/10.2.0/db_1.
Select all cluster nodes.
Choose to install the database software only.
Click Install to begin the installation.
After the script has been run on both nodes, click OK to continue.
Exit the installer.
###################################################################
Configure the oracle user's environment.
AB (but the ORACLE_SID differs between the nodes: JAVA1 on one, JAVA2 on the
other)
[oracle@rac01 ~]$ vi .bashrc
export ORACLE_BASE=/orac/orahome/10.2.0/
export ORACLE_HOME=$ORACLE_BASE/db_1
export ORACLE_SID=JAVA1
export PATH=$ORACLE_HOME/bin:$PATH
[oracle@rac01 ~]$ source .bashrc
[oracle@rac02 ~]$ vi .bashrc
export ORACLE_BASE=/orac/orahome/10.2.0/
export ORACLE_HOME=$ORACLE_BASE/db_1
export ORACLE_SID=JAVA2
export PATH=$ORACLE_HOME/bin:$PATH
[oracle@rac02 ~]$ source .bashrc
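Keeping the two .bashrc files identical except for ORACLE_SID invites copy-paste mistakes; the SID can instead be derived from the hostname. A minimal sketch, assuming the rac01/rac02 hostnames and the JAVA1/JAVA2 SIDs used in this walkthrough:

```shell
# Map a RAC hostname to its instance SID; unknown hosts are an error.
sid_for_host() {
    case "$1" in
        rac01) echo JAVA1 ;;
        rac02) echo JAVA2 ;;
        *)     echo "unknown host: $1" >&2; return 1 ;;
    esac
}
# In .bashrc one could then write:
#   export ORACLE_SID=$(sid_for_host "$(hostname -s)")
sid_for_host rac01
```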
Run dbca on one node to create the database.
A
[oracle@rac01 ~]$ dbca
Select "Oracle Real Application Clusters database".
Choose "Create a Database".
Select all nodes.
Choose the General Purpose template.
Enter the global database name.
Step through the remaining configuration screens, keeping the defaults.
Because no listener has been configured yet, a warning appears; choose Yes to
continue.
Database creation begins.
Exit when it finishes; the cluster database instances are started
automatically on exit.
####################################################################
To verify the installation, start Oracle, create a table on one node, and
commit; if the table can be queried from the other node, the installation
succeeded.
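The cross-node check can be scripted. The sketch below only prints the commands it would run (RUN defaults to echo; clear it on the real nodes to execute them), and the rac_test table name is a placeholder, not a value from the walkthrough. Each sqlplus session connects locally as sysdba, so the instance used is the one named by that node's ORACLE_SID:

```shell
# On rac01 (ORACLE_SID=JAVA1): create and populate a test table.
RUN=${RUN:-echo}
$RUN sqlplus -S / as sysdba <<'SQL'
CREATE TABLE rac_test (id NUMBER);
INSERT INTO rac_test VALUES (1);
COMMIT;
SQL
# On rac02 (ORACLE_SID=JAVA2): the committed row should be visible,
# confirming both instances serve the same shared database.
$RUN sqlplus -S / as sysdba <<'SQL'
SELECT id FROM rac_test;
SQL
```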

This article comes from the "zhuyan" blog; please keep this attribution: http://zhuyan.blog.51cto.com/890880/189973
