Install a 3-node Oracle 10g RAC on Solaris x64, step by step


Node information

10.198.90.132   SunOS sxrtfs01 5.10 Generic_127112-05 i86pc i386 i86pc
10.198.90.133   SunOS sxrtfs02 5.10 Generic_127112-05 i86pc i386 i86pc
10.198.90.134   SunOS sxrtfs03 5.10 Generic_120012-14 i86pc i386 i86pc


CAUTION:

Hostnames must not contain capital letters; use lowercase characters only.


1) Upload the Oracle RAC installation files

    10201_clusterware_solx86_64.zip  
    10201_database_solx86_64.zip

FTP the two files to sxrtfs01.


2) Pre-install procedures (on every node)

  a. create oracle group and user


++++
  groupadd -g 100 oinstall
  groupadd -g 101 dba
  mkdir -p /export/home
  useradd -u 200 -g oinstall -G dba -s /usr/bin/bash -d /export/home/oracle -m oracle
  id -a oracle
  passwd oracle
+++++


  b. check that the 'nobody' user exists

  # id -a nobody



  c. set IP addresses for Oracle RAC

     Oracle RAC needs three IP addresses per node:

        1. public IP
        2. virtual IP (VIP)
        3. private IP (interconnect)

   
   /etc/hosts (the same on every node)
  
# public ip for oracle rac
10.198.90.132   sxrtfs01
10.198.90.133   sxrtfs02
10.198.90.134   sxrtfs03

# virtual ip for oracle rac
10.198.91.240   sxrtfs01-vip
10.198.91.241   sxrtfs02-vip
10.198.91.242   sxrtfs03-vip

# private ip for oracle rac
200.100.0.1  sxrtfs01-priv
200.100.0.2  sxrtfs02-priv
200.100.0.3  sxrtfs03-priv



Take node sxrtfs01 as an example:

ifconfig e1000g0:1 plumb
ifconfig e1000g0:1  10.198.91.240 netmask 255.255.248.0 up

ifconfig e1000g2 plumb
ifconfig e1000g2  200.100.0.1  netmask 255.255.255.0 up




-bash-3.00# ifconfig -a
lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
        inet 127.0.0.1 netmask ff000000
e1000g0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
        inet 10.198.90.132 netmask fffff800 broadcast 10.198.95.255
        ether 0:14:4f:1f:e7:a4
e1000g0:1: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
        inet 10.198.91.240 netmask fffff800 broadcast 10.198.95.255
e1000g2: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 3
        inet 200.100.0.1 netmask ffffff00 broadcast 200.100.0.255
        ether 0:14:4f:1f:e7:a6
-bash-3.00#


//on node sxrtfs02
ifconfig e1000g0:1 plumb
ifconfig e1000g0:1  10.198.91.241 netmask 255.255.248.0 up

ifconfig e1000g2 plumb
ifconfig e1000g2  200.100.0.2  netmask 255.255.255.0 up


//on node sxrtfs03
ifconfig e1000g0:1 plumb
ifconfig e1000g0:1  10.198.91.242 netmask 255.255.248.0 up

ifconfig e1000g2 plumb
ifconfig e1000g2  200.100.0.3  netmask 255.255.255.0 up
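
Optionally, verify basic reachability of the public and private addresses; shown here from sxrtfs01, the other nodes are symmetric:

# from sxrtfs01
ping sxrtfs02            # public
ping sxrtfs03
ping sxrtfs02-priv       # private interconnect
ping sxrtfs03-priv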




on node sxrtfs01

-bash-3.00# more /etc/hostname.*
::::::::::::::
/etc/hostname.e1000g0
::::::::::::::
sxrtfs01
::::::::::::::

::::::::::::::
/etc/hostname.e1000g2
::::::::::::::
sxrtfs01-priv

// no /etc/hostname.e1000g0:1 file is needed; the VIP is managed by CRS after installation
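
Optionally, the non-default netmasks can be made persistent across reboots with /etc/netmasks entries; a sketch, assuming the /21 public mask shown above (network 10.198.88.0) and the /24 private network:

# /etc/netmasks (same on every node)
10.198.88.0     255.255.248.0
200.100.0.0     255.255.255.0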



d. enable rsh for the oracle user on each node

  
   #su - oracle
   $ vi .rhosts
      +
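
On Solaris 10 the rsh/rlogin services also have to be enabled, and it is worth confirming that the oracle user can rsh to every node without a password; a minimal sketch (service FMRIs as on a default Solaris 10 install):

# as root, on every node
svcadm enable svc:/network/shell:default     # in.rshd
svcadm enable svc:/network/login:rlogin      # in.rlogind

# as oracle, from sxrtfs01 (repeat from the other two nodes)
rsh sxrtfs02 date
rsh sxrtfs03 date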


  e. create install directories and set permissions (on each node)





+++++

  mkdir -p /opt/app
  mkdir -p /opt/app/crs
  mkdir -p /opt/app/crs/crshome
  mkdir -p /opt/app/oraInventory
  mkdir -p /opt/app/oracle
  mkdir -p /opt/app/oracle/orahome
  chown -R oracle:oinstall /opt/app
  chmod -R 775  /opt/app




+++++

f. set the oracle user's profile





PATH=$PATH:/usr/sbin:/sbin:/usr/bin:/usr/lib/vxvm/bin:\
/opt/VRTSvxfs/sbin:/opt/VRTSvcs/bin:/opt/VRTS/bin:\
/opt/VRTSvcs/rac/bin:/opt/VRTSob/bin:/opt/VRTSvcs/vxfen/bin/:.
MANPATH=$MANPATH:/usr/share/man:/opt/VRTS/man:.
ORACLE_BASE=/opt/app/oracle
ORACLE_HOME=/opt/app/oracle/orahome
ORACLE_SID=rac
CRS_BASE=/opt/app/crs
CRS_HOME=/opt/app/crs/crshome
PATH=$PATH:$ORACLE_HOME/bin:$CRS_HOME/bin
CLASSPATH=$CLASSPATH:$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:\
$ORACLE_HOME/rdbms/jlib:$ORACLE_HOME/network/jlib
export ORACLE_BASE ORACLE_HOME ORACLE_SID CRS_BASE CRS_HOME PATH MANPATH CLASSPATH




  g. set OS kernel parameters

   /etc/system




* For Oracle RAC

set semsys:seminfo_semmni =  100
set semsys:seminfo_semmns = 1024
set semsys:seminfo_semmsl = 256
set semsys:seminfo_semvmx = 32767
set shmsys:shminfo_shmmax = 4294967295
set shmsys:shminfo_shmmin = 100


  


3) Set up raw disk partitions for Oracle RAC

#format


CAUTION: slices 2, 8 and 9 cannot be used, so only 7 of the 10 slices on each disk are available.
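
The slice layout of each disk can also be checked read-only, without entering format:

# print the VTOC of the first shared disk (slice 2 = whole disk)
prtvtoc /dev/rdsk/c4t50060E801002A030d0s2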





Disk 1     c4t50060E801002A030d0

    ericractest_system_raw_500m       /dev/rdsk/c4t50060E801002A030d0s3      system
    ericractest_sysaux_raw_800m       /dev/rdsk/c4t50060E801002A030d0s4      sysaux
    ericractest_undotbs1_raw_600m     /dev/rdsk/c4t50060E801002A030d0s5      undotbs1
    ericractest_temp_raw_250m         /dev/rdsk/c4t50060E801002A030d0s6      temp
    ericractest_example_raw_160m      /dev/rdsk/c4t50060E801002A030d0s7      example
    ericractest_users_raw_120m        /dev/rdsk/c4t50060E801002A030d0s0      users
    ericractest_redo1_1_raw_130m      /dev/rdsk/c4t50060E801002A030d0s1      redo1_1
  


partition> p
Current partition table (unnamed):
Total disk cylinders available: 3913 + 2 (reserved cylinders)

Part      Tag    Flag     Cylinders        Size            Blocks
  0 unassigned    wm     396 -  411      125.51MB    (16/0/0)     257040
  1 unassigned    wm     412 -  428      133.35MB    (17/0/0)     273105
  2     backup    wu       0 - 3912       29.98GB    (3913/0/0) 62862345
  3 unassigned    wm     100 -  163      502.03MB    (64/0/0)    1028160
  4 unassigned    wm     164 -  265      800.11MB    (102/0/0)   1638630
  5 unassigned    wm     266 -  342      604.01MB    (77/0/0)    1237005
  6 unassigned    wm     343 -  374      251.02MB    (32/0/0)     514080
  7 unassigned    wm     375 -  395      164.73MB    (21/0/0)     337365
  8       boot    wu       0 -    0        7.84MB    (1/0/0)       16065
  9 unassigned    wm       0               0         (0/0/0)           0

partition>



Disk 2   c4t50060E801002A030d1




    ericractest_redo1_2_raw_130m           /dev/rdsk/c4t50060E801002A030d1s0     redo1_2
    ericractest_redo2_1_raw_130m           /dev/rdsk/c4t50060E801002A030d1s1     redo2_1
    ericractest_redo2_2_raw_130m           /dev/rdsk/c4t50060E801002A030d1s3     redo2_2
    ericractest_redo3_1_raw_130m           /dev/rdsk/c4t50060E801002A030d1s4     redo3_1
    ericractest_redo3_2_raw_130m           /dev/rdsk/c4t50060E801002A030d1s5     redo3_2
    ericractest_control1_raw_110m          /dev/rdsk/c4t50060E801002A030d1s6     control1
    ericractest_control2_raw_110m          /dev/rdsk/c4t50060E801002A030d1s7     control2


partition> p
Current partition table (original):
Total disk cylinders available: 3913 + 2 (reserved cylinders)

Part      Tag    Flag     Cylinders        Size            Blocks
  0 unassigned    wm     100 -  116      133.35MB    (17/0/0)     273105
  1 unassigned    wm     117 -  133      133.35MB    (17/0/0)     273105
  2     backup    wu       0 - 3912       29.98GB    (3913/0/0) 62862345
  3 unassigned    wm     134 -  150      133.35MB    (17/0/0)     273105
  4 unassigned    wm     151 -  167      133.35MB    (17/0/0)     273105
  5 unassigned    wm     168 -  184      133.35MB    (17/0/0)     273105
  6 unassigned    wm     185 -  199      117.66MB    (15/0/0)     240975
  7 unassigned    wm     200 -  214      117.66MB    (15/0/0)     240975
  8       boot    wu       0 -    0        7.84MB    (1/0/0)       16065
  9 unassigned    wm       0               0         (0/0/0)           0

partition>







Disk 3   c4t50060E801002A030d2



    ericractest_spfile_raw_10m         /dev/rdsk/c4t50060E801002A030d2s0     spfile     
    ericractest_pwdfile_raw_5m         /dev/rdsk/c4t50060E801002A030d2s1     pwdfile
    ora_ocr_raw_500m                     /dev/rdsk/c4t50060E801002A030d2s3      //ocr
    ora_vote_raw_100m                    /dev/rdsk/c4t50060E801002A030d2s4      //vote
    ericractest_undotbs2_raw_600m       /dev/rdsk/c4t50060E801002A030d2s5      undotbs2
    ericractest_undotbs3_raw_600m       /dev/rdsk/c4t50060E801002A030d2s6      undotbs3

partition> p
Current partition table (original):
Total disk cylinders available: 3913 + 2 (reserved cylinders)

Part      Tag    Flag     Cylinders        Size            Blocks
  0 unassigned    wm     180 -  181       15.69MB    (2/0/0)       32130
  1 unassigned    wm     102 -  102        7.84MB    (1/0/0)       16065
  2     backup    wu       0 - 3912       29.98GB    (3913/0/0) 62862345
  3 unassigned    wm     103 -  166      502.03MB    (64/0/0)    1028160
  4 unassigned    wm     167 -  179      101.98MB    (13/0/0)     208845
  5 unassigned    wm     182 -  258      604.01MB    (77/0/0)    1237005
  6 unassigned    wm     259 -  335      604.01MB    (77/0/0)    1237005
  7 unassigned    wm       0               0         (0/0/0)           0
  8       boot    wu       0 -    0        7.84MB    (1/0/0)       16065
  9 unassigned    wm       0               0         (0/0/0)           0

partition>



4) Prepare the OCR and voting devices for CRS

on each node


mkdir -p /export/home/oracle/ocr
cd /export/home/oracle/ocr
ln -s /dev/rdsk/c4t50060E801002A030d2s3 ocr
chown -RL root:oinstall ocr
chmod -R 640 ocr
ln -s /dev/rdsk/c4t50060E801002A030d2s4 vote
chown -RL oracle:dba vote
chmod -R 640 vote
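
A quick check that the links resolve and that the underlying character devices carry the intended ownership (ls -L follows the symlinks):

cd /export/home/oracle/ocr
ls -l                 # the symlinks themselves
ls -lL ocr vote       # the devices they point to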

5) Install CRS

on node sxrtfs01

-bash-3.00# unzip 10201_clusterware_solx86_64.zip


remember these paths; they are entered in the installer:
  ocr   /export/home/oracle/ocr/ocr
  vote  /export/home/oracle/ocr/vote


$runInstaller
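
runInstaller is an X client and is started as the oracle user, so a reachable DISPLAY is needed first; a minimal sketch (the display host and the unzip directory are assumptions):

xhost +                                  # on the workstation running the X server
su - oracle
export DISPLAY=<your_workstation>:0.0    # assumption: replace with a reachable X display
cd /export/home/oracle/clusterware       # assumption: directory created by the unzip
./runInstaller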



//execute 'root.sh'

On the first node, sxrtfs01


-bash-3.00# ./root.sh
WARNING: directory '/opt/app/oracle' is not owned by root
WARNING: directory '/opt/app' is not owned by root
Checking to see if Oracle CRS stack is already configured

Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory '/opt/app/oracle' is not owned by root
WARNING: directory '/opt/app' is not owned by root
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: sxrtfs01 sxrtfs01-priv sxrtfs01
node 2: sxrtfs02 sxrtfs02-priv sxrtfs02
node 3: sxrtfs03 sxrtfs03-priv sxrtfs03
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Now formatting voting device: /export/home/oracle/ocr/vote
Format of 1 voting devices complete.
Startup will be queued to init within 30 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
        sxrtfs01
CSS is inactive on these nodes.
        sxrtfs02
        sxrtfs03
Local node checking complete.
Run root.sh on remaining nodes to start CRS daemons.
-bash-3.00#




On the second node, sxrtfs02

-bash-3.00# ./root.sh
WARNING: directory '/opt/app/oracle' is not owned by root
WARNING: directory '/opt/app' is not owned by root
Checking to see if Oracle CRS stack is already configured

Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory '/opt/app/oracle' is not owned by root
WARNING: directory '/opt/app' is not owned by root
clscfg: EXISTING configuration version 3 detected.
clscfg: version 3 is 10G Release 2.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: sxrtfs01 sxrtfs01-priv sxrtfs01
node 2: sxrtfs02 sxrtfs02-priv sxrtfs02
node 3: sxrtfs03 sxrtfs03-priv sxrtfs03
clscfg: Arguments check out successfully.

NO KEYS WERE WRITTEN. Supply -force parameter to override.
-force is destructive and will destroy any previous cluster
configuration.
Oracle Cluster Registry for cluster has already been initialized
Startup will be queued to init within 30 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
        sxrtfs01
        sxrtfs02
CSS is inactive on these nodes.
        sxrtfs03
Local node checking complete.
Run root.sh on remaining nodes to start CRS daemons.
-bash-3.00#





On the third node, sxrtfs03


-bash-3.00# ./root.sh
WARNING: directory '/opt/app/oracle' is not owned by root
WARNING: directory '/opt/app' is not owned by root
Checking to see if Oracle CRS stack is already configured

Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory '/opt/app/oracle' is not owned by root
WARNING: directory '/opt/app' is not owned by root
clscfg: EXISTING configuration version 3 detected.
clscfg: version 3 is 10G Release 2.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: sxrtfs01 sxrtfs01-priv sxrtfs01
node 2: sxrtfs02 sxrtfs02-priv sxrtfs02
node 3: sxrtfs03 sxrtfs03-priv sxrtfs03
clscfg: Arguments check out successfully.

NO KEYS WERE WRITTEN. Supply -force parameter to override.
-force is destructive and will destroy any previous cluster
configuration.
Oracle Cluster Registry for cluster has already been initialized
Startup will be queued to init within 30 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
        sxrtfs01
        sxrtfs02
        sxrtfs03
CSS is active on all nodes.
Waiting for the Oracle CRSD and EVMD to start
Oracle CRS stack installed and running under init(1M)
Running vipca(silent) for configuring nodeapps

Creating VIP application resource on (3) nodes....
Creating GSD application resource on (3) nodes....
Creating ONS application resource on (3) nodes....
Starting VIP application resource on (3) nodes....
Starting GSD application resource on (3) nodes....
Starting ONS application resource on (3) nodes....


Done.
-bash-3.00#



6) If the CRS installation has failed once, you have to clean it up before retrying

-bash-3.00# ./root.sh
WARNING: directory '/opt/app/oracle' is not owned by root
WARNING: directory '/opt/app' is not owned by root
Checking to see if Oracle CRS stack is already configured
Oracle CRS stack is already configured and will be running under init(1M)




#cd /opt/app/crs/crshome/install

-bash-3.00# ./rootdelete.sh
Shutting down Oracle Cluster Ready Services (CRS):
Jul 24 14:57:01.785 | INF | daemon shutting down
Stopping resources.
Error while stopping resources. Possible cause: CRSD is down.
Stopping CSSD.
Shutting down CSS daemon.
Shutdown request successfully issued.
Shutdown has begun. The daemons should exit soon.
Checking to see if Oracle CRS stack is down...
Oracle CRS stack is not running.
Oracle CRS stack is down now.
Removing script for Oracle Cluster Ready services
Updating ocr file for downgrade
Cleaning up SCR settings in '/var/opt/oracle/scls_scr'


-bash-3.00# ./rootdeinstall.sh

Removing contents from OCR device
2560+0 records in
2560+0 records out


7) Verify the CRS installation



-bash-3.00# pwd        
/opt/app/crs/crshome/bin
-bash-3.00# ./olsnodes  -n
sxrtfs01        1
sxrtfs02        2
sxrtfs03        3
-bash-3.00#


// check that all the CRS-related processes have started

-bash-3.00# ps -ef|grep oprocd
    root 29605 29532   0 14:57:50 ?           0:00 /opt/app/oracle/orahome/bin/oprocd run -t 1000 -m 500
    root 29532 29341   0 14:57:50 ?           0:00 /bin/sh /etc/init.d/init.cssd oprocd
    root  1135  9123   0 15:15:09 pts/2       0:00 grep oprocd
-bash-3.00# ps -ef|grep evmd  
  oracle 29340     1   0 14:57:50 ?           0:00 /opt/app/oracle/orahome/bin/evmd.bin
    root  1447  9123   0 15:15:20 pts/2       0:00 grep evmd
-bash-3.00# ps -ef|grep ocssd
  oracle 29640 29639   0 14:57:50 ?           0:01 /opt/app/oracle/orahome/bin/ocssd.bin
    root  1852  9123   0 15:15:34 pts/2       0:00 grep ocssd
-bash-3.00# ps -ef|grep crsd  
    root 29343     1   0 14:57:50 ?           0:01 /opt/app/oracle/orahome/bin/crsd.bin reboot
    root  2226  9123   0 15:15:47 pts/2       0:00 grep crsd
-bash-3.00#

++

  ps -ef|grep oprocd
  ps -ef|grep evmd  
  ps -ef|grep ocssd
  ps -ef|grep crsd  
   
+++
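
The CRS stack itself can also be queried from CRS_HOME/bin; crsctl reports daemon health and crs_stat lists the registered resources (the VIP/GSD/ONS nodeapps, and later the database instances):

cd /opt/app/crs/crshome/bin
./crsctl check crs
./crs_stat -t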




// check that all the CPUs are online (optional)


-bash-3.00# psrinfo
0       on-line   since 07/24/2008 13:27:08
1       on-line   since 07/24/2008 13:27:11




8) Install the Oracle database software

on node sxrtfs01

-bash-3.00# unzip 10201_database_solx86_64.zip


$runInstaller


// * install the database software only (do not create a database at this stage)

* I manually ran # vipca & afterwards (as root; see the sketch below)
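
vipca lives in CRS_HOME/bin and has to be run as root with an X display; a minimal sketch (the display host is an assumption):

# as root on sxrtfs01
export DISPLAY=<your_workstation>:0.0    # assumption: replace with a reachable X display
cd /opt/app/crs/crshome/bin
./vipca &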


then

$netca


bash-3.00$ ./netca

Oracle Net Services Configuration:
Configuring Listener:LISTENER
Default local naming configuration complete.
sxrtfs01...
sxrtfs02...
sxrtfs03...
Listener configuration complete.
Oracle Net Services configuration successful. The exit code is 0



bash-3.00$



9) Prepare the Oracle datafile links (raw devices)

-bash-3.00# pwd
/export/home/oracle
-bash-3.00# mkdir db
  
# chown -R oracle:dba db



cd db      # the links live in /export/home/oracle/db (see the configuration file in step 10)
ln -s /dev/rdsk/c4t50060E801002A030d0s3 system
ln -s /dev/rdsk/c4t50060E801002A030d0s4 sysaux
ln -s /dev/rdsk/c4t50060E801002A030d0s5 undotbs1
ln -s /dev/rdsk/c4t50060E801002A030d2s5 undotbs2
ln -s /dev/rdsk/c4t50060E801002A030d2s6 undotbs3
ln -s /dev/rdsk/c4t50060E801002A030d0s6 temp
ln -s /dev/rdsk/c4t50060E801002A030d0s7 example
ln -s /dev/rdsk/c4t50060E801002A030d0s0 users
ln -s /dev/rdsk/c4t50060E801002A030d0s1 redo1_1
ln -s /dev/rdsk/c4t50060E801002A030d1s0 redo1_2
ln -s /dev/rdsk/c4t50060E801002A030d1s1 redo2_1
ln -s /dev/rdsk/c4t50060E801002A030d1s3 redo2_2
ln -s /dev/rdsk/c4t50060E801002A030d1s4 redo3_1
ln -s /dev/rdsk/c4t50060E801002A030d1s5 redo3_2
ln -s /dev/rdsk/c4t50060E801002A030d1s6 control1
ln -s /dev/rdsk/c4t50060E801002A030d1s7 control2
ln -s /dev/rdsk/c4t50060E801002A030d2s0 spfile
ln -s /dev/rdsk/c4t50060E801002A030d2s1 pwdfile

  chown -RL oracle:dba *
  chmod -R 660 *

10) Prepare the raw datafile configuration file for DBCA


-bash-3.00# pwd
/opt/app/oracle

-bash-3.00# touch tree_raw.conf

-bash-3.00# cat tree_raw.conf
system=/export/home/oracle/db/system
sysaux=/export/home/oracle/db/sysaux
example=/export/home/oracle/db/example
users=/export/home/oracle/db/users
temp=/export/home/oracle/db/temp
undotbs1=/export/home/oracle/db/undotbs1
undotbs2=/export/home/oracle/db/undotbs2
undotbs3=/export/home/oracle/db/undotbs3
redo1_1=/export/home/oracle/db/redo1_1
redo1_2=/export/home/oracle/db/redo1_2
redo2_1=/export/home/oracle/db/redo2_1
redo2_2=/export/home/oracle/db/redo2_2
redo3_1=/export/home/oracle/db/redo3_1
redo3_2=/export/home/oracle/db/redo3_2
control1=/export/home/oracle/db/control1
control2=/export/home/oracle/db/control2
spfile=/export/home/oracle/db/spfile
pwdfile=/export/home/oracle/db/pwdfile
-bash-3.00#



Set the raw device configuration file environment variable


in /etc/profile or in the oracle user's profile:

DBCA_RAW_CONFIG=/opt/app/oracle/tree_raw.conf
export DBCA_RAW_CONFIG
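
A quick sanity check that every entry in the configuration file resolves to a character device readable by oracle (a hypothetical bash one-liner):

while IFS='=' read name path; do
    ls -lL "$path"
done < /opt/app/oracle/tree_raw.conf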






11) Set the environment (ORACLE_SID) on each node


on node sxrtfs01
export ORACLE_SID=rac1

on node sxrtfs02
export ORACLE_SID=rac2

on node sxrtfs03
export ORACLE_SID=rac3




12) Create the cluster database

On node sxrtfs01

$dbca






13) Verify the Oracle RAC installation

-bash-3.00$ srvctl status database -d rac
Instance rac1 is running on node sxrtfs01
Instance rac2 is running on node sxrtfs02
Instance rac3 is running on node sxrtfs03


-bash-3.00$ srvctl status nodeapps -n sxrtfs01
VIP is running on node: sxrtfs01
GSD is running on node: sxrtfs01
Listener is running on node: sxrtfs01
ONS daemon is running on node: sxrtfs01
-bash-3.00$ srvctl status nodeapps -n sxrtfs02
VIP is running on node: sxrtfs02
GSD is running on node: sxrtfs02
Listener is running on node: sxrtfs02
ONS daemon is running on node: sxrtfs02
-bash-3.00$ srvctl status nodeapps -n sxrtfs03
VIP is running on node: sxrtfs03
GSD is running on node: sxrtfs03
Listener is running on node: sxrtfs03
ONS daemon is running on node: sxrtfs03
-bash-3.00$



-bash-3.00$ srvctl stop database -d rac
-bash-3.00$




-bash-3.00$ srvctl status database -d rac
Instance rac1 is not running on node sxrtfs01
Instance rac2 is not running on node sxrtfs02
Instance rac3 is not running on node sxrtfs03
-bash-3.00$





-bash-3.00$ srvctl status nodeapps -n sxrtfs01
VIP is running on node: sxrtfs01
GSD is running on node: sxrtfs01
Listener is running on node: sxrtfs01
ONS daemon is running on node: sxrtfs01
-bash-3.00$ srvctl status nodeapps -n sxrtfs02
VIP is running on node: sxrtfs02
GSD is running on node: sxrtfs02
Listener is running on node: sxrtfs02
ONS daemon is running on node: sxrtfs02
-bash-3.00$ srvctl status nodeapps -n sxrtfs03
VIP is running on node: sxrtfs03
GSD is running on node: sxrtfs03
Listener is running on node: sxrtfs03
ONS daemon is running on node: sxrtfs03
-bash-3.00$






-bash-3.00$ srvctl start database -d rac





-bash-3.00$ sqlplus / as sysdba

SQL*Plus: Release 10.2.0.1.0 - Production on Thu Jul 24 17:00:52 2008

Copyright (c) 1982, 2005, Oracle.  All rights reserved.


Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, Real Application Clusters, OLAP and Data Mining options

SQL> SELECT * FROM V$ACTIVE_INSTANCES;

INST_NUMBER INST_NAME
----------- ------------------------------------------------------------
          1 sxrtfs01:rac1
          2 sxrtfs02:rac2
          3 sxrtfs03:rac3

SQL>
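
gv$instance gives the same picture for all instances from a single session; a minimal example, run from any node as the oracle user:

sqlplus -S / as sysdba <<'EOF'
SELECT instance_name, host_name, status FROM gv$instance;
EOF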




14) Go get a coffee.

Reply #6 (2008-07-24 20:39):
There isn't even a single explanation or comment in here.

Reply #7 (2008-07-25 13:30):
Heh, the moderators won't mark this thread as recommended; that really hurts the motivation. Heh.

Reply #8 (2008-07-25 13:52):
Quoting Chinese_Dragon (2008-7-25 13:30): Heh, the moderators won't mark this thread as recommended; that really hurts the motivation. Heh.

This board is basically unmoderated these days.

Several of my own posts took real time and effort to put together and were helpful to other users; I won't claim they deserved to be featured, but they didn't even get a 'keep' mark...

Reply #9 (2008-07-25 14:01):
UP!

Reply #10 (2008-07-25 14:33):
Quoting Chinese_Dragon (2008-7-25 13:30): Heh, the moderators won't mark this thread as recommended; that really hurts the motivation. Heh.

Brother, what do you do for a living?

You come across as someone at the "V" company who tests Oracle setups professionally.
