The post you just read is about creating raw devices on a single machine and adding them to an Oracle database.
If you need to create raw devices for RAC from the command line, see the article on installing Oracle RAC on AIX:
http://bbs.chinaunix.net/viewthr ... 0&highlight=aix
The key part is excerpted below:
2.4.2 Create Shared RAW Logical Volumes if not using GPFS. See section 2.4.6 for details about GPFS.
mklv -y'db_name_cntrl1_110m' -w'n' -s'n' -r'n' usupport_vg 4 hdisk5
mklv -y'db_name_cntrl2_110m' -w'n' -s'n' -r'n' usupport_vg 4 hdisk5
mklv -y'db_name_system_400m' -w'n' -s'n' -r'n' usupport_vg 13 hdisk5
mklv -y'db_name_users_120m' -w'n' -s'n' -r'n' usupport_vg 4 hdisk5
mklv -y'db_name_drsys_90m' -w'n' -s'n' -r'n' usupport_vg 3 hdisk5
mklv -y'db_name_tools_12m' -w'n' -s'n' -r'n' usupport_vg 1 hdisk5
mklv -y'db_name_temp_100m' -w'n' -s'n' -r'n' usupport_vg 4 hdisk5
mklv -y'db_name_undotbs1_312m' -w'n' -s'n' -r'n' usupport_vg 10 hdisk5
mklv -y'db_name_undotbs2_312m' -w'n' -s'n' -r'n' usupport_vg 10 hdisk5
mklv -y'db_name_log11_120m' -w'n' -s'n' -r'n' usupport_vg 4 hdisk5
mklv -y'db_name_log12_120m' -w'n' -s'n' -r'n' usupport_vg 4 hdisk5
mklv -y'db_name_log21_120m' -w'n' -s'n' -r'n' usupport_vg 4 hdisk5
mklv -y'db_name_log22_120m' -w'n' -s'n' -r'n' usupport_vg 4 hdisk5
mklv -y'db_name_indx_70m' -w'n' -s'n' -r'n' usupport_vg 3 hdisk5
mklv -y'db_name_cwmlite_100m' -w'n' -s'n' -r'n' usupport_vg 4 hdisk5
mklv -y'db_name_example_160m' -w'n' -s'n' -r'n' usupport_vg 5 hdisk5
mklv -y'db_name_oemrepo_20m' -w'n' -s'n' -r'n' usupport_vg 1 hdisk5
mklv -y'db_name_spfile_5m' -w'n' -s'n' -r'n' usupport_vg 1 hdisk5
mklv -y'db_name_srvmconf_100m' -w'n' -s'n' -r'n' usupport_vg 4 hdisk5
Substitute your database name for the "db_name" value. When the volume group was created, a partition size of 32 megabytes was used. The seventh field is the number of partitions that make up the logical volume; for example, since "db_name_cntrl1_110m" needs to be 110 megabytes, we need 4 partitions (4 x 32 MB = 128 MB).
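The round-up arithmetic above can be sketched in plain shell. This is a hypothetical helper, not part of the original guide; it assumes the 32 MB physical partition size mentioned above:

```shell
#!/bin/sh
# Hypothetical helper: compute how many physical partitions (PPs) a logical
# volume needs, given the LV size in MB and the VG's 32 MB PP size.
pp_size=32

partitions_for() {
    # integer round-up: (size + pp_size - 1) / pp_size
    echo $(( ($1 + pp_size - 1) / pp_size ))
}

partitions_for 110   # cntrl1_110m   -> 4  (4 x 32 MB = 128 MB >= 110 MB)
partitions_for 400   # system_400m   -> 13
partitions_for 312   # undotbs1_312m -> 10
```

Running it against the sizes in the LV names reproduces the partition counts used in the mklv commands above.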
The raw partitions are created in the "/dev" directory, and it is the character devices that will be used. For example, "mklv -y'db_name_cntrl1_110m' -w'n' -s'n' -r'n' usupport_vg 4 hdisk5" creates two device files:
/dev/db_name_cntrl1_110m
/dev/rdb_name_cntrl1_110m
Change the permissions on the character devices so the software owner owns them:
# chown oracle:dba /dev/rdb_name*
2.4.3 Import the Volume Group on to the Other Nodes
Use "importvg" to import the oracle_vg volume group on all of the other nodes
On the first machine, type:
% varyoffvg oracle_vg
On the other nodes, import the definition of the volume group using "smit vg" :
Select "Import a Volume Group"
Type or select values in entry fields.
Press Enter AFTER making all desired changes.
Import a Volume Group

                                                   [Entry Fields]
    VOLUME GROUP name                             [oracle_vg]
  * PHYSICAL VOLUME name                          [hdisk5]          +
    Volume Group MAJOR NUMBER                     [57]              +#
    Make this VG Concurrent Capable?               no               +
    Make default varyon of VG Concurrent?          no               +
It is possible that the physical volume name (hdisk) could be different on each node. Check the PVID of the disk using "lspv", and be sure to pick the hdisk that has the same PVID as the disk used to create the volume group on the first node. Also make sure the same major number is used; this number has to be unused on all of the nodes. The "Make default varyon of VG Concurrent?" option should be set to "no". The volume group was already created concurrent capable, so the "Make this VG Concurrent Capable?" option can be left at "no". The equivalent command line for importing the volume group, after varying it off on the node where it was originally created, would be:
% importvg -V <major#> -y <vgname> hdisk#
% chvg -an <vgname>
% varyoffvg <vgname>
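The PVID check described above can be scripted. A minimal sketch, assuming "lspv" output has the hdisk name in column 1 and the PVID in column 2 (the sample PVID is a made-up placeholder, not a value from the original post):

```shell
#!/bin/sh
# Hypothetical sketch: on another node, find the hdisk whose PVID matches
# the disk used on the first node. Replace the placeholder PVID with the
# value reported by "lspv" on the first node.
pvid="00c1234500abcdef"   # assumed example PVID

# Assumed "lspv" columns: <hdisk> <PVID> <vgname> [<state>]
lspv | awk -v id="$pvid" '$2 == id { print $1 }'
```

Whatever hdisk name this prints is the one to supply as the PHYSICAL VOLUME name when importing the volume group on that node.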
After importing the volume group onto each node be sure to change the ownership of the character devices to the software owner:
# chown oracle:dba /dev/rdb_name*
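A quick sanity check after the chown can flag any raw device not owned by oracle:dba. This one-liner is a hypothetical addition, not from the original guide; it assumes standard "ls -l" columns where field 3 is the owner and field 4 is the group:

```shell
#!/bin/sh
# Hypothetical check: print any /dev/rdb_name* device whose owner:group
# is not oracle:dba. No output means the ownership is correct.
ls -l /dev/rdb_name* | awk '$3 != "oracle" || $4 != "dba" { print "wrong owner:", $NF }'
```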