Last edited by lem0 on 2016-08-11 14:32
I have an idle T5140 and some spare time, so I created two LDoms on it to build a two-node environment that can be used for Sun Cluster or an Oracle RAC setup.
Plan:
Floating IP (db):  192.168.2.9
Floating IP (app): 192.168.2.10
Guest domain db1: 3.5 cores, 6 GB RAM, Solaris 10, 100 GB boot volume (ospool/db1), root/root123, 192.168.2.7
Guest domain db2: 3.5 cores, 6 GB RAM, Solaris 10, 100 GB boot volume (ospool/db2), root/root123, 192.168.2.8
Host: T5140, 2*4 cores, 16 GB RAM, 1*146 GB disk + 3*300 GB disks, 4 Ethernet ports, Solaris 11.1, root/root123, 192.168.2.6, SP IP 192.168.2.5. The 146 GB disk is the host's system disk; the second disk holds both guests' boot volumes; the third disk is the cluster quorum (vote) disk; the fourth disk simulates the shared data disk.
**Note: Sun Cluster recognizes a shared disk as the same DID device in both LDoms only when the entire physical disk is exported to both guests. If you export a single slice or a ZFS volume instead, the cluster software treats them as two different devices and two DID numbers appear.
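Once the cluster software is installed in the guests, the whole-disk/DID claim above can be checked from either node. This is a sketch using the standard Sun Cluster device commands; the DID instance names will differ on your system:

```shell
# A correctly shared disk appears as ONE DID instance with a path
# from each node; a slice or zvol exported twice would instead show
# up as two different DID instances.
cldevice list -v
# On Sun Cluster 3.1 and earlier the equivalent is:
scdidadm -L
```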
1. Create the control domain (primary), 1 core, 3 GB RAM:
svcadm enable svc:/ldoms/vntsd:default
ldm add-vds primary-vds0 primary
ldm add-vcc port-range=5000-5200 primary-vcc0 primary
ldm add-vsw net-dev=net0 primary-vsw0 primary    # system NIC 1
ldm add-vsw net-dev=net1 primary-vsw1 primary    # system NIC 2
ldm add-vsw net-dev=net2 primary-vsw2 primary    # heartbeat NIC 1
ldm add-vsw net-dev=net3 primary-vsw3 primary    # heartbeat NIC 2
ldm start-reconf primary
ldm set-vcpu 8 primary
ldm set-memory 3104M primary
ldm add-spconfig initial
reboot
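Before creating the guests, it is worth confirming that the services defined in step 1 survived the reboot (standard `ldm` verification commands; output abbreviated here):

```shell
# The virtual disk server, console concentrator and all four
# virtual switches should be listed
ldm list-services primary
# Confirm the primary domain's final vcpu/memory allocation
ldm list
# Confirm the saved SP configuration is the active one
ldm list-spconfig
```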
2. Create guest domain db1, 3.5 cores, 6 GB RAM:
ldm add-domain db1
ldm set-vcpu 28 db1
ldm set-memory 6G db1
ldm set-variable auto-boot\?=false db1
ldm add-vnet linkprop=phys-state vnet0 primary-vsw0 db1    # system NIC 1
ldm add-vnet linkprop=phys-state vnet1 primary-vsw1 db1    # system NIC 2
ldm add-vnet linkprop=phys-state vnet2 primary-vsw2 db1    # heartbeat NIC 1
ldm add-vnet linkprop=phys-state vnet3 primary-vsw3 db1    # heartbeat NIC 2
3. Create guest domain db2, 3.5 cores, 6 GB RAM (the heartbeat vnets must land on the same two switches as db1's, otherwise the interconnect never links up):
ldm add-domain db2
ldm set-vcpu 28 db2
ldm set-memory 6G db2
ldm set-variable auto-boot\?=false db2
ldm add-vnet linkprop=phys-state vnet0 primary-vsw0 db2    # system NIC 1
ldm add-vnet linkprop=phys-state vnet1 primary-vsw1 db2    # system NIC 2
ldm add-vnet linkprop=phys-state vnet2 primary-vsw2 db2    # heartbeat NIC 1
ldm add-vnet linkprop=phys-state vnet3 primary-vsw3 db2    # heartbeat NIC 2
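The `set-vcpu 28` value comes from the T5140's UltraSPARC T2 processors, where each physical core exposes 8 hardware threads, so 3.5 cores correspond to 28 vcpus:

```shell
# 3.5 cores x 8 threads/core = 28 vcpus
# (expressed in half-cores to avoid floating point in shell arithmetic)
half_cores=7            # 3.5 cores = 7 half-cores
threads_per_core=8
echo $(( half_cores * threads_per_core / 2 ))   # prints 28
```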
6. Create the two guest OS volumes:
zpool create ospool c3t1d0
zfs create -V 100g ospool/db1
zfs create -V 100g ospool/db2
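The two boot volumes can be sanity-checked before exporting them (standard ZFS commands):

```shell
# Each guest's boot volume should show as a 100G ZFS volume
zfs list -t volume -r ospool
# Confirm the pool sits on the intended disk and is healthy
zpool status ospool
```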
7. Export each disk as a virtual device through the vdisk service:
ldm add-vdsdev /dev/zvol/dsk/ospool/db1 osdb1@primary-vds0
ldm add-vdsdev /dev/zvol/dsk/ospool/db2 osdb2@primary-vds0
ldm add-vdsdev /dev/dsk/c3t2d0s2 quo1@primary-vds0
ldm add-vdsdev -f /dev/dsk/c3t2d0s2 quo2@primary-vds0    # -f: backend is already exported as quo1
ldm add-vdsdev /dev/dsk/c3t3d0s2 data1@primary-vds0
ldm add-vdsdev -f /dev/dsk/c3t3d0s2 data2@primary-vds0   # -f: backend is already exported as data1
8. Add the virtual disks to the respective guests:
ldm add-vdisk bootdisk osdb1@primary-vds0 db1
ldm add-vdisk bootdisk osdb2@primary-vds0 db2
ldm add-vdisk quodisk quo1@primary-vds0 db1
ldm add-vdisk quodisk quo2@primary-vds0 db2
ldm add-vdisk dbdisk data1@primary-vds0 db1
ldm add-vdisk dbdisk data2@primary-vds0 db2
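After steps 7 and 8 the export/attach mappings can be cross-checked on the control domain before booting anything:

```shell
# All six exported backends (osdb1, osdb2, quo1/2, data1/2)
# should appear under primary-vds0
ldm list-services primary
# Each guest should list its bootdisk, quodisk and dbdisk
ldm list-bindings db1
ldm list-bindings db2
```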
9. Attach the Solaris 11 install ISO to both guests:
ldm add-vdsdev /root/sol-11_2-text-sparc.iso iso1@primary-vds0
ldm add-vdsdev -f /root/sol-11_2-text-sparc.iso iso2@primary-vds0
ldm add-vdisk cdrom iso1@primary-vds0 db1
ldm add-vdisk cdrom iso2@primary-vds0 db2
10. Check the disk and NIC devices the guests will see when they boot, then save the configuration:
ldm add-spconfig final
11. Bind and start the domains, then attach to their consoles:
ldm bind db1
ldm bind db2
ldm start db1
ldm start db2
telnet 0 5000    # db1 console
telnet 0 5001    # db2 console
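The console ports 5000 and 5001 are handed out from the vcc port range defined in step 1; if your domains got different ports, `ldm list` shows which one is bound to each guest:

```shell
# The CONS column shows each guest's vntsd console port
ldm list
# Attach to a console, substituting the CONS value shown above
telnet localhost 5000
```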
12. Install the guest operating systems. At the OBP prompt, check the device aliases:
{0} ok devalias
cdrom            /virtual-devices@100/channel-devices@200/disk@1
bootdisk         /virtual-devices@100/channel-devices@200/disk@0
vnet2            /virtual-devices@100/channel-devices@200/network@2
vnet1            /virtual-devices@100/channel-devices@200/network@1
vnet0            /virtual-devices@100/channel-devices@200/network@0
net              /virtual-devices@100/channel-devices@200/network@0
disk             /virtual-devices@100/channel-devices@200/disk@0
virtual-console  /virtual-devices/console@1
name             aliases
{0} ok boot cdrom
Once the OS is installed in both guests, each node sees the two shared disks just as if they had been mapped in from a storage array, and you can go on to install the cluster software or RAC. If the first disk were swapped for a 300 GB one, both LDoms' boot volumes could live on that first disk, freeing up one more disk for shared use.
One more note: installing Solaris 10 or 11 in the primary domain works fine either way, but with Solaris 11 and Solaris Cluster 4 inside the guest domains, adding the quorum device fails with repeated errors; Solaris 10 with Sun Cluster 3.x has no such problem.