dfck001 posted on 2012-08-11 01:32

vmware + solaris10 + suncluster3.2 problem

root@node2 # scdidadm -L
1      node1:/dev/rdsk/c1t0d0         /dev/did/rdsk/d1   
2      node1:/dev/rdsk/c2t1d0         /dev/did/rdsk/d2   
2      node2:/dev/rdsk/c2t1d0         /dev/did/rdsk/d2   
3      node2:/dev/rdsk/c0d0         /dev/did/rdsk/d3   
4      node2:/dev/rdsk/c3t2d0         /dev/did/rdsk/d4   
5      node1:/dev/rdsk/c3t2d0         /dev/did/rdsk/d5   
6      node2:/dev/rdsk/c4t0d0         /dev/did/rdsk/d6   
6      node1:/dev/rdsk/c4t0d0         /dev/did/rdsk/d6   
7      node2:/dev/rdsk/c4t1d0         /dev/did/rdsk/d7   
7      node1:/dev/rdsk/c4t1d0         /dev/did/rdsk/d7   
8      node2:/dev/rdsk/c4t2d0         /dev/did/rdsk/d8   
8      node1:/dev/rdsk/c4t2d0         /dev/did/rdsk/d8   
root@node2 # metaset -s oradg -a /dev/did/rdsk/d6 /dev/did/rdsk/d7 /dev/did/rdsk/d8
metaset: node2: d6s0: No such device or address

root@node2 # metaset

Set name = oradg, Set number = 1

Host                Owner
node1            
node2            
root@node2 # metaset -s oradg -a /dev/did/rdsk/d6
metaset: node2: d6s0: No such device or address

root@node2 # metaset -s oradg -a /dev/did/rdsk/d7
metaset: node2: d7s0: No such device or address


root@node2 # cldev populate
Configuring DID devices
cldev: (C507896) Inquiry on device "/dev/rdsk/c0d0s2" failed.
cldev: (C894318) Device ID "node2:/dev/rdsk/c4t0d0" does not match physical device ID for "d6".
Warning: Device "node2:/dev/rdsk/c4t0d0" might have been replaced.
cldev: (C894318) Device ID "node2:/dev/rdsk/c4t1d0" does not match physical device ID for "d7".
Warning: Device "node2:/dev/rdsk/c4t1d0" might have been replaced.
cldev: (C894318) Device ID "node2:/dev/rdsk/c4t2d0" does not match physical device ID for "d8".
Warning: Device "node2:/dev/rdsk/c4t2d0" might have been replaced.
cldev: (C745312) Device ID for device "/dev/rdsk/c4t0d0" does not match physical disk's ID.
Warning: The drive might have been replaced.
cldev: (C745312) Device ID for device "/dev/rdsk/c4t1d0" does not match physical disk's ID.
Warning: The drive might have been replaced.
cldev: (C745312) Device ID for device "/dev/rdsk/c4t2d0" does not match physical disk's ID.
Warning: The drive might have been replaced.
Configuring the /dev/global directory (global devices)
obtaining access to all attached disks
root@node2 #


I have installed this several times, and every time I reach the metaset step of adding disks it fails; the disks simply cannot be added. Has anyone seen this problem?
I looked in the /dev/did/rdsk directory and d6s0, d7s0, d8s0 are not there.
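
Not something from the original post, but one recovery path that matches these symptoms (the "Device ID ... does not match physical device ID" warnings above) is to let the cluster re-learn the device IDs and rebuild the DID minor nodes before retrying metaset. A rough sketch, run as root on the affected node; the instance names d6-d8 come from the scdidadm -L listing above:

# clean up stale /dev links left behind when the virtual disks changed
devfsadm -C
devfsadm

# update the device ID stored in the cluster for the complained-about DIDs
# (scdidadm -R 6 / 7 / 8 is the older equivalent of cldevice repair)
cldevice repair d6
cldevice repair d7
cldevice repair d8

# rebuild the DID device nodes and the global-devices namespace
cldevice populate

# the per-slice nodes must exist before metaset -a can succeed
ls -l /dev/did/rdsk/d6s* /dev/did/rdsk/d7s* /dev/did/rdsk/d8s*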

dfck001 posted on 2012-08-11 01:45

Aug 11 01:38:46 node1 Cluster.CCR: reservation warning(reset_shared_bus) - Unable to open device /dev/did/rdsk/d7s2, errno 6, will retry in 2 seconds
Aug 11 01:38:47 node1 Cluster.scdpmd: The status of device: /dev/did/rdsk/d1s0 is set to MONITORED
Aug 11 01:38:47 node1 Cluster.scdpmd: The status of device: /dev/did/rdsk/d2s0 is set to MONITORED
Aug 11 01:38:47 node1 Cluster.scdpmd: The state of the path to device: /dev/did/rdsk/d1s0 has changed to OK
Aug 11 01:38:47 node1 Cluster.scdpmd: The state of the path to device: /dev/did/rdsk/d2s0 has changed to OK
Aug 11 01:38:47 node1 Cluster.scdpmd: The status of device: /dev/did/rdsk/d6s0 is set to MONITORED
Aug 11 01:38:47 node1 Cluster.scdpmd: The status of device: /dev/did/rdsk/d5s0 is set to MONITORED
Aug 11 01:38:47 node1 Cluster.scdpmd: The state of the path to device: /dev/did/rdsk/d6s0 has changed to FAILED
Aug 11 01:38:47 node1 Cluster.scdpmd: The status of device: /dev/did/rdsk/d7s0 is set to MONITORED
Aug 11 01:38:47 node1 Cluster.scdpmd: The state of the path to device: /dev/did/rdsk/d7s0 has changed to FAILED
Aug 11 01:38:47 node1 Cluster.scdpmd: The state of the path to device: /dev/did/rdsk/d5s0 has changed to OK
Aug 11 01:38:47 node1 Cluster.scdpmd: The status of device: /dev/did/rdsk/d8s0 is set to MONITORED
Aug 11 01:38:47 node1 Cluster.scdpmd: The state of the path to device: /dev/did/rdsk/d8s0 has changed to FAILED
Aug 11 01:38:48 node1 Cluster.CCR: reservation warning(reset_shared_bus) - Unable to open device /dev/did/rdsk/d7s2, errno 6, will retry in 2 seconds
Aug 11 01:38:50 node1 last message repeated 1 time
Aug 11 01:38:52 node1 Cluster.CCR: reservation error(reset_shared_bus) - Unable to open device /dev/did/rdsk/d7s2, errno 6
Aug 11 01:38:52 node1 Cluster.CCR: reservation warning(reset_shared_bus) - Unable to open device /dev/did/rdsk/d8s2, errno 6, will retry in 2 seconds
Aug 11 01:38:56 node1 last message repeated 2 times
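
Errno 6 here is ENXIO ("No such device or address"), the same error metaset reports, so the slice nodes for d6-d8 really are missing on that node rather than merely unreadable. A quick way to confirm which DID paths the cluster itself considers broken (standard Sun Cluster 3.2 commands, not shown in the original post):

# per-path status of every DID device as the cluster monitors it
cldevice status

# compare the stored device IDs against what the disks currently report
cldevice check

# the open that fails with errno 6 is simply a missing slice node
ls -l /dev/did/rdsk/d7s2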

zhoujianxin posted on 2012-08-12 16:12

Heh, I'm also working on vmware + solaris. This problem is because the node is not the owner of the disk set; you need to run metaset -s oradg -t first to take it (the full take-then-add sequence is written out after the example output below). In any case, the metaset output has to look like the following before adding disks will work:
root@node2 # metaset

Set name = oradg, Set number = 1

Host                Owner
node1            YES
node2      
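
Written out against the original poster's set name, the take-then-add sequence being described would look roughly like this, run on whichever node should own the set:

# take ownership of the disk set on this node
metaset -s oradg -t

# the Owner column for this node should now read Yes
metaset -s oradg

# only then add the shared DID devices
metaset -s oradg -a /dev/did/rdsk/d6 /dev/did/rdsk/d7 /dev/did/rdsk/d8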

zhoujianxin posted on 2012-08-12 16:14

The problem I'm hitting now is that the second node cannot take this disk set; I cannot make node2 the owner.
-bash-3.2# metaset -s dbset -t
metaset: failed to notify DCS of take

messages:
Aug 12 03:52:46 rsnode1 genunix: WARNING: Failed to set this node as primary for service 'dbset'.

Any ideas?
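
Not an answer from the thread, but a few things worth checking when a take fails at the DCS layer like this. The hostname of the node that should take the set is not shown in the post, so <target-node> below is a placeholder:

# both nodes must be full cluster members before a take can succeed
clnode status

# Solaris Cluster creates a device group with the same name as the disk set
cldevicegroup status dbset

# try switching the group through the cluster framework instead of metaset -t
cldevicegroup switch -n <target-node> dbset    # <target-node>: placeholder hostname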

dfck001 posted on 2012-08-15 11:11

Reply to #3 zhoujianxin


    After re-adding the virtual disks, the disks could be added to the disk set. The cause may well have been what you described, but I never got to test that.
Sun Cluster has now been configured successfully; the detailed setup document has been posted on the forum for reference.
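
If re-creating the virtual disks is what fixed it, a quick sanity check before retrying metaset is to confirm that each shared disk resolves to a single DID instance with a path from both nodes and that its slice nodes exist (standard commands, not from the original post):

# each shared disk should show one DID instance with two host paths
cldevice list -v

# the slice 0 nodes metaset complained about must now exist
ls -l /dev/did/rdsk/d6s0 /dev/did/rdsk/d7s0 /dev/did/rdsk/d8s0

# and the disk needs a readable label before it can join the set
prtvtoc /dev/did/rdsk/d6s2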