Chinaunix forum thread
[HACMP Cluster] Urgent question about HACMP!
Thread starter: 木言

Posted 2003-06-19 10:41

A quick look at your hacmp.out suggests you are using a rotating configuration, so node A will not actively take over resources when it starts up. When you stopped HA on node B you used the graceful option; that option does not trigger a failover. You should use the takeover option instead. At this point your only choice is to restart HA on node B and then stop it again, this time with takeover.
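To make the graceful/takeover distinction concrete: on the HACMP 4.x line, cluster services are normally stopped through `smit clstop`, which drives `/usr/es/sbin/cluster/utilities/clstop`. A minimal sketch of the two stop modes follows; the flag spellings (`-g` for graceful, `-gr` for graceful with takeover) are from memory for HACMP 4.x and should be verified against your installed level before use:

```shell
# Sketch only: choose the clstop invocation for a given stop mode.
# graceful  -> releases this node's resources; the peer does NOT acquire them.
# takeover  -> releases this node's resources; the peer acquires them (failover).
# Flags assumed from HACMP 4.x usage: -y (no prompt), -N (stop now),
# -g (graceful), -gr (graceful with takeover). Verify on your system.
MODE=takeover
case "$MODE" in
  graceful) CMD="/usr/es/sbin/cluster/utilities/clstop -y -N -g" ;;
  takeover) CMD="/usr/es/sbin/cluster/utilities/clstop -y -N -gr" ;;
esac
echo "$CMD"
```

With `MODE=graceful` (what was run on node B), the peer never acquires the resource groups, which matches the behavior seen in the log.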

Posted 2003-06-19 10:48

Hard to follow. All I can make out is that you are running HACMP 4.4 with a 7133 disk array, Oracle, and a concurrent VG.

Posted 2003-06-19 11:02

Sorry about that. If it helps, I can post hacmp.out.2 as well.

During the test I issued takeover on node A; nothing at all happened on node B, and then I ran clstart on A again.

Posted 2003-06-19 11:14

Originally posted by 木言:
Sorry about that. If it helps, I can post hacmp.out.2 as well.
During the test I issued takeover on node A; nothing at all happened on node B, and then I ran clstart on A again.

Yes, please post hacmp.out.2 as well.

Posted 2003-06-19 11:32

It has been started many times, so the log is very long. Sorry about that.

Jun 17 21:22:12 EVENT START: node_down a

Jun 17 21:22:15 EVENT START: release_vg_fs /home/bill filesys_vg01

cl_swap_IP_address[35] print route change -net default -interface 10.105.88.1
cl_swap_IP_address[35] 1>> /usr/es/sbin/cluster/.restore_routes
cl_swap_IP_address[36] route change -net default -interface 127.0.0.1
127.0.0.1 net default: gateway 127.0.0.1
cl_swap_IP_address[22] read DEST GW FLAGS OTHER
cl_swap_IP_address[28] route delete 129.9.1.15 10.105.88.1
writing to routing socket: No such process
10.105.88.1 host 129.9.1.15: gateway 10.105.88.1: not in table
cl_swap_IP_address[22] read DEST GW FLAGS OTHER
cl_swap_IP_address[28] route delete 129.9.1.16 10.105.88.1
writing to routing socket: No such process
10.105.88.1 host 129.9.1.16: gateway 10.105.88.1: not in table
cl_swap_IP_address[22] read DEST GW FLAGS OTHER
cl_swap_IP_address[28] route delete 10.32.24.93 10.105.88.1
writing to routing socket: No such process
10.105.88.1 host 10.32.24.93: gateway 10.105.88.1: not in table
cl_swap_IP_address[22] read DEST GW FLAGS OTHER
cl_swap_IP_address[28] route delete 10.105.9.8 10.105.88.1
writing to routing socket: No such process
10.105.88.1 host 10.105.9.8: gateway 10.105.88.1: not in table
cl_swap_IP_address[22] read DEST GW FLAGS OTHER
cl_swap_IP_address[28] route delete 10.105.9.76 10.105.88.1
writing to routing socket: No such process
10.105.88.1 host 10.105.9.76: gateway 10.105.88.1: not in table
cl_swap_IP_address[22] read DEST GW FLAGS OTHER
cl_swap_IP_address[28] route delete 10.105.9.199 10.105.88.1
writing to routing socket: No such process
10.105.88.1 host 10.105.9.199: gateway 10.105.88.1: not in table
cl_swap_IP_address[22] read DEST GW FLAGS OTHER
cl_swap_IP_address[25] route delete -net 10.105.88/27 10.105.88.4
10.105.88.4 net 10.105.88: gateway 10.105.88.4
cl_swap_IP_address[22] read DEST GW FLAGS OTHER
cl_swap_IP_address[42] cat
cl_swap_IP_address[42] 1>> /usr/es/sbin/cluster/.restore_routes 0<<
#
netstat -nrf inet | fgrep lo0 | while read DEST GW FLAGS OTHER ; do
        case $FLAGS in
        U*H*)
            route change -host $DEST -interface $GW
            ;;
        U*)
            route change -net $DEST -interface $GW
            ;;
        esac
done
exit 0

cl_swap_IP_address[58] chmod +x /usr/es/sbin/cluster/.restore_routes
cl_swap_IP_address[60] return 0
cl_swap_IP_address[505] cl_echo 60 cl_swap_IP_address: Configuring adapter en0 at IP address 10.105.88.6 cl_swap_IP_address en0 10.105.88.6
Jun 17 2003 21:22:28 cl_swap_IP_address: Configuring adapter en0 at IP address 10.105.88.6cl_swap_IP_address[505] [[ -n  ]]
cl_swap_IP_address[511] cl_swap_HW_address 10.105.88.6 en0
cl_swap_HW_address[216] [[ high = high ]]
cl_swap_HW_address[216] version=1.17
cl_swap_HW_address[217] cl_swap_HW_address[217] cl_get_path
HA_DIR=es
cl_swap_HW_address[219] UNDO=/usr/es/sbin/cluster/.hwundo
cl_swap_HW_address[220] DELAY=1
cl_swap_HW_address[221] STATUS=0
cl_swap_HW_address[222] cl_echo 33 Starting execution of /usr/es/sbin/cluster/events/utils/cl_swap_HW_address with parameters 10.105.88.6 en0 /usr/es/sbin/cluster/events/utils/cl_swap_HW_address 10.105.88.6 en0
Jun 17 2003 21:22:28 Starting execution of /usr/es/sbin/cluster/events/utils/cl_swap_HW_address with parameters 10.105.88.6 en0cl_swap_HW_address[224] set -u
cl_swap_HW_address[225] [ 2 -ne 2 ]
cl_swap_HW_address[230] ADDRESS=10.105.88.6
cl_swap_HW_address[231] INTERFACE=en0
cl_swap_HW_address[238] [ -f /usr/es/sbin/cluster/.hwundo ]
cl_swap_HW_address[246] cl_swap_HW_address[246] mkdevname en0
cl_swap_HW_address[2] cl_swap_HW_address[2] expr en0 : ^\([a-z]*\)[0-9]*
TYPE=en
cl_swap_HW_address[3] cl_swap_HW_address[3] expr en0 : ^[a-z]*\([0-9]*\)
NUM=0
cl_swap_HW_address[5] [ -z 0 ]
cl_swap_HW_address[13] NAME=ent0
cl_swap_HW_address[29] echo ent0
cl_swap_HW_address[30] return 0
DEVICE=ent0
cl_swap_HW_address[247] [ -z ent0 ]
cl_swap_HW_address[252] cl_swap_HW_address[252] awk /parent/ {gsub("\"", "",$3); print($3) }
cl_swap_HW_address[252] odmget -q name = ent0 CuDv
PARENT=pci4
cl_swap_HW_address[253] cl_swap_HW_address[253] awk -v ADDRESS=10.105.88.6 -v FS=: $7==ADDRESS {print $8 ; exit}
cl_swap_HW_address[253] cllsif -Sc
HARDWARE=
cl_swap_HW_address[255] [ -z  ]
cl_swap_HW_address[262] cl_swap_HW_address[262] awk FNR==2 {print $2}
cl_swap_HW_address[262] ifconfig en0
ADDRESS=10.105.88.4
cl_swap_HW_address[263] cl_swap_HW_address[263] awk -v ADDRESS=10.105.88.4 -v FS=: $7==ADDRESS {print $8 ; exit}
cl_swap_HW_address[263] cllsif -Sc
HARDWARE=
cl_swap_HW_address[265] [ -n  ]
cl_swap_HW_address[416] [ -f /usr/es/sbin/cluster/.hwundo ]
cl_swap_HW_address[420] exit 0
cl_swap_IP_address[511] [[ 0 -ne 0 ]]
cl_swap_IP_address[520] ifconfig en0 inet 10.105.88.6 netmask 255.255.255.224 up mtu 1500
cl_swap_IP_address[520] 2> /dev/null
cl_swap_IP_address[521] sleep 2
cl_swap_IP_address[522] check_ifconfig_status en0 10.105.88.6 255.255.255.224
cl_swap_IP_address[2] set -u
cl_swap_IP_address[4] CH_INTERFACE=en0
cl_swap_IP_address[5] CH_ADDRESS=10.105.88.6
cl_swap_IP_address[6] CH_NETMASK=255.255.255.224
cl_swap_IP_address[8] cl_swap_IP_address[8] read
cl_swap_IP_address[8] ifconfig en0
cl_swap_IP_address[8] read a b c
cl_swap_IP_address[8] print 10.105.88.6
ADDR=10.105.88.6
cl_swap_IP_address[8] [[ 10.105.88.6 != 10.105.88.6 ]]
cl_swap_IP_address[13] return 0
cl_swap_IP_address[522] [[ 0 -ne 0 ]]
cl_swap_IP_address[530] /usr/es/sbin/cluster/.restore_routes
[8] route change -net default -interface 10.105.88.1
10.105.88.1 net default: gateway 10.105.88.1
[10] netstat -nrf inet
[10] read DEST GW FLAGS OTHER
[10] fgrep lo0
[16] route change -net 127/8 -interface 127.0.0.1
127.0.0.1 net 127: gateway 127.0.0.1
[10] read DEST GW FLAGS OTHER
[20] exit 0
cl_swap_IP_address[533] [ rotating = rotating ]
cl_swap_IP_address[535] cl_hats_adapter en0 -e 10.105.88.6
cl_hats_adapter[50] [[ high = high ]]
cl_hats_adapter[50] version=1.13
cl_hats_adapter[51] cl_hats_adapter[51] cl_get_path
HA_DIR=es
cl_hats_adapter[53] IF=en0
cl_hats_adapter[62] FLAG=-e
cl_hats_adapter[63] ADDRESS=10.105.88.6
cl_hats_adapter[64] ADDRESS1=
cl_hats_adapter[66] cldomain
cl_hats_adapter[66] export HA_DOMAIN_NAME=zt_cluster
cl_hats_adapter[67] export HB_SERVER_SOCKET=/var/ha/soc/topsvcs/server_socket
cl_hats_adapter[69] set -u
cl_hats_adapter[71] clmixver
3
cl_hats_adapter[72] MIXVER=0
cl_hats_adapter[74] [ 0 -eq 1 -a en0 = css0 ]
cl_hats_adapter[80] cl_hats_adapter[80] /bin/cut -d: -f4
cl_hats_adapter[80] cllsif -cSn 10.105.88.6
TYPE=ether
cl_hats_adapter[83] GP_CONFIG_FILE=/usr/es/sbin/cluster/events/rp_grace_periods
cl_hats_adapter[84] cl_hats_adapter[84] /bin/cut -d  -f2
cl_hats_adapter[84] grep ether /usr/es/sbin/cluster/events/rp_grace_periods
GRACE_PERIOD=60
cl_hats_adapter[86] [ -z 60 ]
cl_hats_adapter[91] [ -e = -m ]
cl_hats_adapter[96] ACK=n
cl_hats_adapter[98] [ -e = -e ]
cl_hats_adapter[100] ACK=y
cl_hats_adapter[103] [ -z  ]
cl_hats_adapter[105] ADDR1=0
cl_hats_adapter[111] [ -e = -g ]
cl_hats_adapter[128] hats_adapter_notify en0 -e 10.105.88.6 0 60 y
cl_hats_adapter[130] [ -e = -m ]
cl_hats_adapter[136] hats_adapter -e 10.105.88.6
cl_hats_adapter[139] [ y = y ]
cl_hats_adapter[142] hats_adapter_notify -a
cl_swap_IP_address[539] flush_arp
cl_swap_IP_address[2] set -u
cl_swap_IP_address[4] arp -an
cl_swap_IP_address[4] tr -d ()
cl_swap_IP_address[4] read host addr other
cl_swap_IP_address[5] arp -d 10.105.88.5
10.105.88.5 (10.105.88.5) deleted
cl_swap_IP_address[4] read host addr other
cl_swap_IP_address[5] arp -d 10.105.88.7
10.105.88.7 (10.105.88.7) deleted
cl_swap_IP_address[4] read host addr other
cl_swap_IP_address[5] arp -d 10.105.88.17
10.105.88.17 (10.105.88.17) deleted
cl_swap_IP_address[4] read host addr other
cl_swap_IP_address[5] arp -d 10.105.88.21
10.105.88.21 (10.105.88.21) deleted
cl_swap_IP_address[4] read host addr other
cl_swap_IP_address[5] arp -d 10.105.88.1
10.105.88.1 (10.105.88.1) deleted
cl_swap_IP_address[4] read host addr other
cl_swap_IP_address[5] arp -d 99.99.99.5
99.99.99.5 (99.99.99.5) deleted
cl_swap_IP_address[4] read host addr other
cl_swap_IP_address[5] arp -d 10.105.88.4
10.105.88.4 (10.105.88.4) deleted
cl_swap_IP_address[4] read host addr other
cl_swap_IP_address[7] return 0
cl_swap_IP_address[635] enable_pmtu_gated
cl_swap_IP_address[637] cl_echo 32 Completed execution of /usr/es/sbin/cluster/events/utils/cl_swap_IP_address with parameters rotating release en0 10.105.88.6 10.105.88.4 255.255.255.224.  Exit status = 0 /usr/es/sbin/cluster/events/utils/cl_swap_IP_address rotating release en0 10.105.88.6 10.105.88.4 255.255.255.224 0
Jun 17 2003 21:22:31 Completed execution of /usr/es/sbin/cluster/events/utils/cl_swap_IP_address with parameters rotating release en0 10.105.88.6 10.105.88.4 255.255.255.224.  Exit status = 0cl_swap_IP_address[639] exit 0
release_service_addr[192] [ 0 -ne 0 ]
release_service_addr[200] turn_on_DNS_NIS
release_service_addr[201] exit 0
Jun 17 21:22:31 EVENT COMPLETED: release_service_addr a_svc

node_down_local[48] [ 0 -ne 0 ]
node_down_local[440] [ -n  -a -z a_svc -a -z  ]
node_down_local[459] [ 0 -ne 0 ]
node_down_local[471] set +u
node_down_local[472] NOT_DOIT=
node_down_local[473] set -u
node_down_local[474] [  != TRUE ]
node_down_local[476] [ REAL = EMUL ]
node_down_local[484] clchdaemons -r -d clstrmgr_scripts -t resource_locator -o file_res
node_down_local[485] [ 0 -ne 0 ]
node_down_local[493] cl_RMupdate rg_down file_res
cl_RMupdate[137] version=1.27
cl_RMupdate[140] cl_RMupdate[140] cl_get_path
HA_DIR=es
cl_RMupdate[143] [ ! -n  ]
cl_RMupdate[145] EMULATE=REAL
cl_RMupdate[149] set -u
cl_RMupdate[186] [ REAL = EMUL ]
cl_RMupdate[195] clRMupdate rg_down file_res
executing clRMupdate
clRMupdate: checking operation rg_down
clRMupdate: found operation in table
clRMupdate: operating on file_res
clRMupdate: sending operation to resource manager
clRMupdate completed successfully
cl_RMupdate[196] STATUS=0
cl_RMupdate[199] [ 0 -eq 2 -a ( rg_down = suspend_appmon -o rg_down = resume_appmon ) ]
cl_RMupdate[209] [ 0 != 0 ]
cl_RMupdate[215] exit 0
node_down_local[494] [ 0 -ne 0 ]
node_down_local[498] exit 0
Jun 17 21:22:32 EVENT COMPLETED: node_down_local

node_down[191] ((  0 != 0  ))
node_down[195] UPDATESTATD=1
node_down[176] set -a
node_down[177] clsetenvres ora_ip2_res node_down
node_down[177] eval NFS_HOST= DISK= VOLUME_GROUP= CONCURRENT_VOLUME_GROUP= FILESYSTEM= EXPORT_FILESYSTEM= AIX_CONNECTIONS_SERVICES= AIX_FAST_CONNECT_SERVICES= SNA_CONNECTIONS= SHARED_TAPE_RESOURCES= APPLICATIONS= MOUNT_FILESYSTEM= SERVICE_LABEL= HTY_SERVICE_LABEL= TAKEOVER_LABEL= MISC_DATA= NFS_NETWORK= SHARED_TAPE_RESOURCES= CASCADE_WO_FALLBACK="" DISK_FENCING="false" FSCHECK_TOOL="" FS_BEFORE_IPADDR="" INACTIVE_TAKEOVER="false" RECOVERY_METHOD="" NFSMOUNT_LABEL="b_svc" SSA_DISK_FENCING="false"
node_down[177] NFS_HOST= DISK= VOLUME_GROUP= CONCURRENT_VOLUME_GROUP= FILESYSTEM= EXPORT_FILESYSTEM= AIX_CONNECTIONS_SERVICES= AIX_FAST_CONNECT_SERVICES= SNA_CONNECTIONS= SHARED_TAPE_RESOURCES= APPLICATIONS= MOUNT_FILESYSTEM= SERVICE_LABEL= HTY_SERVICE_LABEL= TAKEOVER_LABEL= MISC_DATA= NFS_NETWORK= SHARED_TAPE_RESOURCES= CASCADE_WO_FALLBACK= DISK_FENCING=false FSCHECK_TOOL= FS_BEFORE_IPADDR= INACTIVE_TAKEOVER=false RECOVERY_METHOD= NFSMOUNT_LABEL=b_svc SSA_DISK_FENCING=false
node_down[178] set +a
node_down[179] export GROUPNAME=ora_ip2_res
node_down[179] [[ a = a ]]
node_down[185] clcallev node_down_local

Jun 17 21:22:32 EVENT START: node_down_local

node_down_local[159] [[ high = high ]]
node_down_local[159] version=1.2.1.34
node_down_local[160] node_down_local[160] cl_get_path
HA_DIR=es
node_down_local[162] STATUS=0
node_down_local[164] [ ! -n  ]
node_down_local[166] EMULATE=REAL
node_down_local[169] [ 0 -ne 0 ]
node_down_local[175] set -u
node_down_local[183] clchdaemons -f -d clstrmgr_scripts -t resource_locator -o ora_ip2_res
node_down_local[184] [ 2 -ne 0 ]
node_down_local[186] NOT_REMOVING_GROUP=TRUE
node_down_local[192] set_resource_status RELEASING
node_down_local[3] set +u
node_down_local[4] NOT_DOIT=TRUE
node_down_local[5] set -u
node_down_local[6] [ TRUE != TRUE ]
node_down_local[197] [ -n  ]
node_down_local[230] [ -n  ]
node_down_local[248] [ -n  ]
node_down_local[267] [[ -n  ]]
node_down_local[292] [ -n  ]
node_down_local[312] CROSSMOUNT=0
node_down_local[313] export CROSSMOUNT
node_down_local[315] NFSSTOPPED=0
node_down_local[316] export NFSSTOPPED
node_down_local[318] [ -n  ]
node_down_local[336] [ -n  ]
node_down_local[366] [ 1 = 0 ]
node_down_local[401] [[  = true ]]
node_down_local[409] clcallev release_vg_fs  

Jun 17 21:22:32 EVENT START: release_vg_fs

release_vg_fs[48] [[ high = high ]]
release_vg_fs[48] version=1.4.1.13
release_vg_fs[49] release_vg_fs[49] cl_get_path
HA_DIR=es
release_vg_fs[51] STATUS=0
release_vg_fs[53] [ 2 -ne 2 ]
release_vg_fs[59] FS=
release_vg_fs[60] VG=
release_vg_fs[79] [ -n  ]
release_vg_fs[104] [ -n  ]
release_vg_fs[118] exit 0
Jun 17 21:22:32 EVENT COMPLETED: release_vg_fs

node_down_local[410] [ 0 -ne 0 ]
node_down_local[417] [ -n  ]
node_down_local[432] [[  != true ]]
node_down_local[434] release_addr
node_down_local[9] [ -n  ]
node_down_local[37] [ -n  ]
node_down_local[440] [ -n  -a -z  -a -z  ]
node_down_local[459] [ 0 -ne 0 ]
node_down_local[471] set +u
node_down_local[472] NOT_DOIT=TRUE
node_down_local[473] set -u
node_down_local[474] [ TRUE != TRUE ]
node_down_local[498] exit 0
Jun 17 21:22:32 EVENT COMPLETED: node_down_local

node_down[191] ((  0 != 0  ))
node_down[195] UPDATESTATD=1
node_down[176] set -a
node_down[177] clsetenvres ora_res node_down
node_down[177] eval NFS_HOST= DISK= VOLUME_GROUP= FILESYSTEM= EXPORT_FILESYSTEM= AIX_CONNECTIONS_SERVICES= AIX_FAST_CONNECT_SERVICES= SNA_CONNECTIONS= SHARED_TAPE_RESOURCES= MOUNT_FILESYSTEM= SERVICE_LABEL= HTY_SERVICE_LABEL= TAKEOVER_LABEL= NFSMOUNT_LABEL= MISC_DATA= NFS_NETWORK= SHARED_TAPE_RESOURCES= APPLICATIONS="ora_srv" CASCADE_WO_FALLBACK="false" CONCURRENT_VOLUME_GROUP="share_vg01 share_vg02" DISK_FENCING="false" FSCHECK_TOOL="fsck" FS_BEFORE_IPADDR="false" INACTIVE_TAKEOVER="false" RECOVERY_METHOD="sequential" SSA_DISK_FENCING="false"
node_down[177] NFS_HOST= DISK= VOLUME_GROUP= FILESYSTEM= EXPORT_FILESYSTEM= AIX_CONNECTIONS_SERVICES= AIX_FAST_CONNECT_SERVICES= SNA_CONNECTIONS= SHARED_TAPE_RESOURCES= MOUNT_FILESYSTEM= SERVICE_LABEL= HTY_SERVICE_LABEL= TAKEOVER_LABEL= NFSMOUNT_LABEL= MISC_DATA= NFS_NETWORK= SHARED_TAPE_RESOURCES= APPLICATIONS=ora_srv CASCADE_WO_FALLBACK=false CONCURRENT_VOLUME_GROUP=share_vg01 share_vg02 DISK_FENCING=false FSCHECK_TOOL=fsck FS_BEFORE_IPADDR=false INACTIVE_TAKEOVER=false RECOVERY_METHOD=sequential SSA_DISK_FENCING=false
node_down[178] set +a
node_down[179] export GROUPNAME=ora_res
node_down[179] [[ a = a ]]
node_down[185] clcallev node_down_local

Jun 17 21:22:32 EVENT START: node_down_local

node_down_local[159] [[ high = high ]]
node_down_local[159] version=1.2.1.34
node_down_local[160] node_down_local[160] cl_get_path
HA_DIR=es
node_down_local[162] STATUS=0
node_down_local[164] [ ! -n  ]
node_down_local[166] EMULATE=REAL
node_down_local[169] [ 0 -ne 0 ]
node_down_local[175] set -u
node_down_local[183] clchdaemons -f -d clstrmgr_scripts -t resource_locator -o ora_res
UP
node_down_local[184] [ 0 -ne 0 ]
node_down_local[192] set_resource_status RELEASING
node_down_local[3] set +u
node_down_local[4] NOT_DOIT=
node_down_local[5] set -u
node_down_local[6] [  != TRUE ]
node_down_local[8] [ REAL = EMUL ]
node_down_local[13] clchdaemons -d clstrmgr_scripts -t resource_locator -n a -o ora_res -v RELEASING
node_down_local[14] [ 0 -ne 0 ]
node_down_local[23] [ RELEASING != ERROR ]
node_down_local[25] cl_RMupdate releasing ora_res
cl_RMupdate[137] version=1.27
cl_RMupdate[140] cl_RMupdate[140] cl_get_path
HA_DIR=es
cl_RMupdate[143] [ ! -n  ]
cl_RMupdate[145] EMULATE=REAL
cl_RMupdate[149] set -u
cl_RMupdate[186] [ REAL = EMUL ]
cl_RMupdate[195] clRMupdate releasing ora_res
executing clRMupdate
clRMupdate: checking operation releasing
clRMupdate: found operation in table
clRMupdate: operating on ora_res
clRMupdate: sending operation to resource manager
clRMupdate completed successfully
cl_RMupdate[196] STATUS=0
cl_RMupdate[199] [ 0 -eq 2 -a ( releasing = suspend_appmon -o releasing = resume_appmon ) ]
cl_RMupdate[209] [ 0 != 0 ]
cl_RMupdate[215] exit 0
node_down_local[26] [ 0 -ne 0 ]
node_down_local[197] [ -n ora_srv ]
node_down_local[200] TMPLIST=
node_down_local[201] let cnt=0
node_down_local[202] print ora_srv
node_down_local[202] set -A appnames ora_srv
node_down_local[204] ((   cnt < 1  ))
node_down_local[205] TMPLIST=ora_srv
node_down_local[206] APPLICATIONS=ora_srv
node_down_local[207] let cnt=cnt+1
node_down_local[204] ((   cnt < 1  ))
node_down_local[210] APPLICATIONS=ora_srv
node_down_local[213] [ REAL = EMUL ]
node_down_local[218] clcallev stop_server ora_srv

Jun 17 21:22:32 EVENT START: stop_server ora_srv

stop_server[48] [[ high = high ]]
stop_server[48] version=1.4.1.6
stop_server[49] stop_server[49] cl_get_path
HA_DIR=es
stop_server[51] STATUS=0
stop_server[53] SS_FILE=/usr/es/sbin/cluster/server.status
stop_server[57] [ ! -n  ]
stop_server[59] EMULATE=REAL
stop_server[62] set -u
stop_server[71] stop_server[71] cut -d: -f3
stop_server[71] cllsserv -cn ora_srv
STOP=/usr/script/ora_stop
stop_server[73] PATTERN=a ora_srv
stop_server[80] grep -x a ora_srv /usr/es/sbin/cluster/server.status
stop_server[80] 2> /dev/null
stop_server[80] [ a ora_srv !=  ]
stop_server[87] [ -x /usr/script/ora_stop ]
stop_server[89] [ REAL = EMUL ]
stop_server[94] /usr/script/ora_stop
stop_server[94] ODMDIR=/etc/objrepos

  ####   #####     ##             ####    #####   ####   #####
#    #  #    #   #  #           #          #    #    #  #    #
#    #  #    #  #    #           ####      #    #    #  #    #
#    #  #####   ######               #     #    #    #  #####
#    #  #   #   #    #          #    #     #    #    #  #
  ####   #    #  #    # #######   ####      #     ####   #

stop_server[96] [ 0 -ne 0 ]
stop_server[101] cat
stop_server[101] cat /usr/es/sbin/cluster/server.status
stop_server[101] stop_server[101] grep -vx a ora_srv
1> /tmp/server.tmp
stop_server[102] mv /tmp/server.tmp /usr/es/sbin/cluster/server.status
stop_server[112] exit 0
Jun 17 21:22:32 EVENT COMPLETED: stop_server ora_srv

node_down_local[221] [ 0 -ne 0 ]
node_down_local[230] [ -n  ]
node_down_local[248] [ -n  ]
node_down_local[267] [[ -n  ]]
node_down_local[292] [ -n  ]
node_down_local[312] CROSSMOUNT=0
node_down_local[313] export CROSSMOUNT
node_down_local[315] NFSSTOPPED=0
node_down_local[316] export NFSSTOPPED
node_down_local[318] [ -n  ]
node_down_local[336] [ -n  ]
node_down_local[366] [ 1 = 0 ]
node_down_local[401] [[ false = true ]]
node_down_local[409] clcallev release_vg_fs  

Jun 17 21:22:32 EVENT START: release_vg_fs

release_vg_fs[48] [[ high = high ]]
release_vg_fs[48] version=1.4.1.13
release_vg_fs[49] release_vg_fs[49] cl_get_path
HA_DIR=es
release_vg_fs[51] STATUS=0
release_vg_fs[53] [ 2 -ne 2 ]
release_vg_fs[59] FS=
release_vg_fs[60] VG=
release_vg_fs[79] [ -n  ]
release_vg_fs[104] [ -n  ]
release_vg_fs[118] exit 0
Jun 17 21:22:32 EVENT COMPLETED: release_vg_fs

node_down_local[410] [ 0 -ne 0 ]
node_down_local[417] [ -n share_vg01 share_vg02 ]
node_down_local[421] cl_deactivate_vgs share_vg01 share_vg02
cl_deactivate_vgs[147] [[ high = high ]]
cl_deactivate_vgs[147] version=1.1.1.24
cl_deactivate_vgs[149] STATUS=0
cl_deactivate_vgs[150] TMP_FILENAME=_deactivate_vgs.tmp
cl_deactivate_vgs[152] [ ! -n  ]
cl_deactivate_vgs[154] EMULATE=REAL
cl_deactivate_vgs[157] EVENT_TYPE=not_set
cl_deactivate_vgs[158] EVENT_TYPE=not_set
cl_deactivate_vgs[161] set -u
cl_deactivate_vgs[164] [ 1 -eq 0 ]
cl_deactivate_vgs[172] [[ -f /tmp/_deactivate_vgs.tmp ]]
cl_deactivate_vgs[179] fgrep -s -x share_vg01
cl_deactivate_vgs[179] lsvg -o
cl_deactivate_vgs[181] [ 0 -ne 0 ]
cl_deactivate_vgs[185] [ REAL = EMUL ]
cl_deactivate_vgs[190] vgs_varyoff share_vg01 cl_deactivate_vgs _deactivate_vgs.tmp
cl_deactivate_vgs[4] VG=share_vg01
cl_deactivate_vgs[5] PROGNAME=cl_deactivate_vgs
cl_deactivate_vgs[6] TMP_FILENAME=_deactivate_vgs.tmp
cl_deactivate_vgs[7] STATUS=0
cl_deactivate_vgs[7] [[ not_set = reconfig* ]]
cl_deactivate_vgs[19] close_vg share_vg01 cl_deactivate_vgs _deactivate_vgs.tmp
cl_deactivate_vgs[4] VG=share_vg01
cl_deactivate_vgs[5] PROGNAME=cl_deactivate_vgs
cl_deactivate_vgs[6] TMP_FILENAME=_deactivate_vgs.tmp
cl_deactivate_vgs[7] STATUS=0
cl_deactivate_vgs[10] cl_deactivate_vgs[179] fgrep -s -x share_vg02
cl_deactivate_vgs[179] lsvg -o
cl_deactivate_vgs[10] awk {if ($2 == "jfs" && $6 ~ /open/) print $1}
cl_deactivate_vgs[10] lsvg -l share_vg01
cl_deactivate_vgs[181] [ 0 -ne 0 ]
cl_deactivate_vgs[185] [ REAL = EMUL ]
cl_deactivate_vgs[196] wait
cl_deactivate_vgs[190] vgs_varyoff share_vg02 cl_deactivate_vgs _deactivate_vgs.tmp
cl_deactivate_vgs[4] VG=share_vg02
cl_deactivate_vgs[5] PROGNAME=cl_deactivate_vgs
cl_deactivate_vgs[6] TMP_FILENAME=_deactivate_vgs.tmp
cl_deactivate_vgs[7] STATUS=0
cl_deactivate_vgs[7] [[ not_set = reconfig* ]]
cl_deactivate_vgs[19] close_vg share_vg02 cl_deactivate_vgs _deactivate_vgs.tmp
cl_deactivate_vgs[4] VG=share_vg02
cl_deactivate_vgs[5] PROGNAME=cl_deactivate_vgs
cl_deactivate_vgs[6] TMP_FILENAME=_deactivate_vgs.tmp
cl_deactivate_vgs[7] STATUS=0
cl_deactivate_vgs[10] cl_deactivate_vgs[10] awk {if ($2 == "jfs" && $6 ~ /open/) print $1}
cl_deactivate_vgs[10] lsvg -l share_vg02
OPEN_LVs=
cl_deactivate_vgs[13] [ -n  ]
cl_deactivate_vgs[22] odmget HACMPnode
cl_deactivate_vgs[22] grep name =
cl_deactivate_vgs[22] sort
cl_deactivate_vgs[22] uniq
cl_deactivate_vgs[22] wc -l
cl_deactivate_vgs[22] [ 2 -eq 2 ]
cl_deactivate_vgs[24] [ -n  ]
cl_deactivate_vgs[34] varyoffvg share_vg01
OPEN_LVs=
cl_deactivate_vgs[13] [ -n  ]
cl_deactivate_vgs[22] odmget HACMPnode
cl_deactivate_vgs[22] grep name =
cl_deactivate_vgs[22] sort
cl_deactivate_vgs[22] wc -l
cl_deactivate_vgs[22] uniq
cl_deactivate_vgs[22] [ 2 -eq 2 ]
cl_deactivate_vgs[24] [ -n  ]
cl_deactivate_vgs[34] varyoffvg share_vg02
cl_deactivate_vgs[35] [ 0 -ne 0 ]
cl_deactivate_vgs[42] echo 0
cl_deactivate_vgs[42] 1>> /tmp/_deactivate_vgs.tmp
cl_deactivate_vgs[43] return 0
cl_deactivate_vgs[35] [ 0 -ne 0 ]
cl_deactivate_vgs[42] echo 0
cl_deactivate_vgs[42] 1>> /tmp/_deactivate_vgs.tmp
cl_deactivate_vgs[43] return 0
cl_deactivate_vgs[199] [ -f /tmp/_deactivate_vgs.tmp ]
cl_deactivate_vgs[201] grep -q 1 /tmp/_deactivate_vgs.tmp
cl_deactivate_vgs[201] [[ 1 -eq 0 ]]
cl_deactivate_vgs[204] rm -f /tmp/_deactivate_vgs.tmp
cl_deactivate_vgs[208] exit 0
node_down_local[423] [ 0 -ne 0 ]
node_down_local[432] [[ false != true ]]
node_down_local[434] release_addr
node_down_local[9] [ -n  ]
node_down_local[37] [ -n  ]
node_down_local[440] [ -n  -a -z  -a -z  ]
node_down_local[459] [ 0 -ne 0 ]
node_down_local[471] set +u
node_down_local[472] NOT_DOIT=
node_down_local[473] set -u
node_down_local[474] [  != TRUE ]
node_down_local[476] [ REAL = EMUL ]
node_down_local[484] clchdaemons -r -d clstrmgr_scripts -t resource_locator -o ora_res
node_down_local[485] [ 0 -ne 0 ]
node_down_local[493] cl_RMupdate rg_down ora_res
cl_RMupdate[137] version=1.27
cl_RMupdate[140] cl_RMupdate[140] cl_get_path
HA_DIR=es
cl_RMupdate[143] [ ! -n  ]
cl_RMupdate[145] EMULATE=REAL
cl_RMupdate[149] set -u
cl_RMupdate[186] [ REAL = EMUL ]
cl_RMupdate[195] clRMupdate rg_down ora_res
executing clRMupdate
clRMupdate: checking operation rg_down
clRMupdate: found operation in table
clRMupdate: operating on ora_res
clRMupdate: sending operation to resource manager
clRMupdate completed successfully
cl_RMupdate[196] STATUS=0
cl_RMupdate[199] [ 0 -eq 2 -a ( rg_down = suspend_appmon -o rg_down = resume_appmon ) ]
cl_RMupdate[209] [ 0 != 0 ]
cl_RMupdate[215] exit 0
node_down_local[494] [ 0 -ne 0 ]
node_down_local[498] exit 0
Jun 17 21:22:35 EVENT COMPLETED: node_down_local

node_down[191] ((  0 != 0  ))
node_down[195] UPDATESTATD=1
node_down[199] [ -f /tmp/.NFSSTOPPED ]
node_down[235] [[ a = a ]]
node_down[235] [[ -f /usr/lpp/csd/bin/hacmp_vsd_down1 ]]
node_down[235] [[ REAL = EMUL ]]
node_down[240] /usr/lpp/csd/bin/hacmp_vsd_down1 a
hacmp_vsd_down1[54] [[ high = high ]]
hacmp_vsd_down1[54] version=1.6.1.10
hacmp_vsd_down1[55] hacmp_vsd_down1[55] cl_get_path
HA_DIR=es
hacmp_vsd_down1[57] VSDBPATH=/usr/lpp/csd/bin
hacmp_vsd_down1[58] VSDDPATH=/usr/lpp/csd/vsdfiles
hacmp_vsd_down1[61] [ 1 -ne 1 ]
hacmp_vsd_down1[69] hanode=a
hacmp_vsd_down1[75] [ a = a ]
hacmp_vsd_down1[76] NODE_IS_ME=1
hacmp_vsd_down1[84] NOVSD=1
hacmp_vsd_down1[85] [ -s /usr/lpp/csd/vsdfiles/VSD_ipaddr ]
hacmp_vsd_down1[235] hacmp2vsd
hacmp_vsd_down1[237] export LOCAL_NODE=1
hacmp_vsd_down1[238] export MEMBERSHIP= 2
hacmp_vsd_down1[239] echo VSD MEMBERSHIP= 2
VSD MEMBERSHIP= 2
hacmp_vsd_down1[240] echo LOCAL VSD NODE=1
LOCAL VSD NODE=1
hacmp_vsd_down1[241] echo VSD NODE IN ACTION=1
VSD NODE IN ACTION=1
hacmp_vsd_down1[242] echo hacmp_vsd_down1 STARTING CALLED WITH PARAMETER a
hacmp_vsd_down1 STARTING CALLED WITH PARAMETER a
hacmp_vsd_down1[364] [ -x /usr/lpp/csd/bin/HACMP.vsd.DOWN1 ]
hacmp_vsd_down1[406] echo hacmp_vsd_down1 EXITING RC=0 CALLED WITH PARAMETER a
hacmp_vsd_down1 EXITING RC=0 CALLED WITH PARAMETER a
hacmp_vsd_down1[408] exit 0
node_down[251] [[  != forced ]]
node_down[251] [[ REAL = EMUL ]]
node_down[259] cl_9333_fence down a
cl_9333_fence[158] [[ high = high ]]
cl_9333_fence[158] version=1.9
cl_9333_fence[159] cl_9333_fence[159] cl_get_path
HA_DIR=es
cl_9333_fence[162] echo PRE_EVENT_MEMBERSHIP=a b
PRE_EVENT_MEMBERSHIP=a b
cl_9333_fence[163] echo POST_EVENT_MEMBERSHIP=b
POST_EVENT_MEMBERSHIP=b
cl_9333_fence[165] EVENT=down
cl_9333_fence[166] NODENAME=a
cl_9333_fence[167] PARAM=
cl_9333_fence[168] STATUS=0
cl_9333_fence[170] set -u
cl_9333_fence[172] [ 2 -gt 1 ]
cl_9333_fence[286] [ a = a ]
cl_9333_fence[290] [ b !=  ]
cl_9333_fence[293] exit 0
node_down[269] [[ a = a ]]
node_down[269] [[ REAL = EMUL ]]
node_down[275] clchdaemons -r -d clstrmgr_scripts -t resource_locator
node_down[282] exit 0
Jun 17 21:22:36 EVENT COMPLETED: node_down a


Jun 17 21:22:46 EVENT START: node_down_complete a

node_down_complete[56] [[ high = high ]]
node_down_complete[56] version=1.2.3.16
node_down_complete[57] node_down_complete[57] cl_get_path
HA_DIR=es
node_down_complete[59] NODENAME=a
node_down_complete[60] PARAM=
node_down_complete[62] VSD_PROG=/usr/lpp/csd/bin/hacmp_vsd_down2
node_down_complete[63] HPS_PROG=/usr/es/sbin/cluster/events/utils/cl_HPS_init
node_down_complete[72] STATUS=0
node_down_complete[74] [ ! -n  ]
node_down_complete[76] EMULATE=REAL
node_down_complete[79] set -u
node_down_complete[81] [ 1 -lt 1 ]
node_down_complete[92] set -a
node_down_complete[93] clsetenvgrp a node_down_complete
clsetenvgrp[40] [[ high = high ]]
clsetenvgrp[40] version=1.14
clsetenvgrp[41] clsetenvgrp[41] cl_get_path
HA_DIR=es
clsetenvgrp[43] STATUS=0
clsetenvgrp[55] HAVERSION=440
clsetenvgrp[60] [[ -f /tmp/OLDCLSET ]]
clsetenvgrp[60] [[ -n 440 ]]
clsetenvgrp[66] ((  440 >= 440  ))
clsetenvgrp[67] usingVer=N
clsetenvgrp[79] clsetenvgrpN a node_down_complete
clsetenvgrp[80] exit 0
node_down_complete[93] eval FORCEDOWN_GROUPS="" NFS_file_res="FALSE" NFS_ora_ip2_res="TRUE" RESOURCE_GROUPS="file_res ora_ip2_res ora_res"
node_down_complete[93] FORCEDOWN_GROUPS= NFS_file_res=FALSE NFS_ora_ip2_res=TRUE RESOURCE_GROUPS=file_res ora_ip2_res ora_res
node_down_complete[94] RC=0
node_down_complete[95] set +a
node_down_complete[96] [ 0 -ne 0 ]
node_down_complete[104] [ -f /usr/lpp/csd/bin/hacmp_vsd_down2 ]
node_down_complete[106] [ REAL = EMUL ]
node_down_complete[111] /usr/lpp/csd/bin/hacmp_vsd_down2 a
hacmp_vsd_down2[54] [[ high = high ]]
hacmp_vsd_down2[54] version=1.6.1.10
hacmp_vsd_down2[55] hacmp_vsd_down2[55] cl_get_path
HA_DIR=es
hacmp_vsd_down2[57] VSDBPATH=/usr/lpp/csd/bin
hacmp_vsd_down2[58] VSDDPATH=/usr/lpp/csd/vsdfiles
hacmp_vsd_down2[61] [ 1 -ne 1 ]
hacmp_vsd_down2[69] hanode=a
hacmp_vsd_down2[75] [ a = a ]
hacmp_vsd_down2[76] NODE_IS_ME=1
hacmp_vsd_down2[84] NOVSD=1
hacmp_vsd_down2[85] [ -s /usr/lpp/csd/vsdfiles/VSD_ipaddr ]
hacmp_vsd_down2[235] hacmp2vsd
hacmp_vsd_down2[237] export LOCAL_NODE=1
hacmp_vsd_down2[238] export MEMBERSHIP= 2
hacmp_vsd_down2[239] echo VSD MEMBERSHIP= 2
VSD MEMBERSHIP= 2
hacmp_vsd_down2[240] echo LOCAL VSD NODE=1
LOCAL VSD NODE=1
hacmp_vsd_down2[241] echo VSD NODE IN ACTION=1
VSD NODE IN ACTION=1
hacmp_vsd_down2[242] echo hacmp_vsd_down2 STARTING CALLED WITH PARAMETER a
hacmp_vsd_down2 STARTING CALLED WITH PARAMETER a
hacmp_vsd_down2[385] [ 1 -eq 0 ]
hacmp_vsd_down2[401] /bin/echo 1 127.0.0.1 0
hacmp_vsd_down2[401] 1> /usr/lpp/csd/bin/machines.lst
hacmp_vsd_down2[406] echo hacmp_vsd_down2 EXITING RC=0 CALLED WITH PARAMETER a
hacmp_vsd_down2 EXITING RC=0 CALLED WITH PARAMETER a
hacmp_vsd_down2[408] exit 0
node_down_complete[114] [ 0 -ne 0 ]
node_down_complete[123] node_down_complete[123] odmget -qnodename = a HACMPadapter
node_down_complete[123] grep hps
node_down_complete[123] grep type
SP_SWITCH=
node_down_complete[129] [ -n  -a -f /usr/es/sbin/cluster/events/utils/cl_HPS_init ]
node_down_complete[162] set -a
node_down_complete[163] clsetenvres file_res node_down_complete
node_down_complete[163] eval NFS_HOST= DISK= CONCURRENT_VOLUME_GROUP= EXPORT_FILESYSTEM= AIX_CONNECTIONS_SERVICES= AIX_FAST_CONNECT_SERVICES= SNA_CONNECTIONS= SHARED_TAPE_RESOURCES= MOUNT_FILESYSTEM= HTY_SERVICE_LABEL= TAKEOVER_LABEL= NFSMOUNT_LABEL= MISC_DATA= NFS_NETWORK= SHARED_TAPE_RESOURCES= APPLICATIONS="file_srv" CASCADE_WO_FALLBACK="false" DISK_FENCING="false" FILESYSTEM="/home/bill" FSCHECK_TOOL="fsck" FS_BEFORE_IPADDR="false" INACTIVE_TAKEOVER="false" RECOVERY_METHOD="sequential" SERVICE_LABEL="a_svc" SSA_DISK_FENCING="false" VOLUME_GROUP="filesys_vg01"
node_down_complete[163] NFS_HOST= DISK= CONCURRENT_VOLUME_GROUP= EXPORT_FILESYSTEM= AIX_CONNECTIONS_SERVICES= AIX_FAST_CONNECT_SERVICES= SNA_CONNECTIONS= SHARED_TAPE_RESOURCES= MOUNT_FILESYSTEM= HTY_SERVICE_LABEL= TAKEOVER_LABEL= NFSMOUNT_LABEL= MISC_DATA= NFS_NETWORK= SHARED_TAPE_RESOURCES= APPLICATIONS=file_srv CASCADE_WO_FALLBACK=false DISK_FENCING=false FILESYSTEM=/home/bill FSCHECK_TOOL=fsck FS_BEFORE_IPADDR=false INACTIVE_TAKEOVER=false RECOVERY_METHOD=sequential SERVICE_LABEL=a_svc SSA_DISK_FENCING=false VOLUME_GROUP=filesys_vg01
node_down_complete[164] set +a
node_down_complete[165] export GROUPNAME=file_res
node_down_complete[171] [ a = a ]
node_down_complete[173] clcallev node_down_local_complete

Jun 17 21:22:46 EVENT START: node_down_local_complete

node_down_local_complete[52] [[ high = high ]]
node_down_local_complete[52] version=1.5.1.9
node_down_local_complete[53] node_down_local_complete[53] cl_get_path
HA_DIR=es
node_down_local_complete[55] [ 0 -gt 2 ]
node_down_local_complete[61] [ ! -n  ]
node_down_local_complete[63] EMULATE=REAL
node_down_local_complete[66] set -u
node_down_local_complete[71] cl_RMupdate rg_down file_res
cl_RMupdate[137] version=1.27
cl_RMupdate[140] cl_RMupdate[140] cl_get_path
HA_DIR=es
cl_RMupdate[143] [ ! -n  ]
cl_RMupdate[145] EMULATE=REAL
cl_RMupdate[149] set -u
cl_RMupdate[186] [ REAL = EMUL ]
cl_RMupdate[195] clRMupdate rg_down file_res
executing clRMupdate
clRMupdate: checking operation rg_down
clRMupdate: found operation in table
clRMupdate: operating on file_res
clRMupdate: sending operation to resource manager
clRMupdate completed successfully
cl_RMupdate[196] STATUS=0
cl_RMupdate[199] [ 0 -eq 2 -a ( rg_down = suspend_appmon -o rg_down = resume_appmon ) ]
cl_RMupdate[209] [ 0 != 0 ]
cl_RMupdate[215] exit 0
node_down_local_complete[73] exit 0
Jun 17 21:22:47 EVENT COMPLETED: node_down_local_complete

node_down_complete[179] [ 0 -ne 0 ]
node_down_complete[162] set -a
node_down_complete[163] clsetenvres ora_ip2_res node_down_complete
node_down_complete[163] eval NFS_HOST= DISK= VOLUME_GROUP= CONCURRENT_VOLUME_GROUP= FILESYSTEM= EXPORT_FILESYSTEM= AIX_CONNECTIONS_SERVICES= AIX_FAST_CONNECT_SERVICES= SNA_CONNECTIONS= SHARED_TAPE_RESOURCES= APPLICATIONS= MOUNT_FILESYSTEM= SERVICE_LABEL= HTY_SERVICE_LABEL= TAKEOVER_LABEL= MISC_DATA= NFS_NETWORK= SHARED_TAPE_RESOURCES= CASCADE_WO_FALLBACK="" DISK_FENCING="false" FSCHECK_TOOL="" FS_BEFORE_IPADDR="" INACTIVE_TAKEOVER="false" RECOVERY_METHOD="" NFSMOUNT_LABEL="b_svc" SSA_DISK_FENCING="false"
node_down_complete[163] NFS_HOST= DISK= VOLUME_GROUP= CONCURRENT_VOLUME_GROUP= FILESYSTEM= EXPORT_FILESYSTEM= AIX_CONNECTIONS_SERVICES= AIX_FAST_CONNECT_SERVICES= SNA_CONNECTIONS= SHARED_TAPE_RESOURCES= APPLICATIONS= MOUNT_FILESYSTEM= SERVICE_LABEL= HTY_SERVICE_LABEL= TAKEOVER_LABEL= MISC_DATA= NFS_NETWORK= SHARED_TAPE_RESOURCES= CASCADE_WO_FALLBACK= DISK_FENCING=false FSCHECK_TOOL= FS_BEFORE_IPADDR= INACTIVE_TAKEOVER=false RECOVERY_METHOD= NFSMOUNT_LABEL=b_svc SSA_DISK_FENCING=false
node_down_complete[164] set +a
node_down_complete[165] export GROUPNAME=ora_ip2_res
node_down_complete[171] [ a = a ]
node_down_complete[173] clcallev node_down_local_complete

Jun 17 21:22:47 EVENT START: node_down_local_complete

node_down_local_complete[52] [[ high = high ]]
node_down_local_complete[52] version=1.5.1.9
node_down_local_complete[53] node_down_local_complete[53] cl_get_path
HA_DIR=es
node_down_local_complete[55] [ 0 -gt 2 ]
node_down_local_complete[61] [ ! -n  ]
node_down_local_complete[63] EMULATE=REAL
node_down_local_complete[66] set -u
node_down_local_complete[71] cl_RMupdate rg_down ora_ip2_res
cl_RMupdate[137] version=1.27
cl_RMupdate[140] cl_RMupdate[140] cl_get_path
HA_DIR=es
cl_RMupdate[143] [ ! -n  ]
cl_RMupdate[145] EMULATE=REAL
cl_RMupdate[149] set -u
cl_RMupdate[186] [ REAL = EMUL ]
cl_RMupdate[195] clRMupdate rg_down ora_ip2_res
executing clRMupdate
clRMupdate: checking operation rg_down
clRMupdate: found operation in table
clRMupdate: operating on ora_ip2_res
clRMupdate: sending operation to resource manager
clRMupdate completed successfully
cl_RMupdate[196] STATUS=0
cl_RMupdate[199] [ 0 -eq 2 -a ( rg_down = suspend_appmon -o rg_down = resume_appmon ) ]
cl_RMupdate[209] [ 0 != 0 ]
cl_RMupdate[215] exit 0
node_down_local_complete[73] exit 0
Jun 17 21:22:48 EVENT COMPLETED: node_down_local_complete

node_down_complete[179] [ 0 -ne 0 ]
node_down_complete[162] set -a
node_down_complete[163] clsetenvres ora_res node_down_complete
node_down_complete[163] eval NFS_HOST= DISK= VOLUME_GROUP= FILESYSTEM= EXPORT_FILESYSTEM= AIX_CONNECTIONS_SERVICES= AIX_FAST_CONNECT_SERVICES= SNA_CONNECTIONS= SHARED_TAPE_RESOURCES= MOUNT_FILESYSTEM= SERVICE_LABEL= HTY_SERVICE_LABEL= TAKEOVER_LABEL= NFSMOUNT_LABEL= MISC_DATA= NFS_NETWORK= SHARED_TAPE_RESOURCES= APPLICATIONS="ora_srv" CASCADE_WO_FALLBACK="false" CONCURRENT_VOLUME_GROUP="share_vg01 share_vg02" DISK_FENCING="false" FSCHECK_TOOL="fsck" FS_BEFORE_IPADDR="false" INACTIVE_TAKEOVER="false" RECOVERY_METHOD="sequential" SSA_DISK_FENCING="false"
node_down_complete[163] NFS_HOST= DISK= VOLUME_GROUP= FILESYSTEM= EXPORT_FILESYSTEM= AIX_CONNECTIONS_SERVICES= AIX_FAST_CONNECT_SERVICES= SNA_CONNECTIONS= SHARED_TAPE_RESOURCES= MOUNT_FILESYSTEM= SERVICE_LABEL= HTY_SERVICE_LABEL= TAKEOVER_LABEL= NFSMOUNT_LABEL= MISC_DATA= NFS_NETWORK= SHARED_TAPE_RESOURCES= APPLICATIONS=ora_srv CASCADE_WO_FALLBACK=false CONCURRENT_VOLUME_GROUP=share_vg01 share_vg02 DISK_FENCING=false FSCHECK_TOOL=fsck FS_BEFORE_IPADDR=false INACTIVE_TAKEOVER=false RECOVERY_METHOD=sequential SSA_DISK_FENCING=false
node_down_complete[164] set +a
node_down_complete[165] export GROUPNAME=ora_res
node_down_complete[171] [ a = a ]
node_down_complete[173] clcallev node_down_local_complete

Jun 17 21:22:48 EVENT START: node_down_local_complete

node_down_local_complete[52] [[ high = high ]]
node_down_local_complete[52] version=1.5.1.9
node_down_local_complete[53] node_down_local_complete[53] cl_get_path
HA_DIR=es
node_down_local_complete[55] [ 0 -gt 2 ]
node_down_local_complete[61] [ ! -n  ]
node_down_local_complete[63] EMULATE=REAL
node_down_local_complete[66] set -u
node_down_local_complete[71] cl_RMupdate rg_down ora_res
cl_RMupdate[137] version=1.27
cl_RMupdate[140] cl_RMupdate[140] cl_get_path
HA_DIR=es
cl_RMupdate[143] [ ! -n  ]
cl_RMupdate[145] EMULATE=REAL
cl_RMupdate[149] set -u
cl_RMupdate[186] [ REAL = EMUL ]
cl_RMupdate[195] clRMupdate rg_down ora_res
executing clRMupdate
clRMupdate: checking operation rg_down
clRMupdate: found operation in table
clRMupdate: operating on ora_res
clRMupdate: sending operation to resource manager
clRMupdate completed successfully
cl_RMupdate[196] STATUS=0
cl_RMupdate[199] [ 0 -eq 2 -a ( rg_down = suspend_appmon -o rg_down = resume_appmon ) ]
cl_RMupdate[209] [ 0 != 0 ]
cl_RMupdate[215] exit 0
node_down_local_complete[73] exit 0
Jun 17 21:22:49 EVENT COMPLETED: node_down_local_complete

node_down_complete[179] [ 0 -ne 0 ]
node_down_complete[188] exit 0
Jun 17 21:22:49 EVENT COMPLETED: node_down_complete a

hacmp_hc_start : Entered
hacmp_hc_start : Starting HC process
hacmp_hc_start : Entered
hacmp_hc_start : Starting HC process

Jun 17 21:42:33 EVENT START: node_up a

node_up[124] [[ high = high ]]
node_up[124] version=1.10.1.21
node_up[125] node_up[125] cl_get_path
HA_DIR=es
node_up[127] NODENAME=a
node_up[129] HPS_CMD=/usr/es/sbin/cluster/events/utils/cl_HPS_init
node_up[130] VSD_CMD=/usr/lpp/csd/bin/hacmp_vsd_up1
node_up[131] SS_FILE=/usr/es/sbin/cluster/server.status
node_up[133] STATUS=0
node_up[10] [ ! -n  ]
node_up[137] EMULATE=REAL
node_up[140] set -u
node_up[142] [ 1 -ne 1 ]
node_up[151] [ a = a ]
node_up[153] node_up[153] odmget -qnodename = a HACMPadapter
node_up[153] grep hps
node_up[153] grep type
SP_SWITCH=
node_up[155] [ REAL = EMUL ]
node_up[162] [ -n  -a -f /usr/es/sbin/cluster/events/utils/cl_HPS_init ]
node_up[207] [ REAL = EMUL ]
node_up[212] cl_ssa_fence up a
cl_ssa_fence[70] [[ high = high ]]
cl_ssa_fence[70] version=1.9
cl_ssa_fence[71] cl_ssa_fence[71] cl_get_path
HA_DIR=es
cl_ssa_fence[74] echo PRE_EVENT_MEMBERSHIP=b
PRE_EVENT_MEMBERSHIP=b
cl_ssa_fence[75] echo POST_EVENT_MEMBERSHIP=a b
POST_EVENT_MEMBERSHIP=a b
cl_ssa_fence[77] EVENT=up
cl_ssa_fence[78] NODENAME=a
cl_ssa_fence[79] STATUS=0
cl_ssa_fence[82] export EVENT_ON_NODE=a
cl_ssa_fence[84] [ 2 -gt 1 ]
cl_ssa_fence[91] [ a = a ]
cl_ssa_fence[93] [ b !=  ]
cl_ssa_fence[95] exit 0
node_up[215] [ 0 -ne 0 ]
node_up[225] [ REAL = EMUL ]
node_up[230] cl_9333_fence up a
cl_9333_fence[158] [[ high = high ]]
cl_9333_fence[158] version=1.9
cl_9333_fence[159] cl_9333_fence[159] cl_get_path
HA_DIR=es
cl_9333_fence[162] echo PRE_EVENT_MEMBERSHIP=b
PRE_EVENT_MEMBERSHIP=b
cl_9333_fence[163] echo POST_EVENT_MEMBERSHIP=a b
POST_EVENT_MEMBERSHIP=a b
cl_9333_fence[165] EVENT=up
cl_9333_fence[166] NODENAME=a
cl_9333_fence[167] PARAM=
cl_9333_fence[168] STATUS=0
cl_9333_fence[170] set -u
cl_9333_fence[172] [ 2 -gt 1 ]
cl_9333_fence[179] [ a = a ]
cl_9333_fence[181] [ b !=  ]
cl_9333_fence[183] exit 0
node_up[233] [ 0 -ne 0 ]
node_up[243] [ a = a ]
node_up[245] [ REAL = EMUL ]
node_up[251] clchdaemons -r -d clstrmgr_scripts -t resource_locator
node_up[252] [ 0 -ne 0 ]
node_up[264] set -a
node_up[265] clsetenvgrp a node_up
clsetenvgrp[40] [[ high = high ]]
clsetenvgrp[40] version=1.14
clsetenvgrp[41] clsetenvgrp[41] cl_get_path
HA_DIR=es
clsetenvgrp[43] STATUS=0
clsetenvgrp[55] HAVERSION=440
clsetenvgrp[60] [[ -f /tmp/OLDCLSET ]]
clsetenvgrp[60] [[ -n 440 ]]
clsetenvgrp[66] ((  440 >;= 440  ))
clsetenvgrp[67] usingVer=N
clsetenvgrp[79] clsetenvgrpN a node_up
clsetenvgrp[80] exit 0
node_up[265] eval FORCEDOWN_GROUPS="" NFS_file_res="FALSE" NFS_ora_ip2_res="TRUE" NFSNODE_ora_ip2_res="b" RESOURCE_GROUPS="file_res ora_ip2_res ora_res"
node_up[265] FORCEDOWN_GROUPS= NFS_file_res=FALSE NFS_ora_ip2_res=TRUE NFSNODE_ora_ip2_res=b RESOURCE_GROUPS=file_res ora_ip2_res ora_res
node_up[266] RC=0
node_up[267] set +a
node_up[268] [ 0 -ne 0 ]
node_up[276] [ a = a ]
node_up[278] [ -f /usr/lpp/csd/bin/hacmp_vsd_up1 ]
node_up[280] [ REAL = EMUL ]
node_up[285] /usr/lpp/csd/bin/hacmp_vsd_up1 a
hacmp_vsd_up1[54] [[ high = high ]]
hacmp_vsd_up1[54] version=1.6.1.10
hacmp_vsd_up1[55] hacmp_vsd_up1[55] cl_get_path
HA_DIR=es
hacmp_vsd_up1[57] VSDBPATH=/usr/lpp/csd/bin
hacmp_vsd_up1[58] VSDDPATH=/usr/lpp/csd/vsdfiles
hacmp_vsd_up1[61] [ 1 -ne 1 ]
hacmp_vsd_up1[69] hanode=a
hacmp_vsd_up1[75] [ a = a ]
hacmp_vsd_up1[76] NODE_IS_ME=1
hacmp_vsd_up1[84] NOVSD=1
hacmp_vsd_up1[85] [ -s /usr/lpp/csd/vsdfiles/VSD_ipaddr ]
hacmp_vsd_up1[235] hacmp2vsd
hacmp_vsd_up1[237] export LOCAL_NODE=1
hacmp_vsd_up1[238] export MEMBERSHIP= 1 1 1 1 1 1 1 1
hacmp_vsd_up1[239] echo VSD MEMBERSHIP= 1 1 1 1 1 1 1 1
VSD MEMBERSHIP= 1 1 1 1 1 1 1 1
hacmp_vsd_up1[240] echo LOCAL VSD NODE=1
LOCAL VSD NODE=1
hacmp_vsd_up1[241] echo VSD NODE IN ACTION=1
VSD NODE IN ACTION=1
hacmp_vsd_up1[242] echo hacmp_vsd_up1 STARTING CALLED WITH PARAMETER a
hacmp_vsd_up1 STARTING CALLED WITH PARAMETER a
hacmp_vsd_up1[258] [ 1 -eq 1 ]
hacmp_vsd_up1[259] touch /usr/lpp/csd/bin/norecov
hacmp_vsd_up1[260] hacmp_vsd_up1[260] ps -aef
hacmp_vsd_up1[260] awk { print $2 }
hacmp_vsd_up1[260] grep -v grep
hacmp_vsd_up1[260] grep /usr/lpp/csd/bin/hc
OLD_HCPID=1106
hacmp_vsd_up1[261] [ -n 1106 ]
hacmp_vsd_up1[263] grep -Fqw 127.0.0.1 /usr/lpp/csd/bin/machines.lst
hacmp_vsd_up1[263] [[ 0 -ne 0 ]]
hacmp_vsd_up1[272] echo 1106
hacmp_vsd_up1[272] 1>; /usr/lpp/csd/bin/hcpid
hacmp_vsd_up1[273] kill -9 1106
hacmp_vsd_up1[277] [ -x /usr/lpp/csd/bin/HACMP.vsd.UP1 ]
hacmp_vsd_up1[285] [ -x /usr/lpp/csd/bin/HACMP.vsd.UP1 ]
hacmp_vsd_up1[406] echo hacmp_vsd_up1 EXITING RC=0 CALLED WITH PARAMETER a
hacmp_vs

Reply #16 — posted 2003-06-19 12:26
You posted node a's hacmp.out. From it, node a completed node_down successfully at 21:22 on June 17, and its service address was released back to the boot address:


Jun 17 2003 21:22:31 Completed execution of /usr/es/sbin/cluster/events/utils/cl_swap_IP_address with parameters rotating release en0 10.105.88.6 10.105.88.4 255.255.255.224. Exit status = 0cl_swap_IP_address[639] exit 0
....................
Jun 17 21:22:49 EVENT COMPLETED: node_down_complete a

Please also post node b's hacmp.out for the same time window. After node a's "Jun 17 21:22:49 EVENT COMPLETED: node_down_local_complete", the node_down_remote event should begin on node b, so the relevant entries should start from around 21:22:49 on June 17.
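To line up the timestamps across both nodes' logs, it helps to filter out just the "EVENT START" / "EVENT COMPLETED" markers instead of reading the full script trace. A minimal sketch with grep (the sample lines and mktemp scratch file below are stand-ins for the real log; /tmp/hacmp.out is only the usual HACMP 4.x default location, so adjust for your cluster):

```shell
# Build a small stand-in for hacmp.out; on the real system you would
# instead set LOG=/tmp/hacmp.out (or wherever your cluster logs).
LOG=$(mktemp)
cat > "$LOG" <<'EOF'
Jun 17 21:22:12 EVENT START: node_down a
cl_swap_IP_address[35] print route change -net default
Jun 17 21:22:49 EVENT COMPLETED: node_down_complete a
Jun 17 21:42:33 EVENT START: node_up a
EOF

# Keep only the event markers, which carry the timestamps needed to
# correlate node a's node_down with node b's node_down_remote.
grep -E "EVENT (START|COMPLETED)" "$LOG"

rm -f "$LOG"
```

Running the same filter on both nodes' files and comparing the timestamps shows immediately whether node b ever ran the remote takeover events.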