How do I make a Red Hat Cluster Suite node fail over when the network goes down?

Posted on 2011-03-16 11:58
Could anyone tell me how to configure fencing so that a node failover is triggered when the network goes down?
My cluster.conf is as follows:
<?xml version="1.0"?>
<cluster alias="test_cluster" config_version="29" name="test_cluster">
        <fence_daemon post_fail_delay="0" post_join_delay="3"/>
        <clusternodes>
                <clusternode name="beg75.ex.com" nodeid="1" votes="1">
                        <fence>
                                <method name="1">
                                        <device domain="test70" name="xenfence"/>
                                </method>
                        </fence>
                </clusternode>
                <clusternode name="test70.ex.com" nodeid="2" votes="1">
                        <fence>
                                <method name="1">
                                        <device domain="beg75" name="xenfence"/>
                                </method>
                        </fence>
                </clusternode>
        </clusternodes>
        <cman expected_votes="1" two_node="1"/>
        <fencedevices>
                <fencedevice agent="fence_xvm" name="xenfence"/>
        </fencedevices>
        <rm>
                <failoverdomains>
                        <failoverdomain name="webinfo" ordered="1" restricted="0">
                                <failoverdomainnode name="beg75.ex.com" priority="1"/>
                                <failoverdomainnode name="test70.ex.com" priority="2"/>
                        </failoverdomain>
                </failoverdomains>
                <resources>
                        <ip address="192.168.0.100" monitor_link="0"/>
                        <script file="/etc/init.d/httpd" name="apache"/>
                        <fs device="/dev/sdb1" force_fsck="0" force_unmount="0" fsid="42848" fstype="ext3" mountpoint="/data" name="webdata" options="" self_fence="0"/>
                </resources>
                <service autostart="1" domain="webinfo" name="www">
                        <script ref="apache">
                                <ip ref="192.168.0.100"/>
                        </script>
                        <fs ref="webdata"/>
                </service>
        </rm>
</cluster>
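Before distributing cluster.conf to the nodes, it is worth confirming the file is well-formed XML, since ccsd will choke on problems such as a stray or duplicated closing tag. A minimal sketch using Python's standard-library parser (the /tmp path and the trimmed sample config are just for illustration; on the nodes the real file lives at /etc/cluster/cluster.conf):

```shell
# Write a trimmed sample config to a scratch path, then parse it.
# A parse error pinpoints the line of any malformed markup.
cat > /tmp/cluster.conf <<'EOF'
<?xml version="1.0"?>
<cluster alias="test_cluster" config_version="29" name="test_cluster">
        <fencedevices>
                <fencedevice agent="fence_xvm" name="xenfence"/>
        </fencedevices>
</cluster>
EOF
# ElementTree.parse raises (non-zero exit) on malformed XML.
python3 -c "import xml.etree.ElementTree as ET; ET.parse('/tmp/cluster.conf')" \
  && echo "cluster.conf is well-formed"
```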
After the network was cut, starting fencing also failed. The log is as follows:
Mar 16 11:55:19 beg75 openais[7269]: [SERV ] Initialising service handler 'openais checkpoint service B.01.01'
Mar 16 11:55:19 beg75 openais[7269]: [SERV ] Initialising service handler 'openais event service B.01.01'
Mar 16 11:55:19 beg75 openais[7269]: [SERV ] Initialising service handler 'openais distributed locking service B.01.01'
Mar 16 11:55:19 beg75 openais[7269]: [SERV ] Initialising service handler 'openais message service B.01.01'
Mar 16 11:55:19 beg75 openais[7269]: [SERV ] Initialising service handler 'openais configuration service'
Mar 16 11:55:19 beg75 openais[7269]: [SERV ] Initialising service handler 'openais cluster closed process group service v1.01'
Mar 16 11:55:19 beg75 openais[7269]: [SERV ] Initialising service handler 'openais CMAN membership service 2.01'
Mar 16 11:55:19 beg75 openais[7269]: [CMAN ] CMAN 2.0.73 (built Sep 19 2007 16:04:02) started
Mar 16 11:55:19 beg75 openais[7269]: [SYNC ] Not using a virtual synchrony filter.
Mar 16 11:55:19 beg75 openais[7269]: [TOTEM] Creating commit token because I am the rep.
Mar 16 11:55:19 beg75 openais[7269]: [TOTEM] Saving state aru 0 high seq received 0
Mar 16 11:55:19 beg75 openais[7269]: [TOTEM] Storing new sequence id for ring 204
Mar 16 11:55:19 beg75 openais[7269]: [TOTEM] entering COMMIT state.
Mar 16 11:55:19 beg75 openais[7269]: [TOTEM] entering RECOVERY state.
Mar 16 11:55:19 beg75 openais[7269]: [TOTEM] position [0] member 192.168.0.75:
Mar 16 11:55:19 beg75 openais[7269]: [TOTEM] previous ring seq 512 rep 192.168.0.75
Mar 16 11:55:19 beg75 openais[7269]: [TOTEM] aru 0 high delivered 0 received flag 1
Mar 16 11:55:19 beg75 openais[7269]: [TOTEM] Did not need to originate any messages in recovery.
Mar 16 11:55:19 beg75 openais[7269]: [TOTEM] Sending initial ORF token
Mar 16 11:55:19 beg75 openais[7269]: [CLM  ] CLM CONFIGURATION CHANGE
Mar 16 11:55:19 beg75 openais[7269]: [CLM  ] New Configuration:
Mar 16 11:55:19 beg75 openais[7269]: [CLM  ] Members Left:
Mar 16 11:55:19 beg75 openais[7269]: [CLM  ] Members Joined:
Mar 16 11:55:19 beg75 openais[7269]: [CLM  ] CLM CONFIGURATION CHANGE
Mar 16 11:55:19 beg75 openais[7269]: [CLM  ] New Configuration:
Mar 16 11:55:19 beg75 openais[7269]: [CLM  ]    r(0) ip(192.168.0.75)  
Mar 16 11:55:19 beg75 openais[7269]: [CLM  ] Members Left:
Mar 16 11:55:19 beg75 openais[7269]: [CLM  ] Members Joined:
Mar 16 11:55:19 beg75 openais[7269]: [CLM  ]    r(0) ip(192.168.0.75)  
Mar 16 11:55:19 beg75 openais[7269]: [SYNC ] This node is within the primary component and will provide service.
Mar 16 11:55:19 beg75 openais[7269]: [TOTEM] entering OPERATIONAL state.
Mar 16 11:55:19 beg75 openais[7269]: [CMAN ] quorum regained, resuming activity
Mar 16 11:55:19 beg75 openais[7269]: [CLM  ] got nodejoin message 192.168.0.75
Mar 16 11:55:19 beg75 openais[7269]: [TOTEM] entering GATHER state from 11.
Mar 16 11:55:19 beg75 openais[7269]: [TOTEM] Saving state aru 9 high seq received 9
Mar 16 11:55:19 beg75 openais[7269]: [TOTEM] Storing new sequence id for ring 208
Mar 16 11:55:19 beg75 openais[7269]: [TOTEM] entering COMMIT state.
Mar 16 11:55:19 beg75 openais[7269]: [TOTEM] entering RECOVERY state.
Mar 16 11:55:19 beg75 openais[7269]: [TOTEM] position [0] member 192.168.0.70:
Mar 16 11:55:19 beg75 openais[7269]: [TOTEM] previous ring seq 516 rep 192.168.0.70
Mar 16 11:55:19 beg75 openais[7269]: [TOTEM] aru c high delivered c received flag 1
Mar 16 11:55:19 beg75 openais[7269]: [TOTEM] position [1] member 192.168.0.75:
Mar 16 11:55:19 beg75 openais[7269]: [TOTEM] previous ring seq 516 rep 192.168.0.75
Mar 16 11:55:19 beg75 openais[7269]: [TOTEM] aru 9 high delivered 9 received flag 1
Mar 16 11:55:19 beg75 openais[7269]: [TOTEM] Did not need to originate any messages in recovery.
Mar 16 11:55:19 beg75 openais[7269]: [CLM  ] CLM CONFIGURATION CHANGE
Mar 16 11:55:19 beg75 openais[7269]: [CLM  ] New Configuration:
Mar 16 11:55:19 beg75 openais[7269]: [CLM  ]    r(0) ip(192.168.0.75)  
Mar 16 11:55:19 beg75 openais[7269]: [CLM  ] Members Left:
Mar 16 11:55:19 beg75 openais[7269]: [CLM  ] Members Joined:
Mar 16 11:55:19 beg75 openais[7269]: [CLM  ] CLM CONFIGURATION CHANGE
Mar 16 11:55:19 beg75 openais[7269]: [CLM  ] New Configuration:
Mar 16 11:55:19 beg75 openais[7269]: [CLM  ]    r(0) ip(192.168.0.70)  
Mar 16 11:55:19 beg75 openais[7269]: [CLM  ]    r(0) ip(192.168.0.75)  
Mar 16 11:55:19 beg75 openais[7269]: [CLM  ] Members Left:
Mar 16 11:55:19 beg75 openais[7269]: [CLM  ] Members Joined:
Mar 16 11:55:19 beg75 openais[7269]: [CLM  ]    r(0) ip(192.168.0.70)  
Mar 16 11:55:19 beg75 openais[7269]: [SYNC ] This node is within the primary component and will provide service.
Mar 16 11:55:19 beg75 openais[7269]: [TOTEM] entering OPERATIONAL state.
Mar 16 11:55:19 beg75 openais[7269]: [CLM  ] got nodejoin message 192.168.0.70
Mar 16 11:55:19 beg75 openais[7269]: [CLM  ] got nodejoin message 192.168.0.75
Mar 16 11:55:19 beg75 openais[7269]: [CPG  ] got joinlist message from node 2
Mar 16 11:55:19 beg75 groupd[7290]: found uncontrolled kernel object rgmanager in /sys/kernel/dlm
Mar 16 11:55:19 beg75 groupd[7290]: local node must be reset to clear 1 uncontrolled instances of gfs and/or dlm
Mar 16 11:55:19 beg75 openais[7269]: [CMAN ] cman killed by node 1 because we were killed by cman_tool or other application
Mar 16 11:55:21 beg75 fenced[7298]: cman_init error 0 111
Mar 16 11:55:22 beg75 dlm_controld[7304]: cman_init error 0 111
Mar 16 11:55:22 beg75 gfs_controld[7310]: cman_init error 111
Mar 16 11:55:49 beg75 ccsd[2134]: Unable to connect to cluster infrastructure after 780 seconds.
Mar 16 11:55:50 beg75 fence_node[7291]: agent "fence_xvm" reports: Adding IP 127.0.0.1 to list (family 2) Adding IP 192.168.0.75 to list (family 2) Adding IP 192.168.0.100 to list (family 2) ipv4_listen: Setting up ipv4 listen socket ipv4_listen: Success; fd = 2 Setting up ipv4 multicast send (225.0.0.12:1229) Joining IP
Mar 16 11:55:50 beg75 fence_node[7291]: agent "fence_xvm" reports:  Multicast group (pass 1) Joining IP Multicast group (pass 2) Setting TTL to 2 for fd5 ipv4_send_sk: success, fd = 5 Setting up ipv4 multicast send (225.0.0.12:1229) Joining IP Multicast group (pass 1) Joining IP Multicast group (pass 2) Setting TTL to 2
Mar 16 11:55:50 beg75 fence_node[7291]: agent "fence_xvm" reports: for fd5 ipv4_send_sk: success, fd = 5 Setting up ipv4 multicast send (225.0.0.12:1229) Joining IP Multicast group (pass 1) Joining IP Multicast group (pass 2) Setting TTL to 2 for fd5 ipv4_send_sk: success, fd = 5 Setting up ipv4 multicast send (225.0.0.1
Mar 16 11:55:50 beg75 fence_node[7291]: agent "fence_xvm" reports: 2:1229) Joining IP Multicast group (pass 1) Joining IP Multicast group (pass 2) Setting TTL to 2 for fd5 ipv4_send_sk: success, fd = 5 Setting up ipv4 multicast send (225.0.0.12:1229) Joining IP Multicast group (pass 1) Joining IP Multicast group (pass 2)
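Two things in the setup above look worth checking. First, for fence_xvm the `domain` attribute on each clusternode's `<device>` is expected to name that node's own guest domain (the VM that gets destroyed when that node is fenced); in the posted config the two domains appear to be crossed. Second, the log shows fence_xvm sending to multicast group 225.0.0.12:1229 but never getting an answer, which usually means fence_xvmd on the host is not reachable or the shared key does not match. A hedged manual test from the host (dom0), assuming fence_xvmd is running there and /etc/cluster/fence_xvm.key has been copied into every guest:

```shell
# Exercise the fencing channel end to end from the host.
# -H names the guest domain to act on; -o selects the action.
# "null" performs the multicast handshake without harming the guest,
# so it is a safe first test of connectivity and key agreement.
fence_xvm -o null -H beg75

# Once the null test succeeds, a real fence should reboot the guest:
fence_xvm -o reboot -H beg75
```

If the null test hangs, check that iptables on host and guests permits the multicast traffic (225.0.0.12, port 1229) before touching cluster.conf again.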