Chinaunix

Title: RHCS HA on Red Hat AS 4: the service does not fail over when the first machine's network cable is unplugged!

Author: SUNfan    Posted: 2006-12-26 17:36
Title: RHCS HA on Red Hat AS 4: the service does not fail over when the first machine's network cable is unplugged!
Both machines run Red Hat AS 4 U4.
Cluster software: RHCS.
The relevant configuration on the two machines is as follows:
[root@vm002 ~]# more /etc/hosts   (identical on both machines)
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1       localhost
192.168.0.201   vm001
192.168.0.202   vm002

After both machines boot up normally:
[root@vm002 ~]#clustat -i 3
Member Status: Quorate

  Member Name                              Status
  ------ ----                              ------
  vm001                                    Online, rgmanager
  vm002                                    Online, Local, rgmanager

  Service Name         Owner (Last)                   State         
  ------- ----         ----- ------                   -----         
  ftpservice           vm001                          started   

But after I unplug the first machine's network cable and wait about a minute, this appears:
[root@vm002 ~]#clustat -i 3
Member Status: Quorate

  Member Name                              Status
  ------ ----                              ------
  vm001                                    Offline
  vm002                                    Online, Local, rgmanager

  Service Name         Owner (Last)                   State         
  ------- ----         ----- ------                   -----         
  ftpservice           unknown                        started   


My cluster configuration file:
[root@vm002 ~]# more /etc/cluster/cluster.conf
<?xml version="1.0" ?>
<cluster alias="zcbcluster" config_version="33" name="alpha_cluster">
        <fence_daemon post_fail_delay="0" post_join_delay="3"/>
        <clusternodes>
                <clusternode name="vm001" votes="1">
                        <fence>
                                <method name="1">
                                        <device name="clusterfence" nodename="vm001"/>
                                </method>
                        </fence>
                </clusternode>
                <clusternode name="vm002" votes="1">
                        <fence>
                                <method name="1">
                                        <device name="clusterfence" nodename="vm002"/>
                                </method>
                        </fence>
                </clusternode>
        </clusternodes>
        <cman expected_votes="1" two_node="1"/>
        <fencedevices>
                <fencedevice agent="fence_manual" name="clusterfence"/>
        </fencedevices>
        <rm>
                <failoverdomains>
                        <failoverdomain name="ftp-domain" ordered="1" restricted="1">
                                <failoverdomainnode name="vm001" priority="1"/>
                                <failoverdomainnode name="vm002" priority="2"/>
                        </failoverdomain>
                </failoverdomains>
                <resources>
                        <ip address="192.168.0.203" monitor_link="1"/>
                        <script file="/etc/rc.d/init.d/vsftpdHA.sh" name="ftpHA"/>
                        <fs device="/dev/sdb1" force_fsck="0" force_unmount="1" fsid="61663" fstype="ext3" mountpoint="/ftp" name="ftpcontent" options="rw" self_fence="0"/>
                </resources>
                <service autostart="1" domain="ftp-domain" name="ftpservice" recovery="relocate">
                        <ip ref="192.168.0.203">
                                <fs ref="ftpcontent"/>
                                <script ref="ftpHA"/>
                        </ip>
                </service>
        </rm>
</cluster>

Is there a way to get the service started on the standby machine when the network cable breaks? (Note that relocating the service between the two machines by hand already works.)
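
For context before the replies: the <script> resource in this config is just an init-style script that rgmanager invokes with start/stop/status arguments and whose exit code it trusts, so recovery behavior depends heavily on that script. The actual vsftpdHA.sh was never posted; a minimal hypothetical wrapper (paths and commands assumed, not taken from this thread) might look like:

#!/bin/sh
# Hypothetical RHCS <script> resource wrapper for vsftpd.
# rgmanager calls it with start/stop/status and expects
# LSB-style exit codes: 0 = success, non-zero = failure.
case "$1" in
    start)
        /usr/sbin/vsftpd /etc/vsftpd/vsftpd.conf &
        ;;
    stop)
        killall vsftpd
        ;;
    status)
        # a non-zero exit here makes rgmanager mark the service failed
        pgrep -x vsftpd >/dev/null
        ;;
    *)
        echo "Usage: $0 {start|stop|status}"
        exit 1
        ;;
esac
exit $?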
Author: chenjiuhai    Posted: 2006-12-26 17:41
Title: Here to learn
I'm about to set up a cluster here too. Could you share your installation documentation?

Thanks very much!
Author: SUNfan    Posted: 2006-12-26 17:48
rh-cs-en-4.pdf; it can be downloaded online!
Author: fuumax    Posted: 2006-12-26 18:08
Please post the following: the contents of vsftpdHA.sh, the relevant /var/log/messages entries, and the output of ifconfig -a.

Also, from 4.4 onward the quorum partition from version 3 has been added back.
Author: FunBSD    Posted: 2006-12-26 19:01
I also set this up on two machines. When I stopped rgmanager on one of them, the whole cluster went down. My feeling is that this thing can provide high availability for the services on top of it, but has little high availability of its own.
Author: quzhaojun    Posted: 2006-12-28 09:05
I'd like to know what hardware the OP's two-node setup consists of.

Two hosts + shared disk + power switch + clustering software?
Author: SUNfan    Posted: 2006-12-28 13:37
The installed OS is Red Hat AS 4.
The clustering software is rhel-4-u4-rhcs-i386-disc1.iso.
There is no power switch; each machine has two NICs, with the second NIC carrying the heartbeat.
The shared disk is a Dell CX300.
I just can't figure out why RHCS won't fail over when the network cable breaks. It keeps puzzling me: why does this happen?
Author: su_hub    Posted: 2006-12-28 13:43
Read /var/log/messages carefully and you will find the answer.
Author: SUNfan    Posted: 2006-12-28 14:49
The normal state:
[root@vm002 ~]# clustat -i 3
Member Status: Quorate

  Member Name                              Status
  ------ ----                              ------
  vm001                                    Online, rgmanager
  vm002                                    Online, Local, rgmanager

  Service Name         Owner (Last)                   State         
  ------- ----         ----- ------                   -----         
  ftpservice           vm001                          started   


After disconnecting the first machine's NIC:
[root@vm002 ~]# clustat -i 3
Member Status: Quorate

  Member Name                              Status
  ------ ----                              ------
  vm001                                    Offline
  vm002                                    Online, Local, rgmanager

  Service Name         Owner (Last)                   State         
  ------- ----         ----- ------                   -----         
  ftpservice           unknown                        started  
It stays like this; the service never fails over!

The log:
[root@vm002 ~]# tail -30 /var/log/messages
Dec 28 07:28:50 vm002 gpm: gpm startup succeeded
Dec 28 07:28:50 vm002 iiim: htt startup succeeded
Dec 28 07:28:50 vm002 crond: crond startup succeeded
Dec 28 07:28:50 vm002 htt_server[2788]: started.
Dec 28 07:28:52 vm002 xfs: xfs startup succeeded
Dec 28 07:28:52 vm002 anacron: anacron startup succeeded
Dec 28 07:28:52 vm002 atd: atd startup succeeded
Dec 28 07:28:53 vm002 messagebus: messagebus startup succeeded
Dec 28 07:28:53 vm002 cups-config-daemon: cups-config-daemon startup succeeded
Dec 28 07:28:53 vm002 haldaemon: haldaemon startup succeeded
Dec 28 07:28:53 vm002 rgmanager: clurgmgrd startup succeeded
Dec 28 07:28:53 vm002 fstab-sync[2893]: removed all generated mount points
Dec 28 07:28:54 vm002 clurgmgrd[2906]: <notice> Resource Group Manager Starting
Dec 28 07:28:54 vm002 clurgmgrd[2906]: <info> Loading Service Data
Dec 28 07:28:59 vm002 clurgmgrd[2906]: <info> Initializing Services
Dec 28 07:29:00 vm002 clurgmgrd: [2906]: <info> /dev/sdb1 is not mounted
Dec 28 07:29:00 vm002 fstab-sync[3668]: added mount point /media/cdrecorder for /dev/hdc
Dec 28 07:29:01 vm002 fstab-sync[3686]: added mount point /media/floppy for /dev/fd0
Dec 28 07:29:05 vm002 clurgmgrd: [2906]: <info> Executing /etc/rc.d/init.d/vsftpdHA.sh stop
Dec 28 07:29:05 vm002 vsftpdHA.sh: vsftpd shutdown failed
Dec 28 07:29:05 vm002 clurgmgrd[2906]: <info> Services Initialized
Dec 28 07:29:07 vm002 clurgmgrd[2906]: <info> Logged in SG "usrm::manager"
Dec 28 07:29:07 vm002 clurgmgrd[2906]: <info> Magma Event: Membership Change
Dec 28 07:29:07 vm002 clurgmgrd[2906]: <info> State change: Local UP
Dec 28 07:29:07 vm002 clurgmgrd[2906]: <info> State change: vm001 UP
Dec 28 07:33:02 vm002 sshd(pam_unix)[3788]: session opened for user root by root(uid=0)
Dec 28 07:34:48 vm002 kernel: CMAN: removing node vm001 from the cluster : Missed too many heartbeats
Dec 28 07:34:48 vm002 fenced[2591]: vm001 not a cluster member after 0 sec post_fail_delay
Dec 28 07:34:48 vm002 fenced[2591]: fencing node "vm001"
Dec 28 07:34:48 vm002 fence_manual: Node vm001 needs to be reset before recovery can procede.  Waiting for vm001 to rejoin the cluster or for manual acknowledgement that it has been reset (i.e. fence_ack_manual -n vm001)
Author: su_hub    Posted: 2006-12-28 15:24
Dec 28 07:34:48 vm002 kernel: CMAN: removing node vm001 from the cluster : Missed too many heartbeats
Dec 28 07:34:48 vm002 fenced[2591]: vm001 not a cluster member after 0 sec post_fail_delay
Dec 28 07:34:48 vm002 fenced[2591]: fencing node "vm001"
Dec 28 07:34:48 vm002 fence_manual: Node vm001 needs to be reset before recovery can procede.  Waiting for vm001 to rejoin the cluster or for manual acknowledgement that it has been reset (i.e. fence_ack_manual -n vm001)



Take a careful look at the highlighted part (shown in red in the original post).
Author: SUNfan    Posted: 2006-12-28 15:26
How do I solve this problem? Please spell out the fix!
Author: su_hub    Posted: 2006-12-28 15:33
I already told you.
Author: SUNfan    Posted: 2006-12-28 15:38
I don't follow: it says fencing went wrong, but what's the remedy? Please say it plainly!
Author: SUNfan    Posted: 2006-12-28 15:42
Dec 28 07:34:48 vm002 fence_manual: Node vm001 needs to be reset before recovery can procede.  Waiting for vm001 to rejoin the cluster or for manual acknowledgement that it has been reset (i.e. fence_ack_manual -n vm001)

So vm002's fence daemon is saying: node vm001 needs to be reset before recovery can proceed, and it is waiting for vm001 to rejoin the cluster, or for manual acknowledgement that vm001 has been reset.

So can this kind of cluster not fail the service over after a network cable break?
Author: su_hub    Posted: 2006-12-28 15:44
Dec 28 07:34:48 vm002 fence_manual: Node vm001 needs to be reset before recovery can procede.  Waiting for vm001 to rejoin the cluster or for manual acknowledgement that it has been reset (i.e. fence_ack_manual -n vm001)


Run this on vm002:
#fence_ack_manual -n vm001
The fencing method you chose is manual, so naturally you have to tell the healthy node "by hand" that a device has failed; only then can it take over the service with peace of mind, right?!
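
To spell out the manual-fencing recovery sequence (assuming vm001 really has been reset or powered off first):

# 1. Verify the failed node is genuinely down (reset it or power it off).
# 2. On the surviving node, acknowledge the fence so recovery can proceed:
[root@vm002 ~]# fence_ack_manual -n vm001
# 3. Watch rgmanager take over and relocate the service:
[root@vm002 ~]# clustat -i 3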
Author: SUNfan    Posted: 2006-12-28 15:47
Yes, running that command does start the ftp service on vm002. But how do I make fencing "automatic"?
Please advise!
Author: su_hub    Posted: 2006-12-28 15:51
Then you need a corresponding fence device, such as an APC power switch.
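
For illustration, with an APC power switch the fencing section of cluster.conf might look like the sketch below; the IP address, credentials, and outlet number are placeholders, not values from this thread:

        <fencedevices>
                <fencedevice agent="fence_apc" name="apcfence" ipaddr="192.168.0.250" login="apc" passwd="apc"/>
        </fencedevices>

and, inside each <clusternode>, the outlet that powers that node:

                        <fence>
                                <method name="1">
                                        <device name="apcfence" port="1"/>
                                </method>
                        </fence>

With a power switch, fenced can power-cycle the dead node on its own instead of waiting for fence_ack_manual.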
Author: su_hub    Posted: 2006-12-28 15:53
With RHCS 3 you could also use a quorum disk; RHEL 4.4 seems to have brought that feature back, though I haven't tried it.
Author: SUNfan    Posted: 2006-12-28 15:55
I have a Brocade fibre channel switch here. Can it be used as a fence device?
If a Brocade Switch 200E can act as a fence device, how do I set it up?
Name:  the switch's name?
IP Address: the switch's IP address?
Login:  the default admin user name, right?
Password: the admin password?
Does some port need to be opened on the fibre switch? What else must be configured for it to act as a fence device?
Author: su_hub    Posted: 2006-12-28 15:57
As I recall, the RHCS manual covers the "adding a fencing device" part in detail, and your Brocade switch seems to be included; go take a look.
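
For reference, those four GUI fields correspond to a <fencedevice> entry like the sketch below (all values are placeholders); in addition, each node's <device> line names the switch port its HBA is plugged into:

        <fencedevices>
                <fencedevice agent="fence_brocade" name="brocadefence" ipaddr="192.168.0.240" login="admin" passwd="password"/>
        </fencedevices>

                        <fence>
                                <method name="1">
                                        <device name="brocadefence" port="3"/>
                                </method>
                        </fence>

Note that SAN fencing like this only cuts the failed node off from shared storage; unlike a power switch it does not reboot the node, so the node still has to be dealt with before it can rejoin the cluster.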
Author: SUNfan    Posted: 2006-12-28 16:03
OK, I'll go read up on adding a Brocade switch fence right away.
You said AS 4's RHCS added a "quorum disk". Does that mean adding two shared raw partitions on the shared disk, to be used for fencing?
But I don't know where in the GUI (see the attached screenshot) the shared disk gets added.

[Attachment: 123.JPG, screenshot of the system-config-cluster GUI]

Author: su_hub    Posted: 2006-12-28 16:06
Red Hat Cluster Suite 4 RHEL4 U4 Release Notes


Copyright(c) 2006 Red Hat, Inc.
        -------------------------------------------------------

September 19, 2006

Introduction

   The following topics are covered in this document:
   
     o Changes to Red Hat Cluster Suite 4

     o Important Notes
     
     o Bugs Fixed in the Release

     o Related Documentation
     
   
Changes to Red Hat Cluster Suite 4

  Quorum Disk
   
   Quorum Disk is a new feature available with this release. The
   Quorum Disk feature (also known as qdisk) allows you to configure
   arbitrary heuristics so that each cluster member can determine its
   fitness for participating in a cluster. The fitness information is
   communicated to other cluster members via a "quorum disk" residing
   on shared storage.
   
   With properly configured heuristics, you could define the following
   cluster behavior:

   * In the event of a network-partition failure, provide a method to
     decide which member wins the fence race in a two-node cluster.

   * Allow continued cluster operation after a majority failure without
     manual intervention.

  Quorum Disk communicates with CMAN, ccsd (the Cluster Configuration
  System daemon), and shared storage. It communicates with CMAN to
  advertise quorum-device availability. It communicates with ccsd to
  obtain configuration information. It communicates with shared storage to
  check and record states.

  You can find more information about Quorum Disk in the following man
  pages: mkqdisk(8), qdiskd(8), and qdisk(5).

  NOTE: For this release, you must configure Quorum Disk by editing
  the cluster configuration file, /etc/cluster/cluster.conf, directly
  rather than by using the cluster configuration graphical user
  interface (system-config-cluster).


ccs_tool Enhancements
  
  The ccs_tool includes new commands for this release. The new
  commands provide the ability to configure certain portions of the
  cluster configuration file (/etc/cluster/cluster.conf). In previous
  releases, the only tool available for creating and managing the
  cluster configuration file was the Cluster Configuration GUI
  (system-config-cluster). For more information about the new commands
  and usage examples, refer to the ccs_tool man page, ccs_tool(8).


Important Notes

  The up2date command has changed for RHEL4 U4. When installing
  cluster suite software, use this syntax:

  up2date --installall=<channel-label>
Author: SUNfan    Posted: 2006-12-28 16:12
I still don't see how the quorum disk actually gets added to the configuration file, though.
Author: SUNfan    Posted: 2006-12-28 16:20
APC and Brocade switch fencing are both automatic, right? Looking at the Brocade fence configuration, there seem to be just these four fields:
Is the configured information simply the user name and password for remotely logging in to the device at that IP address?
Also, do these addresses need to be on the same subnet as the servers, or at least able to ping each other?

[Attachment: 456.JPG, screenshot of the fence device configuration dialog]
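
On the reachability question: fence_brocade drives the switch through its telnet management interface, so each cluster node needs IP connectivity to that address (same subnet or routed both work). A quick hypothetical check from each node, with a placeholder switch IP:

[root@vm001 ~]# ping -c 3 192.168.0.240
[root@vm001 ~]# telnet 192.168.0.240
# log in with the same login/password configured in the fence device entry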

Author: su_hub    Posted: 2006-12-28 16:50
You can find more information about Quorum Disk in the following man
  pages: mkqdisk(8), qdiskd(8), and qdisk(5).
Author: SUNfan    Posted: 2006-12-28 16:59
What the man pages show is too jumbled. Is there anything that explains directly how to set up the shared quorum space?
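
For what it's worth, a minimal qdisk setup per those man pages might look like the sketch below. The device path, label, and heuristic are assumptions, and per the release notes quoted above the <quorumd> block has to be hand-edited into /etc/cluster/cluster.conf:

# Initialize a small (~10 MB) shared partition as the quorum disk:
[root@vm001 ~]# mkqdisk -c /dev/sdb2 -l myqdisk

Then, inside <cluster> in /etc/cluster/cluster.conf on both nodes:

        <quorumd interval="1" tko="10" votes="1" min_score="1" label="myqdisk">
                <heuristic program="ping -c1 -w1 192.168.0.1" score="1" interval="2"/>
        </quorumd>

Finally, start qdiskd on both nodes. A node that fails the heuristic (for example, it can no longer ping the gateway) loses its qdisk vote, which is what lets the surviving node win in exactly the pulled-cable scenario described above.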
Author: SUNfan    Posted: 2006-12-29 19:18
After the network was cut, the sybase service tried to fail over but never made it. The second machine's log is as follows:
Dec 29 17:30:21 web kernel: bnx2: eth1: using MSI
Dec 29 17:30:21 web kernel: bonding: bond0: enslaving eth1 as a backup interface with a down link.
Dec 29 17:30:21 web kernel: ip_tables: (C) 2000-2002 Netfilter core team
Dec 29 17:30:21 web kernel: bnx2: eth0: using MSI
Dec 29 17:30:21 web kernel: bnx2: eth1 NIC Link is Up, 1000 Mbps full duplex, receive & transmit flow control ON
Dec 29 17:30:21 web kernel: bonding: bond0: link status definitely up for interface eth1.
Dec 29 17:30:21 web kernel: bonding: bond0: making interface eth1 the new active one.
Dec 29 17:30:21 web kernel: bnx2: eth0 NIC Link is Up, 1000 Mbps full duplex, receive & transmit flow control ON
Dec 29 17:30:21 web kernel: ip_tables: (C) 2000-2002 Netfilter core team
Dec 29 17:30:21 web kernel: NET: Registered protocol family 10
Dec 29 17:30:21 web kernel: Disabled Privacy Extensions on device c0344160(lo)
Dec 29 17:30:21 web kernel: IPv6 over IPv4 tunneling driver
Dec 29 17:30:21 web kernel: CMAN 2.6.9-45.2 (built Jul 13 2006 11:42:36) installed
Dec 29 17:30:22 web kernel: NET: Registered protocol family 30
Dec 29 17:30:22 web kernel: DLM 2.6.9-42.10 (built Jul 13 2006 11:48:04) installed
Dec 29 17:30:22 web kernel: CMAN: Waiting to join or form a Linux-cluster
Dec 29 17:30:22 web kernel: CMAN: sending membership request
Dec 29 17:30:22 web kernel: CMAN: sending membership request
Dec 29 17:30:22 web kernel: CMAN: got node sybase
Dec 29 17:30:22 web kernel: CMAN: quorum regained, resuming activity
Dec 29 17:31:43 web fenced: startup succeeded
Dec 29 17:31:43 web kernel: Attached scsi generic sg0 at scsi0, channel 0, id 8, lun 0,  type 13
Dec 29 17:31:43 web kernel: Attached scsi generic sg1 at scsi0, channel 2, id 0, lun 0,  type 0
Dec 29 17:31:43 web kernel: Attached scsi generic sg2 at scsi1, channel 0, id 0, lun 0,  type 0
Dec 29 17:31:43 web kernel: Attached scsi generic sg3 at scsi1, channel 0, id 0, lun 1,  type 0
Dec 29 17:31:43 web kernel: Attached scsi generic sg4 at scsi1, channel 0, id 0, lun 2,  type 0
Dec 29 17:31:46 web Navisphere Agent[4243]: Agent initializing with pid 4243
Dec 29 17:31:46 web EV_AGENT[4254]: Agent daemon process created, pid 4254
Dec 29 17:31:46 web EV_AGENT[4254]: Agent has started up.
Dec 29 17:31:46 web naviagent: naviagent startup succeeded
Dec 29 17:31:46 web netfs: Mounting other filesystems:  succeeded
Dec 29 17:31:46 web kernel: i2c /dev entries driver
Dec 29 17:31:46 web rc: Starting lm_sensors:  succeeded
Dec 29 17:31:46 web autofs: automount startup succeeded
Dec 29 17:31:46 web smartd[4335]: smartd version 5.33 [i386-redhat-linux-gnu] Copyright (C) 2002-4 Bruce Allen
Dec 29 17:31:46 web smartd[4335]: Home page is [url]http://smartmontools.sourceforge.net/[/url]  
Dec 29 17:31:46 web smartd[4335]: Opened configuration file /etc/smartd.conf
Dec 29 17:31:46 web smartd[4335]: Configuration file /etc/smartd.conf parsed.
Dec 29 17:31:46 web smartd[4335]: Device: /dev/sda, opened
Dec 29 17:31:46 web smartd[4335]: Device: /dev/sda, Bad IEC (SMART) mode page, err=-5, skip device
Dec 29 17:31:46 web smartd[4335]: Unable to register SCSI device /dev/sda at line 30 of file /etc/smartd.conf
Dec 29 17:31:46 web smartd[4335]: Unable to register device /dev/sda (no Directive -d removable). Exiting.
Dec 29 17:31:46 web smartd: smartd startup failed
Dec 29 17:31:46 web acpid: acpid startup succeeded
Dec 29 17:31:47 web kernel: lp: driver loaded but no devices found
Dec 29 17:31:48 web cups: cupsd startup succeeded
Dec 29 17:31:48 web sshd:  succeeded
Dec 29 17:31:48 web xinetd: xinetd startup succeeded
Dec 29 17:31:48 web gpm[4420]: *** info [startup.c(95)]:
Dec 29 17:31:48 web gpm[4420]: Started gpm successfully. Entered daemon mode.
Dec 29 17:31:48 web xinetd[4410]: xinetd Version 2.3.13 started with libwrap loadavg options compiled in.
Dec 29 17:31:48 web xinetd[4410]: Started working: 0 available services
Dec 29 17:31:48 web gpm[4420]: *** info [mice.c(1766)]:
Dec 29 17:31:48 web gpm[4420]: imps2: Auto-detected intellimouse PS/2
Dec 29 17:31:48 web gpm: gpm startup succeeded
Dec 29 17:31:49 web iiim: htt startup succeeded
Dec 29 17:31:49 web crond: crond startup succeeded
Dec 29 17:31:49 web htt_server[4452]: started.
Dec 29 17:31:50 web xfs: xfs startup succeeded
Dec 29 17:31:50 web anacron: anacron startup succeeded
Dec 29 17:31:50 web atd: atd startup succeeded
Dec 29 17:31:50 web messagebus: messagebus startup succeeded
Dec 29 17:31:51 web cups-config-daemon: cups-config-daemon startup succeeded
Dec 29 17:31:51 web haldaemon: haldaemon startup succeeded
Dec 29 17:31:51 web clurgmgrd[4556]: <notice> Resource Group Manager Starting
Dec 29 17:31:51 web clurgmgrd[4556]: <info> Loading Service Data
Dec 29 17:31:51 web rgmanager: clurgmgrd startup succeeded
Dec 29 17:31:51 web fstab-sync[5131]: removed all generated mount points
Dec 29 17:31:51 web fstab-sync[5231]: added mount point /media/cdrom for /dev/hda
Dec 29 17:31:51 web clurgmgrd[4556]: <info> Initializing Services
Dec 29 17:31:51 web clurgmgrd: [4556]: <info> /dev/sdb1 is not mounted
Dec 29 17:31:51 web clurgmgrd: [4556]: <info> /dev/sdc1 is not mounted
Dec 29 17:31:51 web clurgmgrd: [4556]: <info> /dev/sdd1 is not mounted
Dec 29 17:31:52 web fstab-sync[5659]: added mount point /media/floppy for /dev/fd0
Dec 29 17:31:56 web clurgmgrd: [4556]: <info> Executing /etc/rc.d/init.d/sybaseHA.sh stop
Dec 29 17:31:56 web clurgmgrd: [4556]: <info> Executing /etc/rc.d/init.d/webHA.sh stop
Dec 29 17:31:56 web sybaseHA.sh: dataserver shutdown failed
Dec 29 17:31:56 web clurgmgrd[4556]: <notice> stop on script "cms-content" returned 5 (program not installed)
Dec 29 17:31:57 web clurgmgrd[4556]: <info> Services Initialized
Dec 29 17:31:58 web clurgmgrd[4556]: <info> Logged in SG "usrm::manager"
Dec 29 17:31:58 web clurgmgrd[4556]: <info> Magma Event: Membership Change
Dec 29 17:31:58 web clurgmgrd[4556]: <info> State change: Local UP
Dec 29 17:31:58 web clurgmgrd[4556]: <info> State change: sybase UP
Dec 29 17:31:59 web clurgmgrd[4556]: <info> Magma Event: Membership Change
Dec 29 17:31:59 web clurgmgrd[4556]: <info> State change: cms UP
Dec 29 17:32:01 web clurgmgrd[4556]: <notice> Starting stopped service webservice
Dec 29 17:32:01 web clurgmgrd: [4556]: <info> Adding IPv4 address 61.160.65.10 to eth0
Dec 29 17:32:02 web clurgmgrd: [4556]: <info> mounting /dev/sdc1 on /export/home/web
Dec 29 17:32:03 web kernel: kjournald starting.  Commit interval 5 seconds
Dec 29 17:32:03 web kernel: EXT3-fs warning: maximal mount count reached, running e2fsck is recommended
Dec 29 17:32:03 web kernel: EXT3 FS on sdc1, internal journal
Dec 29 17:32:03 web kernel: EXT3-fs: recovery complete.
Dec 29 17:32:03 web kernel: EXT3-fs: mounted filesystem with ordered data mode.
Dec 29 17:32:03 web clurgmgrd: [4556]: <info> Executing /etc/rc.d/init.d/webHA.sh start
Dec 29 17:33:20 web login(pam_unix)[4561]: session opened for user root by LOGIN(uid=0)
Dec 29 17:33:20 web  -- root[4561]: ROOT LOGIN ON tty1
Dec 29 17:34:29 web kernel: CMAN: removing node sybase from the cluster : Missed too many heartbeats
Dec 29 17:34:29 web fenced[4117]: sybase not a cluster member after 0 sec post_fail_delay
Dec 29 17:34:29 web fenced[4117]: fencing node "sybase"
Dec 29 17:34:31 web fenced[4117]: agent "fence_brocade" reports: failed: portshow 80 does not show DISABLED  
Dec 29 17:34:31 web fenced[4117]: fence "sybase" failed
Dec 29 17:34:36 web fenced[4117]: fencing node "sybase"
Dec 29 17:34:37 web fenced[4117]: agent "fence_brocade" reports: failed: portshow 80 does not show DISABLED  
Dec 29 17:34:37 web fenced[4117]: fence "sybase" failed
Dec 29 17:34:42 web fenced[4117]: fencing node "sybase"
Dec 29 17:34:44 web fenced[4117]: agent "fence_brocade" reports: failed: portshow 80 does not show DISABLED  
Dec 29 17:34:44 web fenced[4117]: fence "sybase" failed
Dec 29 17:34:49 web fenced[4117]: fencing node "sybase"
Dec 29 17:34:51 web fenced[4117]: agent "fence_brocade" reports: failed: portshow 80 does not show DISABLED  
Dec 29 17:34:51 web fenced[4117]: fence "sybase" failed
Dec 29 17:34:56 web fenced[4117]: fencing node "sybase"
Dec 29 17:34:57 web fenced[4117]: agent "fence_brocade" reports: failed: portshow 80 does not show DISABLED  
Dec 29 17:34:57 web fenced[4117]: fence "sybase" failed
Dec 29 17:35:02 web fenced[4117]: fencing node "sybase"

[Attachment: 123.JPG, screenshot of the cluster configuration]
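
The repeating "portshow 80 does not show DISABLED" error suggests the port number configured for the sybase node does not match the switch port its HBA actually occupies: fence_brocade disables the configured port and then checks portshow for a DISABLED state. This can be verified by hand on the switch (the port number below is a placeholder):

# telnet to the Brocade switch as admin, then:
switchshow      # lists each port and the WWN logged in on it;
                # find the port holding the sybase node's HBA WWN
portshow 2      # show that port's state
portdisable 2   # what fence_brocade does to fence the node
portenable 2    # re-enable ("unfence") the port after recovery

Given that a 200E has at most 16 ports, a configured port of 80 looks suspect on its face.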

Author: SUNfan    Posted: 2006-12-30 09:30
When adding the fence entry for the SW200E, which port should be selected?
Author: SUNfan    Posted: 2006-12-30 14:42
Waiting online for an answer on the fencing question!
Author: fuumax    Posted: 2006-12-30 16:59
Originally posted by SUNfan on 2006-12-30 14:42:
Waiting online for an answer on the fencing question!


Parameters like these really have nothing to do with technology, at least not with Linux technology.

Only three kinds of people can answer your question clearly:

someone who has configured the same model of device
the vendor or reseller of this brand of fence device
Red Hat's developers

Even if you phone Red Hat's 800 hotline, you will have a hard time getting a satisfactory answer, for the simple reason that the hotline engineers cannot have configured every fence device either.

You could search the Red Hat mailing lists for this device's keywords, or go with support from a dependable hardware vendor; Google rarely turns up answers to questions like this.

Of course, if this were my project, I think the fastest route would be to read the hardware manual and experiment myself.

Author: nntp    Posted: 2006-12-30 17:15
RH 800 (China) can handle this. Call them if you have purchased an RHCS subscription.
Author: SUNfan    Posted: 2006-12-30 18:17
The trouble is that we never purchased the RHCS software; the cluster was built entirely from downloads off the net, and now we have hit some difficulties!
I hope everyone can lend a hand!
Author: 好好先生    Posted: 2006-12-31 10:08
Everyone calm down; moderator NNTP's point isn't that questions can't be asked here.
Originally posted by fuumax on 2006-12-30 16:59:


Parameters like these really have nothing to do with technology, at least not with Linux technology.

Only three kinds of people can answer your question clearly:

someone who has configured the same model of device
the vendor or reseller of this brand of fence device
Red Hat's developers

Even if you phone Red Hat's 800 hotline, you will have a hard time getting a ...


Rather, as this brother says, you need to read the hardware's documentation yourself and try to solve the problem on your own. A forum is not some company's paid technical support; people here are free to answer a question or not. Someone who sells a paid service, by contrast, must be able to solve the customer's problems. Coming back to the forum: if you want help from others, ask with an open mind, and be glad to help others in turn. Helping others is helping yourself...



