Chinaunix

Views: 6236 | Replies: 16

Does anyone know which file holds the heartbeat addresses when setting up a two-node cluster?

#1 · Posted 2008-05-12 20:50

While setting up the two-node cluster I mixed up the public-network address and the heartbeat address. After changing them back, the following error is reported:

# cmcheckconf -v -C /etc/cmcluster/cluster.ascii
Checking cluster file: /etc/cmcluster/cluster.ascii
Note : a NODE_TIMEOUT value of 2000000 was found in line 129. For a
significant portion of installations, a higher setting is more appropriate.
Refer to the comments in the cluster configuration ascii file or Serviceguard
manual for more information on this parameter.
Checking nodes ... Done
Checking existing configuration ... Done
Node jk2hs1-1 is refusing Serviceguard communication.
Please make sure that the proper security access is configured on node
jk2hs1-1 through either file-based access (pre-A.11.16 version) or role-based
access (version A.11.16 or higher) and/or that the host name lookup
on node jk2hs1-1 resolves the IP address correctly.
cmcheckconf: Failed to gather configuration information

I suspect the heartbeat addresses were never changed back, because after unplugging the public-network cable I can no longer rlogin. Thanks, everyone!
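The last hint in the error ("the host name lookup on node jk2hs1-1 resolves the IP address correctly") can be checked mechanically. A minimal POSIX-shell sketch; the hosts content below is sample data copied from later in this thread, not a live lookup, so point awk at the real /etc/hosts in practice:

```shell
# Count how many hosts-file entries each Serviceguard node name has;
# each node name should resolve unambiguously (exactly one entry).
hosts=$(mktemp)
cat > "$hosts" <<'EOF'
192.1.22.1      jk2hs1-1
192.1.22.2      jk2hs2-1
127.0.0.1       localhost       loopback
EOF
result=""
for node in jk2hs1-1 jk2hs2-1; do
  # scan every name field (2..NF) on every line for this node name
  n=$(awk -v h="$node" '{for (i = 2; i <= NF; i++) if ($i == h) c++} END {print c + 0}' "$hosts")
  result="$result$node=$n "
done
echo "$result"
rm -f "$hosts"
```

A count of 0 (name missing) or more than 1 (duplicate/conflicting entries) for either node would be consistent with the "refusing Serviceguard communication" symptom.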

#2 · Posted 2008-05-12 22:49

Then just make the change correctly, and synchronize and distribute the configuration file.
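For reference, the usual Serviceguard flow after editing the ascii file is cmcheckconf to validate and cmapplyconf to compile and distribute the binary configuration to the nodes; exact options can vary by version, so treat the commented commands as a sketch. The runnable part below only sanity-checks NODE_NAME entries against the file's own rule (no domain suffix), on a sample fragment:

```shell
# Typical Serviceguard sequence (hedged; consult your version's man pages):
#   cmcheckconf -v -C /etc/cmcluster/cluster.ascii   # validate only
#   cmapplyconf -v -C /etc/cmcluster/cluster.ascii   # compile and distribute
# Pre-check: NODE_NAME must be a plain hostname, not a fully qualified name.
frag=$(mktemp)
cat > "$frag" <<'EOF'
NODE_NAME               jk2hs1-1
NODE_NAME               jk2hs2-1
EOF
verdict=$(awk '$1 == "NODE_NAME" && $2 ~ /\./ {bad = 1}
               END {print (bad ? "domain suffix found" : "node names OK")}' "$frag")
echo "$verdict"
rm -f "$frag"
```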

#3 · Posted 2008-05-12 22:54

Quoting 五“宅”一生 (2008-5-12 22:49):
Then just make the change correctly, and synchronize and distribute the configuration file.

How do I make the change correctly?

#4 · Posted 2008-05-13 17:12

(Unformatted terminal dump: /.rhosts, /etc/hosts.equiv, /etc/cmcluster/cmclnodelist, and /etc/hosts from both nodes, the vgdisplay output, the full /etc/cmcluster/cluster.ascii, and the same cmcheckconf error as in post #1. The identical material is reposted, formatted, in posts #8 and #9 below.)

#5 · Posted 2008-05-13 17:20

Then edit the /etc/cmcluster/cluster.ascii file.

#6 · Posted 2008-05-13 17:21

If it's an IP change, you can do it with SAM.

#7 · Posted 2008-05-13 19:09

Quoting wgyin (2008-5-13 17:20):
Then edit the /etc/cmcluster/cluster.ascii file.

Change that 2000000 (the NODE_TIMEOUT)? That's no use.
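To answer the thread title directly: the heartbeat addresses live in the HEARTBEAT_IP entries of /etc/cmcluster/cluster.ascii (the full file appears in post #8). A small awk sketch that lists node-to-heartbeat mappings; the fragment below is copied from this thread, and in practice you would run the awk line against the real file:

```shell
# Map each NODE_NAME to its HEARTBEAT_IP entries in a cluster.ascii fragment.
frag=$(mktemp)
cat > "$frag" <<'EOF'
NODE_NAME               jk2hs1-1
  NETWORK_INTERFACE     lan2
    HEARTBEAT_IP        192.168.1.1
  NETWORK_INTERFACE     lan0
    HEARTBEAT_IP        192.1.22.1
NODE_NAME               jk2hs2-1
  NETWORK_INTERFACE     lan2
    HEARTBEAT_IP        192.168.1.2
  NETWORK_INTERFACE     lan0
    HEARTBEAT_IP        192.1.22.2
EOF
# remember the current node, print it next to every heartbeat address
pairs=$(awk '$1 == "NODE_NAME" {node = $2}
             $1 == "HEARTBEAT_IP" {print node, $2}' "$frag")
echo "$pairs"
rm -f "$frag"
```

If the public and heartbeat addresses were swapped, it would show up here as the wrong subnet attached to the wrong interface's HEARTBEAT_IP line.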

#8 · Posted 2008-05-15 22:42

Adding some more information:
# vgdisplay -v vgdata
vgdisplay: Volume group not activated.
vgdisplay: Cannot display volume group "vgdata".
# vgchange -a y /dev/vgdata
Activated volume group
Volume group "/dev/vgdata" has been successfully changed.
# vgchange -a y /dev/vgdata
Volume group "/dev/vgdata" has been successfully changed.
# vgdisplay -v vgdata
--- Volume groups ---
VG Name                     /dev/vgdata
VG Write Access             read/write     
VG Status                   available                 
Max LV                      255   
Cur LV                      0      
Open LV                     0      
Max PV                      16     
Cur PV                      1      
Act PV                      1      
Max PE per PV               12799        
VGDA                        2   
PE Size (Mbytes)            64              
Total PE                    12798   
Alloc PE                    0      
Free PE                     12798   
Total PVG                   0        
Total Spare PVs             0              
Total Spare PVs in use      0                     


   --- Physical volumes ---
   PV Name                     /dev/dsk/c2t0d1
   PV Name                     /dev/dsk/c6t0d1  Alternate Link
   PV Status                   available               
   Total PE                    12798   
   Free PE                     12798   
   Autoswitch                  On        


# vgchange -a n vgdata
Volume group "vgdata" has been successfully changed.
# vgchange -a n vglock
vgchange: Volume group "vglock" has been successfully changed.
# vgdisplay
--- Volume groups ---
VG Name                     /dev/vg00
VG Write Access             read/write     
VG Status                   available                 
Max LV                      255   
Cur LV                      9      
Open LV                     9      
Max PV                      16     
Cur PV                      2      
Act PV                      2      
Max PE per PV               4356         
VGDA                        4   
PE Size (Mbytes)            32              
Total PE                    8692   
Alloc PE                    5672   
Free PE                     3020   
Total PVG                   0        
Total Spare PVs             0              
Total Spare PVs in use      0                     

vgdisplay: Volume group not activated.
vgdisplay: Cannot display volume group "/dev/vgdata".
vgdisplay: Volume group not activated.
vgdisplay: Cannot display volume group "/dev/vglock".
# more /.rhosts
jk2hs1-1        root
jk2hs2-1        root

# more /etc/hosts.equiv
jk2hs1-1        root
jk2hs2-1        root

# more /etc/cmcluster/cmclnodelist
jk2hs1-1        root
jk2hs2-1        root

# more /etc/cmcluster/cluster.ascii
# **********************************************************************
# ********* HIGH AVAILABILITY CLUSTER CONFIGURATION FILE ***************
# ***** For complete details about cluster parameters and how to *******
# ***** set them, consult the Serviceguard manual. *********************
# **********************************************************************

# Enter a name for this cluster.  This name will be used to identify the
# cluster when viewing or manipulating it.

CLUSTER_NAME            cluster1


# Cluster Lock Parameters
# The cluster lock is used as a tie-breaker for situations
# in which a running cluster fails, and then two equal-sized
# sub-clusters are both trying to form a new cluster.  The
# cluster lock may be configured using only one of the
# following alternatives on a cluster:
#          the LVM lock disk
#          the quorom server
#
#
# Consider the following when configuring a cluster.
# For a two-node cluster, you must use a cluster lock.  For
# a cluster of three or four nodes, a cluster lock is strongly
# recommended.  For a cluster of more than four nodes, a
# cluster lock is recommended.  If you decide to configure
# a lock for a cluster of more than four nodes, it must be
# a quorum server.

# Lock Disk Parameters.  Use the FIRST_CLUSTER_LOCK_VG and
# FIRST_CLUSTER_LOCK_PV parameters to define a lock disk.
# The FIRST_CLUSTER_LOCK_VG is the LVM volume group that
# holds the cluster lock. This volume group should not be
# used by any other cluster as a cluster lock device.  

# Quorum Server Parameters. Use the QS_HOST, QS_POLLING_INTERVAL,
# and QS_TIMEOUT_EXTENSION parameters to define a quorum server.
# The QS_HOST is the host name or IP address of the system
# that is running the quorum server process.  The
# QS_POLLING_INTERVAL (microseconds) is the interval at which
# Serviceguard checks to make sure the quorum server is running.
# The optional QS_TIMEOUT_EXTENSION (microseconds) is used to increase
# the time interval after which the quorum server is marked DOWN.
#
# The default quorum server timeout is calculated from the
# Serviceguard cluster parameters, including NODE_TIMEOUT and
# HEARTBEAT_INTERVAL.  If you are experiencing quorum server
# timeouts, you can adjust these parameters, or you can include
# the QS_TIMEOUT_EXTENSION parameter.
#
# The value of QS_TIMEOUT_EXTENSION will directly effect the amount
# of time it takes for cluster reformation in the event of failure.
# For example, if QS_TIMEOUT_EXTENSION is set to 10 seconds, the cluster
# reformation will take 10 seconds longer than if the QS_TIMEOUT_EXTENSION
# was set to 0. This delay applies even if there is no delay in
# contacting the Quorum Server.  The recommended value for
# QS_TIMEOUT_EXTENSION is 0, which is used as the default
# and the maximum supported value is 30000000 (5 minutes).
#
# For example, to configure a quorum server running on node
# "qshost" with 120 seconds for the QS_POLLING_INTERVAL and to
# add 2 seconds to the system assigned value for the quorum server
# timeout, enter:
#
# QS_HOST qshost
# QS_POLLING_INTERVAL 120000000
# QS_TIMEOUT_EXTENSION 2000000

FIRST_CLUSTER_LOCK_VG           /dev/vglock


# Definition of nodes in the cluster.
# Repeat node definitions as necessary for additional nodes.
# NODE_NAME is the specified nodename in the cluster.
# It must match the hostname and both cannot contain full domain name.
# Each NETWORK_INTERFACE, if configured with IPv4 address,
# must have ONLY one IPv4 address entry with it which could
# be either HEARTBEAT_IP or STATIONARY_IP.
# Each NETWORK_INTERFACE, if configured with IPv6 address(es)
# can have multiple IPv6 address entries(up to a maximum of 2,
# only one IPv6 address entry belonging to site-local scope
# and only one belonging to global scope) which must be all
# STATIONARY_IP. They cannot be HEARTBEAT_IP.


NODE_NAME               jk2hs1-1
  NETWORK_INTERFACE     lan2
    HEARTBEAT_IP        192.168.1.1
  NETWORK_INTERFACE     lan0
    HEARTBEAT_IP        192.1.22.1
  FIRST_CLUSTER_LOCK_PV /dev/dsk/c2t0d2
# List of serial device file names
# For example:
# SERIAL_DEVICE_FILE    /dev/tty0p0


NODE_NAME               jk2hs2-1
  NETWORK_INTERFACE     lan2
    HEARTBEAT_IP        192.168.1.2
  NETWORK_INTERFACE     lan0
    HEARTBEAT_IP        192.1.22.2
  FIRST_CLUSTER_LOCK_PV /dev/dsk/c2t0d2
# List of serial device file names
# For example:
# SERIAL_DEVICE_FILE    /dev/tty0p0



# Cluster Timing Parameters (microseconds).

# The NODE_TIMEOUT parameter defaults to 2000000 (2 seconds).
# This default setting yields the fastest cluster reformations.
# However, the use of the default value increases the potential
# for spurious reformations due to momentary system hangs or
# network load spikes.
# For a significant portion of installations, a setting of
# 5000000 to 8000000 (5 to 8 seconds) is more appropriate.
# The maximum value recommended for NODE_TIMEOUT is 30000000
# (30 seconds).

HEARTBEAT_INTERVAL              1000000
NODE_TIMEOUT            2000000


# Configuration/Reconfiguration Timing Parameters (microseconds).

AUTO_START_TIMEOUT      600000000
NETWORK_POLLING_INTERVAL        2000000

# Network Monitor Configuration Parameters.
# The NETWORK_FAILURE_DETECTION parameter determines how LAN card failures are detected.
# If set to INONLY_OR_INOUT, a LAN card will be considered down when its inbound
# message count stops increasing or when both inbound and outbound
# message counts stop increasing.
# If set to INOUT, both the inbound and outbound message counts must
# stop increasing before the card is considered down.
NETWORK_FAILURE_DETECTION               INOUT

# Package Configuration Parameters.
# Enter the maximum number of packages which will be configured in the cluster.
# You can not add packages beyond this limit.
# This parameter is required.
MAX_CONFIGURED_PACKAGES         5


# Access Control Policy Parameters.
#
# Three entries set the access control policy for the cluster:
# First line must be USER_NAME, second USER_HOST, and third USER_ROLE.
# Enter a value after each.
#
# 1. USER_NAME can either be ANY_USER, or a maximum of
#    8 login names from the /etc/passwd file on user host.
# 2. USER_HOST is where the user can issue Serviceguard commands.
#    If using Serviceguard Manager, it is the COM server.
#    Choose one of these three values: ANY_SERVICEGUARD_NODE, or
#    (any) CLUSTER_MEMBER_NODE, or a specific node. For node,
#    use the official hostname from domain name server, and not
#    an IP addresses or fully qualified name.
# 3. USER_ROLE must be one of these three values:
#    * MONITOR: read-only capabilities for the cluster and packages
#    * PACKAGE_ADMIN: MONITOR, plus administrative commands for packages
#      in the cluster
#    * FULL_ADMIN: MONITOR and PACKAGE_ADMIN plus the administrative
#      commands for the cluster.
#
# Access control policy does not set a role for configuration
# capability. To configure, a user must log on to one of the
# cluster's nodes as root (UID=0). Access control
# policy cannot limit root users' access.
#
# MONITOR and FULL_ADMIN can only be set in the cluster configuration file,
# and they apply to the entire cluster. PACKAGE_ADMIN can be set in the  
# cluster or a package configuration file. If set in the cluster
# configuration file, PACKAGE_ADMIN applies to all configured packages.
# If set in a package configuration file, PACKAGE_ADMIN applies to that
# package only.
#
# Conflicting or redundant policies will cause an error while applying
# the configuration, and stop the process. The maximum number of access
# policies that can be configured in the cluster is 200.
#
# Example: to configure a role for user john from node noir to
# administer a cluster and all its packages, enter:
# USER_NAME  john
# USER_HOST  noir
# USER_ROLE  FULL_ADMIN


# List of cluster aware LVM Volume Groups. These volume groups will
# be used by package applications via the vgchange -a e command.
# Neither CVM or VxVM Disk Groups should be used here.
# For example:
# VOLUME_GROUP          /dev/vgdatabase
# VOLUME_GROUP          /dev/vg02

VOLUME_GROUP            /dev/vglock
VOLUME_GROUP            /dev/vgdata
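A note on the repeated "Volume group not activated" messages above: in a Serviceguard setup the shared volume groups (vgdata and vglock here) are typically left deactivated on an idle node and activated, often exclusively with vgchange -a e, by the package control script, so those messages are not necessarily an error. A small parser that pulls the inactive VG names out of a vgdisplay transcript (the sample text is copied from this thread):

```shell
# Extract the names of volume groups that a vgdisplay transcript
# reports as not activated. Field separator '"' isolates the quoted VG name.
out=$(awk -F'"' '/Cannot display volume group/ {print $2}' <<'EOF'
vgdisplay: Volume group not activated.
vgdisplay: Cannot display volume group "/dev/vgdata".
vgdisplay: Volume group not activated.
vgdisplay: Cannot display volume group "/dev/vglock".
EOF
)
echo "$out"
```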

#9 · Posted 2008-05-15 22:50

#      
# hostname
jk2hs1-1
# more /.rhosts
jk2hs1-1        root
jk2hs2-1        root

# more /etc/hosts.equiv
jk2hs1-1        root
jk2hs2-1        root

# more /etc/cmcluster/cmclnodelist   
jk2hs1-1        root
jk2hs2-1        root

# more /etc/hosts
192.1.22.1      jk2hs1-1
192.1.22.2      jk2hs2-1               
127.0.0.1       localhost       loopback
# rlogin jk2hs2-1
Please wait...checking for disk quotas
(c)Copyright 1983-2003 Hewlett-Packard Development Company, L.P.
(c)Copyright 1979, 1980, 1983, 1985-1993 The Regents of the Univ. of California
(c)Copyright 1980, 1984, 1986 Novell, Inc.
(c)Copyright 1986-2000 Sun Microsystems, Inc.
(c)Copyright 1985, 1986, 1988 Massachusetts Institute of Technology
(c)Copyright 1989-1993  The Open Software Foundation, Inc.
(c)Copyright 1990 Motorola, Inc.
(c)Copyright 1990, 1991, 1992 Cornell University
(c)Copyright 1989-1991 The University of Maryland
(c)Copyright 1988 Carnegie Mellon University
(c)Copyright 1991-2003 Mentat Inc.
(c)Copyright 1996 Morning Star Technologies, Inc.
(c)Copyright 1996 Progressive Systems, Inc.
  

                  RESTRICTED RIGHTS LEGEND
Use, duplication, or disclosure by the U.S. Government is subject to
restrictions as set forth in sub-paragraph (c)(1)(ii) of the Rights in
Technical Data and Computer Software clause in DFARS 252.227-7013.


                  Hewlett-Packard Company
                  3000 Hanover Street
                  Palo Alto, CA 94304 U.S.A.

Rights for non-DOD U.S. Government Departments and Agencies are as set
forth in FAR 52.227-19(c)(1,2).
You have mail.
                                                                        
Value of TERM has been set to "70092".
WARNING:  YOU ARE SUPERUSER !!
# hostname
jk2hs2-1
#
# more /.rhosts
jk2hs1-1        root
jk2hs2-1        root
# more /etc/hosts.equiv
jk2hs1-1        root
jk2hs2-1        root

# more /etc/cmcluster/cmclnodelist
jk2hs1-1        root
jk2hs2-1        root
# more /etc/hosts
192.1.22.2      jk2hs2-1
192.1.22.1      jk2hs1-1
127.0.0.1       localhost       loopback
# vgdisplay
--- Volume groups ---
VG Name                     /dev/vg00
VG Write Access             read/write     
VG Status                   available                 
Max LV                      255   
Cur LV                      9      
Open LV                     9      
Max PV                      16     
Cur PV                      2      
Act PV                      2      
Max PE per PV               4356         
VGDA                        4   
PE Size (Mbytes)            32              
Total PE                    8692   
Alloc PE                    5672   
Free PE                     3020   
Total PVG                   0        
Total Spare PVs             0              
Total Spare PVs in use      0                     

vgdisplay: Volume group not activated.
vgdisplay: Cannot display volume group "/dev/vgdata".
vgdisplay: Volume group not activated.
vgdisplay: Cannot display volume group "/dev/vglock".
#
# exit
logout root
Connection closed.
#
# hostname
jk2hs1-1
# vgdisplay
--- Volume groups ---
VG Name                     /dev/vg00
VG Write Access             read/write     
VG Status                   available                 
Max LV                      255   
Cur LV                      9      
Open LV                     9      
Max PV                      16     
Cur PV                      2      
Act PV                      2      
Max PE per PV               4356         
VGDA                        4   
PE Size (Mbytes)            32              
Total PE                    8692   
Alloc PE                    5672   
Free PE                     3020   
Total PVG                   0        
Total Spare PVs             0              
Total Spare PVs in use      0                     

vgdisplay: Volume group not activated.
vgdisplay: Cannot display volume group "/dev/vgdata".
vgdisplay: Volume group not activated.
vgdisplay: Cannot display volume group "/dev/vglock".
#
# more /etc/cmcluster/cluster.ascii
# **********************************************************************
# ********* HIGH AVAILABILITY CLUSTER CONFIGURATION FILE ***************
# ***** For complete details about cluster parameters and how to *******
# ***** set them, consult the Serviceguard manual. *********************
# **********************************************************************

# Enter a name for this cluster.  This name will be used to identify the
# cluster when viewing or manipulating it.

CLUSTER_NAME            cluster1


# Cluster Lock Parameters
# The cluster lock is used as a tie-breaker for situations
# in which a running cluster fails, and then two equal-sized
# sub-clusters are both trying to form a new cluster.  The
# cluster lock may be configured using only one of the
# following alternatives on a cluster:
#          the LVM lock disk
#          the quorom server
#
#
# Consider the following when configuring a cluster.
# For a two-node cluster, you must use a cluster lock.  For
# a cluster of three or four nodes, a cluster lock is strongly
# recommended.  For a cluster of more than four nodes, a
# cluster lock is recommended.  If you decide to configure
# a lock for a cluster of more than four nodes, it must be
# a quorum server.

# Lock Disk Parameters.  Use the FIRST_CLUSTER_LOCK_VG and
# FIRST_CLUSTER_LOCK_PV parameters to define a lock disk.
# The FIRST_CLUSTER_LOCK_VG is the LVM volume group that
# holds the cluster lock. This volume group should not be
# used by any other cluster as a cluster lock device.  

# Quorum Server Parameters. Use the QS_HOST, QS_POLLING_INTERVAL,
# and QS_TIMEOUT_EXTENSION parameters to define a quorum server.
# The QS_HOST is the host name or IP address of the system
# that is running the quorum server process.  The
# QS_POLLING_INTERVAL (microseconds) is the interval at which
# Serviceguard checks to make sure the quorum server is running.
# The optional QS_TIMEOUT_EXTENSION (microseconds) is used to increase
# the time interval after which the quorum server is marked DOWN.
#
# The default quorum server timeout is calculated from the
# Serviceguard cluster parameters, including NODE_TIMEOUT and
# HEARTBEAT_INTERVAL.  If you are experiencing quorum server
# timeouts, you can adjust these parameters, or you can include
# the QS_TIMEOUT_EXTENSION parameter.
#
# The value of QS_TIMEOUT_EXTENSION will directly effect the amount
# of time it takes for cluster reformation in the event of failure.
# For example, if QS_TIMEOUT_EXTENSION is set to 10 seconds, the cluster
# reformation will take 10 seconds longer than if the QS_TIMEOUT_EXTENSION
# was set to 0. This delay applies even if there is no delay in
# contacting the Quorum Server.  The recommended value for
# QS_TIMEOUT_EXTENSION is 0, which is used as the default
# and the maximum supported value is 30000000 (5 minutes).
#
# For example, to configure a quorum server running on node
# "qshost" with 120 seconds for the QS_POLLING_INTERVAL and to
# add 2 seconds to the system assigned value for the quorum server
# timeout, enter:
#
# QS_HOST qshost
# QS_POLLING_INTERVAL 120000000
# QS_TIMEOUT_EXTENSION 2000000

FIRST_CLUSTER_LOCK_VG           /dev/vglock


# Definition of nodes in the cluster.
# Repeat node definitions as necessary for additional nodes.
# NODE_NAME is the specified nodename in the cluster.
# It must match the hostname and both cannot contain full domain name.
# Each NETWORK_INTERFACE, if configured with IPv4 address,
# must have ONLY one IPv4 address entry with it which could
# be either HEARTBEAT_IP or STATIONARY_IP.
# Each NETWORK_INTERFACE, if configured with IPv6 address(es)
# can have multiple IPv6 address entries (up to a maximum of 2,
# only one IPv6 address entry belonging to site-local scope
# and only one belonging to global scope) which must be all
# STATIONARY_IP. They cannot be HEARTBEAT_IP.


NODE_NAME               jk2hs1-1
  NETWORK_INTERFACE     lan2
    HEARTBEAT_IP        192.168.1.1
  NETWORK_INTERFACE     lan0
    HEARTBEAT_IP        192.1.22.1
  FIRST_CLUSTER_LOCK_PV /dev/dsk/c2t0d2
# List of serial device file names
# For example:
# SERIAL_DEVICE_FILE    /dev/tty0p0

# Warning: There are no standby network interfaces for lan1.
# Warning: There are no standby network interfaces for lan2.
# Warning: There are no standby network interfaces for lan0.

NODE_NAME               jk2hs2-1
  NETWORK_INTERFACE     lan2
    HEARTBEAT_IP        192.168.1.2
  NETWORK_INTERFACE     lan0
    HEARTBEAT_IP        192.1.22.2
  FIRST_CLUSTER_LOCK_PV /dev/dsk/c2t0d2
# List of serial device file names
# For example:
# SERIAL_DEVICE_FILE    /dev/tty0p0

# Warning: There are no standby network interfaces for lan1.
# Warning: There are no standby network interfaces for lan2.
# Warning: There are no standby network interfaces for lan0.


# Cluster Timing Parameters (microseconds).

# The NODE_TIMEOUT parameter defaults to 2000000 (2 seconds).
# This default setting yields the fastest cluster reformations.
# However, the use of the default value increases the potential
# for spurious reformations due to momentary system hangs or
# network load spikes.
# For a significant portion of installations, a setting of
# 5000000 to 8000000 (5 to 8 seconds) is more appropriate.
# The maximum value recommended for NODE_TIMEOUT is 30000000
# (30 seconds).

HEARTBEAT_INTERVAL              1000000
NODE_TIMEOUT            2000000


# Configuration/Reconfiguration Timing Parameters (microseconds).

AUTO_START_TIMEOUT      600000000
NETWORK_POLLING_INTERVAL        2000000

# Network Monitor Configuration Parameters.
# The NETWORK_FAILURE_DETECTION parameter determines how LAN card
# failures are detected.
# If set to INONLY_OR_INOUT, a LAN card will be considered down when its inbound
# message counts stop increasing.
# If set to INOUT, both the inbound and outbound message counts must
# stop increasing before the card is considered down.
NETWORK_FAILURE_DETECTION               INOUT

# Package Configuration Parameters.
# Enter the maximum number of packages which will be configured in the cluster.
# You cannot add packages beyond this limit.
# This parameter is required.
MAX_CONFIGURED_PACKAGES         5


# Access Control Policy Parameters.
#
# Three entries set the access control policy for the cluster:
# First line must be USER_NAME, second USER_HOST, and third USER_ROLE.
# Enter a value after each.
#
# 1. USER_NAME can either be ANY_USER, or a maximum of
#    8 login names from the /etc/passwd file on user host.
# 2. USER_HOST is where the user can issue Serviceguard commands.
#    If using Serviceguard Manager, it is the COM server.
#    Choose one of these three values: ANY_SERVICEGUARD_NODE, or
#    (any) CLUSTER_MEMBER_NODE, or a specific node. For node,
#    use the official hostname from the domain name server, not
#    an IP address or a fully qualified name.
# 3. USER_ROLE must be one of these three values:
#    * MONITOR: read-only capabilities for the cluster and packages
#    * PACKAGE_ADMIN: MONITOR, plus administrative commands for packages
#      in the cluster
#    * FULL_ADMIN: MONITOR and PACKAGE_ADMIN plus the administrative
#      commands for the cluster.
#
# Access control policy does not set a role for configuration
# capability. To configure, a user must log on to one of the
# cluster's nodes as root (UID=0). Access control
# policy cannot limit root users' access.
#
# MONITOR and FULL_ADMIN can only be set in the cluster configuration file,
# and they apply to the entire cluster. PACKAGE_ADMIN can be set in the  
# cluster or a package configuration file. If set in the cluster
# configuration file, PACKAGE_ADMIN applies to all configured packages.
# If set in a package configuration file, PACKAGE_ADMIN applies to that
# package only.
#
# Conflicting or redundant policies will cause an error while applying
# the configuration, and stop the process. The maximum number of access
# policies that can be configured in the cluster is 200.
#
# Example: to configure a role for user john from node noir to
# administer a cluster and all its packages, enter:
# USER_NAME  john
# USER_HOST  noir
# USER_ROLE  FULL_ADMIN


# List of cluster aware LVM Volume Groups. These volume groups will
# be used by package applications via the vgchange -a e command.
# Neither CVM nor VxVM Disk Groups should be used here.
# For example:
# VOLUME_GROUP          /dev/vgdatabase
# VOLUME_GROUP          /dev/vg02

VOLUME_GROUP            /dev/vglock
VOLUME_GROUP            /dev/vgdata
#
#
# cmcheckconf -v -C /etc/cmcluster/cluster.ascii
Checking cluster file: /etc/cmcluster/cluster.ascii
Note : a NODE_TIMEOUT value of 2000000 was found in line 129. For a
significant portion of installations, a higher setting is more appropriate.
Refer to the comments in the cluster configuration ascii file or Serviceguard
manual for more information on this parameter.
Checking nodes ... Done
Checking existing configuration ... Done
Node jk2hs1-1 is refusing Serviceguard communication.
Please make sure that the proper security access is configured on node
jk2hs1-1 through either file-based access (pre-A.11.16 version) or role-based
access (version A.11.16 or higher) and/or that the host name lookup
on node jk2hs1-1 resolves the IP address correctly.
cmcheckconf: Failed to gather configuration information
#
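The "host name lookup on node jk2hs1-1 resolves the IP address correctly" part of the error message is worth checking first. Below is a minimal, hypothetical sketch of that check against the /etc/hosts content posted earlier in the thread, assuming (as the node definitions suggest) that 192.1.22.x is the public/data LAN and 192.168.1.x is the heartbeat LAN:

```shell
# Check which address the node name resolves to in a sample /etc/hosts.
# If the name maps to the heartbeat LAN (i.e. the addresses were swapped),
# Serviceguard communication checks like cmcheckconf can fail.
hosts_sample='192.1.22.1 jk2hs1-1
192.1.22.2 jk2hs2-1
127.0.0.1 localhost loopback'

resolved=$(printf '%s\n' "$hosts_sample" | awk '$2 == "jk2hs1-1" { print $1; exit }')
case "$resolved" in
  192.168.1.*) echo "jk2hs1-1 maps to the heartbeat LAN - likely the swapped address" ;;
  192.1.22.*)  echo "jk2hs1-1 maps to the public LAN: $resolved" ;;
  *)           echo "jk2hs1-1 not found in hosts sample" ;;
esac
```

On the real system, the equivalent checks would be `nslookup jk2hs1-1` and inspecting the `hosts` line of /etc/nsswitch.conf on both nodes.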

Reply #10 | Posted 2008-05-15 22:58
After running cmcheckconf -C /etc/cmcluster/cmclconf.ascii, you get "Please make sure that the proper security access is configured on node ...".
rlogin relies on the trust relationship between the hosts, so it probably has little to do with changing the IP. I'm still learning this myself; I'm not sure whether my understanding is of any help.
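For file-based access (pre-A.11.16, which matches the /.rhosts and /etc/cmcluster/cmclnodelist files shown earlier in the thread), every node must hold an identical cmclnodelist with one "hostname user" pair per line for each cluster node. A small hypothetical format check, using the two node names from this cluster:

```shell
# Validate that each cmclnodelist entry has exactly two fields:
# a node name and a user name (root here). Malformed entries can
# produce the "refusing Serviceguard communication" error.
nodelist='jk2hs1-1 root
jk2hs2-1 root'

bad=$(printf '%s\n' "$nodelist" | awk 'NF != 2 { print NR }')
if [ -z "$bad" ]; then
  echo "cmclnodelist format OK"
else
  echo "malformed lines: $bad"
fi
```

Also remember that after changing the addresses, the file must be re-copied to the other node (e.g. with rcp) so both copies stay in sync.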