Chinaunix
Subject: Problems configuring GFS. Please help!
Author: 04120103
Posted: 2008-05-09 11:11
My steps:
1. Edit /etc/hosts on all three machines (identical on all three):
127.0.0.1 localhost.localdomain localhost
192.168.18.240 gfs-node01
192.168.20.224 gfs-node02
192.168.0.141 gnbd-server
// The 127.0.0.1 line was already there; I did not remove it
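Before going further, the name-to-address mapping can be sanity-checked. This is my own sketch (the `check_hosts` helper is hypothetical, not a standard tool); the names and addresses are the ones from step 1:

```shell
# Sketch: confirm each cluster name appears in a hosts-format file and
# report which IP it maps to. The file path is passed as an argument.
check_hosts() {
  for name in gfs-node01 gfs-node02 gnbd-server; do
    ip=$(awk -v n="$name" '!/^#/ { for (i = 2; i <= NF; i++) if ($i == n) print $1 }' "$1")
    if [ -n "$ip" ]; then echo "$name -> $ip"; else echo "MISSING: $name"; fi
  done
}

# Run it against a copy of the /etc/hosts from step 1:
f=$(mktemp)
cat > "$f" <<'EOF'
127.0.0.1 localhost.localdomain localhost
192.168.18.240 gfs-node01
192.168.20.224 gfs-node02
192.168.0.141 gnbd-server
EOF
check_hosts "$f"
```

This prints one `name -> IP` line per machine, or a MISSING line if an entry was forgotten on one of the hosts.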
2. Generate the configuration with system-config-cluster:
Generate the configuration on the 192.168.18.240 machine and save it as /etc/cluster/cluster.conf.
Copy it to 192.168.20.224 and save it as /etc/cluster/cluster.conf there as well.
The configuration steps were:
Select DLM as the lock method.
Add two nodes named gfs-node01 and gfs-node02, each with Quorum Votes set to 1.
Add a fence device: Type "Global Network Block Device", Name "gnbd", Server "gnbd-server".
Add a failover domain named gnbd-server and put the nodes created above (gfs-node01, gfs-node02) into it.
For both gfs-node01 and gfs-node02, under "Manage Fencing For This Node", choose "Add a Fence Level".
// The server is a physical machine; the two nodes are virtual machines
The generated configuration file:
<cluster config_version="2" name="alpha_cluster">
  <fence_daemon clean_start="0" post_fail_delay="0" post_join_delay="3"/>
  <clusternodes>
    <clusternode name="gfs-node01" votes="1">
      <fence>
        <method name="1"/>
      </fence>
    </clusternode>
    <clusternode name="gfs-node02" votes="1">
      <fence>
        <method name="1"/>
      </fence>
    </clusternode>
  </clusternodes>
  <cman expected_votes="1" two_node="1"/>
  <fencedevices>
    <fencedevice agent="fence_gnbd" name="gnbd" server="gnbd-server"/>
  </fencedevices>
  <rm>
    <failoverdomains>
      <failoverdomain name="gnbd-server" ordered="0" restricted="0">
        <failoverdomainnode name="gfs-node01" priority="1"/>
        <failoverdomainnode name="gfs-node02" priority="1"/>
      </failoverdomain>
    </failoverdomains>
    <resources/>
  </rm>
</cluster>
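One detail worth watching: the status output later in this post reports "Config version: 4" even though this file says config_version="2". A small sketch (the `conf_summary` helper is hypothetical, my own addition) to pull both fields out of a cluster.conf so the copies on gfs-node01 and gfs-node02 can be compared:

```shell
# Sketch: print config_version and the cluster name from a cluster.conf.
# Useful for checking that both nodes carry the same revision of the file.
conf_summary() {
  sed -n 's/.*<cluster [^>]*config_version="\([0-9]*\)".*/config_version=\1/p' "$1"
  sed -n 's/.*<cluster [^>]*name="\([^"]*\)".*/name=\1/p' "$1"
}

# Demo on a minimal copy of the file's opening tag:
f=$(mktemp)
cat > "$f" <<'EOF'
<cluster config_version="2" name="alpha_cluster">
</cluster>
EOF
conf_summary "$f"
```

If the two nodes print different values, cman is not reading the same configuration on both.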
3. Load the kernel modules on both node machines:
modprobe gnbd
modprobe gfs
modprobe lock_dlm
4. Start the services on both node machines:
service ccsd start
service cman start
service fenced start
service clvmd start
service gfs start
service rgmanager start
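The order shown above matters (ccsd must be up before cman, and fencing before clvmd/gfs). A sketch of starting the services in that order and stopping at the first failure; `fake_service` is a hypothetical stand-in for the real `service` command so the sequence can be dry-run:

```shell
# Sketch: start the cluster services in the documented order, aborting at
# the first one that fails so the broken step is obvious. The runner is
# passed in, which lets us dry-run the sequence without a real init system.
start_all() {
  runner=$1
  for svc in ccsd cman fenced clvmd gfs rgmanager; do
    "$runner" "$svc" start || { echo "failed at $svc"; return 1; }
  done
}

# Dry-run: just print what would be executed.
fake_service() { echo "service $1 $2"; }
start_all fake_service
```

On a real node you would pass `service` as the runner instead of `fake_service`.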
5. On gnbd-server, run:
gnbd_serv -n
gnbd_export -c -d /dev/sda1 -e global_disk
Result:
gnbd_export: created GNBD global_disk serving file /dev/sda1
6. On both nodes, run:
gnbd_import -i gnbd-server
Result:
gnbd_import: created directory /dev/gnbd
gnbd_import: created gnbd device global_disk
gnbd_recvd: gnbd_recvd started
fence_tool join
// Running ccsd and cman_tool join here reports "already running" because I started those services earlier
7. On gfs-node01:
Run gfs_mkfs -p lock_dlm -t alpha_cluster:gfs -j 2 /dev/gnbd/global_disk
It fails with: gfs_mkfs: Partition too small for number/size of journals
I figured there was not enough space, so I shrank the journals: // the minimum is 32, the default is 128 (MB)
gfs_mkfs -p lock_dlm -t alpha_cluster:gfs -j 2 -J 32 /dev/gnbd/global_disk
Output:
This will destroy any data on /dev/gnbd/global_disk.
It appears to contain a EXT2/3 filesystem.
Are you sure you want to proceed? [y/n] y
Device: /dev/gnbd/global_disk
Blocksize: 4096
Filesystem Size: 9680
Journals: 2
Resource Groups: 8
Locking Protocol: lock_dlm
Lock Table: alpha_cluster:gfs
Syncing...
All Done
That worked.
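The arithmetic behind that error is worth spelling out (my own back-of-envelope; the 128 MB default and 32 MB minimum are the values noted in the comment above):

```shell
# gfs_mkfs reserves one journal per node (-j 2 here) before any data blocks:
journals=2
default_mb=128   # default journal size in MB
min_mb=32        # smallest size -J accepts, in MB
echo "with defaults: $((journals * default_mb)) MB of journals needed"
echo "with -J 32:    $((journals * min_mb)) MB of journals needed"
```

With Blocksize 4096 and Filesystem Size 9680 blocks, the data area in the output above is only about 38 MB, so the whole partition is evidently far smaller than the 256 MB that the default journal size would require.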
8. Create a /gfstest directory on both node machines, then run:
mount -t gfs /dev/gnbd/global_disk /gfstest
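To make this mount persistent across reboots, a matching /etc/fstab entry could look like the fragment below (my own addition, not from the post; _netdev is the usual way to defer a network-backed mount until networking is up, with the gfs init service doing the actual mounting):

```
/dev/gnbd/global_disk  /gfstest  gfs  defaults,_netdev  0 0
```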
The problem:
Running cat /proc/cluster/status on all three machines gives:
gfs-node01:
Protocol version: 5.0.1
Config version: 4
Cluster name: alpha_cluster
Cluster ID: 50356
Cluster Member: Yes
Membership state: Cluster-Member
Nodes: 1
Expected_votes: 1
Total_votes: 1
Quorum: 1
Active subsystems: 6
Node name: gfs-node01
Node addresses: 192.168.18.240
gfs-node02:
Protocol version: 5.0.1
Config version: 4
Cluster name: alpha_cluster
Cluster ID: 50356
Cluster Member: Yes
Membership state: Cluster-Member
Nodes: 1
Expected_votes: 1
Total_votes: 1
Quorum: 1
Active subsystems: 6
Node name: gfs-node02
Node addresses: 192.168.20.224
gnbd-server:
[root@gnbd-server ~]# cat /proc/cluster/status
Protocol version: 5.0.1
Config version: 0
Cluster name:
Cluster ID: 0
Cluster Member: No
Membership state: Not-in-Cluster
Running cat /proc/cluster/nodes gives:
gfs-node01:
Node Votes Exp Sts Name
1 1 1 M gfs-node01
gfs-node02:
Node Votes Exp Sts Name
1 1 1 M gfs-node02
gnbd-server:
Node Votes Exp Sts Name
From this output it feels like the three machines are each on their own... Why?
How can this happen!!!!
What is going wrong here? Why does the server see no nodes at all?
help me !!!!
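One hedged reading of the output above (my interpretation; the thread itself never confirms it): each node reports Nodes: 1 with only itself as a member, and the three machines sit on three different /24 subnets. CMAN of this vintage discovers its peers via broadcast/multicast on the local network, which routers do not forward by default, so nodes on different subnets would each form their own one-node cluster, which is exactly what the status output shows. Note also that gnbd-server is not listed as a clusternode in cluster.conf, so its Not-in-Cluster state is expected. The subnet split can be read straight off the addresses from step 1:

```shell
# The three addresses from /etc/hosts fall into three different /24 networks:
for ip in 192.168.18.240 192.168.20.224 192.168.0.141; do
  echo "$ip is on ${ip%.*}.0/24"
done
```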
[ This post was last edited by 04120103 on 2008-5-9 11:24 ]
Author: caicheng1015
Posted: 2008-05-10 22:36
How did you configure GNBD? I don't see any GNBD configuration in what you posted.