Problem attaching the same volume to two Solaris ZFS systems
Hi everyone, I've run into a problem, as the title says: I need to attach the same volume to two M4000 servers (Solaris 10 U10, ZFS). To make it easier to grow the space later, I want to manage it with ZFS and add any new volumes into DBpool. What I found is that the pool can be created successfully on the first server (db02), but creating it on the other server (db01) then requires "-f". And that in turn breaks the pool that was created first on db02; see the LOG below. Is my procedure wrong, or is this a limitation of ZFS? Could someone take a look? Thanks in advance!
I know dynamic disk expansion could also be done with SVM, but I'd like to do it with ZFS, which is easier to manage.
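For context, the future expansion I have in mind would look roughly like the following. This is only a sketch, and the second device name is a made-up placeholder, not a real LUN; the actual session log starts below:

root@db01 # zpool add DBpool c3tXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXd0   # hypothetical new LUN added to the pool
root@db01 # zpool list                                               # DBpool's SIZE grows by the size of the new LUN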
root@db01 # format
Searching for disks...done
AVAILABLE DISK SELECTIONS:
0. c0t0d0 <SUN146G cyl 14087 alt 2 hd 24 sec 848>
/pci@0,600000/pci@0/pci@8/pci@0/scsi@1/sd@0,0
1. c0t1d0 <SUN146G cyl 14087 alt 2 hd 24 sec 848>Solaris
/pci@0,600000/pci@0/pci@8/pci@0/scsi@1/sd@1,0
2. c3t600143801259DA290000F000000F0000d0 <HP-HSV360-1100-1.50TB>
/scsi_vhci/ssd@g600143801259da290000f000000f0000
Specify disk (enter its number): ^C
root@db01 # exit
logout
Connection to db01 closed.
root@db02 # format
Searching for disks...done
AVAILABLE DISK SELECTIONS:
0. c0t0d0 <SUN146G cyl 14087 alt 2 hd 24 sec 848>
/pci@0,600000/pci@0/pci@8/pci@0/scsi@1/sd@0,0
1. c0t1d0 <SUN146G cyl 14087 alt 2 hd 24 sec 848>
/pci@0,600000/pci@0/pci@8/pci@0/scsi@1/sd@1,0
2. c3t600143801259DA290000F000000F0000d0 <HP-HSV360-1100-1.50TB>
/scsi_vhci/ssd@g600143801259da290000f000000f0000
Specify disk (enter its number):
root@Db02 # zpool list
NAME    SIZE  ALLOC   FREE  CAP  HEALTH  ALTROOT
rpool   136G  39.7G  96.3G  29%  ONLINE  -
root@Db02 # zpool create DBpool c3t600143801259DA290000F000000F0000d0
invalid vdev specification
use '-f' to override the following errors:
/dev/dsk/c3t600143801259DA290000F000000F0000d0s0 is part of exported or potenti
root@Db02 # zpool create -f DBpool c3t600143801259DA290000F000000F0000d0
root@Db02 #
root@Db02 #
root@Db02 # zpool list
NAME     SIZE  ALLOC   FREE  CAP  HEALTH  ALTROOT
DBpool  1.49T    80K  1.49T   0%  ONLINE  -
rpool    136G  39.7G  96.3G  29%  ONLINE  -
root@Db02 # zpool status
pool: DBpool
state: ONLINE
scan: none requested
config:
NAME STATE READ WRITE CKSUM
DBpool ONLINE 0 0 0
c3t600143801259DA290000F000000F0000d0 ONLINE 0 0 0
errors: No known data errors
pool: rpool
state: ONLINE
scan: resilvered 39.7G in 0h12m with 0 errors on Wed Sep 18 14:56:41 2013
config:
NAME STATE READ WRITE CKSUM
rpool ONLINE 0 0 0
mirror-0 ONLINE 0 0 0
c0t1d0s0 ONLINE 0 0 0
c0t0d0s0 ONLINE 0 0 0
errors: No known data errors
root@Db02 # ssh db01
Last login: Sat Oct 19 19:34:31 2013 from db02
Oracle Corporation SunOS 5.10 Generic Patch January 2005
Sourcing //.profile-EIS.....
root@Db01 # zpool list
NAME    SIZE  ALLOC   FREE  CAP  HEALTH  ALTROOT
rpool   136G  46.2G  89.8G  33%  ONLINE  -
root@Db01 # format
Searching for disks...done
AVAILABLE DISK SELECTIONS:
0. c0t0d0 <SUN146G cyl 14087 alt 2 hd 24 sec 848>
/pci@0,600000/pci@0/pci@8/pci@0/scsi@1/sd@0,0
1. c0t1d0 <SUN146G cyl 14087 alt 2 hd 24 sec 848>Solaris
/pci@0,600000/pci@0/pci@8/pci@0/scsi@1/sd@1,0
2. c3t600143801259DA290000F000000F0000d0 <HP-HSV360-1100-1.50TB>
/scsi_vhci/ssd@g600143801259da290000f000000f0000
Specify disk (enter its number): zp^H^C
root@Db01 # zpool create DBpool c3t600143801259DA290000F000000F0000d0
invalid vdev specification
use '-f' to override the following errors:
/dev/dsk/c3t600143801259DA290000F000000F0000d0s0 is part of exported or potenti.
root@Db01 # zpool create -f DBpool c3t600143801259DA290000F000000F0000d0
root@Db01 # zpool list
NAME     SIZE  ALLOC   FREE  CAP  HEALTH  ALTROOT
DBpool  1.49T    80K  1.49T   0%  ONLINE  -
rpool    136G  46.2G  89.8G  33%  ONLINE  -
root@Db01 # zpool status
pool: DBpool
state: ONLINE
scan: none requested
config:
NAME STATE READ WRITE CKSUM
DBpool ONLINE 0 0 0
c3t600143801259DA290000F000000F0000d0 ONLINE 0 0 0
errors: No known data errors
pool: rpool
state: ONLINE
scan: resilvered 6.30G in 0h4m with 0 errors on Wed Sep 18 13:29:06 2013
config:
NAME STATE READ WRITE CKSUM
rpool ONLINE 0 0 0
mirror-0 ONLINE 0 0 0
c0t0d0s0 ONLINE 0 0 0
c0t1d0s0 ONLINE 0 0 0
errors: No known data errors
root@Db01 # exit
logout
Connection to db01 closed.
root@Db02 # zpool status
pool: DBpool
state: ONLINE
scan: none requested
config:
NAME STATE READ WRITE CKSUM
DBpool ONLINE 0 0 0
c3t600143801259DA290000F000000F0000d0 ONLINE 0 0 0
errors: No known data errors
pool: rpool
state: ONLINE
scan: resilvered 39.7G in 0h12m with 0 errors on Wed Sep 18 14:56:41 2013
config:
NAME STATE READ WRITE CKSUM
rpool ONLINE 0 0 0
mirror-0 ONLINE 0 0 0
c0t1d0s0 ONLINE 0 0 0
c0t0d0s0 ONLINE 0 0 0
errors: No known data errors
root@Db02 #
root@Db02 # ssh db01
Last login: Sat Oct 19 19:47:43 2013 from db02
Oracle Corporation SunOS 5.10 Generic Patch January 2005
Sourcing //.profile-EIS.....
root@Db01 # zfs create DBpool/lm
root@Db01 # df -h
Filesystem             size   used  avail capacity  Mounted on
rpool/ROOT/s10s_u10wos_17b
134G 13G 87G 13% /
/devices 0K 0K 0K 0% /devices
ctfs 0K 0K 0K 0% /system/contract
proc 0K 0K 0K 0% /proc
mnttab 0K 0K 0K 0% /etc/mnttab
swap 58G 432K 58G 1% /etc/svc/volatile
objfs 0K 0K 0K 0% /system/object
sharefs 0K 0K 0K 0% /etc/dfs/sharetab
fd 0K 0K 0K 0% /dev/fd
swap 58G 32K 58G 1% /tmp
swap 58G 72K 58G 1% /var/run
rpool/export 134G 32K 87G 1% /export
rpool/export/home 134G 35K 87G 1% /export/home
rpool 134G 106K 87G 1% /rpool
DBpool 1.5T 31K 1.5T 1% /DBpool
DBpool/lm 1.5T 31K 1.5T 1% /DBpool/lm
root@Db01 # zfs set quota=500g DBpool/lm
root@Db01 # df -h
Filesystem             size   used  avail capacity  Mounted on
rpool/ROOT/s10s_u10wos_17b
134G 13G 87G 13% /
/devices 0K 0K 0K 0% /devices
ctfs 0K 0K 0K 0% /system/contract
proc 0K 0K 0K 0% /proc
mnttab 0K 0K 0K 0% /etc/mnttab
swap 58G 432K 58G 1% /etc/svc/volatile
objfs 0K 0K 0K 0% /system/object
sharefs 0K 0K 0K 0% /etc/dfs/sharetab
fd 0K 0K 0K 0% /dev/fd
swap 58G 32K 58G 1% /tmp
swap 58G 72K 58G 1% /var/run
rpool/export 134G 32K 87G 1% /export
rpool/export/home 134G 35K 87G 1% /export/home
rpool 134G 106K 87G 1% /rpool
DBpool 1.5T 32K 1.5T 1% /DBpool
DBpool/lm 500G 31K 500G 1% /DBpool/lm
root@Db01 # exit
logout
Connection to db01 closed.
root@Db02 # zfs create DBpool/data
cannot mount '/DBpool/data': failed to create mountpoint
filesystem successfully created, but not mounted
root@Db02 # df -h
Filesystem             size   used  avail capacity  Mounted on
rpool/ROOT/s10s_u10wos_17b
134G 6.2G 93G 7% /
/devices 0K 0K 0K 0% /devices
ctfs 0K 0K 0K 0% /system/contract
proc 0K 0K 0K 0% /proc
mnttab 0K 0K 0K 0% /etc/mnttab
swap 58G 432K 58G 1% /etc/svc/volatile
objfs 0K 0K 0K 0% /system/object
sharefs 0K 0K 0K 0% /etc/dfs/sharetab
fd 0K 0K 0K 0% /dev/fd
swap 58G 32K 58G 1% /tmp
swap 58G 72K 58G 1% /var/run
rpool/export 134G 32K 93G 1% /export
rpool/export/home 134G 38K 93G 1% /export/home
rpool 134G 106K 93G 1% /rpool
DBpool 1.5T 31K 1.5T 1% /DBpool
root@Db02 # cd DBpool/
root@Db02 # ls
root@Db02 #
SUNW-MSG-ID: ZFS-8000-GH, TYPE: Fault, VER: 1, SEVERITY: Major
EVENT-TIME: Sat Oct 19 19:55:09 CST 2013
PLATFORM: SUNW,SPARC-Enterprise, CSN: BEF1014C28, HOSTNAME: Db02
SOURCE: zfs-diagnosis, REV: 1.0
EVENT-ID: 69626b5a-6d3f-e193-c0ef-93c3ff91178a
DESC: The number of checksum errors associated with a ZFS device
exceeded acceptable levels. Refer to http://sun.com/msg/ZFS-8000-GH for more information.
AUTO-RESPONSE: The device has been marked as degraded. An attempt
will be made to activate a hot spare if available.
IMPACT: Fault tolerance of the pool may be compromised.
REC-ACTION: Run 'zpool status -x' and replace the bad device.
root@Db02 # zpool status -x
pool: DBpool
state: DEGRADED
status: One or more devices has experienced an error resulting in data
corruption. Applications may be affected.
action: Restore the file in question if possible. Otherwise restore the
entire pool from backup.
see: http://www.sun.com/msg/ZFS-8000-8A
scan: none requested
config:
NAME STATE READ WRITE CKSUM
DBpool DEGRADED 0 0 6
c3t600143801259DA290000F000000F0000d0 DEGRADED 0 0 24  too many errors
errors: 1 data errors, use '-v' for a list
root@Db02 # zpool status DBpool -v
cannot open '-v': name must begin with a letter
pool: DBpool
state: DEGRADED
status: One or more devices has experienced an error resulting in data
corruption. Applications may be affected.
action: Restore the file in question if possible. Otherwise restore the
entire pool from backup.
see: http://www.sun.com/msg/ZFS-8000-8A
scan: none requested
config:
NAME STATE READ WRITE CKSUM
DBpool DEGRADED 0 0 10
c3t600143801259DA290000F000000F0000d0 DEGRADED 0 0 40  too many errors
errors: 1 data errors, use '-v' for a list
root@Db02 #
Reply (yejunlon): Isn't it that db02 has already used this volume to create a pool, and then you used the same volume to create another pool on the other machine? Using one volume as the backing device for two pools is bound to cause problems; that is not shared mounting. As far as I know, once you have created DBpool on db02 with this volume, you can see that pool from db01 and import it there directly (after exporting it on db02 first), roughly as in the sketch below. If what you need is to share this volume, there should be other ways; have another look at the documentation.
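A minimal sketch of that export/import hand-over, assuming DBpool currently lives on db02 (standard zpool commands, not taken from the session log):

root@Db02 # zpool export DBpool     # release the pool on db02 first
root@Db01 # zpool import            # list importable pools on db01; DBpool should show up
root@Db01 # zpool import DBpool     # take the pool over on db01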
Please correct me where I'm wrong. :lol

Reply (wang_sy): ZFS is not a global file system, so it cannot be used by two machines at the same time.
Reply (shanren067) to #2 yejunlon:
Tried it; it still doesn't work.

Reply (shanren067), quoting yejunlon (2013-10-19 22:15): "Isn't it that db02 has already used this volume to create a pool, and then you used the same volume to create ..."
Tried it; it still doesn't work. I created DBpool on db01, and on db02 "zpool import" could indeed see DBpool, and "zpool import -f DBpool" did import it. But after a reboot DBpool was gone on both systems!!!

Reply (yejunlon): I think you could do it this way: what you really want is just to use the space under DBpool, so on db02 you can create a ZFS file system /DBpool/lm inside DBpool, then share that file system and mount it on db01; that way you can use /DBpool/lm from db01 (see the sketch after this post).
For the details of how to set up the sharing, you can search online.
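A minimal sketch of that sharing approach, assuming NFS is used and the NFS server service is enabled on db02; the /mnt/lm mount point on db01 is just an example name:

root@Db02 # zfs set sharenfs=on DBpool/lm          # share /DBpool/lm over NFS
root@Db01 # mkdir -p /mnt/lm                       # example mount point on db01
root@Db01 # mount -F nfs db02:/DBpool/lm /mnt/lm   # mount the shared file system on db01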
Reply: It sounds like you want to put one disk into two pools? It can't be used that way.
But you can create the ZFS file system on one machine and then mount it on the other machine.
Reply (shanren067), quoting wang_sy (2013-10-19 23:05): "ZFS is not a global file system, so it cannot be used by two machines at the same time."
I couldn't find anything that says whether ZFS is a global file system or not. If it isn't, does that mean this plan is dead?
I also want to be able to grow the existing volume later, because 1.5 TB will probably only last about a year. Is there any other option? (One idea is sketched below.)
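A rough sketch of one such option, not tested here: if the HP array can grow the existing LUN in place and the zpool version on Solaris 10 U10 supports the autoexpand property, the pool can then pick up the new size:

root@Db01 # zpool set autoexpand=on DBpool                                  # let the pool grow when its LUN grows
root@Db01 # zpool online -e DBpool c3t600143801259DA290000F000000F0000d0   # expand the vdev to the LUN's new size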
Reply (shanren067), quoting yejunlon (2013-10-20 09:48): "I think you could do it this way: what you really want is just to use the space under DBpool, so on db02 you can create a ZFS file system /DBpool/l ..."
But then whenever db02 needs maintenance (shutdown or reboot), the service on db01 is affected as well. So the sharing approach probably won't work.
Reply: Using the same LUN as a zpool on two M4000s? I've really never seen it done that way. With one LUN backing the pool between two M4000s, unless you configure a cluster, all you can do is manually export the pool on one M4000 and import it on the other.