ZFS devices failure
After finishing yesterday's ZFS experiment, the virtual machine came up with a "ZFS devices failure" error. The symptoms:

bash-3.00# zpool status
  pool: pool1
 state: FAULTED
status: One or more devices could not be opened. There are insufficient
        replicas for the pool to continue functioning.
action: Attach the missing device and use 'zpool online' to bring it online.
   see: http://www.sun.com/msg/ZFS-8000-D3
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        pool1       UNAVAIL      0     0     0  insufficient replicas
          c2t0d0s2  UNAVAIL      0     0     0  cannot open
A search on Sun's official site turned up the following article:
ZFS devices failure
Type: Fault
Severity: Major
Description
A ZFS device failed during normal operation.
Automated Response
No automated response will occur.
Impact
The fault tolerance of the pool may be affected.
Suggested Action for System Administrator
A device within a ZFS pool failed and could not be reopened. Run 'zpool status -x' to determine exactly which device failed and why:

# zpool status -x
  pool: test
 state: DEGRADED
status: One or more devices could not be used because the label is missing or
        invalid. Sufficient replicas exist for the pool to continue
        functioning in a degraded state.
action: Replace the device using 'zpool replace'.
   see: http://www.sun.com/msg/ZFS-8000-4J
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        test        DEGRADED     0     0     0
          mirror    DEGRADED     0     0     0
            c0t0d0  ONLINE       0     0     0
            c0t0d1  FAULTED      0     0     0  corrupted data

errors: No known data errors.
The 'status' field will describe exactly why the pool failed. The 'action' field describes how the pool can be repaired, if at all.
Details
The Message ID ZFS-8000-D3 indicates a failed ZFS device. Take the documented action to resolve the problem.
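For scripting health checks against output like the above, the header fields of `zpool status` can be pulled into a dictionary. This is a minimal sketch, not part of the original post: it assumes the Solaris 10 field layout shown above ('pool:', 'state:', 'status:', 'action:', 'see:', 'scrub:' followed by the 'config:' device table), and `parse_zpool_status` is a hypothetical helper name.

```python
# Sketch: parse the header fields of `zpool status` output into a dict,
# so a monitoring script can react to a FAULTED or DEGRADED pool.
import re

def parse_zpool_status(text):
    """Collect 'field: value' header lines; indented continuation
    lines are folded into the previous field. Parsing stops at the
    'config:' device table."""
    fields = {}
    current = None
    for line in text.splitlines():
        m = re.match(r'\s*(pool|state|status|action|see|scrub):\s*(.*)', line)
        if m:
            current = m.group(1)
            fields[current] = m.group(2).strip()
        elif line.strip() == 'config:':
            break  # device table starts here; header parsing is done
        elif current and line.startswith((' ', '\t')):
            fields[current] += ' ' + line.strip()  # wrapped continuation
    return fields

# Sample text taken from the output shown in this post:
sample = """\
  pool: pool1
 state: FAULTED
status: One or more devices could not be opened. There are insufficient
        replicas for the pool to continue functioning.
action: Attach the missing device and use 'zpool online' to bring it online.
   see: http://www.sun.com/msg/ZFS-8000-D3
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        pool1       UNAVAIL      0     0     0  insufficient replicas
"""

info = parse_zpool_status(sample)
print(info['state'])  # FAULTED
```

A cron job could run `zpool status` and alert whenever `info['state']` is anything other than ONLINE.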
Due to other reasons the experiment could not be completed successfully, so I will have to wait for the failure to reappear.
Appended below is a zpool replace walkthrough.
bash-3.00# df -thk -F zfs
Filesystem             size   used  avail capacity  Mounted on
pool1                  2.0G    25K   2.0G     1%    /pool1
pool1/zfs1             2.0G   158K   2.0G     1%    /pool1/zfs1
bash-3.00# cd pool1/zfs1/
bash-3.00# pwd
/pool1/zfs1
bash-3.00# mkfile -nv 100m test
test 104857600 bytes
bash-3.00# ls -l
total 265
-rw------T   1 root     root     104857600 Jun 21 09:23 test
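As an aside, on systems without Solaris's mkfile, the same kind of test file can be produced with a short Python equivalent. This is a rough sketch under one assumption: `mkfile -n` allocates no data blocks, and `truncate` on a fresh file behaves the same way on most filesystems; `mkfile_n` is a hypothetical helper name.

```python
# Rough equivalent of `mkfile -nv 100m test`: create a sparse file of
# exactly 100 MiB without writing any data blocks.
import os

def mkfile_n(path, size_bytes):
    with open(path, 'wb') as f:
        f.truncate(size_bytes)  # extend to the target size; holes, not data
    print(f"{path} {size_bytes} bytes")  # mimic mkfile's -v output

mkfile_n('test', 100 * 1024 * 1024)
print(os.path.getsize('test'))  # 104857600, the same size ls -l reports above
```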
bash-3.00# zpool status
  pool: pool1
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        pool1       ONLINE       0     0     0
          c2t0d0s2  ONLINE       0     0     0

errors: No known data errors
bash-3.00# zpool replace pool1 c2t0d0s2 c2t1d0s2
invalid vdev specification
use '-f' to override the following errors:
/dev/dsk/c2t1d0s2 overlaps with /dev/dsk/c2t1d0s0

(On Solaris, slice 2 conventionally spans the whole disk, so it overlaps slice 0; since the whole disk is intended here, -f overrides the warning.)

bash-3.00# zpool replace -f pool1 c2t0d0s2 c2t1d0s2
bash-3.00# zpool status
  pool: pool1
 state: ONLINE
 scrub: resilver completed with 0 errors on Thu Jun 21 09:28:10 2007
config:

        NAME          STATE     READ WRITE CKSUM
        pool1         ONLINE       0     0     0
          replacing   ONLINE       0     0     0
            c2t0d0s2  ONLINE       0     0     0
            c2t1d0s2  ONLINE       0     0     0

errors: No known data errors
Checking again a little later, the replace has completed:
bash-3.00# zpool status
  pool: pool1
 state: ONLINE
 scrub: resilver completed with 0 errors on Thu Jun 21 09:28:10 2007
config:

        NAME        STATE     READ WRITE CKSUM
        pool1       ONLINE       0     0     0
          c2t1d0s2  ONLINE       0     0     0
Finally, check the data; it is still intact:
bash-3.00# ls -l
total 265
-rw------T   1 root     root     104857600 Jun 21 09:23 te
bash-3.00# pwd
/pool1/zfs1
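ls -l only shows that the file's size and timestamp survived the replace. A stricter check would be to checksum the contents before and after the resilver. A generic sketch (the digest comparison is mine, not from the post; on the real system the path would be /pool1/zfs1/test, but the demo below uses a throwaway file so it can run anywhere):

```python
# Sketch: record a file's SHA-256 before `zpool replace` and compare it
# after resilvering; identical digests mean the data came through intact.
import hashlib

def sha256_of(path, bufsize=1 << 20):
    h = hashlib.sha256()
    with open(path, 'rb') as f:
        while chunk := f.read(bufsize):
            h.update(chunk)
    return h.hexdigest()

# Demo on a throwaway file standing in for the pool's test file:
with open('demo.bin', 'wb') as f:
    f.write(b'zfs' * 1000)

before = sha256_of('demo.bin')  # taken before the replace
after = sha256_of('demo.bin')   # re-read after resilvering completes
print(before == after)          # True: data is unchanged
```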
This article is from the ChinaUnix blog; the original is at http://blog.chinaunix.net/u/26090/showart_325510.html