RAID problem, help! Hoping the experts can lend a hand

#1 · Posted 2006-07-31 10:13
After an unclean power-off, the RAID5 array fails to start at boot. The log is below; could everyone help analyze it and suggest how to recover?


audit subsystem ver 0.1 initialized
[events: 00000038]
[events: 00000036]
[events: 00000036]
[events: 00000038]
md: autorun ...
md: considering sdb1 ...
md:  adding sdb1 ...
md:  adding sdd1 ...
md:  adding sdc1 ...
md:  adding sda1 ...
md: created md0
md: bind<sda1,1>
md: bind<sdc1,2>
md: bind<sdd1,3>
md: bind<sdb1,4>
md: running: <sdb1><sdd1><sdc1><sda1>
md: sdb1's event counter: 00000038
md: sdd1's event counter: 00000036
md: sdc1's event counter: 00000036
md: sda1's event counter: 00000038
md: superblock update time inconsistency -- using the most recent one
md: freshest: sdb1
md: kicking non-fresh sdd1 from array!
md: unbind<sdd1,3>
md: export_rdev(sdd1)
md: kicking non-fresh sdc1 from array!
md: unbind<sdc1,2>
md: export_rdev(sdc1)
md: device name has changed from sdc1 to sdb1 since last import!
md: device name has changed from sdd1 to sda1 since last import!
md0: former device sda1 is unavailable, removing from array!
md0: former device sdb1 is unavailable, removing from array!
md: md0: raid array is not clean -- starting background reconstruction
md0: max total readahead window set to 768k
md0: 3 data-disks, max readahead per data-disk: 256k
raid5: device sdb1 operational as raid disk 0
raid5: device sda1 operational as raid disk 1
raid5: not enough operational devices for md0 (2/4 failed)
RAID5 conf printout:
--- rd:4 wd:2 fd:2
disk 0, s:0, o:1, n:0 rd:0 us:1 dev:sdb1
disk 1, s:0, o:1, n:1 rd:1 us:1 dev:sda1
disk 2, s:0, o:0, n:2 rd:2 us:1 dev:[dev 00:00]
disk 3, s:0, o:0, n:3 rd:3 us:1 dev:[dev 00:00]
raid5: failed to run raid set md0
md: pers->run() failed ...
md :do_md_run() returned -22
md: md0 stopped.
md: unbind<sdb1,1>
md: export_rdev(sdb1)
md: unbind<sda1,0>
md: export_rdev(sda1)
md: ... autorun DONE.
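
The log shows a four-member RAID5 in which sda1/sdb1 carry event counter 00000038 while sdc1/sdd1 stopped at 00000036, so md treats the latter two as stale, kicks them, and then refuses to start with only 2 of 4 members. A minimal first step is to look at the on-disk superblocks directly; the sketch below is only a suggestion, assumes the mdadm tool is installed (this looks like a 2.4-era raidtools setup, where mdadm may need to be added), and uses the device names from the log:

# Stop the half-assembled array, if one exists
mdadm --stop /dev/md0

# Print each member's superblock; the Events field should line up with the
# counters in the log (38 for sda1/sdb1, 36 for sdc1/sdd1)
mdadm --examine /dev/sda1
mdadm --examine /dev/sdb1
mdadm --examine /dev/sdc1
mdadm --examine /dev/sdd1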

#2 · Posted 2006-07-31 10:22
Still waiting online, please help, everyone.

Is the problem in this part:
md: running: <sdb1><sdd1><sdc1><sda1>
md: sdb1's event counter: 00000038
md: sdd1's event counter: 00000036
md: sdc1's event counter: 00000036
md: sda1's event counter: 00000038
If so, how do I fix it?
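
That event-counter mismatch is indeed why sdc1 and sdd1 were kicked: they missed the array's last superblock updates, so md considers them stale. Because the counters are only two events apart, a forced assembly will usually accept them again. A minimal sketch, assuming mdadm is available and the array is currently stopped; this is not risk-free, so image the disks first (or at least avoid writes) if the data is critical:

# --force accepts members whose event counters are slightly behind;
# --run starts the array even if it comes up degraded
mdadm --assemble --force --run /dev/md0 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1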

#3 · Posted 2006-07-31 12:49
A question first: are you using software RAID or hardware RAID?
Without a clear description of the setup, how can anyone help you?

#4 · Posted 2006-08-01 13:30
It's software RAID, and two of the disks have been kicked out of the array. It's not clear whether they are actually damaged; if they are, the data is likely gone.

Whoever set up this RAID for you, ask them to take a look.

#5 · Posted 2006-08-01 14:45
It's a software RAID5 built on Linux. The two disks are not damaged, they were just kicked out of the array;
fdisk -l can still list all four disks of the RAID5 individually.
Is there any way to save this?
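
Since fdisk -l still sees all four partitions and the event counters differ by only two, a forced assembly (as sketched above) has a reasonable chance of bringing the array back. A hedged sketch of verifying the result without writing to it, assuming an ext2/ext3 filesystem on md0 and using /mnt/recovery as a hypothetical mount point:

# Confirm the array came up and which members it is using
cat /proc/mdstat
mdadm --detail /dev/md0

# Check the filesystem without modifying it, then mount read-only to inspect the data
fsck -n /dev/md0
mkdir -p /mnt/recovery
mount -o ro /dev/md0 /mnt/recovery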

#6 · Posted 2006-08-01 14:47
Earlier, one disk was kicked out once, but the RAID5 could still start. I hot-added that disk back, the resync completed, and the data was fine.
But after this reboot, two disks were kicked out again. I don't understand what is going on.
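
Two members being expelled right after an unclean shutdown usually reflects the same event-counter problem as above rather than failing hardware: the array was dirty (or still resyncing) when power was lost, so the members that missed the last superblock update get kicked on the next boot. If, after a forced assembly, the array runs degraded with a member still missing, a sketch of re-adding it and watching the rebuild, again assuming mdadm (on a raidtools-only system, raidhotadd plays the same role as the earlier "hotadd"):

# Re-add one kicked member and let the rebuild finish before adding the next
mdadm /dev/md0 --add /dev/sdc1

# Watch reconstruction progress; let it reach completion before any reboot
cat /proc/mdstat

Letting the resync finish and shutting the machine down cleanly should keep the members' event counters in step and prevent a repeat.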