I ran through the whole thing myself; every partition on all three disks is software RAID:
Disk /dev/hda: 858 MB, 858993152 bytes
32 heads, 63 sectors/track, 832 cylinders
Units = cylinders of 2016 * 512 = 1032192 bytes
Device Boot Start End Blocks Id System
/dev/hda1 * 1 33 33232+ fd Linux raid autodetect
/dev/hda2 34 766 738864 fd Linux raid autodetect
/dev/hda3 767 831 65520 fd Linux raid autodetect
Disk /dev/hdb: 858 MB, 858993152 bytes
32 heads, 63 sectors/track, 832 cylinders
Units = cylinders of 2016 * 512 = 1032192 bytes
Device Boot Start End Blocks Id System
/dev/hdb1 * 1 33 33232+ fd Linux raid autodetect
/dev/hdb2 34 766 738864 fd Linux raid autodetect
/dev/hdb3 767 831 65520 fd Linux raid autodetect
Disk /dev/hdc: 858 MB, 858993152 bytes
16 heads, 63 sectors/track, 1664 cylinders
Units = cylinders of 1008 * 512 = 516096 bytes
Device Boot Start End Blocks Id System
/dev/hdc1 * 1 66 33232+ fd Linux raid autodetect
/dev/hdc2 67 1532 738864 fd Linux raid autodetect
/dev/hdc3 1533 1662 65520 fd Linux raid autodetect
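A quick way to double-check that every partition really carries the fd (Linux raid autodetect) type is to grep an sfdisk dump. A minimal sketch; the dump text below is a hypothetical sample mirroring the hda layout above, and in practice you would feed it the output of `sfdisk -d /dev/hda`:

```shell
# Sketch: count partitions in an sfdisk-style dump whose Id is not fd
# (Linux raid autodetect). 'dump' is a hypothetical sample resembling
# the hda layout above; in practice use: sfdisk -d /dev/hda
dump='/dev/hda1 : start=     63, size=  66465, Id=fd, bootable
/dev/hda2 : start=  66528, size=1477728, Id=fd
/dev/hda3 : start=1544256, size= 131040, Id=fd'

not_fd=$(printf '%s\n' "$dump" | grep -cv 'Id=fd')
echo "partitions not marked fd: $not_fd"
```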
The contents of /proc/mdstat look like this:
Personalities : [raid1] [raid5]
read_ahead 1024 sectors
Event: 3
md2 : active raid5 hda3[0] hdb3[1] hdc3[3]
130816 blocks level 5, 64k chunk, algorithm 0 [3/3] [UUU]
md1 : active raid5 hda2[0] hdb2[1] hdc2[3]
1477504 blocks level 5, 64k chunk, algorithm 0 [3/3] [UUU]
md0 : active raid1 hda1[0] hdb1[1] hdc1[3]
32640 blocks [3/3] [UUU]
unused devices: <none>
The corresponding mapping is (/dev/md2 is swap):
/dev/md1 on / type ext3 (rw)
/dev/md0 on /boot type ext3 (rw)
Since the boot area can only be RAID1, I made the boot partitions of all three disks into a single RAID1 array.
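One caveat worth noting: /boot on RAID1 only saves you if the bootloader itself sits in the MBR of every member disk, so the machine can still boot once hda is gone. A dry-run sketch, assuming grub legacy (the command strings are only printed, never executed; with grub legacy each disk is usually remapped to hd0 via its `device` command so it can boot standalone):

```shell
# Dry-run sketch: emit one grub-legacy install per RAID1 member so any
# surviving disk can boot on its own. Nothing is executed; the command
# strings are only printed.
cmds=$(for d in /dev/hda /dev/hdb /dev/hdc; do
    echo "grub --batch  # device (hd0) $d ; root (hd0,0) ; setup (hd0)"
done)
printf '%s\n' "$cmds"
```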
Then I shut down and removed one disk, leaving only hda and hdb.
After that I added a new disk; in fdisk, hdc showed up with no partitions.
At this point /proc/mdstat read:
Personalities : [raid1] [raid5]
read_ahead 1024 sectors
Event: 3
md2 : active raid5 hda3[0] hdb3[1]
130816 blocks level 5, 64k chunk, algorithm 0 [3/2] [UU_]
md1 : active raid5 hda2[0] hdb2[1]
1477504 blocks level 5, 64k chunk, algorithm 0 [3/2] [UU_]
md0 : active raid1 hda1[0] hdb1[1]
32640 blocks [3/2] [UU_]
unused devices: <none>
I ran sfdisk -d /dev/hda > disk.dat
and then sfdisk /dev/hdc < disk.dat to copy hda's partition layout straight onto hdc; fdisk then showed all three disks identical.
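A sanity check worth adding after the clone: the dumps of the two disks should match apart from the device names. A sketch with simulated dump strings; in practice you would diff the real output of `sfdisk -d` on both disks:

```shell
# Sketch: after 'sfdisk -d /dev/hda | sfdisk /dev/hdc', the dumps of
# the two disks should match apart from device names. Simulated here
# with sample strings; in practice compare real 'sfdisk -d' output.
dump_hda='start=63, size=66465, Id=fd
start=66528, size=1477728, Id=fd
start=1544256, size=131040, Id=fd'
dump_hdc="$dump_hda"   # a successful clone yields the same layout

if [ "$dump_hda" = "$dump_hdc" ]; then
    verdict="layouts match"
else
    verdict="layouts differ"
fi
echo "$verdict"
```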
After that, I used
raidhotadd /dev/md0 /dev/hdc1
raidhotadd /dev/md1 /dev/hdc2
raidhotadd /dev/md2 /dev/hdc3
to resynchronize the disks.
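For reference, on systems where raidtools has been replaced by mdadm, the three hot-adds above correspond to `mdadm <array> --add <partition>`. A dry-run sketch that only prints the commands rather than executing them:

```shell
# Dry-run sketch: mdadm equivalents of the raidhotadd calls above.
# The commands are printed, not executed.
cmds=$(for pair in md0:hdc1 md1:hdc2 md2:hdc3; do
    echo "mdadm /dev/${pair%%:*} --add /dev/${pair##*:}"
done)
printf '%s\n' "$cmds"
```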
At this point /proc/mdstat read as follows (md0 and md2 had already resynced; md1 was still syncing):
Personalities : [raid1] [raid5]
read_ahead 1024 sectors
Event: 9
md2 : active raid5 hda3[0] hdb3[1] hdc3[3]
130816 blocks level 5, 64k chunk, algorithm 0 [3/3] [UUU]
md1 : active raid5 hdc2[3] hda2[0] hdb2[1]
1477504 blocks level 5, 64k chunk, algorithm 0 [3/2] [UU_]
[=======>.............] recovery = 35.7% (264496/738752) finish=1.0min speed=7282K/sec
md0 : active raid1 hda1[0] hdb1[1] hdc1[3]
32640 blocks [3/3] [UUU]
unused devices: <none>
Next came an interesting experiment: I added yet another disk, hdd, in vmware, wrote hda's partition layout onto it as well, and then ran
raidhotadd /dev/md0 /dev/hdd1
raidhotadd /dev/md1 /dev/hdd2
raidhotadd /dev/md2 /dev/hdd3
and looked at /proc/mdstat again:
Personalities : [raid1] [raid5]
read_ahead 1024 sectors
Event: 3
md2 : active raid5 hda3[0] hdb3[1] hdc3[3] hdd3[2]
130816 blocks level 5, 64k chunk, algorithm 0 [3/3] [UUU]
md1 : active raid5 hda2[0] hdb2[1] hdc2[3] hdd2[2]
1477504 blocks level 5, 64k chunk, algorithm 0 [3/3] [UUU]
md0 : active raid1 hda1[0] hdb1[1] hdc1[3] hdd1[2]
32640 blocks [3/3] [UUU]
unused devices: <none>
As you can see, each md device now has four disks, and the one added last acts as a spare.
Next I deliberately marked /dev/hdc2 as failed, and something interesting happened:
[root@platinum root]# raidsetfaulty /dev/md1 /dev/hdc2
[root@platinum root]# cat /proc/mdstat
Personalities : [raid1] [raid5]
read_ahead 1024 sectors
Event: 13
md2 : active raid5 hda3[0] hdb3[1] hdc3[3] hdd3[2]
130816 blocks level 5, 64k chunk, algorithm 0 [3/3] [UUU]
md1 : active raid5 hdd2[3] hda2[0] hdb2[1] hdc2[2](F)
1477504 blocks level 5, 64k chunk, algorithm 0 [3/2] [UU_]
[>....................] recovery = 2.2% (16872/738752) finish=6.4min speed=1874K/sec
md0 : active raid1 hda1[0] hdb1[1] hdc1[3] hdd1[2]
32640 blocks [3/3] [UUU]
unused devices: <none>
hdd2 automatically started resyncing to take over for the failed hdc2.
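The mdadm equivalent of this failure test, again as a dry-run sketch (commands echoed only): `--fail` marks the member faulty, after which the spare takes over as seen above, and `--remove` detaches the faulty member so it can be swapped out:

```shell
# Dry-run sketch: mdadm counterparts of raidsetfaulty. The command
# strings are only printed, never executed.
fail_cmd="mdadm /dev/md1 --fail /dev/hdc2"
remove_cmd="mdadm /dev/md1 --remove /dev/hdc2"
echo "$fail_cmd"
echo "$remove_cmd"
```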
From my analysis, two conclusions:
1. Some of your partitions are still plain ext3, so not everything is in the soft-RAID structure; that is why raidstop worked for you, otherwise it would have reported busy.
2. You should not have run raidstop; if you do that and don't raidstart before rebooting, the system probably won't come back up. Step 6 was your mistake.
If this is really for production, study and test it thoroughly before relying on it, unless you are confident enough that nothing will go wrong and can bear the consequences yourself.
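For production, one habit that helps: periodically scan /proc/mdstat for degraded arrays, since an `_` inside the `[UUU]` field means a missing member. A minimal sketch, run here against a sample of the degraded output above; point it at the real /proc/mdstat in practice:

```shell
# Sketch: count degraded arrays in /proc/mdstat-style text. An '_'
# inside the [UUU] status field means a member is missing. 'mdstat'
# is a sample taken from the degraded state above; in practice read
# the real file: mdstat=$(cat /proc/mdstat)
mdstat='md1 : active raid5 hda2[0] hdb2[1]
      1477504 blocks level 5, 64k chunk, algorithm 0 [3/2] [UU_]
md0 : active raid1 hda1[0] hdb1[1]
      32640 blocks [3/3] [UUU]'

degraded=$(printf '%s\n' "$mdstat" | grep -c '\[U*_')
echo "degraded arrays: $degraded"
```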