Step 1: Find out the size of the array
Step 2: Create the array and set the size
Step 3: Create the filesystem and copy the data
Step 4: Change the partition type of /dev/hdc1
Step 5: Setup of fstab and GRUB
Step 6: /dev/hda1 configuration
shell# fdisk -l /dev/hda

Disk /dev/hda: 4325 MB, 4325529600 bytes
255 heads, 63 sectors/track, 525 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/hda1               1         525     4217031   83  Linux

shell# fdisk -l /dev/hdc

Disk /dev/hdc: 30.7 GB, 30735581184 bytes
255 heads, 63 sectors/track, 3736 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/hdc1               1         535     4297356   83  Linux
/dev/hdc2             536        3736    25712032+   f  W95 Ext'd (LBA)
/dev/hdc5             536        3736    25712001   83  Linux
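A RAID1 mirror can only be as large as its smallest member, so it is worth comparing the two block counts before creating the array. A minimal sketch using the figures fdisk printed above (the variable names are ours, not fdisk's):

```shell
# Partition sizes in 1K blocks, taken from the fdisk listings above
hda1_blocks=4217031
hdc1_blocks=4297356

# The usable mirror size is bounded by the smaller partition
if [ "$hda1_blocks" -le "$hdc1_blocks" ]; then
    limit=$hda1_blocks
else
    limit=$hdc1_blocks
fi
echo "usable mirror size: at most ${limit}K"
```

Here /dev/hda1 is the smaller of the two, so it dictates the array size.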
mdadm itself gives us the information we need:
shell# mdadm --create --verbose /dev/md0 --level=1 --raid-devices=2 /dev/hda1 /dev/hdc1
mdadm: /dev/hda1 appears to contain an ext2fs file system
    size=4217028K  mtime=Fri Dec 23 04:53:30 2005
mdadm: /dev/hdc1 appears to contain an ext2fs file system
    size=4297280K  mtime=Fri Dec 23 02:10:28 2005
mdadm: /dev/hdc1 appears to be part of a raid array:
    level=1 devices=2 ctime=Thu Dec 22 23:02:50 2005
mdadm: size set to 4216960K
mdadm: largest drive (/dev/hdc1) exceed size (4216960K) by more than 1%
Continue creating array?

Answer 'n', or press Ctrl-C (we do not want to create the complete RAID array yet, since /dev/hda1 is still mounted and in use).
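The "exceed size ... by more than 1%" warning can be checked by hand: with the sizes mdadm printed, /dev/hdc1 is about 1.9% larger than the chosen array size. A quick sketch using shell integer arithmetic (computed in per-mille to avoid floating point):

```shell
hdc1_k=4297280    # size of /dev/hdc1, as reported by mdadm
size_k=4216960    # array size mdadm chose
# Excess in per-mille (tenths of a percent)
excess_permille=$(( (hdc1_k - size_k) * 1000 / size_k ))
echo "excess: ${excess_permille} per mille"   # 19, i.e. about 1.9%
```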
shell# mdadm --create --verbose /dev/md0 --level=1 --raid-devices=2 missing /dev/hdc1
missing is a placeholder for /dev/hda1, which will be added later, once it is no longer mounted. We then set the array to the size the smaller partition will provide:

shell# mdadm --grow /dev/md0 -z 4216960
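The value passed to -z is not arbitrary: it matches what mdadm itself computed during the aborted --create above ("size set to 4216960K"). For 0.90-format superblocks, mdadm rounds the smallest member down to a multiple of 64K and reserves 64K at the end for the RAID superblock; a sketch of that computation (the rounding rule is an assumption inferred from the output above, not quoted from mdadm's documentation):

```shell
hda1_blocks=4217031   # /dev/hda1 size in 1K blocks, from fdisk
# Round down to a multiple of 64K, then keep 64K for the 0.90 superblock
z=$(( hda1_blocks / 64 * 64 - 64 ))
echo "mdadm --grow /dev/md0 -z $z"
```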
shell# mkfs.ext3 /dev/md0

Copy all the files contained in / and its subdirectories:
shell# mkdir /RAID1
shell# mount -t ext3 /dev/md0 /RAID1
shell# for f in /*; do
           if [ "$f" != "/RAID1" -a "$f" != "/proc" -a "$f" != "/sys" ]; then
               echo "$f" ...
               cp -Rp "$f" /RAID1/.
           fi
       done
shell# mkdir /RAID1/proc /RAID1/sys

(remember that /dev/md0 must contain a complete system, on which we may want to boot later; /proc and /sys need empty mount points on it)
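The filter in the loop above is the part that is easy to get wrong. It can be pulled out into a small function and checked on its own (a hypothetical helper for illustration, not part of the original procedure):

```shell
# Return 0 (copy) for ordinary top-level entries, 1 (skip) for the
# RAID mount point and the virtual filesystems
should_copy() {
    case "$1" in
        /RAID1|/proc|/sys) return 1 ;;
        *)                 return 0 ;;
    esac
}

should_copy /etc  && echo "copy /etc"
should_copy /proc || echo "skip /proc"
```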
shell# fdisk /dev/hdc

At the fdisk interactive prompt, we set the partition type of /dev/hdc1 to fd (Linux raid autodetect):

Disk /dev/hdc: 30.7 GB, 30735581184 bytes
255 heads, 63 sectors/track, 3736 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/hdc1               1         535     4297356   fd  Linux raid autodetect
/dev/hdc2             536        3736    25712032+   f  W95 Ext'd (LBA)
/dev/hdc5             536        3736    25712001   83  Linux
In /etc/fstab, comment out the /dev/hda1 line and declare /dev/md0 as the root filesystem:

#/dev/hda1   /   ext3   defaults   0   1
/dev/md0     /   ext3   defaults   0   1
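A quick sanity check after editing: exactly one uncommented entry should claim the root mount point. The sketch below runs against a throwaway copy of the two lines above; on the real system, point awk at /etc/fstab instead:

```shell
fstab=$(mktemp)
cat > "$fstab" <<'EOF'
#/dev/hda1   /   ext3   defaults   0   1
/dev/md0     /   ext3   defaults   0   1
EOF

# Count active (non-comment) entries mounted on /
roots=$(awk '$1 !~ /^#/ && $2 == "/"' "$fstab" | wc -l)
echo "active root entries: $roots"
```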
In GRUB's menu.lst, add an entry that boots on the RAID device:

title  Linux kernel 2.6.14 (RAID as root)
kernel (hd0,0)/boot/vmlinuz-2.6.14.4 root=/dev/md0
boot
shell# mdadm -D /dev/md0
/dev/md0:
        Version : 00.90.02
  Creation Time : Thu Dec 22 23:02:50 2005
     Raid Level : raid1
     Array Size : 4216960 (4.02 GiB 4.32 GB)
    Device Size : 4216960 (4.02 GiB 4.32 GB)
   Raid Devices : 2
  Total Devices : 1
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Fri Dec 23 03:35:13 2005
          State : clean, degraded
 Active Devices : 1
Working Devices : 1
 Failed Devices : 0
  Spare Devices : 0

           UUID : af34a3b3:8d2eef00:73d43def:403507df
         Events : 0.997

    Number   Major   Minor   RaidDevice State
       0       0        0        -      removed
       1      22        1        1      active sync   /dev/hdc1
shell# mount
/dev/md0 on / type ext3 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
tmpfs on /dev/shm type tmpfs (rw)
usbfs on /proc/bus/usb type usbfs (rw)
shell# cat /proc/mdstat
Personalities : [raid1] [multipath]
md0 : active raid1 hdc1[1]
      4297280 blocks [2/1] [_U]

unused devices: <none>

We can now insert /dev/hda1 into our RAID array.
shell# mdadm /dev/md0 -a /dev/hda1

We check that the recovery process is running:
shell# cat /proc/mdstat
Personalities : [raid1] [multipath]
md0 : active raid1 hda1[2] hdc1[1]
      4216960 blocks [2/1] [_U]
      [==>..................]  recovery = 13.1% (553984/4216960) finish=33.1min speed=1840K/sec

When it is done:
shell# cat /proc/mdstat
Personalities : [raid1] [multipath]
md0 : active raid1 hda1[0] hdc1[1]
      4216960 blocks [2/2] [UU]

unused devices: <none>
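Rather than re-running cat /proc/mdstat by hand, a small helper can poll until the recovery line disappears (a hypothetical helper; on the real system you would run something like: until resync_done /proc/mdstat; do sleep 60; done). Demonstrated here on a saved snapshot of the output above:

```shell
# Succeed (return 0) only when the snapshot no longer shows a resync
resync_done() {
    ! grep -q 'recovery' "$1"
}

snapshot=$(mktemp)
cat > "$snapshot" <<'EOF'
md0 : active raid1 hda1[2] hdc1[1]
      4216960 blocks [2/1] [_U]
      [==>..................]  recovery = 13.1% (553984/4216960)
EOF

resync_done "$snapshot" && echo "resync complete" || echo "still recovering"
```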
shell# grub

grub> install (hd0,0)/boot/grub/stage1 (hd1) (hd0,0)/boot/grub/stage2 p (hd0,0)/boot/grub/menu.lst

This tells GRUB to install itself on hd1 (/dev/hdc), but at boot time (if one day the hd0 disk fails and we put hd1 in its place), stage1 and stage2 will be looked for on hd0.
Disk /dev/hda: 4325 MB, 4325529600 bytes
255 heads, 63 sectors/track, 525 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/hda1               1         525     4217031   fd  Linux raid autodetect
shell# mdadm -D /dev/md0
/dev/md0:
        Version : 00.90.02
  Creation Time : Fri Dec 23 05:04:24 2005
     Raid Level : raid1
     Array Size : 4216960 (4.02 GiB 4.32 GB)
    Device Size : 4216960 (4.02 GiB 4.32 GB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Sat Feb 11 03:49:48 2006
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           UUID : 890a764b:627353ea:ab248d0a:40cc9e20
         Events : 0.12961

    Number   Major   Minor   RaidDevice State
       0      22        1        0      active sync   /dev/hdc1
       1       3        1        1      active sync   /dev/hda1
shell# mdadm /dev/md0 -a /dev/hda1
mdadm: hot add failed for /dev/hda1: No space left on device

Look at /var/log/kern.log. You might see:
[date] localhost kernel: md0: disk size 4216960 blocks < array size 4297280
[date] localhost kernel: md: export_rdev(hda1)

This happens because you did not set a correct size with the mdadm --grow command.
shell# mdadm --stop /dev/md0
shell# mdadm /dev/md0 -a /dev/hdc1

and the recovery process starts.