
Fixing a drive failure when using mdadm/raid1 on the boot device (CentOS 5)

I am using CentOS 5 on one of my servers, with mdadm RAID1 mirroring
between two drives.

When one of the drives fails (which it eventually will), here is how
you fix it:

I have taken info from these pages for this blog post:
http://www.howtoforge.com/replacing_hard_disks_in_a_raid1_array
http://blog.mydream.com.hk/howto/linux/howto-reinstall-grub-in-rescue-mode-wh…

In this example I have two hard drives, /dev/sda and /dev/sdb, with
the partitions /dev/sda1 and /dev/sda2 as well as /dev/sdb1 and
/dev/sdb2.
/dev/sda1 and /dev/sdb1 make up the RAID1 array /dev/md0 (/dev/sda1 + /dev/sdb1 = /dev/md0).
/dev/sda2 and /dev/sdb2 make up the RAID1 array /dev/md1 (/dev/sda2 + /dev/sdb2 = /dev/md1).
/dev/sdb has failed, and we want to replace it.
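
If your layout differs, ‘mdadm --detail’ will list the members of each
array, so you can map out which partitions belong where:
mdadm --detail /dev/md0
mdadm --detail /dev/md1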

First of all: ‘cat /proc/mdstat’ is your friend; it will show you the
status of your RAID during the whole process.

In the output from ‘cat /proc/mdstat’ you will see an (F) behind a
failed device, or it will be missing altogether.
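
For illustration only (the block counts are made up, and the exact
layout varies a little between kernel versions), a failed sdb would
look roughly like this:
Personalities : [raid1]
md1 : active raid1 sdb2[1](F) sda2[0]
      10377920 blocks [2/1] [U_]
md0 : active raid1 sdb1[1](F) sda1[0]
      104320 blocks [2/1] [U_]
unused devices: <none>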

First, fail and remove the failed device(s):
mdadm --manage /dev/md0 --fail /dev/sdb1
mdadm --manage /dev/md0 --remove /dev/sdb1
Repeat for other MD-devices containing sdb-parts.
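In this example that means md1, which holds sda2 and sdb2:
mdadm --manage /dev/md1 --fail /dev/sdb2
mdadm --manage /dev/md1 --remove /dev/sdb2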
Now the output from ‘cat /proc/mdstat’ should only contain parts from sda.
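
A small tip before pulling anything: it is easy to grab the wrong
disk, so note the serial number of the surviving drive first and leave
that one alone. This assumes the smartmontools package is installed:
smartctl -i /dev/sda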

Power down, change the drive, and turn it back on.

To make the same partitions on sdb as you have on sda, do this:
sfdisk -d /dev/sda | sfdisk /dev/sdb
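
If you would rather keep a copy of the partition table around, the
same thing works in two steps (the file name is just an example):
sfdisk -d /dev/sda > sda-partition-table.txt
sfdisk /dev/sdb < sda-partition-table.txt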

‘fdisk -l’ should now show the same partitions on sda and sdb.

Next, add the proper parts from sdb to the relevant md-device. So if
md0 contains sda1, do this:
mdadm --manage /dev/md0 --add /dev/sdb1
Repeat for all md-devices so you have the same parts from sda and sdb
in all of them.
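In this example that means adding sdb2 back into md1:
mdadm --manage /dev/md1 --add /dev/sdb2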
Check with ‘cat /proc/mdstat’.

Let it sync back up (check with ‘watch -n 10 cat /proc/mdstat’ until
it finishes).
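
If you prefer a per-array view, ‘mdadm --detail’ on the rebuilding
array also shows a resync/rebuild progress line:
mdadm --detail /dev/md1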

Now, fix grub:
grub
grub>root (hd0,0)
grub>setup (hd0)

If you’re unlucky and can’t boot because the wrong device is first
(the BIOS is trying to boot from the clean/new hard drive), follow these steps:

First, boot into a live CD with your OS.
Then activate the RAID:
1) mkdir /etc/mdadm
2) mdadm --examine --scan > /etc/mdadm/mdadm.conf
3) mdadm -A --scan
(Note: a CentOS/Red Hat based rescue environment expects /etc/mdadm.conf
rather than /etc/mdadm/mdadm.conf, so you may need to write the scan
output there instead.)
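The generated config simply contains one ARRAY line per array, roughly
like this (the UUIDs here are placeholders):
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx
ARRAY /dev/md1 level=raid1 num-devices=2 UUID=xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx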

Then reinstall grub. In this example, you have /boot on md0 and / on md1:
1) mkdir /mnt/sysimage
2) mount /dev/md1 /mnt/sysimage
3) mount -o bind /dev /mnt/sysimage/dev
4) mount -o bind /proc /mnt/sysimage/proc
5) chroot /mnt/sysimage /bin/bash
6) mount /dev/md0 /boot
Then fix grub (same as above):
grub
grub>root (hd0,0)
grub>setup (hd0)
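
When grub reports success, leave the chroot and reboot; the bind
mounts are released by the reboot, so there is no need to unpick them
by hand:
exit
reboot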

Voila, you have a working raid again with grub managing to boot your system 🙂
