RAID
My RAID setup for an Ubuntu NAS box.
Hardware
I used two identical 180GB PATA drives. Each drive is connected to a separate IDE bus (each has its own cable). Do not put two RAID drives on the same IDE bus: two drives on one cable slows everything down, and if one drive goes bad it can take down the entire bus.
Software
When installing mdadm you will be asked if you want to automatically start MD Arrays. Select "all".
apt-get install mdadm
First attempt -- a non-bootable RAID-1 partition
Format
I created a 4GB partition on each drive for the operating system (/boot and / on a single ext3 partition). The first 4GB partition of each drive was formatted as ext3; the one on the second drive is unused and is just there to keep the two drives partitioned identically. I then created a second partition on each drive and left it unformatted. In other words, I installed the full operating system on the 4GB partition before I even started with the RAID configuration. I didn't create a swap partition. I could have, but this machine has 1GB of RAM and will do nothing but Samba, so swap isn't going to help much.
After all is said and done I had two drives partitioned and formatted identically:
# fdisk -l

Disk /dev/hda: 180.0 GB, 180045766656 bytes
255 heads, 63 sectors/track, 21889 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/hda1   *           1         486     3903763+  83  Linux
/dev/hda2             487       21889   171919597+  83  Linux

Disk /dev/hdd: 180.0 GB, 180045766656 bytes
255 heads, 63 sectors/track, 21889 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/hdd1               1         486     3903763+  83  Linux
/dev/hdd2             487       21889   171919597+  83  Linux
This setup is not "ideal", but it's easier to set up. I'm building a NAS, so I don't care as much about the integrity of the operating system; I just care about the files in the file server. If the boot sector goes bad I can reinstall Linux and recover my files offline. A more clever setup would put the boot and operating system partitions on the RAID array itself.
Create the RAID-1 MD Array
I had a small issue: after I created the two partitions (hda2 and hdd2) with fdisk, the device nodes /dev/hda2 and /dev/hdd2 did not show up, even though 'fdisk -l' could see the partitions. What I did was open each drive in cfdisk and write the partition table out again. After that the devices showed up in /dev/.
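In hindsight, forcing the kernel to re-read the partition table probably would have worked too. A minimal sketch, assuming util-linux's blockdev is installed (it normally is):

blockdev --rereadpt /dev/hda
blockdev --rereadpt /dev/hdd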
Once the partitions are ready it's easy to turn them into a RAID device:
mdadm /dev/md0 --create --auto=yes --level=1 --raid-devices=2 /dev/hda2 /dev/hdd2
This will create the RAID device /dev/md0 and start "syncing" the two partitions. `cat /proc/mdstat` to see the status of the resync process:
# cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 hdd2[1] hda2[0]
      171919488 blocks [2/2] [UU]
      [>....................]  resync =  2.5% (4372032/171919488) finish=105.1min speed=26557K/sec

unused devices: <none>
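At this point /dev/md0 can be treated like any other block device. A rough sketch of putting it to use -- the mount point is just an example, and recording the array in /etc/mdadm/mdadm.conf is my addition rather than part of the steps above:

mkfs.ext3 /dev/md0
mkdir -p /srv/storage
mount /dev/md0 /srv/storage
mdadm --detail --scan >> /etc/mdadm/mdadm.conf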
Second attempt -- Bootable RAID-1 on root
That was so easy that I decided to start over and get the whole drive properly under RAID-1. Ubuntu 7.04 is broken: it does not reliably let you build a bootable RAID array, so use Ubuntu 6.10 Server instead. I installed Ubuntu Server 6.10 on one of the disks (hda) and let the installer repartition and use all of hda (it created one root partition and a swap partition). When it was done I had a stock system with the following partitions on /dev/hda:
Disk /dev/hda: 180.0 GB, 180045766656 bytes
255 heads, 63 sectors/track, 21889 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/hda1   *           1       21470   172457743+  83  Linux
/dev/hda2           21471       21889     3365617+   5  Extended
/dev/hda5           21471       21889     3365586   82  Linux swap / Solaris
Then I used `dd` to copy the main boot drive to the second drive.
dd bs=1M if=/dev/hda of=/dev/hdd
After that was done I had two identically partitioned drives:
# fdisk -l

Disk /dev/hda: 180.0 GB, 180045766656 bytes
255 heads, 63 sectors/track, 21889 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/hda1   *           1       21470   172457743+  83  Linux
/dev/hda2           21471       21889     3365617+   5  Extended
/dev/hda5           21471       21889     3365586   82  Linux swap / Solaris

Disk /dev/hdd: 180.0 GB, 180045766656 bytes
255 heads, 63 sectors/track, 21889 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/hdd1   *           1       21470   172457743+  83  Linux
/dev/hdd2           21471       21889     3365617+   5  Extended
/dev/hdd5           21471       21889     3365586   82  Linux swap / Solaris
It's probably not necessary to use dd, since the RAID system is going to resync the drive anyway, but this was a lazy way to make sure both drives were partitioned identically. If I were building a RAID-1 with 1TB drives I would copy only the partition table, since that would be much faster. The partition information can be copied without copying the data:
sfdisk -d /dev/hda | sed s/hda/hdd/ > /tmp/hdd
sfdisk -f /dev/hdd < /tmp/hdd
Next, set the secondary drive's first partition type to fd (Linux raid autodetect). I just used fdisk and its t command. Set type fd for partition 1 only; the Extended and swap partitions will not be part of the RAID (a sketch of the fdisk session follows the listing below). It should look like this afterwards:
# fdisk -l /dev/hdd

Disk /dev/hdd: 180.0 GB, 180045766656 bytes
255 heads, 63 sectors/track, 21889 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/hdd1   *           1       21470   172457743+  fd  Linux raid autodetect
/dev/hdd2           21471       21889     3365617+   5  Extended
/dev/hdd5           21471       21889     3365586   82  Linux swap / Solaris
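For reference, the interactive fdisk session to change the type looks roughly like this (prompts abbreviated; the partition number and the fd hex code are the only inputs that matter):

# fdisk /dev/hdd
Command (m for help): t
Partition number (1-5): 1
Hex code (type L to list codes): fd
Changed system type of partition 1 to fd (Linux raid autodetect)
Command (m for help): w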
Now I actually create the RAID array and add the secondary drive. This is a "degraded" array because one drive (hda) is marked as "missing". This drive will be added later.
mdadm /dev/md0 --create --auto=yes --level=1 --raid-devices=2 missing /dev/hdd1
At first I got this error message:
mdadm: Cannot open /dev/hdd1: Device or resource busy
This was likely because installing `mdadm` had already started /dev/md0 for me (perhaps it saw that I had set /dev/hdd1 to type fd, or it saw a RAID superblock left on /dev/hdd from a previous attempt). This is easy to fix -- just shut down RAID on /dev/md0:
# mdadm --stop /dev/md0
mdadm: stopped /dev/md0
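If a stale superblock from an earlier attempt keeps the array coming back, it can also be wiped once the array is stopped. This is my addition, not something I needed here; only run it on a partition whose data you don't need:

mdadm --zero-superblock /dev/hdd1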
Edit /etc/fstab. Change the / mount that looks like this:
/dev/hda1 / ext3 defaults,errors=remount-ro 0 1
Change it to use the md0 device:
/dev/md0 / ext3 defaults,errors=remount-ro 0 1
Edit /boot/grub/menu.lst and add the following boot menu item (be sure to reference the kernel image version actually installed on your system):
title RAID kernel 2.6.15-26-server
root (hd0,0)
kernel /boot/vmlinuz-2.6.15-26-server root=/dev/md0 ro
initrd /boot/initrd.img-2.6.15-26-server
boot

Next, create a filesystem on the array, mount it, and copy the existing root filesystem onto it:

mkfs.ext3 /dev/md0
mkdir -p /mnt/md0
mount /dev/md0 /mnt/md0
telinit 1
cp -aux / /mnt/md0
Now shut the machine down and power it back on:
shutdown -h now
It should boot off of /dev/md0. Check this after booting by running `mount`.
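A quick sanity check after the reboot might look like this (the grep is just one way to pick out the root mount):

mount | grep ' / '
cat /proc/mdstat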
Next, copy the partition table from hdd back onto hda. I'm not sure this step is necessary -- it may be enough to simply set the partition ID to fd -- but it can't hurt, since once the drive is added to the array the md driver will resync it anyway.
sfdisk -d /dev/hdd | sfdisk --no-reread /dev/hda
Next add the original primary disk to the array. This will trigger a resync.
mdadm /dev/md0 -a /dev/hda1
Monitor the resync with `mdadm --detail /dev/md0` or `cat /proc/mdstat`. Once they show that the drives are in sync and the array is healthy, you can reboot.
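To keep an eye on the resync without retyping the command, something like this works (watch comes with procps):

watch -n 5 cat /proc/mdstat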
Errors
It may appear that Ubuntu has locked up on boot, but if you wait a couple of minutes you might see an error message. If you see the error below, you may have hit the "Ubuntu MD race condition" bug:
Check root= bootarg cat /proc/cmdline
or missing modules, devices: cat /proc/modules ls /dev
ALERT! /dev/md0 does not exist. Dropping to a shell!
Here are some notes on this problem. The work-around is not satisfactory; the best thing to do is to downgrade to Ubuntu 6.10 Server (Edgy).
https://bugs.launchpad.net/ubuntu/+source/mdadm/+bug/103177
https://bugs.launchpad.net/ubuntu/+source/initramfs-tools/+bug/79204
Speed tests
Speed isn't really important for my NAS purposes, but I was curious as to how RAID-1 would improve read performance. First I got a baseline speed for each drive before they were put into the RAID.
# hdparm -t /dev/hda

/dev/hda:
 Timing buffered disk reads:  140 MB in  3.01 seconds = 46.49 MB/sec

# hdparm -t /dev/hdd

/dev/hdd:
 Timing buffered disk reads:  134 MB in  3.03 seconds = 44.20 MB/sec
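Once the array is built and fully synced, the same test can be run against the array device for comparison (just the command; I'm not recording numbers here):

hdparm -t /dev/md0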