RAID
My RAID setup for an Ubuntu NAS box.
Hardware
I used two identical 180GB PATA drives. Each drive was connected to a separate IDE bus (each on its own cable). Do not put two IDE RAID drives on the same IDE bus: two drives on one cable would slow everything down, and if one drive goes bad it can take down the entire IDE bus.
Software
apt-get install mdadm
apt-get install dmraid
Format
I created a 4GB partition on both drives for the operating system (boot and / on a single ext3 partition). The first partition (4GB) of each drive was formatted as ext3. I created a second partition on each drive and left it unformatted. In other words, I installed the full operating system on the 4GB partition before I even started with the RAID configuration. I didn't create a swap partition. I could have, but this machine has 1GB of RAM and will do no work besides SaMBa, so swap isn't going to help much.
After all is said and done I had two drives partitioned and formatted identically:
# fdisk -l

Disk /dev/hda: 180.0 GB, 180045766656 bytes
255 heads, 63 sectors/track, 21889 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/hda1   *           1         486     3903763+  83  Linux
/dev/hda2             487       21889   171919597+  83  Linux

Disk /dev/hdd: 180.0 GB, 180045766656 bytes
255 heads, 63 sectors/track, 21889 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/hdd1               1         486     3903763+  83  Linux
/dev/hdd2             487       21889   171919597+  83  Linux
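As a sanity check, the sizes line up: fdisk's Blocks column is in 1 KiB units, so a little shell arithmetic recovers the partition sizes.

```shell
# fdisk reports the "Blocks" column in 1 KiB units
boot_blocks=3903763      # /dev/hda1
data_blocks=171919597    # /dev/hda2

boot_bytes=$((boot_blocks * 1024))
data_bytes=$((data_blocks * 1024))

echo "boot: $((boot_bytes / 1000000)) MB"   # -> boot: 3997 MB (the ~4GB OS partition)
echo "data: $((data_bytes / 1000000)) MB"   # -> data: 176045 MB (the rest of the disk)
```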
This setup is not "ideal", but it's easier to set up. I'm building a NAS, so I don't care as much about the integrity of the operating system; I just care about the files on the file server. If the boot sector goes bad I can reinstall Linux and recover my files offline. A more clever setup would put the boot and operating system partitions on the RAID array itself.
Make the RAID array
I had a small issue: after I created the two partitions (hda2 and hdd2) with fdisk, the device nodes /dev/hda2 and /dev/hdd2 did not show up, even though I could see the partitions with 'fdisk -l'. What I did was open each disk in cfdisk and write the partition table again. After that the devices showed up in /dev/.
Once the partitions are ready it's easy to turn them into a RAID device:
mdadm /dev/md0 --create --auto=yes --level=1 --raid-devices=2 /dev/hda2 /dev/hdd2
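To have the array assembled automatically at boot, mdadm also wants an ARRAY line in /etc/mdadm/mdadm.conf; `mdadm --detail --scan` prints one ready to append. A sketch of what the file ends up containing (the UUID here is a placeholder, not a real value):

```
# /etc/mdadm/mdadm.conf (sketch)
DEVICE partitions
# The line below is what `mdadm --detail --scan` produces; the UUID is a placeholder
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx
```

On Ubuntu, appending the scan output (`mdadm --detail --scan >> /etc/mdadm/mdadm.conf`) and regenerating the initramfs with `update-initramfs -u` is the usual way to do this.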
This will create the RAID device /dev/md0 and start "syncing" the two partitions. `cat /proc/mdstat` shows the status of the resync:
# cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 hdd2[1] hda2[0]
      171919488 blocks [2/2] [UU]
      [>....................]  resync =  2.5% (4372032/171919488) finish=105.1min speed=26557K/sec

unused devices: <none>
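The resync percentage can be pulled out of /proc/mdstat with a bit of awk, which is handy for scripting. A sketch, run here against a saved copy of the output above rather than the live file:

```shell
# Grab the resync percentage from mdstat-style output.
# Reading a here-doc copy; on a live system, feed it /proc/mdstat instead.
pct=$(awk '/resync/ { for (i = 1; i <= NF; i++) if ($i ~ /%$/) print $i }' <<'EOF'
md0 : active raid1 hdd2[1] hda2[0]
      171919488 blocks [2/2] [UU]
      [>....................]  resync =  2.5% (4372032/171919488) finish=105.1min speed=26557K/sec
EOF
)
echo "$pct"   # -> 2.5%
```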
Second attempt -- Bootable RAID-1 on root
That was so easy that I decided to start over and get the whole drive properly under RAID-1. I installed Ubuntu Server as normal on all of one disk; I allowed the installer to repartition and use all of hda. When it was done I had a stock system with the following partitions on /dev/hda:
Disk /dev/hda: 180.0 GB, 180045766656 bytes
255 heads, 63 sectors/track, 21889 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/hda1   *           1       21470   172457743+  83  Linux
/dev/hda2           21471       21889     3365617+   5  Extended
/dev/hda5           21471       21889     3365586   82  Linux swap / Solaris
Then I used `dd` to copy the main boot drive to the second drive.
dd bs=1M if=/dev/hda of=/dev/hdd
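Copying 180GB over PATA is not quick; at the roughly 45MB/sec these drives manage (per the hdparm numbers below), a back-of-the-envelope estimate:

```shell
disk_bytes=180045766656            # disk size as reported by fdisk
rate=$((45 * 1024 * 1024))         # ~45 MB/sec sequential, per hdparm

secs=$((disk_bytes / rate))
echo "~$((secs / 60)) minutes"     # -> ~63 minutes
```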
After that was done I had two identically partitioned drives:
# fdisk -l

Disk /dev/hda: 180.0 GB, 180045766656 bytes
255 heads, 63 sectors/track, 21889 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/hda1   *           1       21470   172457743+  83  Linux
/dev/hda2           21471       21889     3365617+   5  Extended
/dev/hda5           21471       21889     3365586   82  Linux swap / Solaris

Disk /dev/hdd: 180.0 GB, 180045766656 bytes
255 heads, 63 sectors/track, 21889 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/hdd1   *           1       21470   172457743+  83  Linux
/dev/hdd2           21471       21889     3365617+   5  Extended
/dev/hdd5           21471       21889     3365586   82  Linux swap / Solaris
It's probably not necessary to use dd, since the RAID system is going to resync the drives anyway, but it was a lazy way to make sure both drives were partitioned identically. If I were building a RAID-1 with 1TB drives I would copy the partition table by hand instead, since that would be much faster. The partition information can also be copied without the data:
sfdisk -d /dev/hda | sed s/hda/hdd/ > /tmp/hdd
sfdisk -f /dev/hdd < /tmp/hdd
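The sed step just renames the devices in the dump. For example, against an abbreviated, made-up sfdisk dump (not this machine's real table):

```shell
# Rewrite hda -> hdd in an sfdisk-style dump (abbreviated fake sample)
out=$(sed 's/hda/hdd/' <<'EOF'
# partition table of /dev/hda
/dev/hda1 : start=       63, size=344915487, Id=83, bootable
/dev/hda2 : start=344915550, size=  6731235, Id= 5
EOF
)
echo "$out"   # every hda is now hdd
```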
Next, set the secondary drive's first partition type to fd (Linux raid autodetect). I just used fdisk and its t command, setting type fd for partition 1 only. The Extended and swap partitions will not be part of the RAID. Afterwards it should look like this:
# fdisk -l /dev/hdd

Disk /dev/hdd: 180.0 GB, 180045766656 bytes
255 heads, 63 sectors/track, 21889 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/hdd1   *           1       21470   172457743+  fd  Linux raid autodetect
/dev/hdd2           21471       21889     3365617+   5  Extended
/dev/hdd5           21471       21889     3365586   82  Linux swap / Solaris
Speed tests
Speed isn't really important for my NAS purposes, but I was curious as to how RAID-1 would improve read performance. First I got a baseline speed for each drive before they were put into the RAID.
# hdparm -t /dev/hda

/dev/hda:
 Timing buffered disk reads:  140 MB in  3.01 seconds =  46.49 MB/sec

# hdparm -t /dev/hdd

/dev/hdd:
 Timing buffered disk reads:  134 MB in  3.03 seconds =  44.20 MB/sec
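hdparm -t needs a block device, but the same style of sequential-read timing can be done with dd against an ordinary file. A sketch using a scratch file (the path is arbitrary); note that re-reading a just-written file mostly measures the page cache rather than the disk, which is exactly why hdparm flushes the cache before its timing run:

```shell
# Write a 64 MB scratch file, then time a sequential read of it.
# /tmp/speedtest is an arbitrary path for this illustration.
dd if=/dev/zero of=/tmp/speedtest bs=1M count=64 2>/dev/null
stats=$(dd if=/tmp/speedtest of=/dev/null bs=1M 2>&1 | tail -n 1)
echo "$stats"    # last line shows bytes copied, elapsed time, and throughput
rm -f /tmp/speedtest
```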