software raid1 and lvm on debian etch
by Anton Piatek
Background
I have a fileserver box, which currently has 2x200GB disks in lvm to give me a 400GB virtual disk. This arrangement gets good use of space, but if one disk has a failure, then the whole filesystem is trashed and cannot be recovered.
The solution is to start using raid. Before I go on: raid is not a backup solution. It cannot protect you from accidentally deleting all your files, and will not protect you from a virus or a malicious user or hacker. Raid just reduces the damage if a disk happens to fail (which, knowing my luck, will be sometime soon).
The final solution I want is 2x500GB disks in raid1 (mirrored) with lvm on top to split into my partitions. This way I could later add another pair of disks in raid1, add them to the lvm, and not have to worry about which partitions get the new space, as lvm will let me expand any partition onto the new space, even across multiple disks.
Why not raid5? Raid5 is great for getting space: you have n+1 disks and get the space of n disks out of it, as one disk's worth is the redundancy. The problem with raid5 is that it is limited by the smallest disk in the array. So two 500GB disks and one 200GB disk will only give 400GB, as each disk can only be used up to 200GB. Raid5 is great if all your disks are the same size, but if I want to add disks without having to replace all 3+ of them, then with raid1 I just have to buy disks in pairs. My pc has 4 ide slots and 2 sata slots, so raid1 should be fine (disks are getting quite big these days).
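To sanity-check that arithmetic, here is a throwaway shell sketch (the function names are mine, not a real tool) computing usable space for each layout:

```shell
# raid1_usable: usable GB from disks bought in mirrored pairs --
# each pair contributes the size of its smaller disk.
raid1_usable() {
  local total=0 a b
  while [ $# -ge 2 ]; do
    a=$1; b=$2; shift 2
    if [ "$a" -lt "$b" ]; then total=$((total + a)); else total=$((total + b)); fi
  done
  echo "$total"
}

# raid5_usable: usable GB from one raid5 array -- (n-1) times the
# smallest disk, since every member is truncated to the smallest.
raid5_usable() {
  local n=$# smallest=$1 d
  for d in "$@"; do
    if [ "$d" -lt "$smallest" ]; then smallest=$d; fi
  done
  echo $(( (n - 1) * smallest ))
}

raid5_usable 500 500 200   # → 400, the 400GB case from the text
raid1_usable 500 500       # → 500
```

Raid1 pairs of 500s and 200s (`raid1_usable 500 500 200 200`) would give 700GB, which is why buying disks in pairs works out better for mixed sizes.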
So the plan is to add 2 500GB disks, put them in raid1 with a small partition for /boot (which can't be in lvm), and make the rest of the mirror part of a lvm group holding my / and /home partitions (and /tmp and swap).
How I did it
Warning: This can seriously mess up your data. Please, please backup first – I didn't, and was sweating hard at one point when I thought I had lost my entire lvm array. I found the Debian From Scratch (DFS) CD a fabulous rescue disc.
Note: This took me several attempts to actually finish writing, so there are probably some errors – let me know if you spot anything that looks wrong or ambiguous.
For background, my current setup has 2 disks, hda and sda. These are both 200GB; /boot and swap are on hda, and the rest of hda and all of sda are in my lvm group.
I added my new 500GB disk as /dev/sdb and booted up.
Use fdisk to create a small (~100MB) partition for /boot. Set its type to fd (linux raid autodetect). This partition cannot be in lvm, as grub does not understand lvm.
Create another partition using the rest of the space for our lvm, and set its type to fd as well.
You may need to run partprobe or reboot to see the new partitions; my Debian Etch box picked them up automatically on leaving fdisk.
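For reference, the layout on the new disk can be written down as an sfdisk script (this is modern sfdisk syntax, shown purely as a sketch of the layout – I just used fdisk interactively):

```
# /dev/sdb: two partitions, both type fd (linux raid autodetect)
,100M,fd    # sdb1: ~100MB raid partition for /boot
,,fd        # sdb2: rest of the disk, for the lvm raid
```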
Installing raid
$apt-get install mdadm
$cat /proc/mdstat should look something like
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
$mknod /dev/md0 b 9 0
$echo M md0 b 9 0 >> /etc/udev/links.conf
$mdadm --create /dev/md0 --verbose --level=1 --raid-disks=2 /dev/sdb1 missing
or the equivalent short form:
$mdadm -C /dev/md0 -n 2 -l 1 /dev/sdb1 missing
$mdadm --detail --scan >> /etc/mdadm/mdadm.conf
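The array comes up degraded (one half "missing") until the old disk joins later. A quick way to spot a degraded array is the underscore in the status field ([U_] instead of [UU]); here is a small sketch, run against sample text rather than a live /proc/mdstat:

```shell
# degraded_arrays: list md devices whose status field shows a missing
# member (an underscore inside [U_]), reading /proc/mdstat text on stdin
degraded_arrays() {
  awk '/^md/ { dev = $1 }
       /\[[U_]+\]/ && /_/ { print dev }'
}

# sample output like the degraded array just created
cat <<'EOF' | degraded_arrays
md0 : active raid1 sdb1[0]
      104320 blocks [2/1] [U_]
EOF
# → md0
```

On the live system, `degraded_arrays < /proc/mdstat` should print nothing once everything is in sync.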
Now repeat the process for the lvm partition:
$mknod /dev/md1 b 9 1
$echo M md1 b 9 1 >> /etc/udev/links.conf
$mdadm --create /dev/md1 --verbose --level=1 --raid-disks=2 /dev/sdb2 missing
or the equivalent short form:
$mdadm -C /dev/md1 -n 2 -l 1 /dev/sdb2 missing
You can check the arrays with $mdadm --detail --scan, and append them to the config:
$mdadm --detail --scan >> /etc/mdadm/mdadm.conf
Setting up lvm
Create the physical volume, volume group and logical volumes on the new array:
$pvcreate /dev/md1
$vgcreate vg0 /dev/md1
$vgscan
$lvcreate -L10G -n root vg0
$lvcreate -L2G -n swap vg0
$mkfs -t ext3 /dev/vg0/root
$mkswap /dev/vg0/swap
Then copy the running system across, e.g.
$mkdir /mnt/root
$mount /dev/vg0/root /mnt/root
$rsync -auHxv --exclude=/proc/* --exclude=/sys/* --exclude=/boot/* --exclude=/mnt / /mnt/root/
$mkdir /mnt/root/proc /mnt/root/boot /mnt/root/sys
$chmod 555 /mnt/root/proc
Next, move the old lvm data onto the raid and retire the old disk from the volume group:
$vgextend vg0 /dev/md1
$pvmove /dev/hda3 (this is gonna be slow)
$vgreduce vg0 /dev/hda3
$pvremove /dev/hda3
Then make sure you have edited /boot/grub/devices.map and /boot/grub/menu.lst to have the correct new devices, with menu.lst set to fall back to the old entry if the new one fails to boot:
default 0
fallback 1
[…]
Rebuild the initramfs so it knows about the raid:
$dpkg-reconfigure linux-image-2.6.18-3-686
[…]
Finally, give sda the same fd-type partitions as sdb, add them into the arrays, reinstall grub so both disks can boot, and watch the sync:
$mdadm /dev/md0 -a /dev/sda1
$mdadm /dev/md1 -a /dev/sda2
$grub-install /dev/sda
$watch cat /proc/mdstat
While the arrays rebuild, it will show something like
Personalities : [raid1]
md0 : active raid1 sda1[2] sdb1[0]
104320 blocks [2/1] [U_]
resync=DELAYED
md1 : active raid1 sda2[2] sdb2[0]
488279488 blocks [2/1] [U_]
[>………………..] recovery = 0.1% (779328/488279488) finish=93.8min speed=86592K/sec
unused devices: <none>
and once the resync has finished
Personalities : [raid1]
md0 : active raid1 sda1[1] sdb1[0]
104320 blocks [2/2] [UU]
md1 : active raid1 sda2[1] sdb2[0]
488279488 blocks [2/2] [UU]
unused devices: <none>
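If you get bored watching, the progress figure can be pulled out of /proc/mdstat directly; a quick sketch (fed sample text here rather than a live box):

```shell
# resync_progress: print "<device> <percent>" for each array that is
# currently rebuilding, given /proc/mdstat-style text on stdin
resync_progress() {
  awk '/^md/ { dev = $1 }
       /recovery|resync/ && match($0, /[0-9.]+%/) {
         print dev, substr($0, RSTART, RLENGTH)
       }'
}

cat <<'EOF' | resync_progress
md1 : active raid1 sda2[2] sdb2[0]
      488279488 blocks [2/1] [U_]
      [>....................]  recovery =  0.1% (779328/488279488) finish=93.8min
EOF
# → md1 0.1%
```

On the live system, `resync_progress < /proc/mdstat` gives the same one-line summary; lines like resync=DELAYED carry no percentage and are skipped.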