backups for jupiter

backups for jupiter are done via regular rsync to an LVM volume on removable media. The basic idea comes from a recipe in the LVM HOWTO.

preparing the removable media (only do this once ever per disk!)

You should only need to prepare the removable media once. DO NOT repeat this step on existing backup drives unless you want to destroy the backed-up data!

i inserted an external iomega USB disk, and it showed up as /dev/sdd. I prepared it (as root) with the following:

pvcreate /dev/sdd                            # mark the disk as an LVM physical volume
vgcreate vgbackup0_jupiter /dev/sdd          # a volume group holding just this disk
# count the physical extents in the VG so the logical volume can use all of them:
MAXEXTENTS=$(vgdisplay vgbackup0_jupiter | grep '^  Total PE' | awk '{print $3}')
lvcreate --name backup0 --extents $MAXEXTENTS vgbackup0_jupiter
mkfs -t ext3 /dev/vgbackup0_jupiter/backup0  # and put a filesystem on it

if this needs to become bigger, you can just insert another disk, pvcreate it, add it to the volume group, and extend the backup0 volume.
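a sketch of what that growth would look like, assuming the second disk shows up as /dev/sde (a hypothetical device name -- check dmesg for the real one):

```shell
# sketch only: grow the backup volume group onto a second disk
pvcreate /dev/sde                                     # make the new disk an LVM PV
vgextend vgbackup0_jupiter /dev/sde                   # add it to the volume group
lvextend -l +100%FREE /dev/vgbackup0_jupiter/backup0  # give backup0 all the new extents
resize2fs /dev/vgbackup0_jupiter/backup0              # grow the ext3 filesystem to match
```

(run e2fsck first if the filesystem isn't mounted and resize2fs asks for it.)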

since we should be rotating media, i expect that each volume group/logical volume will have an incrementing integer ID. e.g., the next one should be vgbackup1_jupiter/backup1

For now, we'll just remove the volume from the system so we can plug it in cleanly:

lvchange -an -pr /dev/vgbackup0_jupiter/backup0
vgexport vgbackup0_jupiter

You can remove the disk now, and then re-run vgscan to become aware of its absence.

plugging in and noticing the backup volume

Plug the disk back in and scan for it:

BACKUPVOL=$( vgs 2>/dev/null | awk '{ print $1 }' | grep 'vgbackup[0-9]*_jupiter')
echo "we'll be backing up with backup volume $BACKUPVOL"
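that one-liner will silently pick the wrong thing if zero or several backup disks are visible. A slightly more defensive sketch (pick_backup_vg is a hypothetical helper, not anything LVM ships):

```shell
# pick_backup_vg reads candidate VG names, one per line, on stdin
# and prints the single backup VG, or complains and fails otherwise
pick_backup_vg() {
    matches=$(grep '^vgbackup[0-9]*_jupiter$')
    count=$(printf '%s\n' "$matches" | grep -c .)
    if [ "$count" -ne 1 ]; then
        echo "expected exactly one backup VG, found $count" >&2
        return 1
    fi
    printf '%s\n' "$matches"
}
```

against the real system you'd feed it something like `vgs --noheadings -o vg_name | tr -d ' ' | pick_backup_vg`.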

in the example below, i'll be using vgbackup0_jupiter as the backup volume.

Import the backup volume (in case it was exported), and activate the logical volume on it:

vgimport vgbackup0_jupiter
lvchange -ay vgbackup0_jupiter/backup0

snapshotting the filesystem to back up

we already have the dm_snapshot kernel module loaded. So now we can snapshot the volume that the target filesystem resides on:
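if you're not sure the module is actually there, a quick check-and-load:

```shell
# load the device-mapper snapshot target if it isn't already present
# (a no-op if it's already loaded or built into the kernel)
lsmod | grep -q '^dm_snapshot' || modprobe dm_snapshot
```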

## remember the date because it will be useful:
TODAY=$(date +%Y%m%d)
lvcreate --snapshot --size 100G --name calhome.$TODAY /dev/vghome_jupiter/calhome

Note that if more than 100G changes on /dev/vghome_jupiter/calhome while this snapshot is in place, the snapshot's copy-on-write space fills up and the snapshot is invalidated (the origin filesystem keeps working, but the backup copy becomes unusable). So we shouldn't keep it around for much longer than it takes to run a backup. (It would be nice to keep it as a .yesterday directory and make it accessible to the users or something, but i'm not sure how to do that properly without risking breakage, and i'm not sure how you'd be able to properly unmount it across machines.)
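while the snapshot exists, lvs will show how much of its copy-on-write area is used (older LVM2 labels the column Snap%, newer versions Data%):

```shell
# keep an eye on the snapshot while the backup runs;
# when the usage column hits 100% the snapshot is dropped
lvs vghome_jupiter
```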

Next we mount the snapshotted volume:

mkdir -p /var/local/backups/calhome.$TODAY
mount -o ro /dev/vghome_jupiter/calhome.$TODAY /var/local/backups/calhome.$TODAY

running the backup

now that we have a snapshot, we can mount the removable LVM volume:

mkdir -p /var/local/backups/calhome-target.$TODAY
mount /dev/vgbackup0_jupiter/backup0 /var/local/backups/calhome-target.$TODAY
mkdir -p /var/local/backups/calhome-target.$TODAY/srv/calfs/home

and then run the actual backup:

rsync -a /var/local/backups/calhome.$TODAY/ /var/local/backups/calhome-target.$TODAY/srv/calfs/home/
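plain -a is fine for a first pass; for repeated runs to the same target disk you probably want a stricter mirror. A hedged variant (check your rsync version supports these before relying on them):

```shell
# -H keeps hard links, --delete removes files from the backup that are
# gone from the source, --numeric-ids keeps raw UIDs/GIDs instead of
# remapping by name (useful when the backup is restored on another machine)
rsync -aH --delete --numeric-ids \
    /var/local/backups/calhome.$TODAY/ \
    /var/local/backups/calhome-target.$TODAY/srv/calfs/home/
```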

cleaning up

When the backup is done, unmount both volumes (and sync the disks just to be sure):

umount /var/local/backups/calhome.$TODAY
umount /var/local/backups/calhome-target.$TODAY
sync

dispose of the snapshot:

lvremove -f /dev/vghome_jupiter/calhome.$TODAY

deactivate the target volume and tell the kernel to forget about its volume group:

lvchange -an /dev/vgbackup0_jupiter/backup0
vgexport vgbackup0_jupiter

You should now be able to remove the disk.
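for reference, the whole run can be sketched as one script. This is just the steps above glued together (same volume names, same 100G snapshot budget) and is not battle-tested:

```shell
#!/bin/sh
# sketch: one full backup run for jupiter, start to finish
set -eu

[ "$(id -u)" -eq 0 ] || { echo "this needs to run as root" >&2; exit 1; }

TODAY=$(date +%Y%m%d)
SNAP=/dev/vghome_jupiter/calhome.$TODAY
SRC=/var/local/backups/calhome.$TODAY
DST=/var/local/backups/calhome-target.$TODAY

vgimport vgbackup0_jupiter || true     # harmless if it was never exported
lvchange -ay vgbackup0_jupiter/backup0

# snapshot the source volume and mount both sides
lvcreate --snapshot --size 100G --name calhome.$TODAY /dev/vghome_jupiter/calhome
mkdir -p "$SRC" "$DST"
mount -o ro "$SNAP" "$SRC"
mount /dev/vgbackup0_jupiter/backup0 "$DST"
mkdir -p "$DST/srv/calfs/home"

rsync -a "$SRC/" "$DST/srv/calfs/home/"

# clean up: unmount, drop the snapshot, put the backup disk to bed
umount "$SRC"
umount "$DST"
sync
lvremove -f "$SNAP"
lvchange -an /dev/vgbackup0_jupiter/backup0
vgexport vgbackup0_jupiter
echo "done; the disk can be pulled now"
```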

expanding the backup disk

If your backup data has grown bigger than yer removable media can handle, you can expand the volume group by adding another disk to it. I'll fill in the details here later.

see also