Mondo Rescue LVM on RAID How-to
Gerhard Gappmeier
2004-01-15

Abstract
Mondo Rescue is great for saving all your data and restoring it later to a different disk geometry. But when changing to LVM on RAID you have to do more than Mondo can do on its own. First you have to change the boot loader, because GRUB, which is normally used in current SuSE distributions, cannot boot RAID systems. When using LVM and RAID for your root file system you need the corresponding modules already during the boot process so that the kernel is able to mount it. All these changes can lead to trouble; it cost me one and a half days to sort them out.
You will also find this document interesting if you only want to use either LVM or RAID.
Contents
1 Prerequisites
2 Boot mindi
3 Making new partitions using parted
4 Starting RAID
5 Format the boot partition
6 Starting LVM
7 Create LVM physical volume
8 Create LVM volume group
9 Create LVM logical volumes
10 Create and initialize the swap space
11 Create file systems
12 Edit mountlist.txt
13 Restore all data to new partitions
14 Mounting the partitions
15 Edit fstab
16 Edit lilo.conf
17 Create initrd
18 Install LILO boot loader
19 Troubleshooting
1 Prerequisites
To understand this document you should already be familiar with the fundamentals of Mondo Rescue, SOFT-RAID (software-based Redundant Array of Independent Disks), LVM (Logical Volume Manager) and initrd (initial RAM disk).
This document describes the procedure based on a normal SuSE 8.2 installation which uses a modular kernel 2.4.20, but it should also work for most other systems with a kernel that supports LVM. If you compile the drivers (reiserfs, lvm-mod, raid1, ...) into the kernel you don't have to make an initrd as described later in chapter 17. But if you compile your kernel yourself you should already know that ;-).
For the following sections I assume that you have an installed system which you have backed up with Mondo Rescue via NFS (Network File System). The term "target system" will be used for the system you want to change and "backup system" for the system where you store the backups.
You can also make a backup directly to CD, but then you have to adapt the following steps. I use NFS because I don't have a CD writer in my server and because it's very convenient.
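For reference, a backup over NFS can be started roughly like this; the server name, export path and subdirectory are only placeholders, and you should check the mondoarchive man page for the exact options of your version (-O creates the backup, -V verifies it, -n selects the NFS share, -d the subdirectory on the share, -g the dialog interface):
mondoarchive -OVn backupserver:/export/backups -d mondo -g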
Check that you can actually restore your backup, because the next steps will destroy all data on your system. That way, if you encounter any problems you can at least restore your old system.
In my example I'm using two IDE disks where each disk is master on one of the two channels of my IDE controller, so the device names are /dev/hda and /dev/hdc. Change these names to your needs. These two disks are used to build the RAID1 (mirroring) system.
Burn the mindi.iso image to CD so that you can bring your system up again. With mindi you can run mondorestore and all other commands you need to configure the system.
In my case pico (a unix text editor), which comes with mindi, produced a segmentation fault and vi (another text editor) also didn't work, so I copied all files I wanted to edit via NFS and used KWrite (the graphical editor in KDE) to edit them comfortably on another machine.
2 Boot mindi
After booting from the mindi CD, mondorestore is started automatically and the first dialog appears. Select Interactive -> NFS, then hit enter without entering a path.
This will mount the NFS share you have used for the backup automatically on /tmp/isodir.
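If the share is not mounted automatically for some reason, you can mount it by hand; the server and path below are placeholders for your own backup share:
mount -t nfs backupserver:/export/backups /tmp/isodir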
Now you can cancel the dialog to return to the command line. Before you can start mondorestore again you have to change the partition layout and edit Mondo's mountlist.txt.
3 Making new partitions using parted
For booting you need just a small partition of about 25 MB which also uses RAID. This will be /dev/md0 and gets mounted on /boot. Then you create a second RAID partition which uses the rest of the disk space. This will be /dev/md1 and will be the physical device for LVM.
Start parted for the first disk:
parted /dev/hda
If you don't know the parted commands, use
?
to list them.
View your partitions using the command
print
Delete all existing partitions using
rm MINOR
where MINOR is the partition number shown by print.
Create the boot partition with 25 MB
mkpart primary 0 25
Mark it as RAID partition (this sets the partition type FD)
set 1 raid on
Create the second partition with maximum size. You can view your disk geometry with print again.
mkpart primary 25 YOUR DISK SIZE
Mark it as RAID partition
set 2 raid on
Now use the same procedure for the second disk /dev/hdc.
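As a sketch, the whole parted session for /dev/hdc looks like this (repeat rm for every existing partition; YOUR DISK SIZE is the end of the disk in MB as reported by print):
parted /dev/hdc
(parted) print
(parted) rm 1
(parted) mkpart primary 0 25
(parted) set 1 raid on
(parted) mkpart primary 25 YOUR DISK SIZE
(parted) set 2 raid on
(parted) quit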
4 Starting RAID
Create the following raidtab and copy it to your NFS share so that you don't have to type it on your mindi system. Then copy it to /etc via NFS (cp /tmp/isodir/raidtab /etc). /etc is currently on the RAM disk which mindi has created for you, so it will be lost when you reboot. It is therefore always a good idea to keep this and the following configuration files on your NFS share.
raidtab:
# sample raiddev configuration file
#
# 'persistent' RAID1 setup with two mirrored disks and no spare disks:
#
raiddev /dev/md0
    raid-level              1
    nr-raid-disks           2
    nr-spare-disks          0
    persistent-superblock   1
    chunk-size              4
    device                  /dev/hda1
    raid-disk               0
    device                  /dev/hdc1
    raid-disk               1
#
# persistent RAID1 array with 0 spare disks.
#
raiddev /dev/md1
    raid-level              1
    nr-raid-disks           2
    nr-spare-disks          0
    persistent-superblock   1
    chunk-size              4
    device                  /dev/hda2
    raid-disk               0
    device                  /dev/hdc2
    raid-disk               1
Load the RAID module
modprobe raid1
Create the RAID partitions
mkraid /dev/md0
mkraid /dev/md1
mkraid also starts the RAID system. If you want to start an existing RAID partition use raidstart
/dev/md0.
You can check the RAID status with cat /proc/mdstat.
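On a healthy two-disk mirror the output should look roughly like this (the block counts are only examples and depend on your disk size):
Personalities : [raid1]
md1 : active raid1 hdc2[1] hda2[0]
      40017600 blocks [2/2] [UU]
md0 : active raid1 hdc1[1] hda1[0]
      25536 blocks [2/2] [UU]
unused devices: <none>
[UU] means both mirror halves are up; while the mirror is still synchronizing you will also see a resync progress line.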
Now you are ready to format the boot partition and to set up LVM.
5 Format the boot partition
For /boot we have already created a 25 MB RAID partition which is now running as /dev/md0. Until now this partition has no file system, so we now create an ext2 Linux file system on it.
mke2fs /dev/md0
That's all; you could already mount /dev/md0 now and access it.
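If you want to verify it, a quick test could look like this (/mnt/test is just an arbitrary temporary mount point):
mkdir -p /mnt/test
mount /dev/md0 /mnt/test
df -h /mnt/test
umount /mnt/test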
6 Starting LVM
Before creating or accessing any physical volumes you have to start the LVM subsystem. This is done with
vgscan
This command also recognizes existing volumes and creates /etc/lvmtab and /etc/lvmtab.d.
7 Create LVM physical volume
Now we create our LVM physical volume on /dev/md1. This physical volume will be used for our volume group. A volume group can consist of several physical devices, but we have only one in our case.
pvcreate /dev/md1
You should see the following output:
pvcreate -- physical volume "/dev/md1" successfully created
8 Create LVM volume group
This volume group will use all the space of our physical RAID1 partition /dev/md1.
Check whether /dev/md1 is a symbolic link; the following command does not work with symbolic links, so I had to use
vgcreate -A n vg00 /dev/md/1
This creates the volume group vg00. -A n deactivates autobackup of the configuration, which does not make sense here because we are actually working on a RAM disk. You should see the following output:
vgcreate -- INFO: using default physical extent size 4.00 MB
vgcreate -- INFO: maximum logical volume size is 255.99 Gigabyte
vgcreate -- WARNING: you don’t have an automatic backup of "vg00"
vgcreate -- volume group "vg00" successfully created and activated
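You can check the result with vgdisplay, which should report the volume group size (roughly the size of /dev/md1) and the free physical extents available for logical volumes:
vgdisplay vg00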
9 Create LVM logical volumes
LVM logical volumes can be used much like normal partitions: they can contain any file system and data and are mounted the same way. The big advantage is that with LVM you can resize them while they are in use, you can make snapshots for consistent data backups, and so on. I advise using ReiserFS as the file system, because after enlarging an LVM logical volume you must also resize the file system (or shrink the file system first when making the volume smaller), and at the moment ReiserFS is the only file system that supports this while the volume is in use (hot resizing).
The partition layout depends on your needs, but I advise creating separate volumes for /var, /tmp, /usr and /home. /var contains spooling data and log files and can consume all your disk space when something does not work correctly, and if you run out of disk space the system can hang. If /var is on its own volume the root file system is not affected. The same can happen with temporary files in /tmp and user data in /usr and /home, so it's always a good idea to separate them.
I'm using the following partition layout for LVM:
device            mount point   size
/dev/vg00/root    /             4 GB
/dev/vg00/home    /home         4 GB
/dev/vg00/usr     /usr          4 GB
/dev/vg00/opt     /opt          4 GB
/dev/vg00/tmp     /tmp          512 MB
/dev/vg00/var     /var          1 GB
/dev/vg00/swap    swap          512 MB
Create these logical volumes this way:
lvcreate -A n -L 4096 -n root vg00
lvcreate -A n -L 4096 -n home vg00
lvcreate -A n -L 4096 -n usr vg00
lvcreate -A n -L 4096 -n opt vg00
lvcreate -A n -L 512 -n tmp vg00
lvcreate -A n -L 1024 -n var vg00
lvcreate -A n -L 512 -n swap vg00
For each lvcreate command you should see the following output
lvcreate -- WARNING: you don’t have an automatic backup of vg00
lvcreate -- logical volume "/dev/vg00/root" successfully created
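To verify that all seven logical volumes exist and are active you can run, for example:
lvscan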
10 Create and initialize the swap space
Create swap space on logical volume /dev/vg00/swap.
mkswap /dev/vg00/swap
Activate it.
swapon /dev/vg00/swap
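You can verify that the swap space is active with:
cat /proc/swaps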
11 Create file systems
We are creating ReiserFS on our volumes.
mkreiserfs /dev/vg00/root
mkreiserfs /dev/vg00/home
mkreiserfs /dev/vg00/usr
mkreiserfs /dev/vg00/opt
mkreiserfs /dev/vg00/tmp
mkreiserfs /dev/vg00/var
12 Edit mountlist.txt
Mondo has created a file /tmp/mountlist.txt which reflects the old partitions that have been backed up. Copy this file to /tmp/isodir (NFS) so that you can edit it on another machine and adapt it to the new partition layout.
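For example:
cp /tmp/mountlist.txt /tmp/isodir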
mountlist.txt:
# device
mount point
file system
/dev/md0 /boot ext2 25600
/dev/vg00/swap swap swap 524288
/dev/vg00/root / reiserfs 4194304
/dev/vg00/home /home reiserfs 4194304
/dev/vg00/opt /opt reiserfs 4194304
/dev/vg00/tmp /tmp reiserfs 131072
/dev/vg00/usr /usr reiserfs 4194304
/dev/vg00/var /var reiserfs 1048576
size[KB]
Now copy it back to /tmp so that we can start the restore procedure.
cp /tmp/isodir/mountlist.txt /tmp
13 Restore all data to new partitions
Now we can use the mondorestore program to restore the data to our new partition layout.
mondorestore --interactive
You'll see the same dialog as after booting mindi. Choose Interactive -> NFS again and hit enter without entering a path.
The next dialog shows your new partition layout. You can then proceed to restore the data the normal way. If your partitions are big enough to hold the data this should work without any errors.
14 Mounting the partitions
After restoring the data and returning to the command line you can mount all partitions defined in mountlist.txt using the script:
mount-me
With unmount-me you can unmount all these partitions again, but before doing that we have to change some configuration files.
The new system is mounted in /mnt/RESTORING, but it still contains your old fstab and boot configuration. We need to change this so that you can boot it.
15 Edit fstab
First we have to adapt /mnt/RESTORING/etc/fstab to the new partition layout.
fstab:
/dev/vg00/swap swap swap pri=42 0 0
/dev/md0 /boot ext2 defaults 0 0
/dev/vg00/root / reiserfs defaults 0 0
/dev/vg00/usr /usr reiserfs defaults 0 0
/dev/vg00/tmp /tmp reiserfs defaults 0 0
/dev/vg00/opt /opt reiserfs defaults 0 0
/dev/vg00/var /var reiserfs defaults 0 0
/dev/vg00/home /home reiserfs defaults 0 0
devpts /dev/pts devpts mode=0620,gid=5 0 0
proc /proc proc defaults 0 0
usbdevfs /proc/bus/usb usbdevfs noauto 0 0
/dev/cdrom /media/cdrom auto ro,noauto,user,exec 0 0
16 Edit lilo.conf
Now we have to configure LILO to boot our RAID system. Change the file /mnt/RESTORING/etc/lilo.conf, or create it if it does not exist. LILO must be configured to use an initrd which contains the modules for RAID, LVM and ReiserFS; otherwise the root file system cannot be mounted. The creation of the initrd is shown in the next step.
lilo.conf:
# boot device is our first RAID device; the boot sector is written onto both disks in the partition
boot = /dev/md0
# extra boot to write the MBR onto both disks
raid-extra-boot=/dev/hda,/dev/hdc
map=/boot/map
default = linux
disk=/dev/hda
    bios=0x80
disk=/dev/hdc
    bios=0x81
lba32
menu-scheme = Wg:kw:Wg:Wg
prompt
read-only
timeout = 80
image = /boot/vmlinuz
    label = linux
    append = "splash=silent"
    initrd = /boot/initrd-lvm-2.4.20-4GB.gz
    root = /dev/vg00/root
    vga = 0x31a
image = /boot/vmlinuz.shipped
    label = failsafe
    append = "ide=nodma apm=off acpi=off vga=normal nosmp noapic maxcpus=0 3"
    initrd = /boot/initrd-lvm-2.4.20-4GB.gz
    root = /dev/vg00/root
    vga = 0x31a
17 Create initrd
There is a script lvmcreate_initrd which comes with LVM. It creates an initrd for your kernel that contains the necessary LVM module. But this alone is not enough, because the RAID module is missing, so we have to modify the resulting initrd. (I would have expected the ReiserFS module to be added by this script as well, because I was already using ReiserFS on my old system; but the module was missing from the created initrd, so it seems that only LVM support is added and all other modules you need have to be added manually.)
We must call the script chrooted, because we want to create the initrd on our target system, not on our RAM disk.
chroot /mnt/RESTORING lvmcreate_initrd
You can run it in verbose mode by adding the -v switch, so you can see which files are added.
This creates a file /mnt/RESTORING/boot/initrd-lvm-KERNELVERSION.gz. Make sure that you use the same name in your lilo.conf.
After decompressing the file you can mount it via the loopback device like any other device on a mount point of your choice.
gunzip initrd-lvm-2.4.20-4GB.gz
mkdir -p /mnt/initrd
mount -o loop initrd-lvm-2.4.20-4GB /mnt/initrd
This initrd contains a mini Linux with all modules and commands needed to set up LVM. We'll now extend it to also support RAID and ReiserFS. An ls -l in the mounted initrd should show the following contents:
drwxr-xr-x 8 root root 1024 2004-01-14 13:56 ./
drwxr-xr-x 3 root root 320 2004-01-14 16:55 ../
drwx------ 2 root root 1024 2004-01-14 13:08 bin/
drwxr-xr-x 10 root root 10240 2004-01-14 13:32 dev/
drwxr-xr-x 2 root root 1024 2004-01-14 16:16 etc/
drwxr-xr-x 3 root root 1024 2004-01-14 13:32 lib/
-r-xr-xr-x 1 root root 248 2004-01-14 13:56 linuxrc*
drwxr-xr-x 2 root root 1024 2004-01-14 13:32 proc/
drwx------ 2 root root 1024 2004-01-14 13:14 sbin/
During the boot process the initrd is loaded into memory and the file linuxrc it contains is executed. We have to change this linuxrc to load all needed kernel modules and to set up RAID and LVM.
linuxrc:
#!/bin/sh
export PATH=/bin:/sbin
echo ’Loading module reiserfs ...’
insmod reiserfs
echo ’Loading module raid1 ...’
insmod raid1
echo "raidautorun ..."
raidautorun
echo "done ..."
echo ’Loading module lvm-mod ...’
insmod lvm-mod
mount /proc
echo ’Starting LVM ...’
vgscan
vgchange -a y
echo ’done ...’
umount /proc
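If you replace linuxrc with an edited copy (for example one prepared on another machine via NFS), make sure the file stays executable, otherwise it cannot be run at boot time:
chmod 755 /mnt/initrd/linuxrc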
The commands insmod, modprobe, vgscan, vgchange, mount and umount are already added by the lvmcreate_initrd script. The problem is that loading the kernel modules does not work: "Kernel requires old modprobe, but couldn't run modprobe.old: No such file or directory"
So we have to add modprobe.old and insmod.old too. Just copy them to /sbin in the initrd.
cp /sbin/insmod.old /mnt/initrd/sbin
cp /sbin/modprobe.old /mnt/initrd/sbin
Furthermore we need raidautorun to set up the RAID system.
cp /sbin/raidautorun /mnt/initrd/sbin
The kernel module lvm-mod.o should already be present in /lib/modules/2.4.20-4GB/kernel/drivers/md of the initrd, but we also need raid1.o and reiserfs.o. Create the necessary directories below /lib/modules and copy the modules into the initrd.
initrd=/mnt/initrd/lib/modules/2.4.20-4GB/kernel   # to make the paths shorter
mkdir -p $initrd/drivers/md
mkdir -p $initrd/fs/reiserfs
cp /lib/modules/2.4.20-4GB/kernel/drivers/md/raid1.o $initrd/drivers/md
cp /lib/modules/2.4.20-4GB/kernel/drivers/md/lvm-mod.o $initrd/drivers/md
cp /lib/modules/2.4.20-4GB/kernel/fs/reiserfs/reiserfs.o $initrd/fs/reiserfs
Now all needed files are in the initrd. Unmount it, compress it again and copy it back to /boot on the target system (in this example the initrd was kept on the NFS share, hence the /tmp/isodir path).
umount /mnt/initrd
gzip initrd-lvm-2.4.20-4GB
cp /tmp/isodir/initrd-lvm-2.4.20-4GB.gz /mnt/RESTORING/boot
18 Install LILO boot loader
LILO must also be executed chrooted so that it reads and writes the directories of our target system mounted in /mnt/RESTORING, and not those of the currently running mindi system. LILO also needs access to /dev, so we have to make it available inside the chroot environment. This is done by:
mount --bind /dev /mnt/RESTORING/dev
Now we can run lilo to install the boot loader.
chroot /mnt/RESTORING lilo
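If everything is configured correctly, lilo should report that both boot images were added, roughly like this (the exact wording depends on your LILO version):
Added linux *
Added failsafe
If lilo complains about missing files, check the paths in lilo.conf against the contents of /mnt/RESTORING/boot.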
Congratulations! You are ready to boot your new LVM-on-RAID Linux system. Remove the mindi CD and press CTRL-ALT-DEL to reboot. Watch the kernel boot messages to spot possible problems.
19 Troubleshooting
If you have to reboot mindi because of some problem, you always have to perform the same steps to access your already created partitions. To make this easier I've written the following script; after configuring NFS and quitting mondorestore you only have to call /tmp/isodir/setup.sh.
setup.sh:
#!/bin/sh
cp /tmp/isodir/mountlist.txt /tmp
cp /tmp/isodir/raidtab /etc
modprobe raid1
raidstart --all
modprobe lvm-mod
vgscan
vgchange -a y vg00
mount-me
mount --bind /dev /mnt/RESTORING/dev