NSLU2-Linux

Debian/NSLU2 and LVM

Debian 4.0 (etch) supports the use of LVM partitions on the NSLU2, but a little work is required if you want to set up LVM partitions using the installer, or if you want your root (/) partition to reside on a logical volume. This page describes the procedures to get LVM working on the NSLU2.


Before You Begin

You should have a reasonable familiarity with Debian GNU/Linux and LVM, and it is recommended, but not necessary, that you add a serial port to your NSLU2 in order to debug problems.

Please note that this page is not a tutorial on how to configure LVM, although I have provided an example LVM configuration that I tested. More information (recommended reading) can be found on other web pages, e.g. [1], [2].


Debian Installer and LVM on the NSLU2

Follow Martin Michlmayr's installation instructions, until you get to the selection of installer components. To configure LVM volume groups and logical volumes with partman during installation of Debian, select the following installer components:

  • ext3-modules-2.6.18-4-ixp4xx-di: EXT3 filesystem support
  • md-modules-2.6.18-4-ixp4xx-di: RAID and LVM support
  • partman-ext3: Add to partman support for ext3
  • partman-lvm: Adds support for LVM to partman
  • scsi-core-modules-2.6.18-4-ixp4xx-di: Core SCSI subsystem
  • usb-storage-modules-2.6.18-4-ixp4xx-di: USB storage support

Then continue with the installation by creating the LVM volume groups and logical volumes according to the instructions in the Debian Installer Manual [2]. You can also refer to the next section of this page, which provides an example session of setting up LVM with partman. If you don't follow the example session, do ensure you set up a swap partition before you start the LVM configuration, otherwise partman will fail.

Once you have finished partitioning the drive, go to the section on this page titled "Before Rebooting". If you do not follow the steps in that section, your system will not boot.


Example Session of Creating LVM Partitions during Installation

Note: configuration of LVM volume groups and logical volumes using partman can cause the system to run out of memory. If this problem occurs, either restart the partitioner manually from the installer main menu and repeat until all volume groups and logical volumes are configured, or manually create the volume groups and logical volumes on your NSLU2 hard drive using another Linux machine, and then use the installer to assign these existing LVM volumes to system partitions (/, /usr, /home, etc.).
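If you take the manual route, the volume group and logical volumes used in this example can be created from any Linux machine with the LVM2 tools installed. A sketch, assuming the NSLU2 disk appears as /dev/sdb on that machine and that /dev/sdb2 is the partition set aside for LVM (adjust both names to match your system):

```shell
# Mark the partition as an LVM physical volume, then build the
# vg00/root/home layout used in the example session below.
pvcreate /dev/sdb2
vgcreate vg00 /dev/sdb2
lvcreate -L 15G -n root vg00       # 15 GB for /
lvcreate -l 100%FREE -n home vg00  # the remainder for /home
vgchange -ay vg00                  # activate the group so the LVs appear
```

The installer should then detect vg00 and offer its logical volumes for assignment to mount points.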

The following example is a modified version of the example at http://dev.jerryweb.org/raid/. Note that the example presented here is for reference only. You will probably want to adapt it to your needs.

Setup swap space

Swap space is required, and it has been found that the swap partition should be a logical partition as opposed to a primary partition.

From the screen "Partitioning disks" in the table of partitions and mount points, select:

  • SCSI1 (0,0,0) (sda) - 40.0 GB ST340014 A
    • Create new empty partition table on this device? <Yes>

Then from the table, select:

  •  >         pri/log   40.0 GB     FREE SPACE
    

and then,

  • Create a new partition
  • New partition size: 128MB
  • Type for the new partition: Logical
  • Location for the new partition: End
  • Use as: swap area
  • Done setting up the partition

You should now see the swap partition listed in the table:

 >      #5 logical  123.4 MB   f swap       swap

Create the physical volume for LVM

From the screen "Partitioning disks" in the table of partitions and mount points, select:

  •  >         pri/log   39.9 GB     FREE SPACE
    

and then,

  • Create a new partition
  • New partition size: 39.9 GB
  • Type for the new partition: Primary
  • Use as: physical volume for LVM
  • Done setting up the partition

The list of partitions should now look like this:

 >      #2 primary   39.9 GB   K lvm
 >      #5 logical  123.4 MB   F swap       swap

Configure LVM

From the screen "Partitioning disks", select

  • Configure the Logical Volume Manager (at the top of the menu)
    • Write the changes to disk <Yes>

The next screen is the main interface for configuring LVM. A summary of the current LVM configuration is shown at the top of the dialog box.

 Summary of current LVM configuration:

  Free Physical Volumes:  1
  Used Physical Volumes:  0
  Volume Groups:          0
  Logical Volumes:        0

LVM configuration action:

  • Create volume group
    • Volume group name: vg00
    • [*] /dev/sda2                      (39884MB)
      
    • <Continue>
 Summary of current LVM configuration:

  Free Physical Volumes:  0
  Used Physical Volumes:  1
  Volume Groups:          1
  Logical Volumes:        0
  • Create logical volume
    • Volume group:
                    vg00                           (39883MB)
      
    • Logical volume name: root
    • Logical volume size: 15GB
    • <Continue>
 Summary of current LVM configuration:

  Free Physical Volumes:  0
  Used Physical Volumes:  1
  Volume Groups:          1
  Logical Volumes:        1

You can display the current LVM configuration by selecting

  • Display configuration details
                     Current LVM configuration:
 Unallocated physical volumes:
   * none

 Volume groups:
   * vg00                                                 (39883MB)
     - Uses physical volume:         /dev/sda2            (39883MB)
     - Provides logical volume:      root                 (16106MB)
  • Create logical volume
    • Volume group:
                    vg00                           (23777MB)
      
    • Logical volume name: home
    • Logical volume size: 23777MB
    • <Continue>
 Summary of current LVM configuration:

  Free Physical Volumes:  0
  Used Physical Volumes:  1
  Volume Groups:          1
  Logical Volumes:        2

The current LVM configuration should look like:

                     Current LVM configuration:
 Unallocated physical volumes:
   * none

 Volume groups:
   * vg00                                                 (39883MB)
     - Uses physical volume:         /dev/sda2            (39883MB)
     - Provides logical volume:      home                 (23777MB)
     - Provides logical volume:      root                 (16106MB)
  • Finish

The partitioner will restart and return you to the main screen "Partitioning disks". If an error occurs at this stage, it is likely that you have not set up a swap partition.
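If you want to double-check the result before continuing, you can switch to the installer's shell (from the main menu, select "Execute a shell") and query LVM directly. A sketch, assuming the d-i environment provides the lvm wrapper binary:

```shell
# List physical volumes, volume groups and logical volumes.
lvm pvs
lvm vgs
lvm lvs
```

Type exit to return to the installer menu.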

Create the partitions

From the "Partitioning disks" screen, in the table of partitions and mount points, select

  • >      #1  23.8 GB
    under LVM VG vg00, LV home - 23.8 GB Linux device-mapper

and then,

  • Use as: Ext3 journaling file system
  • Mount point: /home - user home directories
  • Done setting up the partition

From the "Partitioning disks" screen, in the table of partitions and mount points, select

  • >      #1  16.1 GB
    under LVM VG vg00, LV root - 16.1 GB Linux device-mapper

and then,

  • Use as: Ext3 journaling file system
  • Mount point: / - the root file system
  • Done setting up the partition

Finish

To finish, select

  • Finish partitioning and write changes to disk

The screen will now give you a summary of what has been done and the prompt "Write changes to disks?". Select <Yes>.


Before Rebooting

If you have placed your root (/) partition in an LVM logical volume (as in the example above), you need to write a modified version of the APEX bootloader to the NSLU2 flash before rebooting. To do this, after the "Configuring flash memory to boot the system" and "Finishing the installation" screens, but before selecting <Continue> in the box with the heading "Finish the installation", switch to a shell by pressing <Esc>, and select the menu item "Execute a shell".

Download the modified copy of the MTD block image that contains APEX (this is the version of APEX shipped with etch).

Then write that file to the NSLU2 flash, and exit.

 ~ # cat etch-modified-mtdblock2.bin > /dev/mtdblock2
 ~ # exit
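If you want a sanity check that the image reached flash before exiting, you can read the block back and compare it with the file. A sketch, assuming the installer's BusyBox dd and cmp behave as shown (the read is limited to the image's size, since the flash partition may be larger than the file):

```shell
# Read back exactly as many bytes as the image file holds and compare.
size=$(wc -c < etch-modified-mtdblock2.bin)
dd if=/dev/mtdblock2 bs=1 count="$size" 2>/dev/null \
    | cmp - etch-modified-mtdblock2.bin && echo "flash write verified"
```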

Now select "Finish the installation" and <Continue>. Your system will reboot.

Background to using a Root Partition on an LVM Logical Volume

If the root (/) partition exists on an LVM logical volume, the rootdelay parameter must be added to the kernel command line. I have found that a rootdelay of 10 seconds works well. Without this parameter, the scripts in the initramfs will attempt to activate the LVM volume group using vgchange before the USB disk has had time to register with the kernel through udev [3]. If rootdelay is not specified, or is too short, the USB disk will not be ready, vgchange will not be able to activate the volume group, and the root partition will not be accessible. If this happens, there will be no network access to the NSLU2, and you will most likely have to start the installation again, unless you are skilled at building your own flash images.

The APEX configuration environment contained in etch-modified-mtdblock2.bin is:

 $ sudo apex-env printenv
 bootaddr *= 0x00008000
 cmdline = console=ttyS0,115200 rtc-x1205.probe=0,0x6f noirqdebug rootdelay=10
 cmdline-alt *= console=ttyS0,115200 rtc-x1205.probe=0,0x6f noirqdebug
 fis-drv *= nor:0x7e0000+4k
 kernelsrc *= fis://kernel
 kernelsrc-alt *= fis://kernel
 ramdiskaddr *= 0x01000000
 ramdisksrc *= fis://ramdisk
 ramdisksrc-alt *= fis://ramdisk
 startup *= copy -s $kernelsrc $bootaddr; copy -s $ramdisksrc $ramdiskaddr; wait 10 Type ^C key to cancel autoboot.; boot

This APEX configuration has a kernel command line that contains the rootdelay parameter set to 10 seconds which should be enough time for your USB disk to be registered. In addition to having the rootdelay parameter in the kernel command line, the MTD block is padded with 0xff, which will allow you to change the APEX configuration environment with apex-env. See ChangeKernelCommandLine for more information.
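For example, to lengthen the delay later from the running system, something like the following should work (a sketch; it assumes the apex-env tool from Debian's apex-nslu2 package, and note that you must supply the complete new command line, not just the changed parameter):

```shell
# Replace the whole cmdline variable, raising rootdelay to 20 seconds.
sudo apex-env setenv cmdline \
  "console=ttyS0,115200 rtc-x1205.probe=0,0x6f noirqdebug rootdelay=20"
sudo apex-env printenv cmdline   # confirm the new value
```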


Technical Information

Here is the fstab from a system installed using the procedure outlined above:

 $ cat /etc/fstab
 # /etc/fstab: static file system information.
 #
 # <file system>       <mount point>   <type>  <options>                  <dump>  <pass>
 proc                  /proc           proc    defaults                   0       0
 /dev/mapper/vg00-root /               ext3    defaults,errors=remount-ro 0       1
 /dev/mapper/vg00-home /home           ext3    defaults                   0       2
 /dev/sda5             none            swap    sw                         0       0
 /dev/sda2             /media/usb0     auto    rw,user,noauto             0       0
 /dev/sda5             /media/usb1     auto    rw,user,noauto             0       0

If you are manually editing fstab, note that you need to specify the root LV as /dev/mapper/vg00-root; /dev/vg00/root won't work.


Software RAID

The procedure for installing Debian on a software RAID device is probably very similar to that described above, but I have not tried it. The only difference is the set of installer components one selects. I guess that one would select:

  • ext3-modules-2.6.18-4-ixp4xx-di: EXT3 filesystem support
  • md-modules-2.6.18-4-ixp4xx-di: RAID and LVM support
  • partman-ext3: Add to partman support for ext3
  • partman-md: Add to partman support for MD
  • scsi-core-modules-2.6.18-4-ixp4xx-di: Core SCSI subsystem
  • usb-storage-modules-2.6.18-4-ixp4xx-di: USB storage support
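For reference, creating a RAID1 array by hand with mdadm looks roughly like this (untested on the NSLU2; the two device names are assumptions for a hypothetical two-disk setup):

```shell
# Mirror two partitions into /dev/md0, then put ext3 on it.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
mkfs.ext3 /dev/md0
```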

References

[1] http://tldp.org/HOWTO/LVM-HOWTO/
[2] http://www.debian.org/releases/stable/arm/ch06s03.html.en#di-partition
[3] http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=401916

Last edited by marvin.
Based on work by benny and dumfrac.
Originally by dumfrac.
Page last modified on August 06, 2008, at 07:57 PM