NSLU2-Linux
This article demonstrates how to set up a simple RAID-1 (mirrored) external rootfs with SlugOS.

Prerequisites

  • A functional NSLU2
  • A pair of working flash memory sticks, or powered external disk drives
  • A suitable system from which you can install the firmware and log in to the NSLU2 using an appropriate SSH client

Why?

Reliability. The primary reason that simple RAID-1 (mirroring) has been provided in the base SlugOS image is so that users concerned about failures of their USB flash-memory stick can rest easily.

It is not intended to improve performance. In fact, due to the limitations of the I/O subsystem on the NSLU2, mirroring can be expected to cause a significant performance drop.

Also, it should be noted that RAID-1 is not non-stop technology. It merely protects the data already on the storage devices by having multiple copies; there is no guarantee that a failing device will not fail in a manner that will crash the host operating system.

Nevertheless, this solution is simple enough, and storage has become inexpensive enough, that using RAID-1 with SlugOS is likely to become a very common option.

Prepare the NSLU2 and Flash the Base Firmware

Begin by removing all USB devices from the NSLU2, and flash a fresh copy of SlugOS using your favorite method. Boot the device, log in via ssh, and do the basic network setup using "turnup init" in the normal fashion.

See InstallandTurnupABasicSlugOSSystem for details, if necessary.

Prepare the External Storage

For this example, we will be using a pair of similar (but not identical) USB memory sticks. We'll use them as they came from the manufacturer -- meaning a single partition, formatted as FAT or FAT32.

To begin, we simply plug both devices into the NSLU2. Make sure you have no other storage devices plugged in -- not only would that be completely unnecessary, it would be error-prone as well.

SlugOS will detect the new devices, and you should see both automagically mounted in /media. Step one is to unmount those partitions, as we cannot work with them if they are busy:

# mount
rootfs on / type rootfs (rw)
/dev/root on / type jffs2 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
/dev/root on /dev/.static/dev type jffs2 (rw)
udev on /dev type tmpfs (rw,size=2048k,mode=755)
tmpfs on /var/volatile type tmpfs (rw,mode=755)
tmpfs on /dev/shm type tmpfs (rw,mode=777)
usbfs on /proc/bus/usb type usbfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
/dev/sda1 on /media/sda1 type vfat (rw,sync,fmask=0022,dmask=0022,codepage=cp437,iocharset=utf8)
/dev/sdb1 on /media/sdb1 type vfat (rw,sync,fmask=0022,dmask=0022,codepage=cp437,iocharset=utf8)
# umount /dev/sda1
# umount /dev/sdb1
# mount
rootfs on / type rootfs (rw)
/dev/root on / type jffs2 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
/dev/root on /dev/.static/dev type jffs2 (rw)
udev on /dev type tmpfs (rw,size=2048k,mode=755)
tmpfs on /var/volatile type tmpfs (rw,mode=755)
tmpfs on /dev/shm type tmpfs (rw,mode=777)
usbfs on /proc/bus/usb type usbfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
#
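Before creating the array, it's worth confirming that the kernel really does see both member partitions. A minimal sketch of such a check follows; the `count_parts` helper is our own invention (not part of SlugOS or mdadm), and it simply counts matching entries in a /proc/partitions-style listing:

```shell
# count_parts: count partition names matching a grep pattern in a
# /proc/partitions-style listing. Helper name is our own invention.
count_parts() {
    grep -c "$2\$" "$1"
}

# Usage (on the slug itself):
#   count_parts /proc/partitions 'sd[ab]1'   # expect "2"
```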

Create the Metadisk

Use the mdadm command to create the metadisk (/dev/md<n>) device. Basically, mdadm writes a signature to the two partitions we give it, marking them as members of a metadisk, and then creates the necessary in-kernel metadisk device. We will then be able to work with the metadisk device.

It is entirely possible to work with nothing but a set of partitions that are "marked" in this manner; given the correct arguments, mdadm can re-assemble a metadisk by scanning the available storage devices for metadisk components -- and indeed, SlugOS uses this technique as its preferred way to boot. However, certain things are a lot easier if we create a config file for mdadm that describes the setup in a more-or-less human-readable format. We can actually have mdadm write its own config file after we have created the metadisk. We'll continue with the example:

# mdadm --create --auto=yes /dev/md0 --level=1 --raid-devices=2 \
/dev/sda1 /dev/sdb1
mdadm: largest drive (/dev/sda1) exceed size (124544K) by more than 1%
Continue creating array? y
mdadm: array /dev/md0 started.

# mdadm --detail --scan
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=a193c289:2d398359:74fd7c57:42d7d108

# mdadm --detail --scan >/etc/mdadm.conf
#
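With that file in place, mdadm can later re-assemble the array non-interactively via `mdadm --assemble --scan`. As a quick sanity check on the file we just wrote, we can pull the array device names back out of it; the `list_arrays` helper below is our own illustration, assuming the one-line ARRAY format that "mdadm --detail --scan" emits:

```shell
# list_arrays: print the device path from each ARRAY line of an
# mdadm.conf-style file. Helper name is our own invention.
list_arrays() {
    awk '$1 == "ARRAY" { print $2 }' "$1"
}

# Usage:
#   list_arrays /etc/mdadm.conf   # should print /dev/md0
```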

Write the New Filesystem

Now all that remains is to use the standard mkfs.ext3 utility to initialize the filesystem on the new metadisk, and turnup to it in the normal fashion:

# mkfs.ext3 /dev/md0
mke2fs 1.38 (30-Jun-2005)
Filesystem label=
OS type: Linux
Block size=1024 (log=0)
Fragment size=1024 (log=0)
31232 inodes, 124544 blocks
6227 blocks (5.00%) reserved for the super user
First data block=1
16 block groups
8192 blocks per group, 8192 fragments per group
1952 inodes per group
Superblock backups stored on blocks:
        8193, 24577, 40961, 57345, 73729

Writing inode tables: done
Creating journal (4096 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 38 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.
#

Turnup

Doing the turnup operation is slightly complicated by a bug in the busybox mount command. Cut and paste the following command exactly as it appears, in order to create a patched copy of the turnup utility that will "dodge" the busybox bug:


sed s/' UUID="$uuid" '/' "$device" '/ </sbin/turnup >/tmp/turnup-patched

Then in order to do the turnup, run the patched version instead, as illustrated below:

# sed s/' UUID="$uuid" '/' "$device" '/ </sbin/turnup >/tmp/turnup-patched
# sh /tmp/turnup-patched memstick -i /dev/md0
/tmp/turnup-patched: umounting any existing mount of /dev/mtdblock4
turnup: copying root file system
17159 blocks
done
rootdir=/tmp/rootfs.1285
table='/tmp/flashdisk.1285/etc/device_table'
#
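For the curious, the patch is tiny: the substitution swaps turnup's mount-by-UUID argument for a plain mount-by-device argument, which is what lets it dodge the busybox mount bug mentioned above. The effect can be demonstrated on a sample line -- note the mount invocation here is illustrative, shaped like the one inside turnup but not copied from it:

```shell
# Illustrative input only -- a mount line shaped like the one inside
# turnup, not copied from it. The sed expression is the same one used
# to build /tmp/turnup-patched above.
echo 'mount -t "$fstype" UUID="$uuid" "$dir"' |
    sed s/' UUID="$uuid" '/' "$device" '/
# prints: mount -t "$fstype" "$device" "$dir"
```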

Reboot.

Once you hear the three beeps indicating that the boot is complete, log in and verify that the root ("/") filesystem is mounted from /dev/md0:

# mount
rootfs on / type rootfs (rw)
/dev/root on /initrd type jffs2 (ro)
/dev/md0 on / type ext3 (rw,noatime,errors=continue,data=ordered)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
/dev/md0 on /dev/.static/dev type ext3 (rw,noatime,errors=continue,data=ordered)
udev on /dev type tmpfs (rw,size=2048k,mode=755)
tmpfs on /var/volatile type tmpfs (rw,mode=755)
tmpfs on /dev/shm type tmpfs (rw,mode=777)
usbfs on /proc/bus/usb type usbfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
#
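If you'd rather script that check than eyeball the mount table, the root device can be extracted from the mount output. `root_dev` is our own little helper, assuming the "device on dir type ..." output format shown above:

```shell
# root_dev: print the device mounted on /, skipping the rootfs
# pseudo-entry. Helper name is our own invention.
root_dev() {
    awk '$3 == "/" && $1 != "rootfs" { print $1 }'
}

# Usage:
#   mount | root_dev   # should print /dev/md0 after the turnup
```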

How is it Doing?

The easiest way to check on the health of your RAID storage is to look at the contents of /proc/mdstat.

This is the normal, steady-state (note the "UU"):

# cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sda1[0] sdb1[1]
      124544 blocks [2/2] [UU]

unused devices: <none>
#

Here's what it looks like when the cat has played with the USB cables, and one of the two USB storage devices is completely disconnected:

# cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sda1[0] sdb1[2](F)
      124544 blocks [2/1] [U_]

unused devices: <none>
#
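Those two states are easy to distinguish mechanically: any underscore inside the status brackets means a missing member. That makes a tiny health check suitable for running from cron; the `check_md` helper below is our own sketch, not part of SlugOS:

```shell
# check_md: print OK or DEGRADED based on a /proc/mdstat-style file.
# An underscore in the status brackets (e.g. [U_]) marks a dead member.
# Helper name is our own invention.
check_md() {
    if grep -q '_]' "$1"; then
        echo "DEGRADED"
    else
        echo "OK"
    fi
}

# Usage (on the slug):
#   check_md /proc/mdstat
```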

After reconnecting the missing device, the data on it is outdated and needs to be brought back into sync. We use mdadm to tell the kernel to begin that process. First, we use the "dmesg" command to figure out what the device name is (it may not be /dev/sdb1, as it originally was, since the kernel can assign any device name it wishes):

# dmesg | tail -16
usb 1-2: configuration #1 chosen from 1 choice
scsi2 : SCSI emulation for USB Mass Storage devices
usb-storage: device found at 4
usb-storage: waiting for device to settle before scanning
scsi 2:0:0:0: Direct-Access I-Stick2 IntelligentStick 2.00 PQ: 0 ANSI: 2
ready
sd 2:0:0:0: [sdc] 249344 512-byte hardware sectors (128 MB)
sd 2:0:0:0: [sdc] Write Protect is off
sd 2:0:0:0: [sdc] Mode Sense: 03 00 00 00
sd 2:0:0:0: [sdc] Assuming drive cache: write through
sd 2:0:0:0: [sdc] 249344 512-byte hardware sectors (128 MB)
sd 2:0:0:0: [sdc] Write Protect is off
sd 2:0:0:0: [sdc] Mode Sense: 03 00 00 00
sd 2:0:0:0: [sdc] Assuming drive cache: write through
 sdc: sdc1
sd 2:0:0:0: [sdc] Attached SCSI removable disk
usb-storage: device scan complete
# mdadm /dev/md0 --add /dev/sdc1
mdadm: re-added /dev/sdc1
# cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sdc1[2] sda1[0] sdb1[3](F)
      124544 blocks [2/1] [U_]
      [====>................]  recovery = 22.9% (28800/124544) finish=0.2min speed=7200K/sec

unused devices: <none>
# cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sdc1[1] sda1[0] sdb1[2](F)
      124544 blocks [2/2] [UU]

unused devices: <none>
#

Don't worry about the fact that /proc/mdstat still lists sdb1 as a failed member of the array; that will be cleaned up at the next reboot.

Last edited by mwester.
Originally by mwester.
Page last modified on March 08, 2009, at 05:31 PM