NSLU2-Linux

How to solve performance problems

This howto describes what performance you can expect from a slug and how to solve problems when your performance is lower.

We will start by measuring the important performance numbers, so that we can tell what is really slow for this hardware apart from what only appears to be slow.

Raw network performance

The slug should be able to handle 100 Mbit of network traffic. You can measure the network performance with netio.

Install netio on the slug with "ipkg install netio", then start it on the slug with "netio -s". Install the same version of netio on your client system as well and run it with "netio -t <slug>" for the TCP test and "netio -u <slug>" for the UDP test.
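
In short, the sequence is (assuming the slug is reachable under the hostname "slug"):

 (slug)   # ipkg install netio     install the benchmark
 (slug)   # netio -s               run netio as the server
 (client) $ netio -t slug          TCP test from the client
 (client) $ netio -u slug          UDP test from the client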

The results I get:

 $ netio -t slug

 NETIO - Network Throughput Benchmark, Version 1.23
 (C) 1997-2003 Kai Uwe Rommel

 TCP connection established.
 Packet size  1k bytes:  11441 KByte/s Tx,  11411 KByte/s Rx.
 Packet size  2k bytes:  11464 KByte/s Tx,  11389 KByte/s Rx.
 Packet size  4k bytes:  11465 KByte/s Tx,  11410 KByte/s Rx.
 Packet size  8k bytes:  11474 KByte/s Tx,  11405 KByte/s Rx.
 Packet size 16k bytes:  11475 KByte/s Tx,  11433 KByte/s Rx.
 Packet size 32k bytes:  11464 KByte/s Tx,  11423 KByte/s Rx.
 Done.

 $ netio -u slug

 NETIO - Network Throughput Benchmark, Version 1.23
 (C) 1997-2003 Kai Uwe Rommel

 UDP connection established.
 Packet size  1k bytes:  11438 KByte/s (0%) Tx,  11458 KByte/s (0%) Rx.
 Packet size  2k bytes:  11482 KByte/s (0%) Tx,  11498 KByte/s (0%) Rx.
 Packet size  4k bytes:  11669 KByte/s (0%) Tx,  11650 KByte/s (0%) Rx.
 Packet size  8k bytes:  11684 KByte/s (0%) Tx,  11609 KByte/s (0%) Rx.
 Packet size 16k bytes:  11687 KByte/s (0%) Tx,  11509 KByte/s (1%) Rx.
 Packet size 32k bytes:  11715 KByte/s (0%) Tx,  11309 KByte/s (3%) Rx.
 Done.

That is 100 Mbit performance for TCP and UDP in both directions!

How to solve problems:

  • Use "netstat -i" to see if you have collisions or errors.
  • Use a crossover cable to rule out problems with your switch or cables.
  • Test from another system to see whether the problem is the same from all connected systems on the network.
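
A minimal sketch for the first check (the exact columns and counter names vary between netstat and ifconfig versions):

 # netstat -i          per-interface packet and error counters
 # ifconfig eth0       look for the "collisions:" and "errors:" counters

Rising error or collision counters usually point at a duplex mismatch or bad cabling rather than at the slug itself.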

Raw disk performance

The slug is able to read 12 MB/s from disk and flash devices over the USB bus. You can measure this with the "dd" command from coreutils, which prints the transfer speed when it finishes. You need to run this on the device that shows the problems; you can find the device names with the "df" command.

The results I have are:

  # dd if=/dev/sda of=/dev/null bs=4k conv=sync count=100000
  100000+0 records in
  100000+0 records out
  409600000 bytes (410 MB) copied, 32.5186 seconds, 12.6 MB/s

How to solve problems:

  • Check dmesg for errors
  • Try a powered USB hub.
    • [added 2011/6/12] If you have a 3.5" USB enclosure with two USB plugs, connect both of them to deliver maximum electrical power to your hard drive - I saw a performance gain of 43%, from 6.9 MB/s to 9.9 MB/s. The drawback is, of course, that you can then only hook up one of those hard drives to the slug.

Filesystem performance

The speed of the filesystem depends on many more factors:

  • Type of the filesystem
  • Small files or big files
  • Large number of files in one directory
  • Disk fragmentation
  • Mount options

Type of the filesystem

Unslung Linux supports the following filesystems:

  • ext2/ext3
    • The default filesystems for Linux
    • The filesystem used on the native disk
    • A normal disk with an ext3 filesystem isn't mounted automatically.
  • fat
    • The filesystem of DOS and Win95
  • ntfs
    • The filesystem of NT/W2K/XP/Vista
    • The Linksys driver for Unslung is buggy, and as a result your slug will hang under heavy write access.
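
You can check which filesystem type and mount options each partition uses with the "mount" command. A sketch of what that might look like on a stock Unslung setup (device names and mount points depend on your configuration):

 # mount | grep sda
 /dev/sda1 on /share/hdd/data type ext3 (rw)
 /dev/sda2 on /share/hdd/conf type ext3 (rw,sync)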

Creating a large file

With the dd command we are able to create a large file and see how long it takes.

The results I get on the native data disk:

 # dd if=/dev/zero of=big1 bs=4k count=10240
 10240+0 records in
 10240+0 records out
 41943040 bytes (42 MB) copied, 5.45654 s, 7.7 MB/s

The results on the conf disk:

 # dd if=/dev/zero of=big1 bs=4k count=1024 
 1024+0 records in
 1024+0 records out
 4194304 bytes (4.2 MB) copied, 22.0186 s, 190 kB/s

The difference in the results is caused by the "sync" mount option on the "conf" filesystem. This option tells the kernel to write everything directly to disk, without any delayed writes.

How to solve problems:

  • Don't use ntfs.
  • Check that the filesystems are mounted "async" (see the sketch below).
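
A minimal sketch for that check, assuming the data disk is mounted on /share/hdd/data (adjust the path to your own mount point):

 # mount | grep sda                          "sync" in the option list means synchronous writes
 # mount -o remount,async /share/hdd/data    remount with delayed writes enabled

The "conf" filesystem is mounted "sync" in the stock setup, so think twice before changing that one.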

Creating a large number of files

With a small shell script we are able to create a large number of files. This will take some time, because a lot of metadata is involved which has to be written to the disk. Another problem is that it takes longer to process the information in a directory with many files, so creating an extra file takes longer in a full directory.

I will create 4000 files in one directory with the following script:

 #!/opt/bin/bash

 mkdir cftest
 cd cftest
 for i in 0 1 2 3
 do
        # brace expansion creates 1000 files (t<i>000 .. t<i>999) per pass
        time (touch t${i}{0,1,2,3,4,5,6,7,8,9}{0,1,2,3,4,5,6,7,8,9}{0,1,2,3,4,5,6,7,8,9}; sync)
 done

The result from this script is:

 real    0m3.516s
 user    0m0.210s
 sys     0m3.270s

 real    0m5.505s
 user    0m0.220s
 sys     0m5.270s

 real    0m7.708s
 user    0m0.240s
 sys     0m7.470s

 real    0m9.546s
 user    0m0.250s
 sys     0m9.280s

Each pass takes about 2 seconds longer to create the next 1000 files!

How to solve problems:

  • Divide your files over more than one directory (see the sketch below).
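
A minimal sketch of one way to do that, bucketing files into 16 subdirectories by a hash of their name (the "*.dat" pattern and the bucket names are just examples):

 #!/opt/bin/bash
 # Spread files into bucket_0 .. bucket_f by the first hex digit of an md5 hash.
 for f in *.dat
 do
        d=$(echo "$f" | md5sum | cut -c1)
        mkdir -p "bucket_$d"
        mv "$f" "bucket_$d/"
 done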

Samba performance

We can run comparable tests over Samba to measure the performance of the network share. This will only work from a Unix-like system with the right software installed to run these tests.
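
A sketch of the client-side setup on Linux (the share name, mount point and user are placeholders; older clients use "-t smbfs" instead of "-t cifs"):

 # mkdir -p /mnt/slug
 # mount -t cifs //slug/DISK1 /mnt/slug -o user=admin
 # cd /mnt/slug

After that, the same dd command and file-creation script as above can be run against the share.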

The results from this are:

 # dd if=/dev/zero of=big1 bs=4k count=4096
 4096+0 records in
 4096+0 records out
 16777216 bytes (17 MB) copied, 4.25143 s, 3.9 MB/s

The result of the script after creating 2000 files:

 real    1m3.582s
 user    0m0.008s
 sys     0m0.004s

 real    2m46.872s
 user    0m0.004s
 sys     0m0.020s

We see that creating a large number of files in one directory over Samba really takes a lot of time!

The same issue applies to listing directories that contain a large number of files. It is good advice to keep the number of files in any one directory below 1000.

There are probably ways to run these tests under Windows as well. I hope somebody will write down how to do that.

How to solve problems:

  • Check with "top" that no other processes are using the CPU.