I have a somewhat complex home network setup consisting of separate 802.11n and 802.11b/g wireless routers, HomePlug AV (which is supposed to present a maximum throughput of 200Mb/s to its 100Mb ethernet jacks), and a mixture of gigabit ethernet and fast ethernet devices – but I’ve never actually checked to see how these connection methods differ.
Copying large amounts of data (approximately movie-sized, surprisingly enough 😉 ) via gigabit ethernet from my Mac Mini to my RAID storage array is stonkingly fast (in the order of 30 seconds or so) – even though I’ve read that the VIA EN12000G’s C7 processor can’t really keep up with sustained GbE transfers… although perhaps this is only a problem when running Windows, as my custom Linux install, which also has to cope with software RAID-5 parity calculations, shows low processor usage and appears to keep up just fine.
To perform a quick-and-dirty speed comparison, I thought I’d write 200MB of random data over the network whilst trying to keep everything else as quiescent as possible.
The command used was:
dd if=/dev/urandom bs=1024 count=$(( 200*1024 )) | ssh -c blowfish user@storagearray "time dd of=/storage/tmp/random.bin"
Using random data shouldn’t impose a high system load, but ensures that the data can’t easily be compressed; blowfish is a fast cipher; and if the task is run once to create the output file on the array first, then further runs shouldn’t take additional time whilst the filesystem searches for free extents. I guess I could just throw the data into /dev/null at the other end, but it is write speeds which are the main concern rather than simply how much data can be squirted down a pipe.
- Mac Mini (MacOS 10.5) via GbE: 35.8s, 44.75Mb/s.
- MacBook Pro (MacOS 10.5) via 802.11n/GbE: 42.4s, 37.73Mb/s.
- AMD Athlon (Linux 2.6) via HomePlug AV: 71.8s, 22.28Mb/s.
- SGI Octane (IRIX 6.5) via 802.11g and HomePlug AV: 763s, 2.1Mb/s.
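For what it’s worth, the Mb/s figures above are just 200MB × 8 bits divided by the elapsed seconds – a quick sanity check of the arithmetic, using the Mac Mini’s time as the example:

```shell
# Throughput in megabits/s for a 200MB transfer: 200 * 8 = 1600 megabits.
awk 'BEGIN { printf "%.2f Mb/s\n", 200 * 8 / 35.8 }'
# → 44.69 Mb/s (the small difference from 44.75 is just rounding in the elapsed time)
```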
All of these machines, including the eight-year-old Octane, managed to read /dev/urandom into /dev/null in more or less exactly 30 seconds… except for the Athlon, which took 58 seconds(!)
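That baseline is just the generation half of the pipeline with the network taken out of the picture – something along the lines of:

```shell
# How fast can the machine generate the test data at all?
# Reads 200MB from /dev/urandom and discards it.
dd if=/dev/urandom of=/dev/null bs=1024 count=$((200*1024))
```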
I’m now wondering if the Octane’s D-Link DWL-G810 is not working properly, and whether the C7 processor is holding all of the results down…
This would, admittedly, be a more useful test if the 802.11b/g wireless didn’t also have to go via the HomePlug connection. It’s an open question as to whether generating random data is any more or less valid than reading a pre-generated file from disc (which then pulls in the speed of the I/O subsystems too).
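If I were to try the pre-generated file approach, the local half would look something like this (the /tmp path is purely illustrative):

```shell
# Stage 200MB of random data on disc once...
dd if=/dev/urandom of=/tmp/random.src bs=1024 count=$((200*1024))
# ...then each run reads it back, folding the disc's read speed into the result.
time dd if=/tmp/random.src of=/dev/null bs=1024
```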
Aug 6 2008
Speed test: 802.11n vs. HomePlug AV
By Stuart • Internet, Technology 0