NetBSD Sets Internet2 Land Speed World Record
Daniel de Kok writes "Researchers at the Swedish University Network
(SUNET) have beaten the Internet2 Land Speed Record using two Dell 2650 machines with single 2 GHz CPUs running NetBSD 2.0 Beta. SUNET transferred around
840 gigabytes of data in less than 30 minutes, using a single IPv4 TCP stream, between a host at the Luleå
University of Technology and a host connected to a Sprint PoP in San Jose, CA, USA. The
achieved speed was 69.073 petabit-meters/second. According to the research team, NetBSD was chosen 'due to the scalability of the TCP code.'"
"More information about this record including the NetBSD configuration can be found at:
http://proj.sunet.se/LSR2/
The website of the Internet2 Land Speed Record (I2-LSR) competition is located at:
http://lsr.internet2.edu/"
Why TCP... (Score:2, Interesting)
compression (Score:4, Interesting)
Did they check for any inband compression? The data they're sending isn't randomised.
466 MB/s (Score:5, Interesting)
Well, not having RTFA... (Score:2, Interesting)
Plus, I'm betting it's not a "land" speed record, seeing as how the data probably jumps through the air (satellite/microwave transmissions) at one or more points. (Not to mention the fact that being on, over, or under the surface of land or water means nothing to a data cable.)
I've always wondered about Internet2 (Score:3, Interesting)
Can we get a Uhaul trailer? (Score:5, Interesting)
At 9.4 GB per DVD (assume single-layer, double-sided DVD-R) and a travel time of 3 weeks from Sweden to California (two weeks on the boat, one week of driving), you'd need about 90,000 DVDs in your station wagon to match an effective 1680 GB/hr. That wouldn't be possible if they were in cases, but if it were just the discs, it's probably a close call. Might have to upgrade to dual-layer DVDs, or change the saying to "an SUV full of DVDs".
On the other hand, if you count the time to actually read the data off the DVDs (even worse if you count the time to put the data on them in the first place), the station-wagon-of-DVDs barrier was broken long ago: you probably couldn't spin a DVD fast enough to get 9.4 GB of data off it in 20 seconds.
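The comment's arithmetic holds up; a throwaway sketch, using exactly the numbers the comment assumes (9.4 GB per disc, 3 weeks in transit, 840 GB per half hour):

```python
# Back-of-the-envelope "station wagon full of DVDs" bandwidth.
# All figures are the comment's own assumptions, not measured values.

disc_gb = 9.4                 # single-layer, double-sided DVD-R
hours = 3 * 7 * 24            # three weeks Sweden -> California
target_gb_per_hr = 840 / 0.5  # 1680 GB/hr, the record's effective rate

discs_needed = target_gb_per_hr * hours / disc_gb
print(f"discs needed: {discs_needed:.0f}")  # roughly 90,000, as claimed
```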
petabyte-meters!? (Score:2, Interesting)
Re:Well, not having RTFA... (Score:3, Interesting)
One of the biggest problems in networking is handling a large bandwidth-delay product (that's the amount of data in flight at once). Since distance increases the delay it is relevant.
Plus, I'm betting it's not a "land" speed record, seeing as how the data probably jumps through the air (satellite/microwave transmissions) at one or more points.
Nope. Think about it: what kind of wireless connection can handle 4 Gbps?
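The bandwidth-delay product mentioned above can be made concrete. A minimal sketch, with both figures assumed by me (a ~4 Gb/s link, as the reply estimates, and a ~180 ms round-trip time, plausible for a Luleå-to-San-Jose path):

```python
# Bandwidth-delay product: how much data must be "in flight"
# (sent but not yet acknowledged) to keep the pipe full.
# Both numbers are assumptions for illustration.

rate_bps = 4e9   # ~4 Gb/s link
rtt_s = 0.180    # ~180 ms round trip, Sweden <-> California

bdp_bytes = rate_bps * rtt_s / 8
print(f"BDP: {bdp_bytes / 1e6:.0f} MB of unacknowledged data")
```

At that size, TCP's classic 64 KB maximum window would leave the link idle almost all the time, which is why window scaling (RFC 1323) and a stack that handles very large windows well matter so much for this kind of record.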
Means nothing (Score:1, Interesting)
Re:compression (Score:2, Interesting)
If any of the intervening links employs compression, then the transfer rates are artificial as they represent a function of the data, not a function of the network engineering. This doesn't make for a valid comparison against the other participants.
I looked at the rules and they mention nothing about the type of data, nor about the presumptions of the traffic path.
I concede there is less likelihood of compression being in effect on a gigabit network, due to the compute power required, but it is a question that bears asking.
The assumption could be tested by a transfer using (a) packets consisting entirely of the same character [i.e. highly compressible], or (b) entirely pseudo-random data [i.e. not at all compressible].
In fact, the rules don't require that the transfer be validated against this problem, nor that other network effects be averaged out; the rules should state something like "average of 3 back-to-back transfers, each utilising substantially randomised data".
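The two payload types the comment proposes are easy to demonstrate. In this sketch zlib merely stands in for whatever compression a hypothetical link device might apply; the payload size is illustrative:

```python
# Probe payloads for detecting in-band link compression, per the
# comment: (a) a highly compressible all-same-character buffer and
# (b) an incompressible pseudo-random one. zlib is a stand-in for
# any compression the link gear might do; sizes are illustrative.

import os
import zlib

size = 1 << 20                   # 1 MiB test payloads
compressible = b"A" * size       # entirely the same character
random_data = os.urandom(size)   # pseudo-random bytes

ratio_a = len(zlib.compress(compressible)) / size
ratio_b = len(zlib.compress(random_data)) / size
print(f"same-character payload shrinks to {ratio_a:.4f} of original")
print(f"random payload stays near        {ratio_b:.4f} of original")
```

A large timing gap between transferring the two payloads would suggest compression somewhere on the path; near-identical times would support the record's validity.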
DOSed (Score:4, Interesting)
Google padding (Score:3, Interesting)
I live there (Score:1, Interesting)
Why NetBSD was chosen (Score:3, Interesting)
The REAL reason for why they picked NetBSD is that Ragge (Anders Magnusson), the person doing a fair chunk of the testing, is heavily involved in the project and knows the code base. It was simply easiest to work with for him.
Re:Correct me if I'm wrong... (Score:4, Interesting)
Surely someone's seen the "released" Windows code and can now tell whether it is BSD based or not.
Re:Linux Stack vs. *BSD stacks (Score:3, Interesting)
Linux has often been used to set records. The sure way to see Linux trashing BSD is to add more CPUs. Linux scales tolerably well to 512 processors now! The Linux IP stack is very well suited to SMP.
This NetBSD record is really about having insanely great Internet connections separated by thousands of miles.
Long ago, the Linux developers did look into adopting the BSD stack. At the time though, the BSD stack was incompatible with the GPL. Alan Cox asked Berkeley to re-license under the GPL, and was turned down. At this point in time, using the BSD code wouldn't make any sense.
Re:Linux Stack vs. *BSD stacks (Score:2, Interesting)
512-way SMP (Score:1, Interesting)
" How about some evidence of that? Where is this 512 way smp machine running linux?"
Thought I was bluffing, did you? :-)
It's the SGI Altix. In a linux-kernel post [lkml.org] just today, an Altix user says that "Overall, linux scales to 512p much better than I would have predicted." This system runs with Itanium-II processors BTW.
So there you go, Linux handling a 512-way box tolerably well. Linux screams on an IBM 64-way box, with Xeon or Power 4 processors.
Re:Correct me if I'm wrong... (Score:3, Interesting)
By the way, this isn't the first time I've heard that NetBSD's TCP/IP stack is the superior of the three. I once met the head of networking for a semiconductor test equipment company that did extensive tests between all three of the BSDs and Linux, and he said that NetBSD was the clear winner in TCP/IP performance.
Re:Not entirely accurate for 'normal usage'. (Score:1, Interesting)
Others have suggested that disk speeds cannot sustain that rate. However, supercomputer disk arrays can easily keep up (4 GB/s or 32 Gb/s [sandia.gov]).
Finally, it is possible to achieve nearly the same result (multiple streams instead of a single stream) transferring real data (23.23 Gb/s [sc-conference.org]).
[Bias alert: I am a member of the team that set a previous Internet2 Land Speed Record, Guinness World Record [guinnessworldrecords.com] and won the "Bandwidth Lust: Distributed Particle Physics Analysis Using Ultra-High Speed TCP on The Grid" or "Moore's law move over" award at SC2003 [sc-conference.org].]
Now, before you complain that the technology is not available to "mere mortals," let me point out that we first started experimenting with 1 Gb/s Ethernet at work 5 years ago. Now it is readily available at reasonable prices [opsinas.com] for consumer desktop machines. (Apple has had it standard in G4 desktops for 4 years [apple.com].) The problem is not with consumer hardware; it is having access to true broadband (not cable modem or DSL), at least in the USA. Although your LAN may support 1 Gb/s, your download speed is limited to 1-3 Mb/s (cable) or 256-786 Kb/s (DSL). (Your upload speeds are significantly lower.) Since the link provider has very little incentive to upgrade service, I doubt that will change very quickly.
So, yes it is possible. No you can't have it (yet)!
nah, that's old, and kind of wrong anyway (Score:2, Interesting)
The same goes for doing a copy on transmission. BSD has generally hidden a software checksum and/or copy in the driver, because older hardware didn't support scatter-gather and checksum offload. Linux didn't hide it. Note that the checksum comes free (seriously!) when doing a copy, since you need to access the memory anyway. Now that cards with scatter-gather and checksum offload are common enough to care about, Linux can take advantage of this for "zero-copy transmit". (Obviously, the network transmit is itself a copy, and that copy is the whole point of transmitting.)
Zero-copy receive, in the BSD style, is a way to kill SMP scalability. It involves remapping pages, which leads to cross-CPU interrupts to invalidate the old mapping. It's cheaper to copy the data.
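The "checksum comes free with the copy" point can be sketched in userspace. This is purely illustrative (not kernel code): an Internet-style 16-bit ones'-complement sum computed inside the copy loop, so each byte is touched exactly once:

```python
# Illustrative only: fold a 16-bit ones'-complement (Internet-style)
# checksum into the copy loop, so the data is read once for both jobs.
# Real stacks do this in C per cache line; the structure is the point.

def copy_with_checksum(src: bytes) -> tuple[bytearray, int]:
    dst = bytearray(len(src))
    total = 0
    for i in range(0, len(src) - 1, 2):
        dst[i] = src[i]                       # the "copy" half
        dst[i + 1] = src[i + 1]
        total += (src[i] << 8) | src[i + 1]   # the "free" checksum half
    if len(src) % 2:                          # trailing odd byte
        dst[-1] = src[-1]
        total += src[-1] << 8
    while total >> 16:                        # fold carries back in
        total = (total & 0xFFFF) + (total >> 16)
    return dst, ~total & 0xFFFF

data = b"NetBSD zero-copy example"
copy, csum = copy_with_checksum(data)
print(copy == bytearray(data), hex(csum))
```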