
NetBSD Sets Internet2 Land Speed World Record

Daniel de Kok writes "Researchers at the Swedish University Network (SUNET) have beaten the Internet2 Land Speed Record using two Dell 2650 machines with single 2 GHz CPUs running NetBSD 2.0 Beta. SUNET has transferred around 840 gigabytes of data in less than 30 minutes, using a single IPv4 TCP stream, between a host at the Luleå University of Technology and a host connected to a Sprint PoP in San Jose, CA, USA. The achieved speed was 69.073 petabit-meters/second. According to the research team, NetBSD was chosen 'due to the scalability of the TCP code.'"

"More information about this record including the NetBSD configuration can be found at: http://proj.sunet.se/LSR2/
The website of the Internet2 Land Speed Record (I2-LSR) competition is located at: http://lsr.internet2.edu/"


Comments Filter:
  • Why TCP... (Score:2, Interesting)

    by Handpaper ( 566373 ) on Monday May 03, 2004 @07:40PM (#9046498)
    when UDP has so much less overhead [whnet.com]?
  • compression (Score:4, Interesting)

    by sir_cello ( 634395 ) on Monday May 03, 2004 @07:42PM (#9046513)

    Did they check for any in-band compression? The data they're sending isn't randomised.

  • 466 MB/s (Score:5, Interesting)

    by MikeD83 ( 529104 ) on Monday May 03, 2004 @07:42PM (#9046519)
    840GB/30 minutes = 466 MB/s, or 3,728 Mbps
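    A quick sanity check of that arithmetic, as a minimal Python sketch (decimal SI units assumed, and the summary's "less than 30 minutes" taken as exactly 30):

        # Verify the parent's figures: 840 GB moved in 30 minutes.
        bytes_total = 840e9              # 840 GB, decimal prefixes
        seconds = 30 * 60

        mb_per_s = bytes_total / seconds / 1e6
        mbit_per_s = mb_per_s * 8

        print(f"{mb_per_s:.0f} MB/s")    # ~467 MB/s (466 with the parent's rounding)
        print(f"{mbit_per_s:.0f} Mbps")  # ~3,733 Mbps (3,728 with the parent's rounding)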
  • by Atario ( 673917 ) on Monday May 03, 2004 @07:47PM (#9046552) Homepage
    My guess is that it's petabits times meters (as in physical distance between the machines). Which seems kind of stupid -- if the distance makes any real difference, something is wrong. How about communicating with Voyager II -- then you could get some real numbers, even at modem speeds!

    Plus, I'm betting it's not a "land" speed record, seeing as how the data probably jumps through the air (satellite/microwave transmissions) at one or more points. (Not to mention the fact that being on, over, or under the surface of land or water means nothing to a data cable.)
  • by aerojad ( 594561 ) on Monday May 03, 2004 @07:52PM (#9046594) Homepage Journal
    So, is this just using a secure connection on our internet, or did they go ahead and string up an all-new internet for no one but themselves to be on? I don't really see the point of the latter - why not dump the money into vastly improving the current internet and stomping out spammers and things that make the place bad?
  • by raehl ( 609729 ) <raehl311@@@yahoo...com> on Monday May 03, 2004 @08:00PM (#9046668) Homepage
    That depends on whether the DVDs are in cases or not, I think.

    At 9.4 GB per DVD (assume a single-layer, double-sided DVD-R), and a travel time of 3 weeks from Sweden to California (2 weeks on the boat, one week of driving), you'd need to get about 90,000 DVDs in your station wagon to get an effective 1680 GB/hr. That wouldn't be possible if they were in cases, but if it was just the DVDs, it's probably a close call. Might have to upgrade to dual-layer DVDs, or change the saying to "an SUV full of DVDs".

    On the other hand, if you count the time to actually read the data off of the DVDs (even worse if you count the time to put the data on the DVDs too), the station wagon of DVDs barrier was broken long ago - you probably couldn't spin a DVD fast enough to get 9.4 GB of data off it in 20 seconds.
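    The numbers in that back-of-the-envelope estimate do roughly check out; a minimal Python sketch, taking the parent's assumptions (9.4 GB per disc, 90,000 discs, three weeks in transit) at face value:

        # Station wagon full of DVDs vs. the record's effective rate.
        dvd_gb = 9.4                  # single-layer, double-sided DVD-R
        discs = 90_000                # parent's guess at a caseless wagon-load
        transit_hours = 3 * 7 * 24    # two weeks by boat plus one week of driving

        wagon_gb_per_hr = dvd_gb * discs / transit_hours
        record_gb_per_hr = 840 / 0.5  # 840 GB in half an hour

        print(f"wagon:  {wagon_gb_per_hr:.0f} GB/hr")   # ~1,679 GB/hr
        print(f"record: {record_gb_per_hr:.0f} GB/hr")  # 1,680 GB/hr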
  • petabyte-meters!? (Score:2, Interesting)

    by autopr0n ( 534291 ) on Monday May 03, 2004 @08:02PM (#9046681) Homepage Journal
    I mean, correct me if I'm wrong, but isn't this a somewhat useless measure? I suppose that the longer a link is, the more interference there is, but really, it seems like a rather pointless measure to me.
  • by Wesley Felter ( 138342 ) <wesley@felter.org> on Monday May 03, 2004 @08:10PM (#9046736) Homepage
    ...if the distance makes any real difference, something is wrong.

    One of the biggest problems in networking is handling a large bandwidth-delay product (that's the amount of data in flight at once). Since distance increases the delay, it is relevant (rough numbers below).

    Plus, I'm betting it's not a "land" speed record, seeing as how the data probably jumps through the air (satellite/microwave transmissions) at one or more points.

    Nope. Think about it: what kind of wireless connection can handle 4 Gbps?
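    Putting rough numbers on the bandwidth-delay point: the record metric itself implies the path length, and from there the amount of data that must be in flight. A Python sketch; the ~2e8 m/s propagation speed in fibre (and hence the RTT) is an illustrative assumption, not a figure from the article:

        # Bandwidth-delay product for this record, from the summary's figures.
        throughput_bps = 840e9 * 8 / (30 * 60)        # ~3.73 Gb/s
        metric_bit_m_per_s = 69.073e15                # 69.073 petabit-meters/second

        path_m = metric_bit_m_per_s / throughput_bps  # implied route length, ~18,500 km
        rtt_s = 2 * path_m / 2e8                      # ~185 ms at ~2e8 m/s in fibre (assumed)

        bdp_bytes = throughput_bps * rtt_s / 8        # data in flight at full rate
        print(f"path ~{path_m / 1e3:,.0f} km, RTT ~{rtt_s * 1e3:.0f} ms, "
              f"~{bdp_bytes / 1e6:.0f} MB in flight")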
  • Means nothing (Score:1, Interesting)

    by Anonymous Coward on Monday May 03, 2004 @08:10PM (#9046738)
    This was just node to node... when they build a network where 1 billion users can simultaneously transfer data TO EACH OTHER at 100 Mbps (yes, I'll be happy with Mbps)... wake me up.
  • Re:compression (Score:2, Interesting)

    by sir_cello ( 634395 ) on Monday May 03, 2004 @08:23PM (#9046827)
    It means what it says: the data they are sending is compressible (looking at the tcpdump output) as it consists of repeated alphabetical sequences.

    If any of the intervening links employs compression, then the transfer rates are artificial as they represent a function of the data, not a function of the network engineering. This doesn't make for a valid comparison against the other participants.

    I looked at the rules and they mention nothing about the type of data, nor about the presumptions of the traffic path.

    I concede there is less likelihood of compression in effect on a gigabit network due to the compute power required, but it is a question that bears asking.

    The assumption could be defeated by a test using (a) packets consisting entirely of the same character [i.e. highly compressible], or (b) entirely pseudo-random characters [i.e. not at all compressible].

    In fact, the rules don't require that the transfer be validated against this problem, nor that other network effects be averaged out; i.e. the rules should state something like "average of 3 back-to-back transfers, each utilising substantially randomised information".
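    The compressibility question is easy to probe; a minimal Python sketch of the (a)/(b) test proposed above, using zlib purely as a stand-in for whatever compression an intervening link might apply:

        import os
        import string
        import zlib

        # Compare a generic compressor on the payload types discussed above:
        # repeated alphabetical sequences (ttcp-style filler), a single repeated
        # character, and pseudo-random bytes.
        size = 1_000_000
        payloads = {
            "repeated alphabet": (string.ascii_lowercase.encode() * (size // 26 + 1))[:size],
            "same character":    b"a" * size,
            "pseudo-random":     os.urandom(size),
        }

        for name, data in payloads.items():
            ratio = len(zlib.compress(data)) / len(data)
            print(f"{name:17s}: compressed to {ratio:.2%} of original size")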

  • DOSed (Score:4, Interesting)

    by Veramocor ( 262800 ) on Monday May 03, 2004 @08:25PM (#9046843)
    Man, I'd hate to be on the receiving end of a denial-of-service attack on Internet2. 900 gigabytes of data in 30 minutes from multiple sources would be crushing.
  • Google padding (Score:3, Interesting)

    by GoClick ( 775762 ) on Monday May 03, 2004 @08:25PM (#9046849)
    Actually, Google doesn't index a lot of /. because there aren't enough inter-article links to find all the articles, and because Google just gets the default page setup, a lot of comments are hidden. Not to mention that Google only indexes a certain amount of dynamic data from a particular site, to avoid causing what was once called "the Google effect", where a poorly designed web app on a slow server would be hammered as Google crawled the catalog.
  • I live there (Score:1, Interesting)

    by Anonymous Coward on Monday May 03, 2004 @08:47PM (#9047019)
    Cool, I live in Luleå, and I actually have my internet connection supplied by the university. I wonder how long before I can get Internet that fast.
  • by Anonymous Coward on Monday May 03, 2004 @09:27PM (#9047350)
    They might claim that NetBSD scales best, but it took some code changes to get it to do so (which have since been picked up and are included in the base).

    The REAL reason they picked NetBSD is that Ragge (Anders Magnusson), the person doing a fair chunk of the testing, is heavily involved in the project and knows the code base. It was simply the easiest for him to work with. :-)
  • by sir_cello ( 634395 ) on Monday May 03, 2004 @09:34PM (#9047380)

    Surely someone's seen the "released" Windows code and can now tell whether it is BSD-based or not.
  • by Anonymous Coward on Monday May 03, 2004 @10:40PM (#9047706)
    You'll see pride in action when this gets moderated into the bitbucket, as all Linux-friendly posts in the Slashdot BSD section are. But anyway...

    Linux has often been used to set records. The sure way to see Linux trashing BSD is to add more CPUs. Linux scales tolerably well to 512 processors now! The Linux IP stack is very well suited to SMP.

    This NetBSD record is really about having insanely great Internet connections separated by thousands of miles.

    Long ago, the Linux developers did look into adopting the BSD stack. At the time though, the BSD stack was incompatible with the GPL. Alan Cox asked Berkeley to re-license under the GPL, and was turned down. At this point in time, using the BSD code wouldn't make any sense.

  • by Bensmum ( 766488 ) on Monday May 03, 2004 @10:50PM (#9047776) Homepage
    How about some evidence of that? Where is this 512-way SMP machine running Linux?
  • 512-way SMP (Score:1, Interesting)

    by Anonymous Coward on Monday May 03, 2004 @11:07PM (#9047908)
    You write:

    " How about some evidence of that? Where is this 512 way smp machine running linux?"

    Thought I was bluffing, did you? :-)

    It's the SGI Altix. In a linux-kernel post [lkml.org] just today, an Altix user says that "Overall, linux scales to 512p much better than I would have predicted." This system runs with Itanium 2 processors, BTW.

    So there you go, Linux handling a 512-way box tolerably well. Linux screams on an IBM 64-way box, with Xeon or Power 4 processors.

  • by cipher chort ( 721069 ) on Tuesday May 04, 2004 @03:38AM (#9049097) Homepage
    Since when does "pretty much the same" give "exactly the same" results? Even an off-by-one can change things drastically.

    By the way, this isn't the first time I've heard that NetBSD's TCP/IP stack is the superior of the three. I once met the head of networking for a semiconductor testing equipment company that did extensive tests among all three of the BSDs and Linux, and he said that NetBSD was the clear winner in TCP/IP performance.
  • by Anonymous Coward on Tuesday May 04, 2004 @10:43AM (#9051256)
    The article says that they used ttcp [pcausa.com] which is a memory-to-memory bandwidth testing program. Most would consider that unrepresentative of reality. On the other hand, today's supercomputers have a tremendous amount of memory (1.2 TB [sandia.gov], 6 TB [llnl.gov], 10 TB [jamstec.go.jp], 33 TB [top500.org], etc.) so memory to memory is possible.

    Others have suggested that disk speeds cannot sustain that rate. However, supercomputer disk arrays can easily keep up (4 GB/s or 32 Gb/s [sandia.gov]).

    Finally, it is possible to achieve nearly the same result (multiple streams instead of a single stream) transferring real data (23.23 Gb/s [sc-conference.org]).

    [Bias alert: I am a member of the team that set a previous Internet2 Land Speed Record, Guinness World Record [guinnessworldrecords.com] and won the "Bandwidth Lust: Distributed Particle Physics Analysis Using Ultra-High Speed TCP on The Grid" or "Moore's law move over" award at SC2003 [sc-conference.org].]

    Now, before you complain that the technology is not available to "mere mortals," let me point out that we first started experimenting with 1 Gb/s Ethernet at work 5 years ago. Now it is readily available at reasonable prices [opsinas.com] for consumer desktop machines. (Apple has had it standard in G4 desktops for 4 years [apple.com].) The problem is not with consumer hardware; it is having access to true broadband (not cable modem or DSL), at least in the USA. Although your LAN may support 1 Gb/s, your download speed is limited to 1-3 Mb/s (cable) or 256-786 Kb/s (DSL). (Your upload speeds are significantly lower.) Since the link provider has very little incentive to upgrade service, I doubt that will change very quickly.

    So, yes, it is possible. No, you can't have it (yet)!
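    For scale, here is how long the record's 840 GB takes at the rates mentioned above (the record's own pace, the quoted 4 GB/s disk array, and a single 1 Gb/s desktop link); a quick Python sketch:

        # Transfer time for 840 GB at various sustained rates.
        gb_total = 840
        rates_gb_per_s = {
            "record pace (840 GB / 30 min)": 840 / (30 * 60),  # ~0.47 GB/s
            "4 GB/s disk array":             4.0,
            "1 Gb/s desktop Ethernet":       1 / 8,            # ~0.125 GB/s
        }

        for name, rate in rates_gb_per_s.items():
            print(f"{name:30s}: {gb_total / rate / 60:6.1f} minutes")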
  • by Anonymous Coward on Tuesday May 04, 2004 @11:10AM (#9051510)
    Linked buffers were good for the VAX, since pages were only 512 bytes. Avoidance of the mbuf crud has historically given Linux a speed advantage. Now, with 9 kB jumbo packets and hardware scatter-gather, supporting linked buffers makes sense again. Thus the feature was added.

    The same goes for doing a copy on transmission. BSD has generally hidden a software checksum and/or copy in the driver, because older hardware didn't support scatter-gather and checksum. Linux didn't hide it. Note that checksum comes free (seriously!) when doing a copy, since you need to access the memory anyway. Now that cards with scatter-gather and checksum are common enough to care about, Linux can take advantage of this feature for "zero-copy transmit". (Obviously, the network transmit is itself a copy, and that copy is the whole point of doing a transmit.)

    Zero-copy receive, in the BSD style, is a way to kill SMP scalability. It involves remapping pages, which leads to cross-CPU interrupts to invalidate the old mapping. It's cheaper to copy the data.
