
NetBSD Focuses On Scalability

An anonymous reader writes "Felix von Leitner recently performed some benchmarks (previous story) for a talk about scalable network programming he gave at Linux Kongress 2003. The winners in this scalability lineup were Linux and FreeBSD 5, followed by NetBSD and finally OpenBSD. What's interesting is that in only two weeks' time the NetBSD team made dramatic improvements. Felix performed his benchmarks again and the results are nothing short of astonishing. NetBSD now has better scalability than FreeBSD." Read on for a list of improvements.

The submitter lists these changes:

  • socket: previously O(n), now O(1).
  • bind: greatly improved, but still O(n). Much less steep, though.
  • fork: a modest O(n) for dynamically linked programs, O(1) for statically linked.
  • mmap: a bad O(n) before, now O(1) with a small O(n) shadow.
  • touch after mmap: a bad strange graph in 1.6.1, a modest O(n) a week ago, now O(1).
  • http request latency: previously O(n), now O(1).

This is a very good job by the NetBSD team! I hope to see more benchmarks and more improvements to a great OS like NetBSD."
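
For readers who want to poke at this themselves, here is a hypothetical microbenchmark in the spirit of the tests above (a sketch, not Felix's actual code): open n sockets, then time the next socket() call. If that latency grows with n, the syscall is O(n); a flat curve is O(1).

    /* Hypothetical socket() scalability probe (illustrative sketch only).
       Usage: ./sockbench 10000 -- raise ulimit -n first. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/socket.h>
    #include <time.h>

    int main(int argc, char **argv)
    {
        int n = argc > 1 ? atoi(argv[1]) : 10000;

        /* Pre-open n sockets so the kernel's descriptor table is full. */
        for (int i = 0; i < n; i++)
            if (socket(AF_INET, SOCK_STREAM, 0) < 0) {
                perror("socket");
                return 1;
            }

        /* Time one more socket() call on top of the n open ones. */
        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        clock_gettime(CLOCK_MONOTONIC, &t1);

        printf("%d open sockets: next socket() took %ld ns\n", n,
               (t1.tv_sec - t0.tv_sec) * 1000000000L +
               (t1.tv_nsec - t0.tv_nsec));
        return fd >= 0 ? 0 : 1;
    }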


Comments:
  • Target (Score:2, Interesting)

    What's interesting is that in only two weeks time the NetBSD team made dramatic improvements.

    Colour me cynical, but just maybe the improvements are targeted to produce a better benchmark rather than broader scalability.

    Tell me I'm wrong.

    • Re:Target (Score:5, Insightful)

      by pb ( 1020 ) on Wednesday November 05, 2003 @11:50AM (#7397329)
      Um... the benchmarks in question all have to do with improving web server performance (specifically, the author's pet web server project), so does it matter whether or not the goals *are* targeted to produce a better benchmark, if the *results* end up being broader scalability?

      Benchmarks are great tools to use for improving performance, and as long as you don't have to cheat to do better (like some major video card companies who shall remain nameless), improving your scores on a good benchmark largely equates to improving performance across a whole host of applications.

      If you'll remember, the same thing happened with the Mindcraft debacle: performance deficiencies in Linux relative to NT were highlighted, and fixed, and Linux is the better for it, with even faster web serving and a better TCP/IP stack. I don't care about the alleged reasons; I care about the positive results. :)
    • Re:Target (Score:4, Insightful)

      by overbom ( 461949 ) <overbom@yahoo.com> on Wednesday November 05, 2003 @02:01PM (#7398766)
      Yup, you're cynical.

      NetBSD's team is scary good. There are some advantages to keeping a tight core development team, and if the team is good, one of them is quality.

      In my experience with NetBSD, when they do something, they do it right.

      Let me put it to you this way.

      Say there was this huge alien spaceship coming from outer space to blow up the White House and use us as food. We'd need to send a rag-tag group of crazy operating system geniuses up into space in a rocket to intercept them and upload a virus into their system (this would no doubt piss off the stiff-necks in the military, but the orders would no doubt come from up top). That rag-tag group of crazy OS geniuses would be the NetBSD team.
      • This illustrates some advantages that open source software teams have and closed source lacks.

        For example, how long before the other BSDs (and even Linux) pick up any applicable parts of the improvements the NetBSD team made here? Even though they are different teams and projects, they still help each other out almost as much as being on the same project does.
    • You're wrong. :-) The primary focus of NetBSD is stability and portability across a large number of platforms. A couple of people took a break from that work, figured out the issues (undoubtedly partly by looking at what made others perform so well), and folded in the changes. Nothing sinister about that.
    • This should read "FreeBSD" in the FreeBSD section:

      > As for NetBSD, I plotted the graphs for 4.9 against the graphs for 5.1-CURRENT. Here are the results:
  • NetBSD is very cool (Score:5, Interesting)

    by n1ywb ( 555767 ) on Wednesday November 05, 2003 @11:35AM (#7397171) Homepage Journal
    It's some of the best source code I've ever looked at. As far as having consistently good source code goes, it whoops Linux. I really tried to get my operating systems class teacher to use it instead of Linux because of its clean design. He decided to use QNX instead. WTF?

    Anyway, if you've never tried NetBSD, I think you should. At least get it installed and compile a kernel. It's a good learning experience. Plus it's been ported to every fsking hardware platform ever (just about).
    • by DrSkwid ( 118965 )
      The most beautiful code for the most beautiful OS.
    • by kjs3 ( 601225 )
      I hate to compare NetBSD to Linux, because in many ways they are different tools for different jobs. Linux lives firmly in the big server and desktop world, while NetBSD lives more comfortably in the more modest (hardware-wise) server and embedded world.

      That said, NetBSD is very clean and elegant, and is persistently and carefully maintained. If you want an operating system that you can sit down and really understand and modify, I think you'd be very, very happy with NetBSD.

      YMMV


    • Why would your teacher choose QNX over Linux? Linux is free and easy to install. With the Linux source code, students can study (and play with) real operating system internals. Did QNX make a "donation" to your school or something? ;-)
      • QNX is a real OS... and they probably got source too.

        In an OS course, you want a clean OS so you can stick to the theory. Linux is HUGE in comparison, and it is not a clean design.

        It's about actually having a good chance of understanding a significant portion of the OS, not wading through performance hacks.
      • I wasn't exactly clear. We did do quite a bit with Linux. I was trying to get him to use NetBSD as the secondary OS, but he chose QNX instead. We also fooled around with Tru64 'nix and Windows NT a bit.
  • by Euphonious Coward ( 189818 ) on Wednesday November 05, 2003 @11:51AM (#7397338)
    The Debian Project is supporting a GNU-on-NetBSD-kernel configuration, Debian GNU/NetBSD. Benefits to users and to the Debian Project include:
    • demonstrating Debian GNU kernel independence, enforcing package portability.
    • supporting processor architectures that Linux has not yet been ported to, and many it probably never will be.
    • improving diversity: driver bugs on various peripherals are unlikely to match, so one kernel might work with devices where the other fails.
    • cool!

    I run GNU on my machines. I'm not picky about kernels.

  • What? (Score:4, Interesting)

    by wowbagger ( 69688 ) on Wednesday November 05, 2003 @01:07PM (#7398126) Homepage Journal
    mmap: a bad O(n) before, now O(1) with a small O(n) shadow.


    What the hell is this supposed to mean? Either you are O(1) or you are O(n) - what does "small O(n) shadow" mean?
    • by MSG ( 12810 )
      Probably that the implementation consists of two parts: the larger part is O(1), and there's a small O(n) operation involved.

      I'm guessing.
      What the hell is this supposed to mean? Either you are O(1) or you are O(n) - what does "small O(n) shadow" mean?

      Well, perhaps he means that the running time is O(n), where n is the number of times the test is run ;-)

      I agree, the statement is unclear as it stands.

    • Re:What? (Score:4, Informative)

      by booch ( 4157 ) <slashdot2010@craigbuchek.com> on Wednesday November 05, 2003 @02:19PM (#7398983) Homepage
      RTFA - specifically the graph [bulk.fefe.de] of the mmap benchmark in question. Note that for the most part the red line goes straight across, but a few data points above it follow an O(n) curve (what the author called a shadow). So the interpretation is that the typical case is O(1), but occasionally it has a worst-case performance of O(n). Plus, the constant factor of the O(n) case is much lower than in the previous version.
      • Sorry, but you are either O(1) or O(n) - if you have any part of your operation that is O(n) you are O(n) - it's as simple as that.

        I don't care if 99% of the time you are O(1), and 1% of the time you are O(n) - you are O(n).

        The whole idea of Big-O notation is that, if the runtime is F(n), you ask which term of F(n) dominates as n goes to infinity - the part that contributes the most to the running time.

        So let's say it correctly - the behavior is still O(n). It may be faster than it was, but it is still O(n).
        • big doh notation (Score:5, Interesting)

          by epine ( 68316 ) on Wednesday November 05, 2003 @02:58PM (#7399415)
          The O(N) shadow statement is a sufficient statement of O(N) behaviour for the big O pedants. I looked at the graph, and I vote we keep the wording as it was.

          O notation is overrated. Sorting is always described as O(N*log N), but on any practical architecture a radix sort with L1/L2 cache locality replaces log N with a constant factor of 3 or 4. A million cache-local buckets can radix sort 10^30 elements in 5 passes.

          Using all of main memory as your bucket store, I'd guess you could sort every proton in the known universe in 8 passes. So what exactly is that log N term trying to tell us?
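
          To make the parent's point concrete, here is a minimal sketch of a cache-friendly LSD radix sort on 32-bit keys (illustrative code, not from any OS). With 256 buckets it always takes exactly 4 stable counting-sort passes over the data, whatever N is - a constant standing in for the log N term:

              #include <stdint.h>
              #include <stdlib.h>
              #include <string.h>

              /* LSD radix sort of 32-bit keys, 8 bits per pass: four
                 stable counting sorts, so total work is ~4*N. */
              static void radix_sort_u32(uint32_t *a, size_t n)
              {
                  uint32_t *tmp = malloc(n * sizeof *tmp);
                  if (tmp == NULL)
                      return;
                  for (int shift = 0; shift < 32; shift += 8) {
                      size_t count[256] = { 0 };
                      for (size_t i = 0; i < n; i++)
                          count[(a[i] >> shift) & 0xff]++;
                      size_t pos = 0;             /* prefix sums -> offsets */
                      for (int b = 0; b < 256; b++) {
                          size_t c = count[b];
                          count[b] = pos;
                          pos += c;
                      }
                      for (size_t i = 0; i < n; i++)   /* stable scatter */
                          tmp[count[(a[i] >> shift) & 0xff]++] = a[i];
                      memcpy(a, tmp, n * sizeof *a);
                  }
                  free(tmp);
              }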

        • Technically, yes, performance is still O(n). But if you were to describe the graphs to explain performance, you'd use the same language as the original author, plus note that the factor is much lower. BTW, I believe it is accepted usage to apply Big-O notation to typical as well as worst-case performance, but I could be wrong -- I haven't kept up with pure Comp Sci literature lately. Also note that the graphs pretty much cover about as high a load as is practical, so anything beyond the graphs is theoretical.
        • Re:What? (Score:3, Informative)

          by platipusrc ( 595850 )
          if you have any part of your operation that is O(n) you are O(n) - it's as simple as that.

          Nope, you act like you know what you're talking about, but you're forgetting about a fun thing called amortization. For example, if you write a stack whose push takes N operations on the Nth push (when you grow the array - and it can't be on the first push), you can still end up with constant amortized push time even though not every individual push is constant.
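
          A minimal sketch of that stack (hypothetical code, nothing from a real kernel): the backing array doubles when full, so over n pushes the total copying work is at most about 2n elements, i.e. O(1) amortized per push:

              #include <stdlib.h>

              typedef struct { int *data; size_t len, cap; } stack;

              /* Doubling push: the realloc copy is the rare O(n) step,
                 but n pushes copy at most ~2n elements in total, so the
                 amortized cost per push is O(1). */
              static int push(stack *s, int v)
              {
                  if (s->len == s->cap) {
                      size_t ncap = s->cap ? s->cap * 2 : 16;
                      int *p = realloc(s->data, ncap * sizeof *p);
                      if (p == NULL)
                          return -1;
                      s->data = p;
                      s->cap = ncap;
                  }
                  s->data[s->len++] = v;      /* the common O(1) step */
                  return 0;
              }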

      • RTFA - specifically the graph of the mmap benchmark in question.

        Take a look at the graph again, really. It's pretty clear that there is a great improvement with respect to the test, but that's about it. In the absence of traditional statistical analysis, all we have is some nice pictures, with conclusions resting on the chosen scales of the axes (they are very different) and the drawing program used. Additional summarising statistics are lacking,

        Now, compare the scales at

        • Yuck, draft was posted, but shit happens...
        • I'm not quite following. The old and new algorithms are both graphed at the same scale, on the same graph. Or are you saying that the scale is too coarse to tell whether the new algorithm is in fact mostly O(1), and may in fact be O(n) with a very small factor? Hmm, thinking about that: as the factor approaches 0, O(n) approaches O(1), doesn't it?
          • The scale of the x-axis is different from the scale of the y-axis. Choosing different scaling on the axes is pretty common, but unless care is taken, the interpretation can be distorted (in the absence of supporting statistical analysis). Just imagine that the "old" NetBSD results had a greater variance; the results of the new test would then appear to be "even more" O(1) (and thus we would not see the so-called "O(n) shadow").

            All in all, from the plot alone, I would say NetBSD has made some great improvements.

            • Ah, point taken. The supposedly O(n) part of NetBSD-CURRENT has a factor of about 5. If the supposedly O(1) part is actually O(n), the maximum factor would be about 1, but it's hard to tell where it is between 0 and 1. So I can see your point. But how accurate/small a factor do you need before you can definitively say it's O(1) vs. O(n)? I think maybe this is a case where real-world graphs (and the multiplying factor) are more useful than Big-O notation.
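
              One way to put a number on "where between 0 and 1 the factor lies" is the kind of summarising statistic mentioned upthread: a least-squares slope over the timing data (a sketch with made-up numbers, not data from the article):

                  #include <stdio.h>

                  /* Least-squares slope b of t = a + b*n. A slope near
                     zero (relative to the scatter) means the data cannot
                     distinguish O(1) from O(n) with a tiny factor. */
                  static double slope(const double *n, const double *t, int m)
                  {
                      double sn = 0, st = 0, snn = 0, snt = 0;
                      for (int i = 0; i < m; i++) {
                          sn  += n[i];        st  += t[i];
                          snn += n[i] * n[i]; snt += n[i] * t[i];
                      }
                      return (m * snt - sn * st) / (m * snn - sn * sn);
                  }

                  int main(void)
                  {
                      double n[] = { 1000, 2000, 4000, 8000 };  /* load levels   */
                      double t[] = {  5.1,  5.0,  5.2,  5.1 };  /* made-up times */
                      printf("slope = %g\n", slope(n, t, 4));   /* ~0: looks O(1) */
                      return 0;
                  }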
    • Re:What? (Score:1, Informative)

      by Anonymous Coward
      Calm down. If a given implementation is generally O(1) in practice but can be hit with a case that falls back to the original O(n) algorithm, then their description of "O(1) with a small O(n) shadow" seems like a good one. Technically you must call it O(n), since the worst case is O(n). But more information is better in this case, as long as it's accurate.
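
      As a toy illustration of that profile (hypothetical code, not NetBSD's): a chained hash table lookup is O(1) while keys spread across buckets, but degrades to a linear chain walk when they all collide - a flat typical case with an O(n) worst case, much like the mmap graph:

          #include <string.h>

          #define NBUCKETS 1024

          struct node { const char *key; int val; struct node *next; };
          static struct node *buckets[NBUCKETS];

          /* djb2-style string hash, reduced to a bucket index. */
          static unsigned hash(const char *s)
          {
              unsigned h = 5381;
              while (*s)
                  h = h * 33 + (unsigned char)*s++;
              return h % NBUCKETS;
          }

          /* O(1) when chains stay short; O(n) if every key hashes
             to the same bucket -- the "O(n) shadow". */
          static struct node *lookup(const char *key)
          {
              struct node *n;
              for (n = buckets[hash(key)]; n; n = n->next)
                  if (strcmp(n->key, key) == 0)
                      return n;
              return NULL;
          }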
  • I'd like to play around more with NetBSD if it can produce results like this. Did the author just install an "off the shelf" version of 1.6.1 or did he have to apply 100 pre-alpha patches from some guy named Joe in order to get this performance?
    • I'd like to play around more with NetBSD if it can produce results like this. Did the author just install an "off the shelf" version of 1.6.1 or did he have to apply 100 pre-alpha patches from some guy named Joe

      I should say RTFA, but: he originally tested 1.6.1, which performed well but not as well as FreeBSD-CURRENT. Then he tested NetBSD-CURRENT, and then, after talking to the NetBSD team, he tested a later version of NetBSD-CURRENT. Not quite like testing a release, but he tested -CURRENT for FreeBSD and OpenBSD as well.

  • by Triumph The Insult C ( 586706 ) on Wednesday November 05, 2003 @07:20PM (#7402352) Homepage Journal
    I can really only warn against using OpenBSD for scalable network servers.

    Don't use OpenBSD for network servers.

    ...again, I would advise against using OpenBSD for scalable network servers.

    If you are using OpenBSD, you should move away now.

    http://news.netcraft.com/archives/2003/11/02/secure_dog_hosting_most_reliable_hosting_company_site_during_october.html [netcraft.com]

  • Since the author actually reads the Slashdot comments, I have a little question.

    I'm no real kernel hacker, so I could be totally off, but:

    Are all OSes located on the same part of the (same) HD? I ask because linear disk performance probably scales with the cylinder the OS partition is on.

  • by Anonymous Coward
    ...wake me up when I can take NetBSD v1.7 and run it on my VAXstation 3100 with 8MB of memory like I can with v1.5.1.
    NetBSD is getting (not yet, but close) dangerously close to the precipice of being Bloatware(*).
    First on my list to replace is GCC, which has ballooned in size way, way too much for the "features" that have recently been included.

    TDz.
    (*Bloatware is a TM of Microsoft corp)
  • Or the NetBSD guys looked at all of Ingo Molnar's kernel hacks and incorporated the same logic. 2 weeks to implement all these things is an awfully short period of time, IF YOU ARE DOING THE HEAVY LIFTING OF THE DESIGN. Implementing someone else's algorithm is much easier.
