BSD Operating Systems

Matt Dillon On FreeBSD 5.0 VM System And More

JigSaw writes: "OSNews features a very interesting interview regarding FreeBSD 5 with the guy responsible for the (technically very good) FreeBSD VM, among other things. Matt Dillon talks about everything: FreeBSD 5, Linux, .NET and much more. Additionally, OSNews also includes two mini-interviews with the NetBSD and OpenBSD head developers."
  • Great to hear BSD news after the Wind River story a few days back... thanks FreeBSD crew...
  • There are also a few comments from Theo de Raadt, the OpenBSD Founder himself. Be sure to check them out at the very bottom of the page!
  • Wow (Score:2, Funny)

    by Chris Brewer ( 66818 )
    I didn't know that there's such a following by Hollywood actors [imdb.com] of alternative Operating Systems.

    Maybe the last big movie he was in should have been called "There's Something About BSD".
  • "Most of the linux centric companies were leeching off the linux name, and those that weren't didn't fail because they were using Linux, they failed because they didn't have a business model with a chance in hell of (ever) going profitable."


    True as hell. There are way too many companies leeching off the Linux name without a real business model...

    • by chinton ( 151403 ) <chinton001-slash ... m ['gma' in gap]> on Monday October 08, 2001 @05:45PM (#2404002) Journal
      I don't think that this is specific to Linux-hype at all, though. Look at most of the dead (or dying) .com companies. Most of those business plans read as follows:
      1. Provide a service free of charge.
      2. Profit!
      • by Anonymous Coward
        There was at least one mining company that changed their name to "linux services" in order to get a stock jump (which they did). I can't recall the name, but they have probably changed it recently anyway...
      • Yeah, well obviously they're going to make up for it in volume!

        In '98 or '99, I remember pointing out to a stockbroker-type uncle of mine that Amazon loses money on every book they sell (I don't know if this is still true, btw) and his response was "they'll make up for it in volume." The unsustainability didn't seem to worry him at all. Haven't had a chance to ask him about it lately, though.

  • by Anonymous Coward
    As a note (not flamebait) about how FreeBSD compares with Linux regarding the VM implementations, remember that Linux actually has 2 VM managers (you can choose which one you want in the kernel build configuration), both of extremely poor quality. That seems to be a common problem in Linux, where people write sensitive code just to learn how to do it, and it becomes the standard in the kernel. Now compare that with any of the BSDs and you'll see why Linux is actually very hyped, and why the BSDs are technically so strong.
    • by Anonymous Coward
      I use NetBSD at Uni and Linux at home.
      I like all the BSDs but FreeBSD is my favourite. However, my Linux box at home with kernel 2.2.19 on reiserfs is noticeably more responsive when I have a lot of programs running (similar hardware).

      This is just my observation.
      I am glad to see FreeBSD around; it is a great OS.
  • by Anonymous Coward
    Matt tells us that Something About Mary 2 will be fully compatible with FreeBSD
  • by TedCheshireAcad ( 311748 ) <ted AT fc DOT rit DOT edu> on Monday October 08, 2001 @05:41PM (#2403979) Homepage
    This guy makes several good arguments for *BSD, mainly about the difference between *BSD and Linux on the desktop. Many people think that *BSD is only the shell, but GNOME and KDE can be compiled on it just as easily as on Linux, no compatibility code needed. Also, his point about .NET is a good one: Microsoft is just using it as buzzword vaporware, a name for whatever new latest-and-greatest product that will "change the world".
    • by frob2600 ( 309047 ) on Monday October 08, 2001 @06:00PM (#2404069)
      You should never NEED to compile GNOME or KDE for FreeBSD. They come precompiled and are an option during the install if you want one. Personally I did compile GNOME because I was bored. ;-)

      But even if there was some reason that you could not use the code the way it is, FreeBSD has a very good ports tree that will download the current source, patch it for FreeBSD, compile it for your system, install it, and then clean up after itself. VERY SEXY... YEAH!

      Well, I need to clean my underpants again. So I guess I am done ranting here.
      • sorry...

        I agree with you about not ever needing to compile, but I think his real point was not any of BSD's strong points (features included, packages/ports, etc.) but rather that Linux is way out in left field as far as "across the board" unix compatibility goes.

        I know what you're thinking... what across the board unix compatibility? Right? That's the whole point: many of the unices and free clones can share most of the programs one can download off of the internet, unless they're full of Linux-isms. These linux-isms are popping up everywhere now, and just plain causing crossplatform (or at least differing unix platforms) problems.

        I'm sure this will be modded to lowest plane of /. hell for attempting to pick on the One True O.S. but it has to be said, I for one am TIRED of downloading programs only to find some strange library from some linux install is needed (and I'm greatly oversimplifying here).

        • by Arandir ( 19206 ) on Monday October 08, 2001 @07:28PM (#2404321) Homepage Journal
          These linux-isms are popping up everywhere now, and just plain causing crossplatform (or at least differing unix platforms) problems.

          For those who haven't figured it out yet, let me clue you in on a couple of facts:

          1) The linux kernel is specific to Linux. If you make linux specific kernel calls then your program will only run under Linux. If you're writing a kernel module, then go for it. Otherwise forget it. It will only make you look stupid.

          2) A Standard C Library (libc) is standard for every Unix and Unix-like operating system. For 99 systems out of 100, this libc will NOT come from GNU. If you write a program that makes use of glibc extensions, your program will not be portable. It will only make you look stupid.
          • I'm sure this will be modded to lowest plane of /. hell for attempting to pick on the One True O.S. but it has to be said, I for one am TIRED of downloading programs only to find some strange library from some linux install is needed (and I'm greatly oversimplifying here).

            As far as I can tell, he wasn't talking about syscall interfaces and libc, he was talking about obscure libraries that must be installed separately, and may in turn require others. It must be in terms of a source compile, since package managers like rpm and deb, etc. handle dependencies automatically.

            • by Arandir ( 19206 )
              Even though his pet peeve might not be my pet peeve, it still peeves me off. There are many kinds of linuxisms, even though I mentioned only one.

              Li*nux*ism:
              Pronunciation: 'lee-nuks-izm'
              Function: noun
              Date: 1998
              : a theory holding that the community can know nothing but its own operating system and that its operating system is the only existent operating system. Compare with English "solipsism".

              p.s. You have a Linuxism in your post. Can you find it?
              • p.s. You have a Linuxism in your post. Can you find it?

                Let me guess---is it his assumption that these much-vaunted *BSDs don't have package managers such as rpm, deb, etc., to handle dependencies?

                Am I the only person on /. who thinks user friendliness is part of technical superiority?
          • 2) A Standard C Library (libc) is standard for every Unix and Unix-like operating system. For 99 systems out of 100, this libc will NOT come from GNU. If you write a program that makes use of glibc extensions, your program will not be portable. It will only make you look stupid.

            I don't mean this as flamebait at all, but I'd like to point out that the reverse is absolutely true as well. Having used both systems quite a bit, I'd say there are as many or more features in the BSD libc that are oh-so-good and oh-so-tempting to use, and many BSD authors do in fact use them, making their programs unportable (making themselves look stupid? =). Now then, it's pretty easy to pull a part of the BSD libc and include it in your program for portability so this is less of an issue than pulling part of glibc and potentially tainting your licenses. Still, it's a concern for any Unix-like system.
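
            (Concretely: strlcpy(3) is one of those tempting BSD libc extensions, and bundling a fallback is the usual portability move. A minimal sketch; the HAVE_STRLCPY flag stands in for a configure-style check and is an assumption, not anything standard:)

              #include <string.h>   /* strlen, memcpy; also defines size_t */

              #ifndef HAVE_STRLCPY
              /* Minimal strlcpy(3) work-alike: copy at most siz-1 bytes,
                 always NUL-terminate (when siz > 0), and return strlen(src)
                 so callers can detect truncation. */
              static size_t strlcpy(char *dst, const char *src, size_t siz)
              {
                  size_t srclen = strlen(src);
                  if (siz > 0) {
                      size_t n = srclen < siz - 1 ? srclen : siz - 1;
                      memcpy(dst, src, n);
                      dst[n] = '\0';
                  }
                  return srclen;
              }
              #endif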
          • The linux kernel is specific to Linux. If you make linux specific kernel calls then your program will only run under Linux. If you're writing a kernel module, then go for it. Otherwise forget it. It will only make you look stupid.

            Yeah, on the other hand sometimes they are useful enough to be worth it. If sendfile gives your application a 10% boost, and you need that 10% boost, well, go for it. #ifdef it, but use it. Hell, use it and #ifdef it if you just want to play with it.

            I won't shrink back from using kqueue on FreeBSD, so why should a Linux user hold back? (and kqueue is harder to #ifdef!) I won't go off and use it if it won't save me a lot, because it is painful to #ifdef around, but it is faster than poll...

            If you always avoid something only on one Unix the state of the art never advances. Remember the socket calls were once on only one Unix (4BSD). SLIP was once only available for VAX BSD. Now sometimes that doesn't work out, after all that nasty SysV shm crap left SysV to infest the rest of the world.
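
            (A minimal sketch of the #ifdef-it-but-use-it pattern described above, assuming Linux's sendfile(2) with a plain pread/write fallback; the helper name is made up:)

              #include <unistd.h>
              #include <sys/types.h>
              #ifdef __linux__
              #include <sys/sendfile.h>
              #endif

              /* Send up to 'count' bytes from file in_fd to socket out_fd,
                 starting at *off; *off is advanced by the bytes sent. */
              static ssize_t send_chunk(int out_fd, int in_fd, off_t *off, size_t count)
              {
              #ifdef __linux__
                  /* Linux-only fast path: the kernel moves file data straight
                     to the socket, no copy through userspace. */
                  return sendfile(out_fd, in_fd, off, count);
              #else
                  /* Portable fallback: one read/write round trip per call. */
                  char buf[8192];
                  ssize_t n;
                  if (count > sizeof buf)
                      count = sizeof buf;
                  n = pread(in_fd, buf, count, *off);
                  if (n <= 0)
                      return n;
                  n = write(out_fd, buf, (size_t)n);
                  if (n > 0)
                      *off += n;
                  return n;
              #endif
              }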

            • There's nothing inherently wrong with writing a Linux-only (or FreeBSD-only) application, if that's what you want.

              I use Linux, FreeBSD and Solaris, so crossplatform software is important to me. It's a royal pain in the butt to have a different set of tools on each platform. For lower level system software it doesn't matter much. I don't care if the BSD ld won't compile on Solaris, because Solaris already has a dynamic linker. And sysinstall is as pointless on SuSE as YaST is on FreeBSD. But the higher level software *should* be portable.
          • 2) A Standard C Library (libc) is standard for every Unix and Unix-like operating system. For 99 systems out of 100, this libc will NOT come from GNU. If you write a program that makes use of glibc extensions, your program will not be portable. It will only make you look stupid.

            This most definitely is not true. The C library on Unices implements a set of standards, but which standards are implemented is up to the implementers. There are quite a few conflicting functions between ANSI, SUS, SVID, BSD, ISO, and POSIX, though most of the conflicts have to do with simple things such as errno values; there are more serious cases such as the setjmp(3) conflict between BSD and POSIX.

            Now, even without the conflicts, not every Unix has a complete implementation of these standards, especially given how frequently they've been coming out in recent years. POSIX 1003.1* for instance contains a good amount of optional functionality (SUSv2 made a lot of it mandatory however) including extremely useful features such as read-write locks for threads.

            Now, that's not to say that I don't wish every Unix was at least SUSv2 conformant, but OS-specific code is a fact of life in cross-platform Unix development unless you are working on just Linux and *BSD. Calling someone stupid because they check to see if sigaction(2) exists because they want SA_RESTART to simplify their code just doesn't fly.

            I look forward to the day when autoconf doesn't need to exist.
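
            (For what it's worth, the sigaction(2)/SA_RESTART check mentioned above can be as small as this sketch; the function names are made up:)

              #include <signal.h>

              static void on_sigchld(int signo)
              {
                  (void)signo;    /* children are reaped elsewhere */
              }

              /* Install a SIGCHLD handler, asking for syscall restart
                 where the platform offers it. */
              static int install_sigchld_handler(void)
              {
                  struct sigaction sa;
                  sigemptyset(&sa.sa_mask);
                  sa.sa_handler = on_sigchld;
                  sa.sa_flags = 0;
              #ifdef SA_RESTART
                  /* Restart slow syscalls interrupted by this signal rather
                     than having them fail with EINTR; this is exactly the
                     "simplify their code" win described above. */
                  sa.sa_flags |= SA_RESTART;
              #endif
                  return sigaction(SIGCHLD, &sa, NULL);
              }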
        • by dangermouse ( 2242 ) on Monday October 08, 2001 @11:37PM (#2404897) Homepage
          I'm sure this will be modded to lowest plane of /. hell for attempting to pick on the One True O.S. but it has to be said, I for one am TIRED of downloading programs only to find some strange library from some linux install is needed (and I'm greatly oversimplifying here).

          No, you're not. I'm a Linux user, and this trend irritates the living hell out of me. I don't grok the need for 47 tiny libraries to write a mail client*, especially when nine times out of ten half of the required libs were written by the same guy for the same project, but distributed separately (never to be used by another project).

          * For the truly anal retentive, that was a bit of hyperbole. But not much.

  • by esvoboda ( 166456 ) on Monday October 08, 2001 @05:41PM (#2403981)
    Consider one of Dillon's points: "A great deal of what people label as 'Linux' isn't actually Linux."

    As a long-time FreeBSD user, I am fascinated when Linux users go to bat citing so many popular open source applications as Linux applications. Very few of the thousands of applications out there need to run in Linux "emulation" mode on FreeBSD. Almost all applications build and run on FreeBSD much as they do on Linux.

    I read print magazines such as Linux Journal and visit many Linux web sites, knowing that the content is very much applicable to my OS of choice.

  • I can guess that MFC means 'backported', but I don't understand the acronym. SMPng is probably 'SMP Next Generation', but I can't figure out what KSE is... Kernel Security Enhancements? Any BSDers out there to help a poor linux-using slob? :)

    -_Quinn
    • Ok, I'm a dumbshit, the interviewer asked about KSE & SMPng. Sorry. :( Still wondering about MFC, though.

      -_Quinn
    • KSE = Kernel-Scheduled Entities

      "The FreeBSD kernel-scheduled entity (KSE) project is striving to implement new kernel facilities that allow the implementation of SMP-scalable high-performance userland threading, as well as a new userland POSIX threads library (libpthread). KSEs are heavily based on a technology referred to as scheduler activations, and differ only to the degree necessary to support features that the original research does not address. The new libpthread uses as much of libc_r as is reasonably possible."
    • The article defines both KSE and SMPng:

      "SMPng (next-generation symmetric multi-processing) and KSE (kernel scheduler entities)"
      MFC is Merge From Current, describing the process by which code is taken from the -CURRENT branch, where new development is done, and applied to the -STABLE branch.
    • MFC = Merged From Current

      Just means that changes in the -CURRENT tree were merged into the -STABLE tree for inclusion in the next -RELEASE.

  • by HalfFlat ( 121672 ) on Monday October 08, 2001 @05:45PM (#2403995)

    It was interesting to read that Matt Dillon is supporting Rik van Riel's work on the VM in Linux. There has been a lot of controversy over the Linux VM, with a number of people saying that the latest one - based on Rik's work - made a poor compromise: smoother operation for lower overall performance, making it better suited to interactive applications than server applications.

    Not being a kernel-list follower, I don't know much about the details, but I'm sure there are other Slashdot readers who are much more familiar with them. I would have thought that lower expected latencies from the VM would improve most server-based tasks as well, at the possible cost of reducing the maximum amount of work the machine could do - but then, it seems counterproductive to talk about performance in such a heavily-loaded environment when the best solution would typically be to add more hardware (RAM, CPUs, or splitting across multiple machines). Do I have this all wrong?

    Also, is anyone able to describe the degree to which the new VMs of both Linux and FreeBSD are similar? What are the concepts behind them that distinguish them from each other and from earlier VMs?

    • by AKAImBatman ( 238306 ) <<moc.liamg> <ta> <namtabmiaka>> on Monday October 08, 2001 @06:07PM (#2404096) Homepage Journal
      Also, is anyone able to describe the degree to which the new VMs of both Linux and FreeBSD are similar? What are the concepts behind them that distinguish them from each other and from earlier VMs?

      I could spout technical details all day, but it wouldn't mean anything. Instead, let me try to take it from a different perspective.

      When a transmission for a car is designed, one of the most important criteria is that the transmission shift smoothly rather than quickly. Would you drive a car that shifted extremely fast but lurched every time it did? Probably not. The car must work smoothly at all times so that the driver can maintain control and keep the car in good condition. What might happen if the driver were to rev out the engine during one of these super-fast shifts? That's right, he might drop the trans (very expensive).

      By the same token, both Windows and Linux (don't start on me) have very "fast" VMs that perform extremely well under very particular circumstances. When they are removed from those circumstances, they begin to perform extremely poorly.

      As an example, a few years ago I was using a Pentium 120 w/ 16MB of RAM. Under Windows 95 I was starting to run into performance problems (read: chugging) every time I tried to really use the system. This chugging was caused by the fact that Windows would end up thrashing the VM because there wasn't enough actual memory. So, I loaded up Linux using KDE (1.x). It chugged WORSE! When I finally tried FreeBSD with KDE (same setup), I found that the whole system ran smoothly. If I tried to run too much, the program in question would take longer to execute, but without impacting any of the other programs. Did FreeBSD actually run slower? Perhaps in certain circumstances that were advantageous to Linux, but overall system performance was much more useful and enjoyable.
      • I appreciate your attempt to answer this question, although in my personal opinion:

        1) The transmission analogy sucks (but I hate analogies)

        2) I would find technical details meaningful (in addition to the general optimize-best-case vs. optimize-worst-case philosophy you lay out)

        3) Your experience is, as you say, years old and therefore not really relevant to the parent's question... certainly not the part you're quoting.

      • by Anonymous Coward on Tuesday October 09, 2001 @12:37AM (#2404964)
        Translation: I can't defend any of this, since all I know was a bit of technical talk I didn't understand on a mailing list I lurk on.

        So instead of anything substantive, I'll give you my useless anecdotes about my preferred system's performance being more "enjoyable."

        And of course, any circumstance where anything outperforms my preferred system, I'll dismiss it out of hand.

        I used to miss comp.sys.amiga.advocacy, but slashdot's BSD section is an excellent substitute.
    • It was interesting to read that Matt Dillon is supporting Rik van Riel's work on the VM in Linux.

      I didn't get the sense that Matt was supporting Rik in this controversy. Rather, my impression was that Matt thought that a particular argument put forth by one camp was spurious. Big difference.
    • It was interesting to read that Matt Dillon is supporting Rik van Riel's work on the VM in Linux.


      Matt is no stranger to Linux, having helped with the TCP implementation in the Linux IPv4 network protocol stack. For confirmation, see linux/net/ipv4/tcp*.c on your favorite Linux machine.

  • If you take anything away from this article, this quote is it. It is so important for the Linux community to understand if they are going to have a chance of breaking Microsoft's stronghold on Joe Sixpack's computer.

    Both Linux and FreeBSD are in the same boat there... the only way to drive desktop acceptance is to ship machines pre-installed with the OS (whatever OS) and preconfigured with a desktop so when you turn the thing on, you are ready to rock. The only way to do that is for the PC vendors to pre-install Linux (or FreeBSD, or whatever).

    • Am I the only one who is sick to death of hearing about "Joe Sixpack"? I don't give a shit about Joe Sixpack! I care about what OS others use about as much as I care about what kind of car other people drive. Zilch.
      • Am I the only one who is sick to death of hearing about "Joe Sixpack"? I don't give a shit about Joe Sixpack! I care about what OS others use about as much as I care about what kind of car other people drive. Zilch.

        ... and this is why Microsoft will always dominate the Desktop.

  • by AKAImBatman ( 238306 ) <<moc.liamg> <ta> <namtabmiaka>> on Monday October 08, 2001 @05:49PM (#2404018) Homepage Journal
    I think Linux is going through a somewhat painful transition as it moves away from a Wild-West/Darwinist development methodology into something a bit more thoughtful. I will admit to wanting to take a clue-bat to some of the people arguing against Rik's VM work who simply do not understand the difference between optimizing a few nanoseconds out of a routine that is rarely called versus spending a few extra cpu cycles to choose the best pages to recycle in order to avoid disk I/O that would cost tens of millions of cpu cycles later on. It is an attitude I had when I was maybe 16 years old... that every clock cycle matters no matter how it's spent. Bull!


    This has got to be the BEST description of the Linux development to date that I've heard! (And it's got me rolling on the floor with laughter!)

    Seriously, when are people (in Linux, Windows, C, C++, Java, etc. camps) going to learn that design is paramount? We don't design things because we are old farts who have no clue about "how to make a system fast", we design them to get the best tradeoff between performance, stability, structure, and maintainability. Anyone who says "I don't care about those things" is talking out of his ass and will not truly become a good programmer until (s)he can admit that code should be well designed.
    • by Anonymous Coward
      The problem with Rik's VM isn't that it spends cycles here or there, it's that it is demonstrably broken in the field. If you load down a linux 2.4 box for any length of time with a lot of I/O, you will eventually see the failure mode of kswapd using 100% CPU forever. People are willing to try alternative VM systems because Rik hasn't proposed a solution to the problems that people are seeing with his system.

      The question isn't some cycles here versus a lot of cycles there, the question is whether we will ever return from kswapd, or should we just power cycle the damn thing and see if it comes back up?
    • That is a good point, except he didn't say that Linux wasn't designed properly. Or even mention design at all.

      He's talking about stuff like using cvs and not having a massive fork in the stable kernel. A lot of developers are a tad upset about that right now...

      (Not that I take one side or another in this issue. And besides if I took a side on something who would care?)

  • Bad Business Models (Score:4, Interesting)

    by KidSock ( 150684 ) on Monday October 08, 2001 @05:50PM (#2404022)
    M. Dillon writes:
    Open-source operates behind the scenes far more than it operates in the public eye, and it's hard to sell support to hackers who actually have *fun* trying to figure out a problem. In some respects Linux and the BSDs are poor commercialization candidates because they are *too* good... they simply do not require the level of support that something like Windows-NT or Oracle might require in a back-office setting.

    This sounds like sane reasoning but contradictory to quite a few "service and support" business models (e.g. Red Hat). It will be interesting to see who's right. Perhaps proprietary solutions built as userspace applications running on top of Free platforms would be better? Would that be frowned on by anyone? Not me.

    • by stripes ( 3681 )
      This sounds like sane reasoning but contradictory to quite a few "service and support" business models (e.g. Red Hat). It will be interesting to see who's right.

      That sort of assumes you try to sell the support to the hackers. I expect you end up selling more support contracts to the "normal" folks. For example, back before Red Hat bought Cygnus, Cygnus sold more gcc support contracts to embedded developers than to people wanting to run gcc on a Unix system to produce Unix executables.

      (This is not to say embedded folks aren't hackers, but many more of them are hardware hackers and software dabblers, and many many many of them are worker grunts and not hackers at all, which was surprising when I was in the field, but nevertheless true at the time.)

      So I think service and support business models have a market, that market just isn't me (and is less likely to be folks who read Slashdot, and more likely to be folks who stop reading about computers when they have free time...)

  • Poor Guy... (Score:1, Redundant)

    by Anonymous Coward
    ...It's like in Office Space:
    "THE Matt Dillon? I celebrate the man's whole catalogue! What's your favorite of his movies?"
    Managers == Trolls
  • In the wishlist, Matt writes:

    [...] an I/O descriptor representing a TCP connection could be migrated entirely off the original machine

    Now that'd be a tasty bean.

    • [...] an I/O descriptor representing a TCP connection could be migrated entirely off the original machine

      Now that'd be a tasty bean.
      I'm terribly curious to know how they plan to pull this off. Anyone got any ideas?
  • BSD and Linux VM? (Score:2, Interesting)

    by thule ( 9041 )
    Is the BSD VM really that much better than the Linux VM anymore? It seems that Linux's VM is looking forward to machines with lots and lots of processors (NUMA). BSD seems to still be working out basic SMP. There was a patch for the Linux 2.4 kernel to make it behave like the BSD VM. What sets the BSD VM apart?
  • by 1010011010 ( 53039 ) on Monday October 08, 2001 @06:12PM (#2404119) Homepage

    I'm going to cheat a bit and also give you my #2 feature-wish: I want native filesystem replication. I don't care a whit about common server-based disk store: you don't get reliability or scaleability that way. I want to see distributed (replicated, not partitioned) filesystems that are transactionally coherent[...]


    This would indeed be a nice thing to have, and was, in fact, the subject of two years of work at my previous employer.

    We were developing a filesystem named, variously, Charon, CXFS, and SSFS (Charon and CXFS turned out to be other people's trademarks).

    It was a 64-bit journaled filesystem that used extensible hashing for directory layout, and provided named streams, ACLs and per-object quotas (i.e., per-volume, per-directory and per-file block and inode quotas). It used a distributed journaling system to synchronize data among several peers, and could perform partial filesystem replication (i.e., client local fs device size is smaller than server local fs device size, but client fs appears to be as large as the server's fs).

    It included a system for mapping the local OS authentication and security identifiers to those from the other OSes participating in a replication group (such as unix UIDs/GIDs to NT SIDs, to kerberos tickets, etc.). All filesystem entities had 128-bit UUIDs to aid in this mapping.

    We had begun ports to FreeBSD and Windows 2000.

    We were about 4 months short of having an alpha release on Linux 2.4 before the company went bust. In short, it was over-engineered and too ambitious. We should have started smaller; for instance, Linux 2.4 only, with nfs-style 'security'. Or FreeBSD-only. :)

    After the sudden, complete, and total demise of the company we worked for, all of us on the team had to put our energy into paying bills and finding a job. So not much has happened since The End.

    I can provide design info and even code if someone wants to help.


    • A friend of mine works at Cal Poly Pomona and their whole campus intranet runs on some kind of Distributed Computing Environment. Each server in the system is a "node" in a Borg-like network. If a box melts down, things keep chugging. To reinstall, pop in one CD and the system will update itself from its peers within minutes. It's quite slick, and I think the whole thing is Free and running on FreeBSD and Solaris.

      I couldn't find any good links just now that talk about it, but if anyone is interested I will talk to my buddy and get some.

      I guess this isn't the same thing he was talking about in the article, because I don't think that any arbitrary data object or stream can be moved from one box to another, but it's still pretty neat.
  • Not trying to nitpick or anything (interesting interview), but there does seem to be a contradiction in Matt's answers.

    1) And if you didn't hear me mention so-called 'clustering' solutions currently available from unnamed vendors, it's because they can't actually deliver these things -- not true Q.O.S. That's my opinion, anyway. Using a cluster to hide the fact that the underlying systems crash regularly is an extremely dangerous way to manage a computing environment.

    2) Q.O.S. means having redundant hardware at the very least.

    Or perhaps I'm missing something ... :)
    • His statements are not a contradiction. He is saying that QOS is more than just redundant hardware, but that redundant hardware is an important part of QOS. I think that in the first statement Matt also means that using redundant hardware is the wrong way to fix poor (crashing) software.

    • by mindstrm ( 20013 ) on Monday October 08, 2001 @07:02PM (#2404242)
      I think he's saying two different things.

      To give context, he's discussing full transparent process migration, so a process can move *completely* to new hardware, transparently. If this were the case, you could build a 'true' cluster where tasks could move around to different hardware at will.

      He's saying that current 'clustering' solutions are NOTHING like this. They are all application specific. Veritas, Sun, whoever is offering you clustering technology is simply offering you their own version of something you can probably think up at home.

      He's saying, in point 2, that to have real QOS you need to have redundant hardware, but the unwritten context is that it's not really 'redundant' if processes can't switch to it seamlessly. Otherwise it's just 'other' hardware.
      • Have you looked at Mosix?

        It provides pre-emptive process migration across machines in a cluster. It's a single-system-image clustering solution for Linux.
        • Yes, I've looked at it and used it quite a bit.
          Mosix is the kind of thing we are talking about... EXCEPT...

          Mosix only migrates the user context of a task to another machine. The process is still effectively tied to the node it started on; if that node crashes, bye-bye process.

          What is meant in the article, though, is complete migration... having a process able to move to whichever hardware it wants to.

        • Or... to put it differently, Mosix allows greater performance up to a point, by adding machines... but doesn't provide redundancy. If the node a task started on crashes, the task is toast.
          You can't instruct it, say, to move all tasks to another machine so you can perform maintenance.

    • by Shoeboy ( 16224 )
      1) And if you didn't hear me mention so-called 'clustering' solutions currently available from unnamed vendors, it's because they can't actually deliver these things -- not true Q.O.S. That's my opinion, anyway. Using a cluster to hide the fact that the underlying systems crash regularly is an extremely dangerous way to manage a computing environment.

      Bullshit. No amount of bug fixes will eliminate crashes. I've seen smoke pouring from servers due to short circuits on the motherboard. I've seen array controllers give up the ghost, network cards bid this cruel world goodbye, and absent-minded techs power cycle the wrong box. I've worked for 3 (!) companies that have had entire datacenters lose power due to human error (Microsoft, Voicestream and Group Health).

      Crashes happen and you have to have redundancy to deal with this. If you want to see a truly stable system, don't look at any Un*x. Un*x clustering solutions are bolted on and it shows. You want to look at a real high availability system like a Tandem system.

      On a Tandem system you can lose a processor without dropping a single connection. You can lose a server and recover. It's built fault tolerant from the ground up.

      If you're serious about high availability, this is the only way to go. Unix clustering is mostly shit - you really don't want it running a system that absolutely, positively can't go down.

      Sure NT clustering is worse -- I did a study at Voicestream and found that our NT clusters had more downtime on average than our standalone NT systems, but Un*x clustering is nothing to write home about either. Saying that Un*x is more stable than NT is like saying that it's better to drink urine than eat shit. Sure it's true, but it's missing the big picture. VMS clusters are more stable than any Un*x solution and Tandem systems won't go down even if you start smashing processors with a mallet. That's what real stability means.

      --Shoeboy
      • Calm down calm down. Actually, I think Matt might even agree with you.

        What he was talking about in the way of clustering was the sudden fascination with Beowulf-type clusters. These things are being marketed as high availability when clearly they are not.
      • ...and Tandem systems won't go down even if you start smashing processors with a mallet.

        Man, I would love to see the white paper on that!
    • Although clustering is no substitute for real software engineering, you can't get real reliability without redundant hardware, because hardware fails. I don't see any contradiction here; I think the points are unrelated.

  • by Anonymous Coward
    This article contains some of the most important home truths that the open source community must face, some in particular relating to Linux.

    This guy is talking sense, the two quotes that stand out being:

    1. The one on the GPL (Non)Business Model
    2. The attitude of the projects

    I'm a BSD user, primarily because I didn't believe that these qualities existed in the Linux world in suitable amounts.

    - Another AC
    • Business models (Score:2, Interesting)

      by matty ( 3385 )
      The one on the GPL (Non)Business Model

      If the GPL is/has a "non-business model", then why have the major computer manufacturers that have embraced Free Software pretty much all chosen Linux over any form of BSD? (IBM, HP/Compaq, Dell)

      The answer is really simple: When companies like IBM & HP make contributions to the Linux source code base, they don't want someone (like Microsoft, for example) to come along and take the code and make it proprietary and closed-source. The BSD license allows this, the GPL doesn't.

      These companies seem to either want to keep the code completely closed (AIX, HP-UX, Lotus Notes, etc.) or completely open so that it must stay open.

      The business model behind Linux and other GPL-based software is simple: build a turnkey solution that meets the customer's needs/wants and they will pay top dollar for it. There is a company here in Washington State that runs all their systems on Linux: mail, web, desktop, and a custom billing/scheduling system based on MySQL (I believe). They have hired one good Linux hacker (not me, he's a friend of mine) to custom build their solution.

      If he had done all this with BSD-licensed software, other companies could come along and steal his code, modify it, close it and sell it as their own. Since it's under the GPL, they can use it if they like, but they must keep it open and give him credit.

      IBM is doing the same thing with their work. You can use their modifications to Linux if you like, but you must keep them open and you probably have to give credit to IBM somewhere along the way (I may be mistaken about this, please correct me if I'm wrong). This protects them from competitors stealing their code, not only because it's illegal due to the license, but also because I'm sure HP wouldn't want to use code where they had to give credit to IBM. Individuals and small companies wouldn't mind that, though.
      • That may be true to some degree, but I also think this was a factor: Linux had (has) more mindshare than *BSD. When choosing a bandwagon to jump on, you pick the one that's most full.

        Not that I am complaining that free software has some corporate supporters. Rah, rah. But c'mon, Linux is "hot" and that has GOT to account for a lot of the corporate interest.
      • other companies could come along and steal his code, modify it, close it and sell it as their own.

        You should rephrase that. The most other companies could do is come along and fork the code with which they could modify, close, and/or sell it. You can't steal what is given to everyone openly.

        IBM is doing the same thing with their work. You can use their modifications to Linux if you like, but you must keep them open ...

        That is actually incorrect. IBM is working at many levels:

        1) Lotus is IBM. Completely closed.
        2) Much software is LGPL or GPL.
        3) Some software is IPL. Postfix for example.
        4) Many of the improvements made to Apache come from IBM. They appear to not be afraid of Microsoft forking their code.
        5) I think OpenAFS is under an MIT license.
  • by cymen ( 8178 ) <cymenvig.gmail@com> on Monday October 08, 2001 @07:37PM (#2404358) Homepage
    In regards to the desktop... well, I'm not sure exactly what you are asking. Both Linux and FreeBSD are in the same boat there... the only way to drive desktop acceptance is to ship machines pre-installed with the OS (whatever OS) and preconfigured with a desktop so when you turn the thing on, you are ready to rock. The only way to do that is for the PC vendors to pre-install Linux (or FreeBSD, or whatever).

    I think this is bunk. As he pointed out earlier, open source software is a poor candidate for commercial support. I think it is a poor candidate for pre-installation too. No self-respecting sysadmin would want Dell to preload Linux or FreeBSD on their company's desktops (or servers). It is far easier to support systems that are configured in the same manner and style, and each sysadmin has their own preferences which become company policy. If we are talking about pre-installed systems for the home market, then OK - it would be a selling point. But I think the market for such a system would be so small as to make it not worth the cost to a large company like Dell.

    None of the open source operating systems are ready for the average home user's desktop. The desktop environments need to be stable and established, and the system update procedures as simple as Windows Update (apt is very close but not enough). There are too many rough edges right now for the average user. Compare the rate of change in the Windows desktop to that of KDE or Gnome. KDE and Gnome have to change because we demand and expect the same ease of use that the Windows desktop environment provides, but in the same vein they won't be useful for the average user until they stabilize.

    Can you imagine a dumbed-down debian with a graphical installer and a graphical web-based update like Windows Update? Instead of seeing all the package details we would only see the meta packages that hold all the updates for a particular component like KDE or X11 or the base system. A simple click and the download and upgrade begins... I'm sure some of us would be horrified by the idea of dumbing it all down so much, but I think it will be necessary - and I wouldn't mind running such a system as my stable desktop while running something a bit hairier on my development system.
    • The fact that good admins wipe out anything pre-installed and go for their network-installed image isn't surprising.

      But that's no different for open source than it is for commercial OSes. First thing we do is wipe the Sparc and load from our own trusted (if they can be described as such) disks.
    • I think that what he's talking about in that quote is the fact that people who are not technically inclined aren't using Linux or a BSD system because their computers came with Windows or whatever.

      I believe he's saying here that if Linux, etc. were installed on systems as well, then the standard of everyone using Windows would be weaker than it is now.

      Most non-technical people have no idea what Linux even is. Or they think it's a "hacker" tool. Personally I think that X Windows is stable enough for some systems to replace the Microsoft standard for people who aren't downloading or buying Linux.

      Plus any technical administrator of any system formats their hard drive before using it. Even if it's NT or Solaris. Just a thought.

    • First of all, apt kicks windows update any day of the week.

      Secondly, "dumbed down" is a word only l337 kids use. They think that because something is easy it is less powerful. This is simply untrue. Good software is both powerful and a delight to use.

      If your software has user interface bugs that inexperienced or less determined users are not willing to put up with, that's not something to be proud of.

      Please don't use the word "dumbed down" again. To put it in a phrase you might understand: Blaming users for software bugs is l4m3.

      • Please don't use the word "dumbed down" again. To put it in a phrase you might understand: Blaming users for software bugs is l4m3

        I think it is pretty obvious in my post that I used dumbed down simply to illustrate the point. In my example the dumbed down software is a delight to use and I would myself like to use it. I don't consider myself dumb. Where did I talk about bugs?

        First of all, apt kicks windows update any day of the week.

        Well, doesn't that just depend on what your criteria are? I too believe apt is obviously superior technically, but in terms of ease of use for someone not wanting to spend more than a couple of mouse clicks to update, it isn't there yet... It could be done very simply but it hasn't been. And if you want to advocate that these users can simply use debian stable, I'm sorry - that just illustrates my point. The desktop just isn't ready yet... Maybe one day we'll have debian-stable, debian-desktop, debian-testing, and debian-unstable.

        If your software has user interface bugs that inexperienced or less determined users are not willing to put up with, that's not something to be proud of.

        And what the hell does this have to do with anything I posted?



        Secondly, "dumbed down" is a word only l337 kids use. They think that because something is easy it is less powerful. This is simply untrue. Good software is both powerful and a delight to use.


        I really doubt "dumbed down" is a term only "l337" kids use. It's a common term that means simplified. In some people's minds it has negative connotations - not in mine. In short, from m-w.com, "dumbed down":

        to lower the level of difficulty and the intellectual content of (as a textbook)

        And isn't that exactly what we are talking about? The way I used the term obviously has nothing to do with intellect - only difficulty. If you can't see that I can't help you :).
        • And isn't that exactly what we are talking about? The way I used the term obviously has nothing to do with intellect - only difficulty.

          What is so great about difficult software that you would not want software to be pleasant to use?

          "Dumbed down" has negative conotations.

          Instead of calling it a "dumbed down version of debian" you should say, "can you imagine a debian with a well-designed user interface for installing and updating it?" (BTW, I hear that the next version of debian is going to have a new installer and hardware detection.)

          People use the term "dumbed down" to imply that what the program is doing is so hard that the interface has to be hard to use also. That's a fallacy. Difficult and uncomfortable user interfaces are rarely there because the function the software performs is difficult. They are almost always there because the programmers did not design them correctly. I.e., they are difficult because they are buggy.

          For example, take your "dumbed down" debian installer. There is nothing about the process of installing an operating system that is inherently difficult.

          If the user knows exactly that they want a "desktop" and they are able to tell the installer what hard drive to install on, then the rest could be automated. That's two things for the user to do, neither of which is hard.

          Or perhaps the user may want to choose the packages individually. That's three steps but none of them are hard.

          Maybe the user has some specific partitioning scheme that she wants. That would be complicated if the user didn't know a little about unix already, but she clearly does, so it's not complicated for her.

          No matter what the user wants to do, installing the operating system should not be hard. There is no reason why a well designed installer can not allow for all these different types of users to install exactly what they want.

          The tricky thing for me when I first installed debian years ago was setting up the ethernet, sound and X. But this is something that a powerful installer does automatically. The fact that the debian installer at the time did not do so is a bug. The fact that I had to be determined and experienced to install debian was not a positive thing in any respect. (Difficult != good.)

          You are wrong to say that debian would have to hide .deb information to be easy for newbies to use. Newbies want more feedback and information, not less. What they do want, though, is for the information to be organized in a coherent way.
          Microsoft doesn't give enough information, and that's one reason people don't use Windows Update. That's also one reason why people hate computers...

          (Microsoft has a horrible interface in general. They spend millions on research but they seem opposed to the idea of using that research in actual products. It's amazing how they manage to violate every single user interface rule with the start menu alone. And once you start using their application it only gets worse.)

  • MFCd? Giants? At least they explained what KSE and SMPng are, but it would have been nice to have that at the beginning of the article.
    • by stripes ( 3681 ) on Monday October 08, 2001 @10:07PM (#2404733) Homepage Journal
      MFCd?

      Merged From Current. Most new features (esp. big ones) go into the "current" version of BSD. Most users use the "stable" version because current isn't exactly stable... If current is a long time off from going through a code freeze and becoming stable, then some of the new features that don't depend on other new parts get Merged From Current into the stable version that most people use.

      Giants?

      The older SMP kernel had a "giant" lock on basically all of the OS. In theory more than one CPU could be in the kernel; in practice one would hold the lock on Giant, and the rest would block waiting to get the lock (or would be running free in user code). The stable SMP code has a few other locks, and you can do a little in the kernel without hitting Giant. The SMP code in "current" pretty much (or totally) does away with the Giant lock.

      Lots of little locks is more like how Solaris works. I'm not sure if Linux has lots of little locks, or a handful of mid-level ones (lots of little ones works better if they are all in the right place; a few mid-level ones works much better than the one giant lock, or than lots of little ones in the wrong places).

      And yes, it would have been nice to explain all the terms at the start of the article (even SMP which isn't BSD specific). I'll live though.

      • stripes from #kotari ?
    • MFC: The process of backporting a feature from -CURRENT (the development branch of FreeBSD) to -STABLE (the stable branch, natch). It's an acronym for "Merged From Current"

      Giants: In the old FreeBSD (4.x and before), locking (for mutexes) was handled by a single lock over the entire kernel. Naturally this is unacceptable for SMP systems, and it is one of the big reasons FreeBSD lags in the SMP arena. John Baldwin is currently working his patoot off trying to get fine-grained locking into the kernel to get rid of Giant.
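
      (A rough userland illustration of Giant versus fine-grained locking, using POSIX mutexes to stand in for kernel mutexes; purely a sketch, not actual FreeBSD code:)

        #include <pthread.h>

        /* Coarse ("Giant") style: one lock serializes everything, so a
           second CPU entering any locked path simply waits. */
        static pthread_mutex_t giant = PTHREAD_MUTEX_INITIALIZER;

        /* Fine-grained style: each object carries its own lock, so CPUs
           touching different objects never contend. */
        struct counter {
            pthread_mutex_t lock;
            long value;
        };

        static void bump_with_giant(struct counter *c)
        {
            pthread_mutex_lock(&giant);     /* blocks ALL other locked work */
            c->value++;
            pthread_mutex_unlock(&giant);
        }

        static void bump_fine_grained(struct counter *c)
        {
            pthread_mutex_lock(&c->lock);   /* contends only on this object */
            c->value++;
            pthread_mutex_unlock(&c->lock);
        }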
  • Can we put all the "wow, I loved Something About Mary!!" and "the Outsiders rocked!" comments inside this one thread, to save the rest of this discussion from having to read them? I know I can't dissuade you from thinking it is original or funny, but I can at least try to contain it..

    Stay Gold, Penguin-boy.

  • 9. What is your opinion on .NET and do you think it is possible that .NET will change the OS "map" as we know it?

    Matt Dillon: I believe .NET is Vapor. It's a marketing term dreamed up by Microsoft that will magically morph into whatever Microsoft eventually winds up delivering. MS announces grandiose ideas with cute catch phrases all the time, and as with any good vapor there is always some basis in truth (if only a little pinprick). The reality is a little different though... remember, these are the people that hyped windows-ME up the wazoo and all we got out of it was a speech-synthesized windows installation wizard! These are the people that called NT the unix-killer and told people it was as reliable as UNIX. .NOT is probably a more descriptive term for .NET. My guess is that it will turn into Microsoft-proprietary rent-a-service glue, and that it will introduce an order of magnitude more security issues than IIS.


    Yes, I just blatantly copied this over. Sorry. But it's a choice comment. I bet he's right too.

  • I will admit to wanting to take a clue-bat to some of the people arguing against Rik's VM work who simply do not understand ...

    Actually I think the main problem with Rik's VM is Rik. He's always got some arrogant comment, or is criticizing Linus publicly about something. The fact is, his code was stuck in a loop (literally) and no one knew how to fix the damn thing. People kept submitting all sorts of little patches but they were just tripping all over each other (Alan claims to have been much more conservative about what he allowed into his kernel, which is performing well). I think when Linus saw that Andrea's total rewrite showed good performance he jumped at the opportunity, if for no other reason than to just get Rik off his back. And so far the results have been pretty good. Linus still does not approve of the classzone design, the code is supposedly really messy, and there are little annoying incongruencies. All together this means the struggle is not over on this front :(
  • Does anyone remember the USEFUL little tools Matt Dillon (I think it's the same guy) made for the Amiga?

    Interesting how really smart people never seem to quit being smart.
