BSD Operating Systems News

xMach Announces Core Team

Joseph Mallett writes "xMach today announces our brand new Core Team. We've also (finally) added a CVS server, as well as a CVSweb front-end so people can browse the source. Since the first Slashdot post, we've accomplished one of our major goals of being GPL-free (and thus fully BSD License'd), as well as added two mailing lists and fixed the wishlist code. Due to Mach's history with multiprocessing, we are currently looking more and more at the ideas of distributed processing. The code base is now cleaned up, so that everything should compile out of the box. Some of our more ambitious short-term goals are to move to a multiserver format and do a major update of the filesystem interfaces. And like the HURD, it's software that's here right now, and isn't vapourware."
This discussion has been archived. No new comments can be posted.

  • by Anonymous Coward
    The site www.xmach.org is running Apache/1.3.14 (Unix) (Red-Hat/Linux) PHP/4.0.4pl1 on Linux.
  • I could be wrong, but I remember XF86 being a year or a PCI bridge thing or both (yes, I'm not very clued in on this, but read on). Once upon a time there was XF98 for another architecture of some sort; it had to do with what kind of graphics cards it worked with.


    ~^~~^~^^~~^
  • A few quick commercial examples would be QNX and BeOS. BeOS is a very good performer, and QNX is targeted at the embedded market, and even has real-time capabilities.

    The L4 micro-kernel is the "successor" to Mach. The initial L4 developers were the primary developers of Mach 4. There is a Linux server for L4, and the site claims only a 4% performance hit compared to monolithic Linux - and that's only partially optimized.

    Microkernels can provide better performance and more flexibility than monolithic kernels, but they are much harder to design and develop properly.
  • MacOS X?

    *sigh*

    MacOS doesn't use [apple.com] a traditional Mach uK. It completely bypasses Mach for many functions, all because Mach is too slow to do many things.

    The wheel is turning but the hamster is dead.

  • > but has anyone written a microkernel other than Microsoft

    QNX Software Systems has been writing microkernels for quite a number of years now; both QNX 4 and their current OS are true microkernels.

    www.qnx.com [qnx.com]

    NT's kernel is "micro" only by being "Microsoft".

    Be's kernel is sort-of like a microkernel, although drivers and some services (such as the filesystem) are dynamically loaded into kernel space instead of their own space.

  • Be's kernel isn't a microkernel because drivers and services like the filesystem are loaded into kernel space, not as separate processes.

    If Be ever gets around to releasing a BeOS update with the new networking (BONE), it'll be even less of a microkernel, because BONE moves networking into the kernel.

  • You really shouldn't substitute a u for a mu. If you're so special that writing microkernel is too hard for you, then perhaps you should at least use the right symbol. A µ (mu) is not the same as a u.

    --

  • I'm sorry -- I didn't mean it that way. I liked your post, and I'm sorry if it seemed like I just wanted to nitpick.

    Again, I liked the post, but your use of "u-kernel" made it seem less knowledgeable, sort of like when you read an otherwise informative post about computer security and then the person uses "virii"[?] [perl.com] three times in a row.

    --

  • And you have what to back that claim up? Ahhh exactly...
  • Hmm...

    Getting rid of the GPL, what a worthy goal!

  • I followed your link and read the page, but I didn't see how MOSX bypasses Mach. Could you elaborate on this a little, please?

    BTW, the diagram there surprised me -- I thought that things like Carbon and Cocoa used some of the functionality provided by the BSD layer. The VFS, for example, and the networking code. Could the diagram be wrong?

    --
  • From your web page:

    > Carbon consists of most of the classic Macintosh APIs save for reentrant code.

    What do you mean by this? It could be read a couple of different ways, and one of them isn't correct.

    --
  • So, by virtue of choosing not to live free, I would, in effect, still be living free (since I chose not to live free) - and thus can't really choose not to live free after all.

    My head still hurts. (And this is getting more silly :)

  • > And like the HURD, it's software that's here right now, and isn't vapourware.

    This must be a curious definition of "vapourware"; I clearly remember downloading and running Hurd some years ago.
  • Someone please moderate that up. If I had mod points this would get a definite 'informative' from me.
    --
    Slashdot didn't accept your submission? hackerheaven.org [hackerheaven.org] will!
  • Some HURD developers are looking to replace their Mach kernel with L4; I think it is because of performance problems.

    So it seems that Mach really has performance problems: a BIG problem for a kernel!
  • >Mach (which has never been used successfully in a production system)

    Oh, really [apple.com]?

  • > For proof of Mach's deficiencies I linked to two research papers

    Oops :) My apologies
    Thanks for the great info.. C4L and all of you who replied..
    /me is happy

  • So, we now have:
    • X.11
    • Mac OS X (uses mach)
    • "X Server"s
    • x86
    • Mac OS X server (uses mach)
    • Software that uses X.11, which by custom tends to begin with the letter x ("xEyes")
    • xMach (uses mach)
    • Multiple X servers for Mac OS and Mac OS X, one of which is named "MacX" but is not compatible with Mac OS X. Good luck looking for "Mac OS X X Server"s on ANY search engine.
    • XF86 (X server; despite name, runs on lots of non-x86 hardware)
    • Someday we will start seeing X servers for XMach
    • And, most horrifically of all, someday we will see Mac OS 11-- which undoubtedly everyone on IRC will start calling "X 11". Assuming apple doesn't come out and name it "Mac OS X 11".
    ARGH!
  • The others are full-on fakes!

  • Quite insidious, good work!
  • EROS is now GPL.
  • Actually, I was wondering what would happen if someone rewrote all the GNU apps that are currently in use with Linux and released them under another license (BSD). If and when that happened, would Richard Stallman still "require" us to call it "GNU/Linux"?

  • The "GUI" is not in the kernel. Not unless you're using some New & Improved(tm) definition of GUI that differs from either the definition or the common usage.

    The *display subsystem* runs in kernel *space*, for performance.
  • It is actually a problem with DNS connectivity & Verizon not rapidly responding.
  • FYI: "misspelt" isn't a word, either... if you are keeping score. Chances are you should check your (not you're) own grammar and spelling in posts that rag on the same.
    --
  • actually it is "flamebait", i.e. a dissenting opinion that is supposed to make you think and defend your position.
  • beos is not, nor will it ever be a production system. I know nothing about qnx.
  • blah.. there is shit that is in the kernel that is not contained within the definition of a microkernel.. wtf is with you man?
  • My comment about linux was to point out to you that one of the servers isn't even running a *BSD derivative. In fact, if you had taken the time to look, you would have realized this was a DNS lookup failure and not a crashed server. Then again, you are probably running IE on NT and I doubt you would have been able to tell.

    As far as the second part of your post is concerned, I have no idea what you are trying to say. What does NT have to do with xmach.org being down? Then again, this is Slashdot, I no longer actually expect posts to be on-topic.

    For the record, if you think I run NT you are sadly mistaken. In my house I have 1 Win2k box, 1 Win98 box, 1 Solaris 2.6 system (SPARC), 1 Solaris 2.7 system (SPARC), 3 FreeBSD 4.3 systems (DEC Alpha), 6 FreeBSD 4.3 systems (x86), 2 FreeBSD 3.5 systems, 1 Cisco Router and 2 Cisco Switches.

    -sirket
  • Good point. Maybe the site should stop using Linux.

    Server info

    Apache/1.3.14 (Unix) (Red-Hat/Linux) PHP/4.0.4pl1

    -sirket
  • Two main ways:

    You can maintain a completely separate tree of your own changes (a code fork), and distribute/sell the changed code/binaries without talking to Apple, or giving Apple rights to the modified code.

    It includes little or none of the code/technologies Apple worked on. This means that Apple's Obj-C compiler and frameworks are not currently included (the compiler changes are being merged back slowly), the concept of Frameworks is not there, the MacOS X layout is not there, etc. And the way the kernel has evolved from its Mach basis is/will be different (different emphases).

    So the fact that Apple is not involved is both a blessing and a curse.
  • > It is important to note that those three systems are monolithic systems running on top of a Mach microkernel. Mach has not, in any of those cases, been modified to the extent that it no longer functions as a true microkernel.

    yes, but the whole point of a microkernel is to have a different fault domain for each subsystem (and to use message passing ipc between them). using a single server completely defeats that.

    > For example, take the fact that OSF/1 based systems use Mach message passing and VM, so they can handle such fun concepts as reserve-on-write memory allocation with message-callbacks on failure. In UNIX terms, that means that "malloc(1024*1024*1024)" does not reduce the amount of VM available... which is both scary and useful at the same time ;-)

    linux and most other unix-like OSes (with the notable exception of solaris) overcommit memory as well. this is nothing new. recently people have complained about the inability to completely disable it in linux.
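
    As a minimal sketch of what that overcommit behaviour looks like in practice (an illustration only, assuming a typical Linux/glibc box, not code from any of the systems discussed above): the gigabyte-sized malloc() normally succeeds immediately, and real memory only gets committed once the pages are touched.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void)
    {
        size_t len = 1024UL * 1024UL * 1024UL;   /* the 1 GB from the example above */
        char *p = malloc(len);

        if (p == NULL) {
            puts("malloc failed up front (strict accounting or tight limits)");
            return 1;
        }
        puts("malloc succeeded; nothing has been committed yet");

        /* touching every page is what forces the kernel to find real memory;
           on an overcommitting system this is where trouble shows up */
        memset(p, 0, len);
        puts("all pages touched");

        free(p);
        return 0;
    }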

    > The proliferation of copy-on-write semantics in Mach-derived systems is also very useful, and again benefits greatly from the way mach does message-passing instead of just having signals to convey information from the kernel to user space.

    copy on write and signals have nothing to do with each other. also, message passing ipc is more expensive than just invoking a user space signal handler. message passing between user space and kernel space is pretty much useless, unless you want to export kernel interfaces (eg, vm pager, fs) directly to user space.

  • BTW> I was thinking. From looking at the Win95 design, it seems to me that Win95 may just be the most "advanced" of any commercial OS design. Since most Windows95 components are user-level libraries mapped into each program's address space, it would seem that Win95 is the original exokernel!

    *BARF* *wipes off chin* Sorry, I couldn't help myself. I don't know enough about Win 95 architecture to comment, but it's certainly not stable or fast enough to be any kind of good design. And certainly, not even CLOSE to exokernel.

    -----
    "Goose... Geese... Moose... MOOSE!?!?!"
  • No, no, noooo. Read it again. He said "used successfully". And they have, haven't they? ;-)

    -----
    "Goose... Geese... Moose... MOOSE!?!?!"
  • > You really shouldn't substitute a u for a mu. If you're so special that writing microkernel is too hard for you, then perhaps you should at least use the right symbol. A µ (mu) is not the same as a u.

    Thank you very much for reducing my informed, intelligent and helpful post (which I took quite a bit of time out of my day to write simply because I wanted to help someone out) to a piece of shit with a completely petty and stupid nitpick. No, really, I appreciate it. Thank you. Perhaps you should try adding to the discussion next time, instead of bitching about something completely unimportant. If you're constipated (as seems to be the case), might I suggest adding more fiber to your diet.

    -----
    "Goose... Geese... Moose... MOOSE!?!?!"
  • > To most people microkernel == Mach (which has never been used successfully in a production system)

    NeXT [next.com], MacOS X [apple.com], MacOS X Server (in use for 2 years now).

    -----
    "Goose... Geese... Moose... MOOSE!?!?!"
  • > The L4 micro-kernel is the "successor" to Mach.

    Completely incorrect. L4 is the successor to L3. Perhaps they had started with Mach, but that's because most academic research revolves around Mach (for some only-God-knows reason). Please see this [slashdot.org] post for more information and links to back this up.

    -----
    "Goose... Geese... Moose... MOOSE!?!?!"
  • Honestly, if I had made a mistake and used virii when the correct usage is viruses, then I wouldn't mind being corrected. I want to know correct pronunciations, definitions, etc. However, the reason I used u-kernel and not µ-kernel or microkernel for most of my post was simple:
    • µ, usage: &micro; = 7 characters
    • micro, usage: micro = 5 characters
    • u, usage: u = 1 character
    I knew I was writing a long post and that I would be using the term a lot. In the interest of time I used the shortest version available which wouldn't cause confusion. I believe it was quite clear that I was using u-kernel as a shorthand for microkernel, and 'u' is quite close looking to 'µ' (missing all of 5 pixels). People don't use 'virii' as a shorthand for viruses; they use virii because they don't know any better. That's a blatant mistake. I don't think I conveyed that image.

    -----
    "Goose... Geese... Moose... MOOSE!?!?!"
  • There are two ways of looking at Win95: everything gets loaded into the kernel, or everything sits in userland. Now, generally speaking, in unix, userland threads from one process are isolated from threads in other processes. The only place where you can see everything is from kernel space. You decide - I personally am of the opinion that everything is loaded into the kernel.
  • Is there any code that was copied or just a header file?
  • I'm surprised that they chose to put that in their /. submission. After all, most slashdotties go after the GPL like a fat Klan member does a burger made by a fellow Aryan. To dis the GPL, or the Klanboy's burger, is to anger and confuse him. After all, Slashdot tells me that the GPL is the only true hardcore license... Scheiße!
  • How many versions of Unix already exist? Oh, I think I'll pass... I think you're missing the point with the subject too; the aim of the project is to steer away from the bloatedness.
  • > If there are, then what is it that repeatedly leads projects like xMach/HURD/OS X/mkLinux to embrace Mach as opposed to one of the competing microkernels? -- The same reason most of us are using Java and C++ instead of SmallTalk, Lisp or Objective-C. Developer inertia and people falling to the more hyped and/or better sold technology.

    Also note that all the "better" microkernels (Neutrino, EROS, BeOS) are non-free. I think that's the basic reason that free software projects are not using them.

  • The FSF thinks it should be a forced gift whether you want it to be or not. I want the freedom to give away or charge for anything I do. I'm talking about stuff that I wrote, not modifications to existing code. This is what freedom is all about. The FSF doesn't want you to have the option. It really is just as bad as the other extreme, that nothing should be released at all. Extreme anything is almost always wrong.
  • OS X [slashdot.org] has a Mach 3.0 kernel. Isn't that still a microkernel?
  • Sure, Microsoft, take our code and use it in your closed source programs without our consent, without even acknowledging us. We like it! BSD all the way!
  • You forgot Windows XP :-)
  • Well said. But will it be lost on the /. audience?

  • really?
    Tru64 is on top of Mach. the following is from tru64unix.compaq.com...
    The Tru64 UNIX operating system is a 64-bit advanced kernel architecture based on Carnegie-Mellon University's Mach V2.5 kernel design, with components from Berkeley Software Distribution (BSD) 4.3 and 4.4, UNIX System V, and other sources.
    here's the scoop from compaq/digital [compaq.com]
  • The part of version 3 that I don't like is the bit that makes it mandatory to send the source along with any binaries. Not just making it possible to get the source, like it is now, but having to send it with the binaries. I do a lot of net installs and maintenance over the net with Debian. This part would easily double or triple the amount of stuff I have to download. I thought 2.2 was good. I just think that RMS is starting to go a bit too far, in particular given that a lot is now done over networks and this would force me to download source that I don't want or need. I think something that is about halfway between the GPL and the BSD licenses as they stand now would be perfect.
  • Link Please? I have never read anything to that effect; if anything, the GPL v3 will loosen some of the restrictions on distribution, and will close the ASP loophole (the one where the user uses the software via remote control rather than distributing it, therefore negating the need to make source available). In fact, if you read http://www.newsforge.com/article.pl?sid=00/11/01/1636202
    you will see that GPL v3 will allow internet-only distribution of the source.
  • Sorry, I was attempting to play nice with the gnubies for once, and possibly mock the line from the GNU HURD page.
    --
  • Actually, and this is kind of important, the reason the site is down (and it actually isn't) is because my DNS server is unreachable. Why? Verizon.
    --
  • I'm not a kernel guru, but I've made the following observations while I was working with MkLinux (so all comments relate to that version of MACH):

    - Mach is rather bloated. The Mach kernel was significantly larger than the Linux server, which IMHO tells a lot.

    - Mach has evolved into a giant patchwork. The source code is FULL of (years old) comments like "XXX THIS REALLY NEEDS TO BE FIXED !!! XXX". In general, the code is messy, to say the least.

    - Some internal MACH interfaces are definitely not well thought out. As an example, reading from a block device required sending a message first to set the block number, and then another message to initiate the read (a pattern sketched at the end of this comment). Apart from this unnecessary overhead, the device driver thus needs to maintain more state, which inhibits any possible parallelism.

    Mach has some nice ideas, but IMHO it needs to be rewritten from scratch - right now, this "micro"kernel is carrying around much more historical baggage than some monolithic kernels, thus skillfully combining the disadvantages of both monolithic and microkernels while avoiding most benefits of any of them.
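
    A purely hypothetical sketch of the two-message pattern from the third observation above -- the function names are made up for illustration and are not the real Mach device interface -- just to show why the driver ends up holding extra state:

    #include <stdio.h>

    /* hypothetical per-device state the driver must remember between messages */
    static long current_block = -1;

    /* "message" 1: select the block that the next read will use */
    static void msg_set_block(long block)
    {
        current_block = block;
    }

    /* "message" 2: perform the read, relying on the state left by message 1 */
    static int msg_read_block(char *buf, unsigned long len)
    {
        (void)buf;
        if (current_block < 0)
            return -1;                       /* no block selected yet */
        printf("read %lu bytes from block %ld\n", len, current_block);
        return 0;
    }

    int main(void)
    {
        char buf[512];

        /* two round trips where a single read(block, len) message would do;
           concurrent clients would also race on current_block */
        msg_set_block(42);
        msg_read_block(buf, sizeof buf);
        return 0;
    }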

  • He said successfully...

    - A.P.

    --
    Forget Napster. Why not really break the law?

  • I believe your interpretation of the GPL is incorrect. The person sharing code does not expect any direct return (through money or credit) from their code.

    GPL = Kid who shares his toys with goodwill, and says that anyone who wants to use the toy must share their toys as well.
  • Fiasco [tu-dresden.de] is another C++ implementation of L4.
    "The original L4 -kernel for x86 [tu-dresden.de] has some shortcomings which we intend to fix with this new implementation. The Fiasco kernel:
    • can be studied and maintained better because it has been written in a high-level language (C++)
    • has better real-time properties than L4/x86 because it can be preempted at almost any time
    • is freely redistributable under the GPL

    See also the L4Linux [tu-dresden.de] project:
    "L4Linux is a port of the Linux kernel to the L4/x86 and Fiasco -kernels (microkernels), two kernels implementing the L4 -kernel API."

    "L4Linux runs in user-mode on top of the -kernel, side-by-side with other -kernel applications such as real-time components. It is binary-compatible with the normal Linux/x86 kernel and can be used with any PC-based Linux distribution."
    --

  • Are you offended that "freedom" means that people don't have to make the same choices as you?

    It's like the motto "live free or die". What if I don't want to live free? Does it mean I don't have the freedom to decide not to live free - and am therefore being not free to decide whether or not to live free.

    Ow, my head hurts...

  • The (unfinished) goal of the GNU Project was to create an operating system with no proprietary software in it. Why? Because they didn't want the restrictions of proprietary licenses. What a worthy goal!

    But for some people, the restrictions in the GPL are still too much. It's much, much better than proprietary licenses, but it still has restrictions on distributions. The xMach team wants to create an operating system with only copycenter/unrestricted software. What a worthy goal!

    'There was a village that had a magical apple tree. Whenever an apple was picked from the tree, another would replace it. The villagers were well fed and had no want. But some villagers took their apples and locked them away, saying "Look at those foolish people! They do not lock away their apples. Surely someone will come and steal them."'
  • In this case, it is the *user* who decides which version to distribute the software under, not the author. If the GPLv3 is too restrictive (and parts of it sound like it might be), then you can still keep distributing the software under GPLv2 instead.
  • In a nutshell, yes...

    You are not free if you do not have the choice to be free or unfree. And if you have the ability to choose unfreeness, then you are free. Even if there is no turning back on unfreeness, at least you were free at the time you made that choice.

    This is in stark contrast to proprietary software. You are born free and then subsequently choose to use proprietary software. If you don't like it, stop using it. It's like walking into a closet and shutting the door behind you. Your movement is now restricted because you are in a closet, but you are still free and the closet is not subjugating. Just step out! People *choose* to use Linux or BSD instead of Windows. How can they possibly be unfree if they have that choice?
  • The portion of in_systm.h he's referring to is below. The only code is some network-order typedefs that aren't used by the kernel, but are provided for BSD compatibility. IMO, this is "code" for the purposes of the GPL, or would be were it used.

    /*
    * INET An implementation of the TCP/IP protocol suite for the LINUX
    * operating system. INET is implemented using the BSD Socket
    * interface as the means of communication with the user level.
    *
    * Miscellaneous internetwork definitions for kernel.
    *
    * Version: @(#)in_systm.h 1.0.0 12/17/93
    *
    * Authors: Original taken from Berkeley BSD UNIX 4.3-RENO.
    * Fred N. van Kempen, <waltje@uwalt.nl.mugnet.org>
    *
    * This program is free software; you can redistribute it and/or
    * modify it under the terms of the GNU General Public License
    * as published by the Free Software Foundation; either version
    * 2 of the License, or (at your option) any later version.
    */

    Note that it states that the file is GPLed, with no mention of the BSD license or, in fact, of the BSD copyright ("original taken from" is not a copyright notice that I'm familiar with). I assume that they got permission to do this, or that the file was public-domain, especially since (I'm almost certain) 1993 pre-dates the removal of the advertising clause in the BSDL, which might have been sufficient reason to avoid using the file altogether. I can't really see them stealing the header and leaving attribution, either.

  • > copy on write and signals have nothing to do with each other

    In mach, they most certainly do. There are a number of resources that are difficult to copy-on-write because they are too high-level to be managed by the VM. With mach's copy-on-write semantics, you can actually manage the copy in a user-space callback.

    > message passing ipc is more expensive than just invoking a user space signal handler

    Expensive yes, but much richer. Sending a SIGFIZZ to the process is all well and good, but in most cases, the process then needs to go dip into some kernel datastructure or make a system call to get more information. With message passing, you get it all in one pass, and you can define richer message classes than the traditional POSIXish signal mechanism can deal with.
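
    As a rough plain-POSIX illustration of that last point (this is not Mach code, just a sketch of how little a signal can carry): even with SA_SIGINFO the handler only receives the small fixed siginfo_t, and anything richer means going back to the kernel.

    #include <signal.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    static volatile sig_atomic_t got_signal = 0;
    static volatile long sender_pid = 0;

    static void handler(int sig, siginfo_t *info, void *ctx)
    {
        (void)sig; (void)ctx;
        sender_pid = (long)info->si_pid;   /* the fixed siginfo_t is all we get */
        got_signal = 1;
    }

    int main(void)
    {
        struct sigaction sa;

        memset(&sa, 0, sizeof sa);
        sa.sa_sigaction = handler;
        sa.sa_flags = SA_SIGINFO;
        sigemptyset(&sa.sa_mask);
        sigaction(SIGUSR1, &sa, NULL);

        kill(getpid(), SIGUSR1);           /* deliver a signal to ourselves */
        while (!got_signal)
            ;                              /* delivery to self is immediate */

        printf("got SIGUSR1 from pid %ld; anything richer needs another syscall\n",
               sender_pid);
        return 0;
    }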

    > the whole point of a microkernel is to have a different fault domain for each subsystem

    This is actually a kind of funny statement. Since microkernels are being used most often because they provide a conveniently small and flexible abstraction layer for monolithic operating systems, I think you have to revise your thinking here. Perhaps when microkernels were envisioned this was the idea, but throughout the late '90s and early '00s, we've certainly seen very little of this sort of microkernel use.

    This use of microkernels is not invalid, nor is it rare. Would all monolithic OSes benefit from a microkernel-as-abstraction-layer? Yes, I think so in the ideal; however, many OSes (e.g. Linux) have seen tremendous benefit from the monolithic model that they started with, and to re-implement on top of, e.g., mach would be quite painful. I'm not convinced that the benefits would outweigh the cost.
  • Linux (or FreeBSD or MacOS X, etc) is UNIX is 1960s technology in the same way that 2000 is NT is VMS is 1960s technology.

    Neither one is a very fair statement, since development has progressed dramatically (including at least one major re-implementation along the way) in both cases.

    Oh well, trolls can't be right all the time ;-)
  • NT has no relation (other than conceptual) to Mach. MacOS X, NeXT OS and OSF/1 (from which Tru64 descends) are all Mach with some relatively monolithic layers on top.

    It is important to note that those three systems are monolithic systems running on top of a Mach microkernel. Mach has not, in any of those cases, been modified to the extent that it no longer functions as a true microkernel.

    Also, the original post made no such distinction. It made a blanket statement that Mach had never been used in a production OS. This is untrue, and there is a fair deal of advantage in having the microkernel abstraction available, even if you run a monolithic kernel layer on top of it.

    For example, take the fact that OSF/1 based systems use Mach message passing and VM, so they can handle such fun concepts as reserve-on-write memory allocation with message-callbacks on failure. In UNIX terms, that means that "malloc(1024*1024*1024)" does not reduce the amount of VM available... which is both scary and useful at the same time ;-)

    The proliferation of copy-on-write semantics in Mach-derived systems is also very useful, and again benefits greatly from the way mach does message-passing instead of just having signals to convey information from the kernel to user space.
  • You'll have to define "successfully," then. Notwithstanding your criticisms of Apple's business models, MacOS X has sold as many copies in its initial week of issue as there exist estimated users of most free distros of BSD-based UNIX systems.
  • > BTW - Mach has done more to stifle microkernels than any theory or performance debate ever has.

    No more than COBOL, FORTRAN or BASIC stifled programming languages. In this community, such debates are won on the merits.

    > To most people microkernel == Mach (which has never been used successfully in a production system)

    MacOS X?
  • Darwin isn't licensed under the BSD license. Reasonable people might prefer BSD to the APSL.
  • we should be using microkernels, however every time someone tries to make a microkernel they give up and go grab the source for mach which just sux ass hardcore.
  • see that's the problem. people like you just quote the microsoft propaganda without actually knowing what this "technology" is that they are talking about. Here's a hint: it involves microkernels vs monolithic kernels. Try thinking for yourself, I know it hurts but you might learn something.
  • ok, let's concede that one.. does this change my point at all? If people didn't constantly try to base their microkernels on Mach we would have some sort of progress and not still be using monolithic kernels.
  • that's amusing, because I just used it and it conveyed meaning to pretty much everyone who read it. That qualifies as a "word" in my book. So perhaps what you meant to say is that I misspelt it, and as such I would remind you that I never said anything about the front page story's spelling. So shut the fuck up ok?
  • and the gui, let's not forget the gui. My only point is that people should actually reimplement rather than use Mach which is such a lame first generation microkernel.
  • and != at && like != unlike

    try putting in a tiny bit of effort so thousands of people the world around don't have to. BTW - Mach has done more to stifle microkernels than any theory or performance debate ever has. To most people microkernel == Mach (which has never been used successfully in a production system) or WinNT which just blows. No wonder we're still using monolithic kernels in the year 2001.
  • umm.. winnt is not a microkernel. That is why it blows. Because they said "this microkernel stuff is just too hard, fuck it, let's put all the gui code in the kernel". Once again, the microkernel failed. Is this because microkernels suck? No, it's because Microsoft sux.. but has anyone written a microkernel other than Microsoft (and not just based it on Mach)? I don't know, perhaps someone can enlighten me.
  • What is it with you *BSD haters out there? I mean, jesus. I keep seeing this same rant, posted almost verbatim, anytime there is an article about one of the BSD's. BSD is not dead, it will not die, it happens to be thriving.

    Also, claims of fragmentation in the BSD community are completely bogus. The separate BSD operating systems are closer in operation than half of the Linux distributions. OpenBSD is closer to FreeBSD than Slackware is to Debian. If you do not believe that then you have not run them.

    The different BSD's are distinct operating systems, like Solaris or HP/UX. They are not fragmented distributions. The Linux world is the fragmented one. There are some 180 linux distributions according to the LDL. Even if only the top 20 or so are reasonably large, that is still a far larger number of distributions than in the BSD camp. And as I already pointed out, half the time the Linux distributions are incompatible with each other, despite being the same OS! (By incompatible I mean that either there are inane library problems (glibc, etc) or they are completely different administratively.)

    Do everyone here a favor and go troll somewhere else.

    -sirket
  • I'll give you 50 cents, go buy yourself a clue. Darwin is aimed at being a rock-solid core for the new MacOS-X system, and being similar to the core of NeXTStep. xMach is aimed at being small, thin, fast, and clean. While Darwin is based on old FreeBSD 3.2 code, xMach aims to be as modern as possible (and according to the story, the site is down) will try to take advantage of some of the latent features of the Mach kernel, such as fine grained locking for multiprocessing, that have been neglected in recent BSD releases. (Read the UVM design thesis for a description of this, FYI)
  • Just to elaborate on his point. Mach really *is* giving microkernels a bad name. Two of the fastest UNIX-compatible systems (BeOS and QNX) are microkernel based. Yet, since everyone sees Mach as representing microkernels in general, people automatically get the idea that microkernels are slow. Just to give you an idea of the situation, it's like everyone saying Linux is slow because Win95 sucks and both use a monolithic kernel.

    BTW> I was thinking. From looking at the Win95 design, it seems to me that Win95 may just be the most "advanced" of any commercial OS design. Since most Windows95 components are user-level libraries mapped into each program's address space, it would seem that Win95 is the original exokernel!
  • And sometimes we (authors of BSD licensed code) don't even get that much:
    http://www.benews.com/cgi-bin/mwf/topic_show.pl?id=14628

    John has told me that he would fix it, and even today that he has fixed it, but beats me where the new binary is! :)

    The moral of the story is, regardless of what license you choose, some people will try to rip you off anyway. So there's no point in further discouraging legitimate developers, who could possibly give back in the future, with a license that binds their soul to the project.

  • I worked for a company that sold super-computers running a mach-based OS. That company has since been purchased by HP and the mach-os was dumped as quickly as possible.

    As I recall, there were two really major problems with mach:

    1) Buggy as all get out
    2) Enormous message-passing latencies. System calls that take a couple of microseconds under HPUX took 100s of microseconds under mach on the same hardware.
  • > If mach is, indeed, a bad implementation of the microkernel, what would be a *good* implementation of the microkernel?

    L4

    > Will someone please attempt to assert or refute, using some kind of solid logic or numbers or something, the statement that microkernels are a good idea but Mach is a bad implementation of that idea? What is done wrong in Mach, and can it be fixed?

    search the l-k list archives for some examples. one point is that only one thread at a time can be in any module, as a result of messages being queued. mach is much slower than other microkernels when it comes to message passing. another issue is that when you use a monolithic unix server (eg, as do mklinux, bsd lites, next), you lose all the advantage of using a microkernel in the first place. if your single server dies, everything's shot anyway. also, message passing is not the only way to isolate fault domains. (see here [nec.com].)

    i don't believe that message passing really gains you much abstraction. if you had all the vm calls/messages placed in a generic way, it wouldn't let you optimize for the specific pager implementation. but, to allow an easy drop-in replacement, you wouldn't be able to depend on the current vm's subtle behaviors. this is just my thought; i haven't exactly done a lot of kernel design, so take that with a grain of salt.

    > and, btw, this is kind of offtopic, but while we're VAGUELY near the subject: someone once told me that Mach has the ability to host multiple kernels on the same machine at the same time. Is this true?

    yes, it's definitely true. i know the mklinux server has been debugged using another running instance of the linux server. people have often talked about run-time switching between mac os x and mklinux, but nothing's come of it. and to be safe, it would need some resource locking, eg, so that you don't have the two servers writing to the same disk partition.

  • The GPL is a virus, 'nuff said.
    So is the polio vaccine.

    "I may not have morals, but I have standards."
  • > Also note that all the "better" microkernels (Neutrino, EROS, BeOS) are non-free. I think that's the basic reason that free software projects are not using them.

    That's actually not true. L4 [l4ka.org] is GPL, and highly regarded. HURD will probably be ported to it soon, with a great performance increase.


    "That old saw about the early bird just goes to show that the worm should have stayed in bed."
  • > Will someone please attempt to assert or refute, using some kind of solid logic or numbers or something, the statement that microkernels are a good idea but Mach is a bad implementation of that idea? What is done wrong in Mach, and can it be fixed?

    I don't know enough of the Mach internals to know exactly why it's such a poor performer, but I have read a lot of theories put forth. The most common (and accepted) reason is that Mach's memory management is too abstract and that Mach is built on a hardware abstraction layer. Those two reasons are directly interrelated.

    The hardware abstraction layer (HAL) restricts the u-kernel to operation on a "generic machine". Everything is abstracted in the sense that the HAL contains the units which are common to all CPU architectures. This was done to improve portability. However, it sacrifices a great deal of performance because a lot of issues are platform-dependent. Things such as page size must be dictated by the architecture you are running on. But because Mach uses the HAL to abstract this away, Mach performance suffers a great deal in memory operations. Often, the HAL dictates a page size which is too small/large for the architecture. The hardware can't handle address translation anymore, so the kernel has to do this manually. This is very expensive.
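
    Just to make the "page size belongs to the hardware" point concrete, here's a trivial sketch (ordinary POSIX, nothing u-kernel specific) of asking the machine you're actually on what its page size is -- the number comes from the MMU, not from any portable abstraction:

    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        /* the page size is a property of the MMU on this particular machine;
           a kernel that hides it behind a generic abstraction still has to
           cope with whatever the hardware actually implements */
        long page = sysconf(_SC_PAGESIZE);
        printf("hardware/OS page size: %ld bytes\n", page);
        return 0;
    }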

    In general, Mach's architecture just seems poorly designed from what I've read. A lot of research has been done on this topic, and they're coming to the realization that u-kernels are inherently non-portable. That's a very important point. This shouldn't be surprising either, because the u-kernel is so small that mostly only platform-dependent code ends up in there. L4 is 12k, Eros is 32k (I think), VSTa is around 50k and QNX is less than 10k!

    The good thing about this approach is that most(if not all) of the platform-dependent code is wrapped up in the u-kernel. The rest of the system is completely portable. So all you have to do is re-write about half of a 20k kernel for the new architecture, and you're done! Re-compile and off you go. Theoretically at least. ;-)

    > If mach is, indeed, a bad implementation of the microkernel, what would be a *good* implementation of the microkernel? Are any well-designed microkernels out there?

    Good u-kernels that have implementations with performance comparable to or exceeding Linux:
    • QNX [qnx.com]: Everyone's heard of this one. They have a very good u-kernel.
    • Operating Systems Group at Dresden [tu-dresden.de]: They do a lot of great work with u-kernels. They have code for L3 and L4, the first very promising, high-performance u-kernels (though they may not have designed them). They even have Linux running as a service on top of L4, so you may be able to run it right now! Also see This University [unsw.edu.au] and the L4KA page [l4ka.org] for other implementations of L4 (ie. other architectures).
    • Eros [eros-os.org]: EROS is a new operating system based on the architectures of earlier high-security capability systems(KeyKOS). Very promising and has performance comparable to L4. The measurements are in the papers section(usually towards the end of the paper). System is GPL'd.
    • VSTa [vsta.org]: a cool GPL'd hobby u-kernel system(in that it has no university or company backing). This one has a somewhat complete system, ie. self-hosted with gcc, vi, emacs, etc. Runs on a windows partition and uses GRUB to boot(all of which you'll need to run it). No performance metrics that I'm aware of.
    • Fluke [utah.edu]: No working system as far as I know. The kernel is complete and some performance measurements have been made. Looks promising and source is available(GPL I think).
    Helpful note: if I've listed a website to a university site or academic operating system/kernel, the documentation easiest to skim is found in the papers they've published.

    > If there are, then what is it that repeatedly leads projects like xMach/HURD/OS X/mkLinux to embrace Mach as opposed to one of the competing microkernels?

    I have no idea. Ignorance of their existence probably.

    > Unless I am quite confused, supposedly, because the interaction between the microkernel and the OS is somewhat abstract, you ought to be able to replace the microkernel with a better one as long as the interface is the same. Is there any reason a better microkernel with the same software-side interface as Mach could not be written, and used to replace mach?

    Yes you could. But then you'd just have Mach. :-) You might be able to engineer the Mach implementation a little better, but having the same interface for the most part means making the same tradeoffs, and then all you'll have left is a bastard child of Mach. *shudder* ;-)

    > someone once told me that Mach has the ability to host multiple kernels on the same machine at the same time. Is this true? How does that work in terms of sharing the hardware? How do you go about doing this?

    Yes that's true, but not in the way you're thinking. Both kernels don't run as kernels at the same time. A well-engineered u-kernel is so thin and provides such a minimal interface to the hardware, that by just slightly modifying Linux(or other kernel) you can get it to run on top of the u-kernel like any other application, and it could do everything that Linux does running on the bare hardware. See L4Linux, MkLinux, Darwin/MacOS X and even this xMach project as examples. The key to good performance is to provide as small a u-kernel with as minimal an interface as feasible to avoid performance problems. It will never run as fast as on bare hardware, but you can get pretty damn close.

    > I am just thinking that at this point, it would be an utterly useless but nifty parlor trick to try to get Mac OS X/Darwin, MkLinux, xMach and HURD running off the same mach microkernel on the same machine at the same time.

    Not so useless as you might think. The problem with any new operating system or kernel is software. There's nothing written for it yet. But what if you could run the Linux kernel on top of your new OS? You'd have near instant access to whatever drivers and applications are currently available for Linux without any porting effort! (except for the initial Linux port) Then you can have a complete system and start writing native drivers for what you need.

    -----
    "Goose... Geese... Moose... MOOSE!?!?!"
  • I think most of you are sadly missing out on the primary unlisted objective(s) of any project like this.

    To have fun and to meet like-minded people who all think they can contribute together to do something worthwhile!

  • After reading Carnage4Life's excellent post, you might also want to dig up the papers about Spring from SunLabs. Spring wasn't just a microkernel; it was a complete multi-server OS with some neat features like Plan 9-style extensible namespaces and single-system-image clustering.
  • Hopefully the folks that run xmach.org will move it to a more reliable OS (perhaps like that evil GPL one) given the status of http://www.xmach.org... :)
  • Since when? I would think it would be more accurate to state that most of the license snobs here on /. think that any license should meet the OSI standards but also should be defended. The reason you hear so much more about the GPL is that it is *hard* to violate the BSD license. Many think this is a good thing and in fact I'm starting to think somewhat that way myself. I have real problems with version 3 of the GPL and pray to gawd that nobody adopts it.
  • The classical paper that contains the numbers suggesting that Mach is... er... suboptimal with respect to performance, is this comparison [l4ka.org] of L4 versus Mach (and a number of other kernels). This paper does suggest that microkernels need not necessarily be slow if designed and implemented correctly.

    As to why (so termed) 1st generation microkernels are still used as the base for newer systems, I can only speculate - but I believe that tradition is the main reason here (teaching an old dog new tricks tends to be hard). Just for the record, however, there is actually an effort to port the HURD to L4 [gnu.org].

    For more info on L4 related issues, you could have a look at L4Ka [l4ka.org], a C++ish implementation of the L4 API.

  • > And, most horrifically of all, someday we will see Mac OS 11-- which undoubtedly everyone on IRC will start calling "X 11". Assuming apple doesn't come out and name it "Mac OS X 11".

    Bad news. Officially, MacOS X is "MacOS X 10.0". Future versions will be numbered "MacOS X x.y" and so on.

    The idea is to distinguish it from MacOS Classic which isn't going away for a while. Too bad - I was hoping for a MacOS XI.II some day.

    Unsettling MOTD at my ISP.

  • 1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.

    The current person cannot remove or change the license. This does provide the greatest freedom for the next person to receive the code, because we are not asking any more of them than that they acknowledge that we wrote the code.

    GPL = Kid who shares his toys but expects a return if the person enjoys it and makes it better. BSD = kid who gives his toys to the Goodwill and hopes that someone will find them useful, without demanding or expecting anything in return.

    ---
    "Do not meddle in the affairs of sysadmins,

  • > I have real problems with version 3 of the GPL and pray to gawd that nobody adopts it.

    Haven't we all already adopted it?

    Most (all?) GPL code says something to the effect of "This is licensed under GPL 2.2 or later"

    Ryan T. Sammartino

  • by Arandir ( 19206 ) on Wednesday April 25, 2001 @09:09AM (#265909) Homepage Journal
    It's great to know that no one in the GPL community has ever bashed the BSD license. That's so righteous of you guys. We should follow your example.

    However, the xMach guys are not bashing the GPL. They are merely not desiring to use it. Can't you see the difference? Are you offended that "freedom" means that people don't have to make the same choices as you?
  • by MrHat ( 102062 ) on Wednesday April 25, 2001 @09:57AM (#265910)
    > Will someone please attempt to assert or refute, using some kind of solid logic or numbers or something, the statement that microkernels are a good idea but Mach is a bad implementation of that idea?

    Gladly. And I don't even have to do it myself.

    For starters, check out On µ-Kernel Construction [nec.com], a paper written by Jochen Liedtke for the 15th ACM Symposium on Operating System Principles. It contains a thorough technical explanation of why Mach performs poorly, and provides corroborating evidence measured on multiple architectures.

    Additionally, using Mach as a "hardware abstraction layer" for a userspace Unix server, rather than as a true microkernel, only compounds the kernel and related subsystems' poor performance.

  • by jmallett ( 189882 ) <jmallett@newgold.net> on Wednesday April 25, 2001 @09:33AM (#265911)
    I would like to clarify to everyone that the site isn't in fact down, nor is any server related to the site. What happened is that my DNS servers (which are on Verizon internet) became unreachable _NOT_ because of Slashdot, but because _my DSL is down_. To clarify, the site is hosted on some rather fat pipes, not hosted by me. One mirror runs RedHat Linux, the other runs FreeBSD. They are both up and running. If you want to view the Core Team page complete with everyone's picture on the Canadian server, try http://www.velocet.ca/~smegsite/xmach/core.html ; if you want to view the entire webpage, which is a few hours out of date, try http://xMach.FreeOS.com. Thanks. And to clarify one more thing, Mach before Mach 3 was monolithic, so all of you people talking about a lot of Mach 2.5 derivatives being exactly the same, you're wrong. And yes, there have been a number of 'production' OSes based on a microkernel Mach. Unicos comes to mind.
    --
  • Someone made a statement along these lines (Microkernels are a good idea but Mach is a poor implementation of the idea of a microkernel) earlier, in the slashdot discussion on the kernel of Mac OS X and Linus' long-standing dislike for microkernels. Like you, they failed to back this assertion up with anything at all.

    Will someone please attempt to assert or refute, using some kind of solid logic or numbers or something, the statement that microkernels are a good idea but Mach is a bad implementation of that idea? What is done wrong in Mach, and can it be fixed?

    If mach is, indeed, a bad implementation of the microkernel, what would be a *good* implementation of the microkernel? Are any well-designed microkernels out there? If there are, then what is it that repeatedly leads projects like xMach/HURD/OS X/mkLinux to embrace Mach as opposed to one of the competing microkernels?

    Past that: Unless I am quite confused, supposedly, because the interaction between the microkernel and the OS is somewhat abstract, you ought to be able to replace the microkernel with a better one as long as the interface is the same. Is there any reason a better microkernel with the same software-side interface as Mach could not be written, and used to replace mach?

    and, btw, this is kind of offtopic, but while we're VAGUELY near the subject: someone once told me that Mach has the ability to host multiple kernels on the same machine at the same time. Is this true? How does that work in terms of sharing the hardware? How do you go about doing this?
    I am just thinking that at this point, it would be an utterly useless but nifty parlor trick to try to get Mac OS X/Darwin, MkLinux, xMach and HURD running off the same mach microkernel on the same machine at the same time.

    Thanx0r. I look forward to reading the xMach web page and finding out what it is as soon as their web provider recovers.
  • by Carnage4Life ( 106069 ) on Wednesday April 25, 2001 @11:25AM (#265913) Homepage Journal
    > Will someone please attempt to assert or refute, using some kind of solid logic or numbers or something, the statement that microkernels are a good idea but Mach is a bad implementation of that idea? What is done wrong in Mach, and can it be fixed?

    It seems you are referring to this post [slashdot.org] of mine a while ago. For proof of Mach's deficiencies I linked to two research papers, On Microkernel Construction [nec.com] and The Impact of Operating System Structure on Memory System Performance [cmu.edu], in that post. If you want the capsule summary then here's a short list of Mach's deficiencies as posed by Liedtke:
    • Inefficient kernel-user switching (i.e. system calls spend too much unnecessary time in the kernel; a crude way to measure this is sketched below).

    • Inefficient address switching (i.e. the number of cache and TLB misses in the Mach microkernel is also absurdly large).

    • The performance of IPC operations and thread switching is subpar. (this is related to the above points).

    • It isn't optimized for specific hardware and instead has a Hardware Abstraction Layer which slows it down considerably.
    The paper is a few years old so the Mach people may have tackled some of these problems by now.
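
    To make the first point concrete, here is a back-of-the-envelope sketch (an illustration only, assuming a Linux box with glibc's syscall() wrapper, not a benchmark from either paper) of how that kernel-user switching cost is typically measured -- time a large number of trivial system calls and divide:

    #include <stdio.h>
    #include <sys/syscall.h>
    #include <sys/time.h>
    #include <unistd.h>

    int main(void)
    {
        const long iters = 1000000L;
        struct timeval t0, t1;
        long i;

        gettimeofday(&t0, NULL);
        for (i = 0; i < iters; i++)
            (void)syscall(SYS_getpid);   /* about the cheapest kernel round trip */
        gettimeofday(&t1, NULL);

        /* average cost of one user->kernel->user transition, in microseconds */
        {
            double usec = (t1.tv_sec - t0.tv_sec) * 1e6 + (t1.tv_usec - t0.tv_usec);
            printf("%.3f microseconds per trivial system call\n", usec / iters);
        }
        return 0;
    }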

    > If mach is, indeed, a bad implementation of the microkernel, what would be a *good* implementation of the microkernel? Are any well-designed microkernels out there?

    Neutrino [www.swd.de] and EROS [eros-os.org] to name two.

    > If there are, then what is it that repeatedly leads projects like xMach/HURD/OS X/mkLinux to embrace Mach as opposed to one of the competing microkernels?

    The same reason most of us are using Java and C++ instead of SmallTalk, Lisp or Objective-C. Developer inertia and people falling to the more hyped and/or better sold technology.

    > and, btw, this is kind of offtopic, but while we're VAGUELY near the subject: someone once told me that Mach has the ability to host multiple kernels on the same machine at the same time. Is this true? How does that work in terms of sharing the hardware? How do you go about doing this?

    A microkernel can load different modules at runtime that may be OS emulation layers that mimic the behaviors of certain OSes, even down to memory management and paging. Since you can theoretically load as many modules as you want, you can load various emulation layers and mimic several OSes at once. For instance, OS X has a microkernel that loads modules to mimic BSD as well as old Apple APIs (Classic) as well as the new stuff (Carbon). Here's a graphical look at the MacOS X architecture [redshed.net].

    --
