BSD Operating Systems

FreeBSD VM Design 101

Over at DaemonNews is an excellent explanation of the FreeBSD VM design, from Matt Dillon, who's been doing a great deal of work on it recently. It's rare to see good descriptions of the internals (or parts of the internals) of any OS OS (that's "Open Source Operating System") so this is particularly welcome. As is customary, there are a number of other excellent articles over at DaemonNews, including a new Darby Daemon adventure.
  • Being very familiar with Matt's work, I am certain he has done the analysis and can measure the lagging performance of NT. Interesting that Microsoft thinks enough of his work to be running a very large array of FreeBSD systems at Hotmail. Replacing them with Win2000 doesn't appear to be a practical alternative, according to their own analysis. Matt certainly contributes a lot more of value than you have. He is a true professional.
  • A very lucid explanation of a complicated system. Like many good Unix people, Matt Dillon writes natural language as well as he writes code. I can only hope this encourages me to take up poking around in FreeBSD's kernel code...
  • Re:fork():

    Well, that's your opinion. I admit I'm not well-qualified to talk on NT or BSD system internals, so I won't. But, from the little I remember about Linux, both fork() and thread spawning were implemented with the clone system call, the difference being that fork() also has to set up a separate memory space for the data. There are cases where this would be useful, and then you would use fork() for them. There are cases where it would not, and then you would use threads.

    Frankly, it doesn't matter what you think a modern OS should or should not do: until there are no situations where using fork() would still be useful, a modern OS should still use it. I don't think it has outlived its usefulness yet.

    And yes, NT focuses on threads more than processes. But if you wanted to implement it correctly, one approach would be to write the best threading model possible, and implement processes on top of that.

    (I know NT does some "weird" stuff because of the VMS baggage, but I don't remember enough, really. Believe me, none of us would complain about NT if it also shared the *advantages* of VMS, such as wonderful clustering, great stability, low system requirements, the ability to stick it in a closet and forget it's there, and an adventure game on every machine. ;)
    ---
    pb Reply or e-mail; don't vaguely moderate [152.7.41.11].
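
    A minimal sketch of the fork() semantics discussed above (illustrative only, not from the article): after fork() the child's data is a logically separate copy - on modern kernels shared copy-on-write until written - whereas a thread would share the same address space.

        #include <stdio.h>
        #include <sys/types.h>
        #include <sys/wait.h>
        #include <unistd.h>

        int shared = 42;

        int main(void)
        {
            pid_t pid = fork();     /* child gets a (COW) copy of the address space */
            if (pid == -1) {
                perror("fork");
                return 1;
            }
            if (pid == 0) {
                shared = 99;        /* the write gives the child its own page */
                printf("child:  shared = %d\n", shared);    /* 99 */
                _exit(0);
            }
            waitpid(pid, NULL, 0);
            printf("parent: shared = %d\n", shared);        /* still 42 */
            return 0;
        }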
  • The link to daemonnews.org is spelled 'dea...'
    The story link is fine, but the link to the site is misspelled. Not a big deal, and probably mentioned at a lower score, but I tend to skim at a score of 4, and was confused for a moment...but just a moment.
    :-)

  • Virtual memory (VM) is how you implement processes with separate memory spaces; with a page table, and a page fault handler in the kernel. It's how your swap space gets used on Linux.

    It was probably a Virtual Machine (VM) the previous poster talked about. Seems like he didn't read the article at all. ;-)
  • by acb ( 2797 ) on Thursday January 20, 2000 @02:13AM (#1355942) Homepage
    Not only did he write a C compiler, he's also dating Winona Ryder. The man is a living god.
  • Running MS Office on the server and projecting the GUI of that app to clients is a Unix way to approach the design of NT.

    I.e., Windows Terminal Server is "a UNIX way to approach the design of NT"?

  • Linus's comments seem reasonable. What about them do you feel is unreasonable?
  • Is this the Matt Dillon that wrote the DICE C compiler on the Amiga ages ago? (DICE - Dillon's Integrated C Environment?)

    Maarten
  • by Jubal Kessler ( 7025 ) on Thursday January 20, 2000 @03:04AM (#1355946)

    I read up on the Solaris VM model a few days ago while trying to debug some Solaris boxen at work. It's an interesting read that covers the basic VM model and system diagnostic tools, including helpful detail on vmstat(1). Not as straightforward as Mr. Dillon's excellent FreeBSD VM article, but worth a look for comparison purposes.

    The link to the PDF file is the first one on this page; check out the others as well if you're interested in finding out about more Solaris internals:

    http://www.sun.com/sun-on-net/performance/ [sun.com]



    ----
    Jack of all trades, master of none: http://whole.net/~pup/ [whole.net]
  • Although I hate it very much, as it is very inconvenient, I would say that it is a bug. I have seen many a programmer having trouble understanding case sensitivity, which leaves little hope for the average Win95 user. And since Microsoft tries to make the WinNT and Win95 interfaces fairly similar, this behavior is carried over.

    --

  • [Dave Cutler] is developing Win64, the 64-bit Windows 2000 vaporware.

    Win64 is not vaporware. It will be released exactly when Microsoft has been saying all along. The year 2064.

    --B

  • Just an example: try to untar the qt source on NT.

    Szo
  • Yes, BSD has page coloring.

    See? The bars on the BSD pages are all red, and followed links are black.

    Linux has page coloring too, but the colors are *different.*

  • nik writes:

    It's rare to see good descriptions of the internals ... of any OS OS (that's "Open Source Operating System") ...

    There's a reason for this, and it's pretty obvious - it's precisely because they're Open Source. If you want to learn how the internals work, you can go to the ultimate description - the source code.

    Don't get me wrong, though, higher level descriptions are good and necessary. It's just that they're not essential when dealing with Open Source Operating Systems. The reason there are so many "Windows Internals" type books is because those systems are closed, and worse, those higher level descriptions are often the only ones you can get of the system.

    Such is yet another beauty of Open Source and Free software.

  • Although some elements could have been a bit clearer (you have to figure out just what "BSS" is from context unless you already know the appropriate Unixese), Matt gives the most lucid explanation of page coloring I've ever read. Skip down to the bottom of the article to read it. It's a good example of why even wholly compute-bound programs can differ in performance under different OSes on the exact same hardware and compiler.

    -Ed
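
    A toy sketch of the idea behind page coloring (not FreeBSD's actual code, and the cache geometry here is an assumption): a page's "color" is the slot its physical address maps to in a direct-mapped cache, and the allocator tries to give consecutive virtual pages consecutive colors so they don't evict each other.

        #include <stdio.h>

        #define PAGE_SIZE   4096
        #define CACHE_SIZE  (128 * 1024)            /* assume a 128K direct-mapped cache */
        #define NCOLORS     (CACHE_SIZE / PAGE_SIZE)

        /* Color of a physical page: which region of the cache it lands in. */
        static unsigned page_color(unsigned long phys_addr)
        {
            return (unsigned)((phys_addr / PAGE_SIZE) % NCOLORS);
        }

        int main(void)
        {
            /* Pages CACHE_SIZE apart share a color and evict each other from
               a direct-mapped cache; adjacent pages get different colors. */
            printf("color of 0x00000000: %u\n", page_color(0x00000UL));  /* 0 */
            printf("color of 0x00020000: %u\n", page_color(0x20000UL));  /* 0 again */
            printf("color of 0x00001000: %u\n", page_color(0x01000UL));  /* 1 */
            return 0;
        }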
  • The *BSD distributions out there are highly scalable


    I like OpenBSD a lot, but nobody will give me a real answer...
    Is OpenBSD ever going to go SMP in the near future, like within a few years? I'd rather not see OpenBSD passed over as an option in a few years when multiprocessor servers are used more frequently.

  • I'm not a programmer, so I don't have more than a basic knowledge of the kernel and such, but how difficult would it be to look at FreeBSD's multiprocessor code and have the OpenBSD folks adapt and review it over a period of time for their kernel? Would it take more than a couple of years, or is it just something they're not interested in right now?
    (Just friendly questions; I like OpenBSD a lot and even run around giving them free advertisement by wearing their T-shirts.)
  • Separate documentation is a waste of time because it *rarely* gets updated along with the code, and thus gets out of date very quickly. Sad, but true.

    The only documentation worth writing, therefore, is in-line source code comments, because future programmers can be made to update the comments as they go along. And even then, in some cases the project manager has to beat them with a stick to make them do it.

    Consciousness is not what it thinks it is
    Thought exists only as an abstraction
  • Am I mis-reading you, ralphclark, or are you actually opposed to documentation?

    Ha! Not at all - when I can get it. Or when I'm allowed the time to produce any. But in my experience, although a brand new project will usually be written up with the best of intentions, *very* few software development shops will allocate enough time and resources to keep documentation up to date and in step with bugfixes and enhancements. It's a bit of a vicious circle, because the argument can always be made that no one ever has time to read it anyway... you have to just jump straight into the code and get on with it.

    What about documentation embedded in the source, like kdoc, javadoc, perl pod, and the like? Do those solve your synchronisation complaints?

    This is what I was trying to get at. The cost of maintaining in-line source code comments is far lower as the text is only ever a "Page Up" key away from the code you're working on. Plus then the documentation is right where it's needed when you or another programmer needs to refer to it.

    I'm not familiar with the specific examples you named, but I'm guessing they are implementations of Donald Knuth's "Literate Programming" concept, where the master copy of the code is actually embedded in the formatted text document that describes its function in natural language. To compile the code you first have to strip it out of the document.

    IMHO this is a very good idea... with just one proviso: programmers spend enough time prettying up their code (and reformatting other people's!). The last thing you'd want is for them to be farting about reformatting paragraphs, selecting heading styles, etc. for the comment sections.

    Plain unformatted text suitable for viewing on an 80-col-wide dumb terminal is plenty good enough and doesn't waste time. I admit that the ability to add the odd illustration without resorting to "ASCII art" could be useful but this too can be a waste of time and shouldn't be overused.

    Finally, apart from source comments, there is only one type of documentation that should ALWAYS be produced for applications: a User/Operations Guide.

    Anything else, such as API documentation for programmers can always be most economically done in the body of the source code.

    Consciousness is not what it thinks it is
    Thought exists only as an abstraction
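
    A sketch of the kind of in-line documentation argued for above (the function and its rationale are hypothetical): the note lives beside the code, only a "Page Up" away when the code changes.

        #include <stdio.h>

        /*
         * clamp -- bound a value to the inclusive range [lo, hi].
         *
         * Rationale: callers used to open-code this and got the edge
         * cases wrong.  Keeping this note next to the code means it is
         * right where it's needed when someone changes the function.
         */
        static int clamp(int v, int lo, int hi)
        {
            if (v < lo)
                return lo;
            if (v > hi)
                return hi;
            return v;
        }

        int main(void)
        {
            printf("%d\n", clamp(12, 0, 10));   /* prints 10 */
            return 0;
        }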
  • Yes, Citrix (where WTS came from) was specifically trying to emulate the behavior of X in an NT or OS/2 context. Microsoft held them at arm's length until they figured out that this is what some customers really want.
    --
  • This behavior can be defeated with the IE4 ActiveDesktop.
    --
  • Check your history, bub -- "FUD" was invented far earlier -- in the 1970s or '80s -- and was used to describe the 'preannounce' marketing tactics of IBM!

    (Perhaps after IBM had its ass dragged through antitrust court for 15 years, they shaped up and enjoyed throwing the FUD label at Microsoft. But "FUD" seems so un-IBMish, so I think it was mostly the Team Warp vulgarians who enjoyed that term.)

    As a side note, FUD is usually defined as "Fear, Uncertainty, Doubt", but the original definition I read was "Flaming User Demand" (generated by preannouncing products to defuse competitors).

    --
  • I stand corrected. Thanks!
  • Yes. I forgot the chip numbers, so you'll have to excuse me, but:

    The 68000 could not use an MMU or FPU.
    The 68020 could use an external MMU and FPU.
    The 68030 had an integrated MMU and could use an external FPU.
    The 68040 integrated both, with the exception of
    the 68LC040, which had no FPU but a working MMU.

    Back then Amigas were around, which could take advantage of MMUs, but I think most people saw them in Apple's machines. The first ones were called "32-bit dirty", meaning that there were errors in the ROMs that didn't permit them to use 32-bit addressing until (I think) Connectix released Mode32. The result was that the system SAW the memory above 8 MB, but wouldn't let you use it. It would just report all the extra memory as being allocated to the system.

  • But (true) Case Preserving/Case Insensitive file systems make the most sense to me: you don't have to worry if you're opening "README" or "readme", it just works. This is probably my biggest beef with using UNIX-based systems!

    But it's a big pain for guys like me who have filenames like "~/DOS" and "~/devel". Why? Filename completion. To change to "~/DOS" I type "~/D[TAB]" and for "~/devel" I type "~/d[TAB]". The shell does the rest.

    If "D" and "d" were the same, not only would I have to type an extra letter, I would have to remember which of the directories in my home directory, ("~/" for those of you not familiar with Unix), required one letter and which reqired two to complete.

    cheers,
    sklein

  • Seems to me that you're criticizing open source development, not documentation.

    For a properly maintained, fully staffed project, there's no reason that docs cannot be kept up-to-date with the current release code.

    The problem is that open source projects usually aren't "staffed" at all. People just do what they can, and what they need to get done.

    Am I mis-reading you, ralphclark, or are you actually opposed to documentation? What about documentation embedded in the source, like kdoc, javadoc, perl pod, and the like? Do those solve your synchronisation complaints?

  • NT can quite happily handle case sensitivity in filenames; for example, all POSIX programs have this behaviour. But the designers made the decision that Windows programs would be case insensitive. Doubtless one could argue whether it was a good decision or not, but it is a feature, not a bug.
  • From what I hear, Dave Cutler took some time off from Microsoft to play with his racecars. Now he is developing Win64, the 64-bit Windows 2000 vaporware.
  • "The enemies of the White race would like to pretend that George Lincoln Rockwell never existed."

    Herr Rockwell is irrelevant. Period.
  • Mr. Dillon was not throwing mud in order to get people to love him - this is someone who has already earned his status and respect. He was expressing _his opinion_, and being a kernel engineer he has the right to express it. Just as you believe that spawning threads is the right way, the specific branch of OS R&D he is engaged in is going another way. The technology that Mr. Dillon is developing has proven itself in many ways, while the NT way of doing things has not, and the way their kernel is engineered is, in his opinion, very wrong.
    The fact that he does not choose to elaborate on his opinion is his choice to make, and it should be respected (again, on the strength of his standing as a pillar of the BSD development community). As he wrote at the end of the piece, it was really much longer than he intended it to be.
    Now, regarding your quibble: many kernel architects are convinced that a microkernel-based design is not fit for a server architecture (which is what MS is trying to pass NT off as). Particularly a poorly implemented one such as the NT microkernel and its bloated set of services. Mr. Dillon may have other complaints about the basic NT architecture, but had he stated only that, it would not have been enough for you - you'd demand examples, bring up BeOS, etc.
    It is a fact that MS has had to modify NT's structure to bring its performance up to scratch; however, even these changes were poorly architected (video subsystem in the kernel, anyone?). Hopefully Win2K will be a worthy server platform (it seems to be a decent desktop system, at least). Unless MS has gotten its act together, the clean (if extremely slow) basic design of NT will have been further turned into an insecure hodgepodge of services.
    So, the next time you start flaming, take a deep breath and try to see just what you're doing. Unless, that is, it was your intention to divert attention from the discussion, in which case I am sorry I wasted my breath on you.
  • Case Preserving/Case Sensitive

    Like UNIX, which means you can create the BAD situation of a file name that's the same except for capitalization. Why the hell would one WANT this? Filenames are supposed to provide context about the file's contents, and this does NOT serve that purpose.

    I prefer the Unix approach. The whole idea of upper and lower case turns into a tar pit when you have to deal with internationalization. Some languages don't have cases and others have quirky rules that vary by country. It is much simpler to treat a file name as a sequence of character codes, preferably Unicode.

  • The 68000 and 68010 both supported an external MMU. The 68010 added support for instruction restart/continuation to the 68000. There was a kludge involving a pair of 68000 chips that supported instruction restart/continuation. I ran a 68010 based Unix System V box for years.
  • I think the Berkeley folks learned from their mistakes with early versions of BSD on the VAX. The early VM system was too dependent on the details of the VAX hardware, making it difficult to port that code to other systems.
  • We're working on getting Chuck Cranor to write an article similar to Matt's on NetBSD's UVM system. You could write him and tell him to get going! :-)

    Brett Taylor
    Editor in Chief - DaemonNews
  • There are 3 ways to do file systems (I've used all 3):

    Case Forgetting

    Like DOS, you can type the filename however you want (oscillating CAPS LOCK if you like), but it's always gonna show up as uppercase in a DIR.

    Case Preserving/Case Sensitive

    Like UNIX, which means you can create the BAD situation of a file name that's the same except for capitalization. Why the hell would one WANT this? Filenames are supposed to provide context about the file's contents, and this does NOT serve that purpose.

    Case Preserving/Case Insensitive

    Like NT and 9x... sort of. I've noticed that on FAT filesystems, Windows will assume that you want filenames that fit 8.3 to be displayed with an uppercase first letter and all lowercase after that. Annoying. But (true) Case Preserving/Case Insensitive file systems make the most sense to me: you don't have to worry if you're opening "README" or "readme", it just works. This is probably my biggest beef with using UNIX-based systems!
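
    A toy sketch of case-preserving/case-insensitive lookup along those lines (not any real filesystem's code): names are stored exactly as created, but compared without regard to case.

        #include <stdio.h>
        #include <strings.h>    /* strcasecmp */

        /* A toy "directory": names are stored as created (case preserved). */
        static const char *dir[] = { "README", "Makefile", "notes.txt" };
        #define NDIR (sizeof(dir) / sizeof(dir[0]))

        /* Look a name up without regard to case. */
        static const char *lookup(const char *name)
        {
            for (unsigned i = 0; i < NDIR; i++)
                if (strcasecmp(dir[i], name) == 0)
                    return dir[i];  /* hand back the stored spelling */
            return NULL;
        }

        int main(void)
        {
            /* "readme", "ReadMe" and "README" all find the same file... */
            printf("%s\n", lookup("readme"));   /* prints README */
            /* ...which also means "readme" and "README" cannot coexist. */
            return 0;
        }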

  • by ripcrd ( 31538 )
    Man that's way over my head. It did induce a good nap though. Now I just have to wipe up the drool puddle and make up an excuse for the thud my head made on the desk. I need to read stuff like this in the a.m. when I'm fresh.
  • Really? I'm running a dual 133 system and have been for some time. Current uptime is 42 days (took it down when I did a make world and needed to reboot after recompiling the kernel). The prior uptime was 62 days (down for the same reason). What hardware are you running?
  • Your theory didn't prove a thing. Sorry. You must take a second look and try to be more precise.


    Regards..

  • The biggest of the BSD news services is Daily.Daemonnews.org. Over the last year it's been pretty efficient at posting stories. Slashdot's coverage is moderate, but Nick isn't trying to "compete."

    Also, slashdot quickly created a side-bar box for DDNs. If you have an account, just add it.
  • NT has a single-user, kernel mode GUI.

    Not true. NT has a kernel mode GUI. Kernel mode (by definition) does not run in the context of any user. The NT GUI actually supports as many users as you want - Terminal Server, Win2000, PC Anywhere, WinFrame and others are great proofs of this.

    Since nearly everything you can do in NT needs the GUI...

    Again, not true. Most user-oriented things require the GUI. Server-oriented things can't use a GUI, because when running as an NT service you don't have a valid Window Station or Desktop to draw onto.

    In Windows 2000 you can right click any program and 'Run As' any user you want. That very effectively shows there is no problem in NT itself with multiple users simultaneously.

    The real problem is named objects created by applications for synchronisation. These cause all sorts of havoc because programmers assumed the single user scenario and never imagined the named objects (event, semaphores etc) may be accessed by multiple users running the same application. Hence the incredibly ugly hack that allows NT to have a different named object space for each user on Terminal Server and Terminal Services in 2000.

  • NT is hopeless at handling filenames case-sensitively. You can create a file called HELLO.TXT and when looking at a directory^H^H^H^H folder listing, the filename will be printed in capitals. Now try creating a file called hello.txt, or Hello.TXT, or HeLlO.TxT. It won't let you. Even though it remembers that the original filename was in capitals, it thinks the new filename is the same.

    This can be problematic if you want to copy files from a unix machine, e.g. mirroring a web page, where there may be two different files called CONTENT.html and content.html, because one of them will get overwritten.
  • Mr Dillon knows more about FreeBSD than anyone outside Microsoft can know about NT, because he has the source. If NT reacts to things in stupid ways, Mr Dillon *can't* show you the bit of source code where the mistake/oversight was programmed in. I think he was probably talking more about fundamental design flaws, rather than bugs in, say, TCP/IP, which anyone could make.
  • NT has a single-user, kernel mode GUI. Since nearly everything you can do in NT needs the GUI, that means only one person can use an NT machine at once. People writing apps today may assume only one person is using the computer at once. So any future multi-user version of Windows may have severe difficulties running today's apps.
  • Since the Mac platform moved to PowerPC, virtual memory has also been used in the MacOS to do file mapping. In the case of PPC code execution, this means only the active, needed chunks of the code are loaded into RAM. The net result is that RAM usage for PowerPC-native apps drops by 10-50% with MacOS virtual memory on. A side effect of this mapping is that once an executable has been loaded once, it can be relaunched several times faster than during the initial load.
    So it's not just that MacOS spills its RAM onto disk when it runs out. This is why even Mac systems with a couple hundred megs of RAM perform better with VM on.
  • by raytracer ( 51035 ) on Thursday January 20, 2000 @07:07AM (#1355982)
    There's a reason for this, and it's pretty obvious - it's precisely because they're Open Source. If you want to learn how the internals work, you can go to the ultimate description - the source code.

    Don't get me wrong, though, higher level descriptions are good and necessary. It's just that they're not essential when dealing with Open Source Operating Systems. The reason there are so many "Windows Internals" type books is because those systems are closed, and worse, those higher level descriptions are often the only ones you can get of the system.

    This comes perilously close to saying "we don't need documentation, that's what the code is for!" A truly dangerous path to stray down.

    The code is good at one thing: telling the machine what to do. It is often a pretty poor mechanism for documenting what you wanted it to do. It is also a poor mechanism for documenting what your concerns were, what you tried but found didn't work very well, what used to be implemented here, where your inspiration came from, etc... etc....

    Unfortunately, many projects (open source projects included, but of course not exclusively) suffer from either snobbery (if you really knew what you were doing you'd read the code and figure it out) or, worse yet, a lack of design altogether (it just works, okay?). It's too easy to hide lack of design and poor design in this way.

    The FreeBSD VM article was exactly the kind of article that open source authors should be working toward. Document what you think. Document what you try. Document what the code does. There is more to programming than just knowing what C programs do.

  • The same. See http://www.obviously.com/ [obviously.com].

    Free source for DICE, compilable on Amiga, FreeBSD & Linux.

  • Talking about FreeBSD - I'm trying to gather some people together to work on a FreeBSD distro that's aimed at Win and Mac users. Here's a very incomplete write-up for the distro - and we are still working on getting the logo finished so we can do the website. Just email me if you want in.
    ----
    Liberationix

    Description: A FreeBSD distribution that is aimed at the desktop user:
    novice and power users who are used to a Windows or Macintosh environment.

    Base: FreeBSD 3.4
    Install will be a GUI install with the options of Standard, Custom and Lite.

    Applications:
    Installer -
    Packaging -
    Window Manager - QVWM (http://www.qvwm.org)
    File Manager - Explore2fs (http://uranus.it.swin.edu.au/~jn/linux/explore2fs.htm)
    Simple Word Processor:
    Calculator -
    Media Player -
    Imaging:
    Advanced (like Adobe Photoshop) - GIMP (http://www.gimp.org)
    Basic (like paint) -
    Web Browser -
    Email Client -
    IRC -
    IM -
    Office Suite - KOffice (http://koffice.kde.org/)
    Virus Scanner -
    Defrag (?) -
    Scandisk (fsck gui front end) -
    Clock -
    Compression (like winzip) -
    PPP Dialer -
    News Reader -
    Web Editor -
    FTP Client -
    Misc - Powertweak (http://linux.powertweak.com)
    KDE packages (http://www.kde.org/)
    Gnome packages (http://www.gnome.org/)
    Seti@Home Client (http://setiathome.ssl.berkeley.edu/)
    Emulators:
    Windows - Wine (http://www.winehq.com)
    Mac -
    ----
  • The *BSD distributions out there are highly scalable. Linux still needs fine tuning in that arena; its scalability is still not as good as the *BSDs'.
  • Well, OpenBSD is still an infant in terms of growth compared to FreeBSD.
  • I think the old Mac processor couldn't generate page fault exceptions or do address remapping...

    The MC68000 couldn't, but IIRC the 68020 was capable of using an MMU. I might be wrong about that, but certainly the '030 and '040 were available in combination with an MMU (built into the processor, I seem to recall).

    There were plenty of other machines using these processors properly, like the Apollo workstations (running Domain/OS).

    There were many very good features in the MacOS, and the memory model was definitely not one of them.

    -- Steve

  • by acarlisle ( 96757 ) on Thursday January 20, 2000 @12:44AM (#1355988)
    For those interested, the UVM web page [wustl.edu] describes the UVM virtual memory system used in NetBSD.

    -AC
  • One of the things one notices, when reading this article, is that FreeBSD's VM system -- like many -- is forced to do a great deal in software and assume the "lowest common denominator" as far as hardware is concerned.

    The old segmentation hardware in Intel processors embodies some fantastic concepts but is a horrible implementation. Worse still, it was not fixed when the 32-bit i386 came out. So, it sits idle most of the time and does the VM system in FreeBSD no good. What should be there instead?

    --Brett Glass

  • I think the Berkeley folks learned from their mistakes with early versions of BSD on the VAX. The early VM system was too dependent on the details of the VAX hardware, making it difficult to port that code to other systems.

    I'm not saying that the code should be hardware-dependent, but rather that it's ripe for a few hardware assists.

    --Brett

  • Are there any particular hardware-based optimizations that you had in mind?

    Hardware could help with such things as COW (having to keep several layers of information around after a fork is painful and resource-intensive) and page coloring. In fact, one could do much better than the current page coloring algorithms -- which are speculative -- if the hardware gave better information about the caching scheme. Currently, caching is a "black box" to the OS. The associativity, etc. of the cache may even depend on the amount of RAM installed.

    In general, anything that allows the software to avoid marking a page invalid to "catch" an access is a big improvement.

    --Brett

  • Take a look at NT 4.0 Terminal Server and its long list of incompatible software and you will see you are correct. Not even newly released MS apps work correctly without a lot of tweaking.
  • This is stupid. Just use the batch abilities of sysinstall and you don't have to make your own "FreeBSD distro".

    The neat thing about UNIX is the ability to use existing tools to do the job instead of re-inventing the wheel.
  • MacOS has had "virtual memory" since System 7, but it blows. It basically spills over to disk when you run out of physical RAM, though it's been improving a great deal with each OS release. OS X, of course, is based on BSD, and will have real VM.
  • Ditto plus more.
    Erroneous or outdated documentation is even worse than no documentation.

  • MacOS has had "virtual memory" since System 7, but it blows. It basically spills over to disk when you run out of physical RAM, though it's been improving a great deal with each OS release.

    Uh, this is a feature of a lot of VM systems. What do you think a Linux swap partition, or the Windows swapfile, are for?

    But yeah, System 7 had a pretty lousy VM implementation, from what I've read. If I understand correctly (and I could be way off), the third-party utility 'RamDoubler' was basically a replacement for the OS's built-in VM system.

  • i think "annoying" is an even more appropriate label for the moderation of this post. by my count, it is one of approximately 3 on topic posts out of 58 total, and yet, some helpful moderator went to the trouble of knocking it down to zero.

    i mean, whuhhhh?

  • Could someone from the staff *please* fix the HTML in this story? The last anchor ("Darby Daemon Adventure") is not closed, which makes the front page of /. unusable if you're using Mozilla.


    --
    Marcelo Vanzin
  • Mr. Dillon starts off pleasing his readers with:
    The NT folk, on the other hand, repeatedly make the same mistakes solved by UNIX decades ago and then spend years fixing them. Over and over again. They have a severe case of 'not designed here' and 'we are always right because our marketing department says so'. I have little tolerance for anyone who cannot learn from history.

    Ok, enough of this. Over and over again I have to wade through crap like "NT is based on VMS, it's just VMS in a new jacket", or crap like "NT is not mature, it makes mistakes solved in Unix millions of years ago, over and over again". WHICH mistakes??? The 27 TCP/IP stack bugs, all solved a decade ago? I can't think of any other 'bugs' or 'mistakes' made... but what's really annoying is that the article, with a title that looks really interesting, dives into the same pile of FUD spread over and over again by people who know a LOT about Unix OSes but seem to know very LITTLE about NT.

    Why?

    Does the article need this kind of craptalk? No. Do the readers feel the NEED to learn from Mr. Dillon how the world works? No. The article is about VM: FreeBSD's vision of VM. So talk about THAT instead of fanning the anti-NT flames. It's not amusing. It's annoying.
  • It can be problematic copying files from Unix systems indeed, but using the same filename with different case is probably not a wise move anyway. So it's hardly ever a problem.
  • The GUI isn't single-user. If I use 3 PC Anywhere sessions at once to 1 single NT machine, it works perfectly. Besides, NT is different from Unix when it comes to system design. That's what I was talking about. Projecting the Unix design onto NT and THEN deciding some stuff doesn't match is silly. It's a different approach, as NetWare is also a different approach. A different approach doesn't have to be bad by default.

    The multi-user aspect of NT has already been discussed a lot of times on this site: try to telnet with several people at once to an NT system. Possible? Of course it's possible.
  • And services are NT's power to the users. Your statement is not correct. Running MS Office on the server and projecting the GUI of that app to clients is a Unix way to approach the design of NT. That those don't match doesn't mean NT is badly designed. It's different.
  • You are right that we should focus on the real thread of the article, which is about the VM solution in FreeBSD. However, the fun gets spoiled by more than 20 lines of TOTALLY UNNECESSARY craptalk about what's bad in this and buggy in that. Because the mudslinging is in Dillon's article, I can comment on it in this forum. He might have achieved a lot in this world, but he still has to EITHER 1) pay respect to others who have also achieved a lot in this world and work just as hard, and thus focus on the subject of his article and leave the mudslinging to others, OR 2) point out WHY others are bad and buggy when throwing mud.

    He chose the 3rd option: throw mud and hope for the right audience who will love him for that mudslinging. Well, some will, others won't. And that's sad.

    Because what would have been REALLY interesting is: the NT model of threads attaching to loadable libraries in core, and how NT solves the problem, mapped against how FreeBSD solves the problem with processes, not threads - because he pointed the reader at the NT topic as being crappy, having flaws that were solved decades ago in Unixland.

    I read you asking for respect for Mr. Dillon and for not being immature. Well, I can only answer: I don't respect people who throw mud at others without pointing to the reasons why. That means: you MAY throw mud at others, but explain why. If you don't, you are just the same as the people you describe: a teenage male looking for yet another venue to release the now trendy "geek angst".

    The article itself was pretty straightforward and insightful. Insightful because it lets you look into the kitchen of FreeBSD without having to crawl through lots of code. That's a Good Thing (tm). It wasn't really 'new' or '21st century tech' to me, though. IMHO a process fork() is something a modern OS shouldn't do; it should spawn threads instead.
  • Mr Dillon knows more about FreeBSD than anyone outside Microsoft can know about NT
    Do these things relate? Back to the topic: if NT is packed with errors solved in Unix a decade ago, then, relating this to the actual topic of the article, are there design flaws in NT's VM handling? Don't come back now with 'we don't know! we don't have the source!!', because that's bollocks. There are numerous documents written by a lot of people about NT's internal structure, about its VM handling, and specifically about threads (the building blocks of NT's multitasking; Unix uses processes as building blocks). So, in the light of the article, what is so incredibly wrong? Mr. Dillon makes the, IMHO not so clever, remark that NT contains flaws Unix doesn't have. So I say: which? Because he MAKES the remark, he KNOWS what's wrong, right? Otherwise how can he say THAT there is something wrong... That was my point. Perhaps offtopic, whatever, and already moderated down because it's perhaps hurtful to the OS OS people here, but I just want to know.

    Ah well....
  • Yeah, most embedded OSes don't do VM. Probably mostly due to processors not supporting a VM implementation. Writing an OS that does VM on those processors can be very hard, as you have mentioned. I think the old Mac processor couldn't generate page fault exceptions or do address remapping, so we can't blame those old Macs for not doing VM :)
  • There is always a need for rationale and discussion.

    I really don't get it - the guy writes a lucid and solid technical article and people are flaming him for it.

    You want to know why? Because none of you here even has a goddam clue as to what he is talking about, but you just want to put in your own 2 cents.

  • I love it, a guy writes a great article regarding FreeBSD's VM, and you come in here flaming him over some offhand remark about NT.

    Just fess up and admit it - you couldn't understand any of the technical details, but you felt compelled to contribute some comment, so you lay in with this garbage.

    If you're not going to contribute to the discussion constructively, don't post. I know this rule may apply to my own post, but I'm just overwhelmed by your idiotic response to a purely technical article.

  • Microsoft had (has?) Dave Cutler instead of the source code to VMS. I suppose that is some sort of substitute. Didn't seem to help the end product much though.
  • yikes
  • only active, needed chunks of the code are loaded into RAM.
    I think the technical term for this is "demand paging." Except isn't it "code fragments" instead of pages for MacOS? Or am I confused. I'm not really a Mac person . . .
  • Huh? I think you may be confused about your terms.

    Virtual memory (VM) is how you implement processes with separate memory spaces; with a page table, and a page fault handler in the kernel. It's how your swap space gets used on Linux.

    Certainly you could have protection without paging; there were machines in the 60's that did that if I recall correctly. It's kind of silly, though, because you have to either always use position-independent code, or have a loader which relocates your code on the fly. Or you can do weird segment register things. A page table (and hardware assist via a TLB) is a much more flexible solution, considering it lets you do things like paging to disk (instead of swapping entire processes to disk, which you had to do before you had VM).

    Are there any mainstream OSes out there which don't do virtual memory? I think MacOS is weird this way somehow, but I'm not sure exactly what it does and doesn't do (just wait for MacOS X.)
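
    A small user-space illustration of the demand paging described above (assuming a POSIX system and a non-empty file): mmap(2) sets up the mapping, and the kernel's page fault handler reads each page in only when it is first touched.

        #include <fcntl.h>
        #include <stdio.h>
        #include <sys/mman.h>
        #include <sys/stat.h>
        #include <unistd.h>

        int main(int argc, char **argv)
        {
            if (argc != 2) {
                fprintf(stderr, "usage: %s file\n", argv[0]);
                return 1;
            }
            int fd = open(argv[1], O_RDONLY);
            struct stat st;
            if (fd == -1 || fstat(fd, &st) == -1) {
                perror(argv[1]);
                return 1;
            }
            /* Map the whole file; no disk I/O happens yet. */
            char *p = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
            if (p == MAP_FAILED) {
                perror("mmap");
                return 1;
            }
            /* Touching a byte faults exactly one page in from disk;
               untouched pages are never read at all. */
            printf("first byte: %c\n", p[0]);
            munmap(p, st.st_size);
            close(fd);
            return 0;
        }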

  • I didn't say "unreasonable"... although there is hard evidence that some applications benefit considerably from page coloring on the 21164.

    Linus is correct in saying that the 21164 is one of the last architectures to use a direct-mapped L1 cache. That alone may make it not worthwhile for the kernel distribution. Making it a kernel option may be viable... I admit I don't know enough about VM internals to attempt it.

  • by codealot ( 140672 ) on Thursday January 20, 2000 @06:29AM (#1356015)

    Interesting notes about page coloring... I didn't know FreeBSD had this capability.

    Linus has stated [deja.com] that he probably will never add page coloring to the Linux kernel. Apparently he doesn't believe it will benefit enough architectures to be worthwhile.

    On an Alpha 21164 many binaries run faster on Tru64 Unix than on Linux. Static linking rules out differences in the compilers or libraries. Page coloring (a feature of Tru64) is almost certainly the reason.

    At last I feel compelled to go try FreeBSD...
