McKusick's softupdates code integrated into NetBSD

From the NetBSD-announce mailing list: "Frank van der Linden (frank@wins.uva.nl) has brought Kirk McKusick's trickle sync + FFS soft update code into the main tree." For the uninitiated, softupdates is an extension to FFS that collects and orders writes to the filesystem, removing unnecessary metadata writes and carrying out the necessary ones asynchronously. All the speed of Linux's default filesystem configuration, with all the safety of UFS. More information at the NetBSD news page and Kirk McKusick's softupdates page. Softupdates has been in FreeBSD for a while; it's great to see NetBSD getting it as well.
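
To make the trade-off concrete, here is a rough sketch in C (the file names and loop count are arbitrary) that times a metadata-heavy workload: a burst of create/unlink pairs. Each pair dirties a directory block and an inode, which a traditionally-mounted FFS pushes to disk synchronously, and which an async ext2 - or an FFS running softupdates - handles from memory:

    /* meta_bench.c - time a burst of create/unlink pairs.
     * A rough sketch: the file names and NFILES are arbitrary.
     * Run it on a sync-mounted FFS, an async-mounted ext2, and an
     * FFS with softupdates, and compare the wall-clock times. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/time.h>

    #define NFILES 1000             /* arbitrary; raise it for a longer run */

    int main(void)
    {
        char name[64];
        struct timeval t0, t1;
        int i, fd;

        gettimeofday(&t0, NULL);
        for (i = 0; i < NFILES; i++) {
            snprintf(name, sizeof(name), "bench.%d", i);
            fd = open(name, O_CREAT | O_WRONLY, 0644); /* dir entry + inode */
            if (fd < 0) { perror("open"); exit(1); }
            close(fd);
            unlink(name);                              /* more meta-data traffic */
        }
        gettimeofday(&t1, NULL);

        printf("%d create/unlink pairs in %.3f seconds\n", NFILES,
               (t1.tv_sec - t0.tv_sec) + (t1.tv_usec - t0.tv_usec) / 1e6);
        return 0;
    }

Run against those three configurations, the loop should come out roughly slow, fast, and fast-but-safe respectively - which is the whole point of the merge.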


  • by ajs ( 35943 )
    The "all the speed ... of [ext2fs] ... all the safety of UFS" comment makes me wonder. Why is ext2fs considered unsafe? I've only lost an ext2fs once, and that was a hardware problem (hard power-off caused a nice long disk-scribble). The recovery was fun to watch though ;-) This is one time I was thanking the heavens for the -y option to fsck!

    Seriously, though, what's the deal? Is this just "my OS' filesystem is more stable than your OS' filesystem," or is there a serious concern that I should be on the lookout for?
  • Hey Nick, please correct the poster's link to Kirk McKusick's softupdates page. It's .html, not .htm. No 8.3 filenames for UNIX. :-)
  • One of the most obvious reasons is the ridiculously long interval between syncs on Extended Filesystem 2.
    If, sorry, WHEN your Linux box crashes (all boxes do, sooner or later), you will most probably lose data.

    There are several other reasons why Extended Filesystem 2 is much less secure, but that's on a more technical level and I for one don't have time to explain it. Just look at the code.

    Extended Filesystem 2 is very useful for desktops but is not a serious alternative for servers.
    (The same can be said about Linux in general.)

    As always, it all comes down to what you will use your box for.
    If you use and like Linux, don't worry about it; be happy with what it does well. Because you did not pick your OS because of the filesystem, did you?

    And Extended Filesystem 3 will be out soon anyways...


  • I also said:
    [...]
    As always, it all comes down to what you will use your box for.
    If you use and like Linux, don't worry about it; be happy with what it does well.
    [...]


    I run one of my old modified Linux 1.0.6 kernels on one of the desktops.
    Why? Because it does what I want it to, and does it well.
    But I would NEVER use Linux as a server OS because of its bad networking performance, security, etc.

    And as I wrote, but you were too ignorant to read, it all comes down to what you will use the OS for...


  • Why is ext2fs considered unsafe?

    Traditionally, the Berkeley Fast File System (which is what most Unix file systems are based on these days) has written "meta-data" (directory entries, the stuff in inodes and superblocks, etc.) in a synchronous fashion. That is: when a write to filesystem meta-data is required, the system goes straight to the physical disk drive with the changes and updates the disk blocks. This differs from the way the system handles straight data writes, where the write may just go into a memory buffer for later asynchronous write-out to physical disk.

    ext2fs, in default configuration, uses asynchronous writes for meta-data as well as for data. This gives a useful speedup (particularly in operations like creating/deleting directories and stuff like "ls -l" in a big directory, which updates the atime on each inode of every file). It also means that if the system panics or you have a power outage there is a bigger chance of there being meta-data critical to the file system structure which is hanging around in RAM and which hasn't been committed to the physical disk.

    I've seen quotes from Linus on this that say, effectively, "but Linux doesn't crash anyway". This is cute, but doesn't help much if the power company messes you around, or if a clueless user manages to push the Big Red Button on your main server ...

    The "safety" thing tends to be brought up by BSD types (like myself :-)) when people make unfavourable comparisons between the speed of BSD filesystem operations compared with Linux filesystem operations. At the end of the day it is a trade off between speed and crash hardening (and you can run either system either way: both systems have mount options to do sync or async meta-data updates, its just that BSD defaults sync and Linux defaults async).

    In the BSD world, async mounts tend to be seen as a performance hack which is justified for a filesystem that you wouldn't mind losing in the event of a crash - a news spool, perhaps, a filesystem to which you are currently installing or restoring from backup (if it stuffs up you can just start over without losing files), or /usr/obj (where BSD stashes .o files during a system recompile). I've seen it suggested that the structure of ext2fs is such that filesystem corruption is less of a problem/easier to repair anyway, so ext2fs users may well be justified in having a different take on the tradeoff from BSD users.

    The good news is that, with softupdates in BSD, and various log based filesystems beginning to appear in Linux, both camps are now able to "have their cake and eat it" (or will be able to soon enough).
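
    Here is a minimal sketch of that difference from user land (the file names, write size, and loop count are arbitrary, and it shows data writes rather than meta-data, but the principle is the same). A buffered write() returns as soon as the data is in kernel memory; with O_SYNC (spelled O_FSYNC on some older BSDs) each write waits until the blocks are on the platter:

        /* sync_vs_async.c - compare buffered writes with O_SYNC writes.
         * File names, sizes, and counts are arbitrary.  The buffered loop
         * returns once the data is in the buffer cache; the O_SYNC loop
         * waits for the physical disk on every write(). */
        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>
        #include <fcntl.h>
        #include <unistd.h>
        #include <sys/time.h>

        static double write_loop(const char *path, int flags)
        {
            char buf[512];
            struct timeval t0, t1;
            int i, fd = open(path, O_CREAT | O_WRONLY | O_TRUNC | flags, 0644);

            if (fd < 0) { perror(path); exit(1); }
            memset(buf, 'x', sizeof(buf));
            gettimeofday(&t0, NULL);
            for (i = 0; i < 100; i++)
                if (write(fd, buf, sizeof(buf)) != (ssize_t)sizeof(buf)) {
                    perror("write"); exit(1);
                }
            gettimeofday(&t1, NULL);
            close(fd);
            unlink(path);
            return (t1.tv_sec - t0.tv_sec) + (t1.tv_usec - t0.tv_usec) / 1e6;
        }

        int main(void)
        {
            printf("buffered writes: %.4f s\n", write_loop("async.dat", 0));
            printf("O_SYNC writes:   %.4f s\n", write_loop("sync.dat", O_SYNC));
            return 0;
        }

    The same knob exists at mount time: BSD's mount takes -o async to relax meta-data writes, and Linux's takes -o sync to tighten them - the defaults are exactly what is being argued about above.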

  • All of this seems quite grim. So why have I never lost anything? I've certainly had power-related crashes, but only once have I lost anything, and since I cannot see how the MASSIVE damage in that case was due to anything but a disk-scribble, I'm a little shocked by the gloom-and-doom comments.

    It kind of sounds like a theory-vs-practice thing, especially since both OSes can be configured to do things the way the other does by default.

    The "Linux is nice as a desktop, but not as a reliable server" comments are particularly shocking, given how many Linux-based servers are out there (I've got a few, and have managed more in the past). You would think that, for example, lost mail would be noticed if it were happening to all those sites that run Linux/Sendmail out there.

    Anyone have any theories as to why the gap exists?
