Security Operating Systems BSD

OpenBSD Gets Even More Secure 374

Telent writes "As seen in this post by Theo de Raadt, OpenBSD is getting even more secure, working on smashing script kiddies running buffer overflow exploits dead. Tightening PROT_* according to the POSIX standards and creating a non-executable stack on most architectures are just two of the recent enhancements, most of which are in -current now."
  • by stonebeat.org ( 562495 ) on Thursday January 30, 2003 @09:15PM (#5193317) Homepage
    I like the way BSD makes a non-exec stack the default. I think in the latest Linux kernel you can make it non-executable at compile time, but it is not the default.
    • by fifirebel ( 137361 ) on Thursday January 30, 2003 @09:53PM (#5193541)

      There are several reasons why Linux does not have non-executable stacks yet...

      One of them is that gcc and the kernel use trampolines. From the gcc docs [fifi.org]:

      A "trampoline" is a small piece of code that is created at run time when the address of a nested function is taken. It normally resides on the stack, in the stack frame of the containing function.

      AFAIK, linux uses trampolines at least in these places:

      • Gcc uses them for nested functions (not very common, though glibc has quite a few of those).
      • The linux kernel uses (used?) trampolines for signal delivery.

      Both problems can be addressed (the openwall patches take care of the kernel trampolines), but it's not as simple as just turning off the PROT_EXEC bit on stack pages.
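
      For the curious, here is a minimal sketch of the kind of code that needs a trampoline (my own illustration, and GNU C only - nested functions are a gcc extension): taking the address of a nested function that captures a local forces gcc to build a small thunk on the caller's stack, which is why that stack page has to stay executable.

        /* GNU C only: taking the address of a nested function that uses a
         * local of the enclosing function makes gcc emit a trampoline in
         * main()'s stack frame. */
        #include <stdio.h>

        static void apply(int *v, int n, void (*fn)(int *))
        {
            int i;
            for (i = 0; i < n; i++)
                fn(&v[i]);
        }

        int main(void)
        {
            int delta = 3;             /* captured by the nested function */

            void add_delta(int *p)     /* nested function (GNU extension) */
            {
                *p += delta;
            }

            int data[3] = { 1, 2, 3 };
            apply(data, 3, add_delta); /* &add_delta points at a stack trampoline */

            printf("%d %d %d\n", data[0], data[1], data[2]);
            return 0;
        }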

      Also, a non-exec stack is not a silver bullet: it only makes exploiting buffer overflows somewhat harder. Check out this article [securityfocus.com] from Solar Designer (the OpenWall patch author).

      On top of the above:

      • Multithreaded programs have more than one stack, and they're not necessarily contiguous.
      • As Theo's mail says, the i386 arch (which is still the most common linux arch, despite its suckiness) has a very limited way of implementing PROT_READ && ! PROT_EXEC pages:
        The i386 is not capable of doing per-page execute permission. At most it is only capable of drawing a line through the address space, by limiting the code segment length (using the code segment register). So we can say, "from 0 to this point is executable" and "from that point on to the end of userland is not executable".

      You may now understand why Linus has so far judged that a non-executable stack was more trouble than it was worth.

  • by Forgotten ( 225254 ) on Thursday January 30, 2003 @09:16PM (#5193320)
    One day, Theo is going to decide that allowing people access to the HTTP port of the dist machine is just too big a risk, and OpenBSD really will be the most secure OS there is.
    • by coene ( 554338 ) on Thursday January 30, 2003 @09:29PM (#5193406)
      OpenBSD does not secure by limiting or removing functionality. Instead, it secures through proper programming, working as a team, and tackling issues in sequence.

      I understand you're joking, but aim the next one at the right target :)
      • In truth I agree (mod grandparent down!). But there is a fine line between removing functionality and failing to add it (or delaying addition indefinitely).

        I have used OpenBSD for some task-specific servers, and undoubtedly will again. I'm very glad it exists (regardless of the personalities involved) because it defines one end of a spectrum, from slow-but-steady to willy-nilly. Actually defining that spectrum can be someone else's flamebait. ;)
      • As an OpenBSD user (/sysadmin/porter/etc) I have to say that is not always the case - take Apache now being chrooted, for example. Sure, it's easy to un-chroot it, but it goes to show that sometimes security and functionality are mutually exclusive, and in those cases Theo goes for security, so long as it can be undone for those that want the features.
    • You're sort of right (Score:5, Interesting)

      by anonymous cupboard ( 446159 ) on Friday January 31, 2003 @01:06AM (#5194483)
      Theo doesn't like to enable functionality on OpenBSD until he is fairly certain that it is secure. The process isn't perfect - witness the OpenSSH problems - but most of the time it works.

      Forget about security through obscurity, this is security through paranoia. Sometimes, it is justified.

  • smp blah. (Score:2, Flamebait)

    by sithe ( 85039 )
    Yeah, now if only I could run OpenBSD on a system with more than _one_ processor. =/

    -sithEnder
    • Re:smp blah. (Score:5, Informative)

      by MalleusEBHC ( 597600 ) on Thursday January 30, 2003 @09:35PM (#5193443)
      You may get your SMP in OpenBSD [slashdot.org] soon. From what that article says, they are supposed to have a kernel up by now (although that was their estimate, so you can never be sure).
    • Seriously, without MS would there be a reported "shortage of qualified computer personnel"?

      If all these exploits are beaten before they are even implemented how will one prove their worth to their employers?

      Damn it, I thought that SQL Slammer was a saving grace. We didn't have our servers at SP3, but then again they are behind a dual-stage firewall so we had no hiccup at all. However, I got the time at work to install patches as a result of the media.

      To summarize: don't eliminate vulnerabilities, because you'll just be eliminating someone's job (both the admin's and the hacker's).

      [end sarcasm, or my attempt at sarcasm since I haven't seen any around these parts since 1995]
    • You can _run_ it, you just only get one CPU. We use it on some multi-CPU firewalls (multi-CPU because we bought a dozen machines in a package and they _all_ came with two CPUs).
  • Most Secure OS (Score:5, Informative)

    by OverlordQ ( 264228 ) on Thursday January 30, 2003 @09:22PM (#5193365) Journal
    A few [pcmag.com] other [thestandard.com] people [zdnet.com.au] would agree [acolyte.org] that OpenBSD is the most secure OS. Although I'm a Debian user, kudos to the OpenBSD team on their work.
    • I should hope OpenBSD is the most secure OS out there. That is the main goal of the project. It's definitely not the fastest or most feature-rich OS out there. But that's what they do, and if you need ironclad security, OpenBSD is the way to go.
      • I should hope OpenBSD is the most secure OS out there. That is the main goal of the project.
        Only one remote hole in the default install, in more than 7 years!
        Uber secure? Yes. Secure? Probably not, but they're working on it.
        Actually that one remote hole is a stronger statement for OpenBSD than when there were none known.
    • VMS (Score:5, Insightful)

      by ArchieBunker ( 132337 ) on Thursday January 30, 2003 @09:58PM (#5193575)
      VMS is probably a close second in terms of security. It's C2-secure right out of the box. Plus, most script kiddies would be left scratching their heads trying to use it.

    • Re:Most Secure OS (Score:4, Interesting)

      by modulo ( 172960 ) on Thursday January 30, 2003 @10:44PM (#5193819)
      Theoretically, a capabilities-based OS like EROS [eros-os.org] would be even more secure than OpenBSD, if the implementation were as careful. Of course, the same tradeoffs that OpenBSD makes would be even more extreme (application support definitely, and probably performance too, I would think).
      • Some others that have come up for discussion before (?) are:

        Trusted Solaris [sun.com]

        and

        Pit Bull [argus-systems.com] from Argus Systems

        IIRC, these are more common Un*xes that are patched to provide "capabilities" - that is, instead of root being the one-size-fits-all user with enough privileges to get anything done, different kinds of access are broken down so that if a running program gets 0wned, the damage is limited.

        Theo's answer to that would probably be, "code it right in the first place and it won't GET 0wned!!!", which is a valid point, but the devil (no pun intended) is in the details.

        BTW, I first came across EROS via Alan Cox, in an interview with Robert Metcalfe a few years ago (remember the "Open Sores" series of articles? Great trolling, Bob!), in response to a question about what he thought would be the next big thing after Linux. Metcalfe was impressed with the response (having previously accused Linux-y types of monomaniacal zeal), but it didn't overturn his opinion at the time that Linux was doomed. Oh well. (This comes to you courtesy of the similarly fated Internet.)

  • open source security (Score:3, Interesting)

    by calib0r ( 546092 ) <backpacker@ h i kers.net> on Thursday January 30, 2003 @09:32PM (#5193429) Homepage
    I think it's good that organizations such as OpenBSD are taking the initiative to combat DoS/DDoS attacks. I see a lot of companies such as ISS and SecureWorks blowing smoke about "preventing hacker intrusion" when the real threat these days is worms such as Slammer. I really don't know a whole heck of a lot about DDoS attacks, but I've seen a lot of systems crumble under them, even if the OS installed on the systems is unaffected. Wonder when Cisco will start doing things like this with IOS? (Unless they already are?) Discussion encouraged.
    • by Anonymous Coward
      Cisco already seems to have put in measures to combat the spread of worms. Whenever someone starts putting a high load on the router, it crashes. Problem solved.
  • by Mathness ( 145187 ) on Thursday January 30, 2003 @09:33PM (#5193431) Homepage
    This, and other recent articles about BSD, have made me consider giving BSD a test run on one of my computers. It appears to have some interesting possibilities, and it should be worth getting to know better - it could be the right tool (OS) for some types of "jobs".

    The problem is always where to start. I assume there is no BSD flavor as easy to install as Mandrake (just to pick one), but then I am a happy user of the Debian (2.2) floppy installer. A few hints to start out with - for instance an install/ease-of-use comparison of the various BSDs - would probably be helpful to more than a few readers.
    • The floppy-based install is pretty easy. If you have a Windows NT/2000 box, get ntrw.exe and (probably) floppy32.fs, then run ntrw.exe floppy32.fs to create a boot floppy. If you have a weird device, you may need a different .fs file; read the docs...

      It does put you in a disk-partitioning situation which can freak people out. For your first experiment, on a disk with no data you care about, you can tell it to use the whole disk for OpenBSD and go with two partitions. (Many would advise separate partitions for /tmp, /var, /home, but this is a simple example.) The first partition is always / (at least, I can't make the installer do anything else - correct me...) and the next is the swap partition. Something like 2*RAM should do for swap (again... correct me if I'm wrong).

      If you can set up an FTP server, you might want to get the ~120MB of install sets (if using X; about half that if not) and put them on a local FTP server. I have found the mirrors pretty adequate, even right after a new release.

      Post an obfuscated email and I'll send you a cheat-sheet, with how to do common things that are too easy for the gurus to think of supplying and too hard for a novice like me to figure out. Mostly network stuff.
  • Anyone mind explaining, or posting some links to pages explaining, what stack smashing is, and why a non-executable stack stops it?

    We're not all l33t haxx0rs here...
    • I do not have any links, but stack smashing is when you pass a program a larger parameter than it expects. If the programmer did not check the size of the input and assumed it would fit in the space set aside for it, the input overwrites the stack (variable storage) and can reach the return address of the function. If it gets into the return address, OSes with executable stacks may end up executing code that was passed in with the overflowing parameter.
    • by djschaap ( 11133 ) on Thursday January 30, 2003 @09:47PM (#5193512) Homepage
      Put simply, "Smashing the stack" is a method of overwriting variables within a program (which are located on "the stack") with malicious CPU instructions. When done properly, the vulnerable application will jump to those malicious instructions and do Bad Things(tm).

      Preventing the CPU from executing code located on the stack will in many/most cases prevent these malicious instructions from ever running (because they're located on the stack).

      For a detailed explanation, see Smashing the Stack For Fun and Profit [shmoo.com] by Aleph One.
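
      To make it concrete, here is a minimal sketch of the classic vulnerable pattern (a made-up example, not code from any real program): a fixed-size buffer on the stack, filled with no length check.

        /* Hypothetical example of the bug being described: copying
         * attacker-supplied input into a fixed-size stack buffer with no
         * bounds check. */
        #include <string.h>

        void handle_request(const char *input)   /* e.g. read off the network */
        {
            char buf[64];          /* lives on the stack, below the saved return address */
            strcpy(buf, input);    /* no length check: input longer than 64 bytes runs
                                      past buf and overwrites the return address, which
                                      is exactly what "smashing the stack" exploits */
        }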
    • by Jeremi ( 14640 ) on Thursday January 30, 2003 @09:52PM (#5193537) Homepage
      Roughly: A program's stack is where it keeps its temporary information, a sort of "scratch pad" if you will. Stack smashing is when the program has a bug that causes it to "scribble outside the lines", such that it overwrites info in its scratch pad in an unexpected way.


      A common bug in programs is that when they are receiving input (from the disk, from the keyboard, but most relevantly, from the Internet), they forget to make sure that they have enough room to "write down" the input they are receiving, and so they end up writing right past the end of their scratch area and over other stuff. Typically, this will cause the program to malfunction and/or crash soon afterwards.... BUT:


      Years ago, crafty nasty little hax0rs realized they could use this type of bug to gain control of a computer remotely via the Internet. What they do is they find a computer running a program that has this sort of bug, and then send it input that is too big for the program's buffer, and contains a little program. The buggy program duly writes the input onto its stack, munging some of its other data -- and the hacker has formed the input "just so", so that the data it overwrites is the data governing what the computer will do next. So instead of just crashing, the program then executes the hacker's program. The program then usually "unlocks the front door" of the computer to the hacker, allowing the hacker to control the computer by remote.


      Making the stack non-executable means that the computer doesn't allow itself to execute any code that is located in the stack. This means that the hacker can upload his evil program, but he can't trick the computer into running it.


      Why this feature hasn't been standard in all OS's since the invention of the MMU, I cannot fathom.

    • by mbessey ( 304651 ) on Thursday January 30, 2003 @09:53PM (#5193544) Homepage Journal
      Well, most C compilers allocate temporary variables on the stack, which is the same place that they use to keep track of where the currently-executing function was called from, so they can return to the previous function.

      If your program overflows the end of a temporary string or array (due to a bug in the error-checking, most likely), it can overwrite other things on the stack, including the return address of the current function call.

      So, the attacker sends in an enormous string, which "just happens" to contain some executable code that does something nasty, followed by the address of that same code.

      The new return address overwrites the one on the stack, so that when the function returns, it actually jumps into the previously-loaded data on the stack, which then does something nasty.

      Making the stack non-executable causes the program to crash when you try to exploit it this way.

      You can still overwrite data on the stack, which might still allow you to get a program to misbehave, but at least you're limited to running code that's already in the application.

      Some of the other changes mentioned in the paper try to make it harder to exploit overwriting data on the stack, too.

      Hope this was helpful,

      -Mark
      • You can still overwrite data on the stack, which might still allow you to get a program to misbehave, but at least you're limited to running code that's already in the application.

        Not really. You can still control the program flow by overwriting the return address, and given the stack-only parameter passing scheme on x86, you can also supply arguments to whatever function you choose to return to (with register arguments, you'd be limited to whatever registers are going to be loaded off the stack before you return, and argument registers are typically not preserved by the callee). While the typical exploit that just wants /bin/sh to run would likely just call system() or execve() or something, if one really wants to run code from the stack, the stack could be set up to return to memcpy() with the arguments pointing at the code to be moved to an executable area, and a subsequent return value that points to the destination. Null bytes would present a problem with (at least) the length argument, assuming the overflow is from a C-style string, but a sufficiently clever attacker could utilize other library functions to patch in the needed null bytes.

        Thus, it may make certain attacks marginally harder, but it doesn't stop foreign code from being executed. Its most visible effect would be the problems it causes for legitimate programs that may want to execute dynamically generated code on the stack.

        While there is no silver bullet to these sorts of attacks, making the stack grow upward (or strings grow downward) would eliminate a lot more holes than a non-exec stack. Unfortunately, people are too bound by tradition, and still introduce new ABIs that use downward-growing stacks, even when there's no hardware bias against upward-growing stacks, such as on most RISC chips.

    • Quick explanation -- (really you should google for this info but oh well)

      The stack is a LIFO (Last in First out) data structure. OS's and programs use it to store all sorts of data, but one thing it's commonly used for is storing return addresses for subroutines and such.

      By "stack smashing" what people mean is the typical goal of a buffer overflow attack. A program that doesn't do bounds checking on an array of data can be fed a huge amount of data that exceeds the length of the buffer that it has allocated to hold that memory. It then starts walking over other parts of memory and runs into the stack (which exists in memory just like everything else).

      If you overwrite a return address in the stack with a pointer that points to some code that you've written to memory with the buffer overflow you just did, you can execute arbitrary code and thusly take control of a system (with whatever privileges the program you just attacked uses).

      Since a lot of your overflowing data is going into the stack, it's a potential space for a lot of the code you might want to execute. Since the stack should only hold data and pointers, not code, it's safe to make it non-executable with the MMU. This makes it harder to write a buffer overflow exploit, although it doesn't totally prevent it (since you can stick your malicious code in non-stack space as well).

      (Feel free to flame/mod me down for any mistakes in this explanation. :-) )

    • A non-executable stack is just another hurdle. That still leaves exploits using the GOT (Global Offset Table) or return-into-libc exploits, which don't need an executable stack to work. And there are probably other trampolines through which arbitrary code can be run if an application is insecurely coded. It might deter the script kiddie, but it won't stop the determined blackhat.
  • What I use BSD for (Score:5, Informative)

    by harikiri ( 211017 ) on Thursday January 30, 2003 @09:38PM (#5193459)
    I prefer to use BSD (Free* or Open*) as servers, as opposed to Linux.

    Why?

    If you've ever installed a Linux distribution, you will immediately note the number of third-party libraries and applications installed on a 'base' system. This is frustrating for me: for the most part I don't want all those extra applications installed, because they clutter up the system, and there may be various vulnerabilities present that I'll be exposed to.

    Instead I prefer to use BSD in these situations, because when you install the operating system, everything - with a few choice exceptions (i.e. gcc, apache, less) - is part of the BSD operating system; no third-party apps are installed unless you choose them at install time.

    So when I install a BSD server, it's clean from the get-go. If I want bash, I have to install the package. This way I get control over what is on my system, and spend far less time adding what I want, instead of removing what I don't want (as in the case of a Linux distro).

    I use a Mac OS X laptop, which is the vision of what I always wanted my Linux desktop systems to be. I can play DVDs, get sound working, simple updates, bash, gcc, ircII, web browsers which don't have problems on most sites, a beautiful MP3 application, mail, etc.

    For me, I don't even bother with Linux except for testing program portability, or for wireless lan-related applications (airsnort).
    • by Kashif Shaikh ( 575991 ) on Thursday January 30, 2003 @11:17PM (#5194052)
      If you've ever installed a Linux distribution, you will immediately note the number of third-party libraries and applications installed on a 'base' system.

      I found this true with redhat and suse, BUT you can use another meta distro: Gentoo.

      You only merge (Gentoo-speak for fetch/compile/install packages) what you need and what satisfies your dependencies. Bootstrapping a Gentoo system takes a couple of phases; after the 3rd phase you have a base system close to 400 MB (the extra megabytes are due to having a development system needed to compile everything).

      Plus, you can customize specific packages - such as compiling in gtk/kde support - with USE variables. And if that doesn't work, you can edit the ebuild script file right then and there and customize your own version (e.g. adding the --msdfs configure flag to samba to compile in Microsoft DFS support).

      I won't bore you with more details, but you can check out www.gentoo.org.
    • by tamnir ( 230394 ) on Friday January 31, 2003 @12:02AM (#5194273)

      If you've ever installed a Linux distribution, you will immediately note the number of third-party libraries and applications installed on a 'base' system. This is frustrating for me, because for the most part I may not want all those extra applications installed, because it clutters up the system, and there may be various vulnerabilities present that I'll be open to.


      You should give Linux From Scratch [linuxfromscratch.org] a try. You will build your own Linux system, installing each component manually. No clutter, just what you need, and compiled the way you want it. It is a very good learning experience too.
  • by JimmytheGeek ( 180805 ) <jamesaffeld@yaho ... m minus math_god> on Thursday January 30, 2003 @09:38PM (#5193461) Journal
    I want it now, but I'd whine if it weren't fully tested. Man, to think I'm doing the "gotta go pee" dance over something like this. I need a life.

    We have a lot of single-purpose OBSD boxen here. I like them a lot. Go, team, go!
  • by mbessey ( 304651 ) on Thursday January 30, 2003 @09:39PM (#5193466) Homepage Journal
    I never did understand why this particular set of problems was allowed to exist on most x86 UNIX-like operating systems.

    It's too bad that they weren't able to completely separate executable and readable memory, but it's good to see somebody finally taking these problems seriously.

    And as a bonus to making the system more secure, these changes will make it easier to debug stack-smashing bugs.

    -Mark
  • But What About...? (Score:5, Interesting)

    by ewhac ( 5844 ) on Thursday January 30, 2003 @09:44PM (#5193494) Homepage Journal

    Theo de Raadt writes:

    W^X is a short form for "PROT_WRITE XOR PROT_EXEC". The basic idea here is that most buffer overflow "eggs" rely on a particular feature: That there is memory which they can write to, and then jump to.

    What if there was no such memory? Does a normal Unix process have memory that is both writeable and executable? Turns out they do: [ ... ]

    But do they need it? [ ... ]

    If you're running a JIT compiler/interpreter or other dynamic code assembly, you sure as hell do.

    I can see how you might be able to write dynamically generated code to a page, then turn off PROT_WRITE and turn on PROT_EXEC before jumping to it. However, this is almost certainly two trips into the kernel, each involving tons of permission checking. So performance will likely suffer, which negates the whole point of doing JIT in the first place.

    I like his survey of MMU architectures, though.

    Schwab

    • You can do JIT (Score:5, Interesting)

      by ^BR ( 37824 ) on Thursday January 30, 2003 @10:09PM (#5193639)

      You just have to explicitly mprotect(2) [openbsd.org] the memory where it happens with PROT_EXEC|PROT_WRITE. The fact that on some OSes it can work without doing that is actually a bug in those OSes.

      The change is doing the right thing, using minimum privilege to achieve more security. Before, if some static data happened to look like machine code, it could be executed; this won't be possible anymore.

      A non-executable stack by itself was far from enough, since most programs give an attacker some way of putting things on the heap or elsewhere, and he could jump there instead of jumping to the stack. Coding an exploit for OpenBSD will get really tough now, even if there's an actual buffer overflow.
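
      As a rough sketch of what that looks like (my own illustration, assuming a BSD/Linux-style mmap/mprotect and x86 machine code), a W^X-friendly JIT writes its generated code into a writable but non-executable page, then flips the protection before jumping to it:

        /* Sketch of the mmap/mprotect dance a JIT can do under W^X.
         * The "generated code" here is just a single x86 "ret". */
        #include <string.h>
        #include <sys/mman.h>

        typedef void (*jit_fn)(void);

        int main(void)
        {
            size_t len = 4096;
            unsigned char code[] = { 0xc3 };   /* x86 "ret" */

            /* 1. Get a writable, non-executable page and emit code into it. */
            unsigned char *page = mmap(NULL, len, PROT_READ | PROT_WRITE,
                                       MAP_PRIVATE | MAP_ANON, -1, 0);
            if (page == MAP_FAILED)
                return 1;
            memcpy(page, code, sizeof code);

            /* 2. Drop write, add execute, and only then jump to it. */
            if (mprotect(page, len, PROT_READ | PROT_EXEC) != 0)
                return 1;

            ((jit_fn)page)();                  /* run the generated code */
            munmap(page, len);
            return 0;
        }

      Whether the extra mprotect(2) round trip hurts a real JIT is the performance question raised above, but note that it is paid per batch of generated code, not per executed instruction.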

    • by epine ( 68316 )

      There are too many presumptions in this comment about JIT. The protections Theo is using are imposed by the VM system, which means the protections are relative to a *view* of memory. It's not obvious to me that the JIT compiler needs to share the same view of memory as the process for which it compiles. I can't bring myself to believe that for a typical invocation of the JIT compiler, one process transition is a dominant cost.
  • by PetWolverine ( 638111 ) on Thursday January 30, 2003 @09:47PM (#5193506) Journal
    Anyone know how portable these modifications are to other BSDs, notably Darwin?
    • I don't think PPC Darwin users need to fear. Shellcode on PPC archs gets so fscking large it's not usable for most stuff anyway. Not that programmers should write less secure code on Darwin, though. The arch being difficult to exploit doesn't mean that you have a license to produce sloppy code.
  • by Woodrose ( 607437 ) <kelley.johnston@ ... u ['sol' in gap]> on Thursday January 30, 2003 @09:50PM (#5193526) Journal
    The ancient and venerable 24-bit CDC 3150 machine in 1970 solved buffer overflow problems by pre-writing a "return jump to execover" instruction (pass control to the data area and bang, you're dumped) throughout user space. When you got the dump, the ASCII interpretation was "ojoy". So you got about thirty pages of blue-bar printout saying "ojoyojoyojoyojoy...".
  • by Sayjack ( 181286 ) on Thursday January 30, 2003 @09:56PM (#5193558) Homepage
    A nonexecutable stack is no guarantee of safety. Solaris 2.6 demonstrated this here. [packetstormsecurity.nl]
    • by Anonymous Coward
      A nonexecutable stack is no guarantee of safety.

      Nobody said it was. To quote TdR, "We feel that these 4 technologies together will be a royal pain in the ass for the typical buffer overflow attacker."

      If you think "royal pain in the ass" constitutes proof, go back to math 101.
  • Really hard workers (Score:5, Informative)

    by Amsterdam Vallon ( 639622 ) <amsterdamvallon2003@yahoo.com> on Thursday January 30, 2003 @09:56PM (#5193564) Homepage
    The OpenBSD team is a really great group of hard-working coders that don't stop with writing just average code.

    This latest security measure goes to show why they're still #1 when it comes to really closing up a machine's holes to prevent abuse and unwanted infiltration into a system.

    Unfortunately, they still can't get UltraSparcIII documentation [osnews.com] that they need for their project.

    I urge you all to contact SUN Microsystems and demand that they hand over the details of the US III series computers.

    *nix.org [starnix.org] -- Latest article > "Taming OS X"
  • by pmz ( 462998 ) on Thursday January 30, 2003 @09:57PM (#5193567) Homepage
    Note that UltraSPARC systems (as well as Alpha and PA-RISC) allow per-page execute permissions.

    This is another one of those things that is often forgotten when people try to argue that cheap x86 boxes are the cat's meow and all those other architectures should just die off.

    There are real reasons why the real UNIX workstations and servers are worth more (and cost more) than x86-based alternatives. Beyond the MMU stuff, there's also the OS-independent firmware POST diagnostics, more and more mainframe-class redundancy features the bigger the server gets, large CPU caches, SCSI or FC by default, chassis built to be easily maintained, ECC on pretty much every data channel, and on and on.

    I think x86 has a niche in clustering and cheap wintel desktops (granted, a large niche), but it still doesn't excel in hard-core professional workstation and server applications. This memory permissions stuff is just another reason why. x86 is designed from the ground up as a mass-marketed technology (sort of like Chevrolet).
    • Sure (Score:4, Insightful)

      by 0x0d0a ( 568518 ) on Friday January 31, 2003 @07:11AM (#5195246) Journal
      Just find me a chip for another architecture with the same price/performance as the x86, and I'll be set to go.

      Hint: there isn't one. 30% more speed matters a lot more to me and the overwhelming majority of the world than a feature which may or may not be fully used by the OS to try to help prevent a particular type of exploit in buggy servers.

      FC? Now *there*'s a niche product. Slightly more sophisticated MMU? Yawn. Bigger POST? I'm not worried that the solid-state components of my system are going to spontaneously fail. Big CPU cache? All that does is help price/performance a bit, which in other architectures is already behind x86. Easily maintained chassis? What's wrong with simple side panels and a motherboard tray on a case? ECC on every data channel? I'm really not worried about my southbridge spontaneously failing, thank you very much.

      When it comes right down to it, x86 is nothing special, but mass production *is*. You can throw more engineering resources at a chip and reduce cost more.
  • by matman ( 71405 ) on Thursday January 30, 2003 @10:58PM (#5193912)
    The bigger problem is that the principle of least privilege is not adhered to in the world of Unix. Programmers will always write bugs and applications will always have vulnerabilities that can be manipulated. Manipulation of a service should only affect the service being manipulated, not the whole system. For example, services should have NO access to anything by default. When you install a service you should set up the specific permissions that it requires (this can be made easy - the app, upon install, can recommend the permissions and you can just say "okay"). If the app tries to do something that it doesn't normally need to do (like access /home/me/mysecretfile), the system should log an access-denied message; the Linux kernel right now can't even audit denied access to files! CHUID permissions to deliver mail to people? A much cleaner mechanism is for the mail server to create the files under its own name and give permission to the user to take ownership of the files.

    Linux, and Unix in general, tends to have pretty limited access controls. Even with ACL support, the distros still need to ship with restrictive settings and manage them. A lot can be done to provide a framework under which compromises can be limited and can be audited. Right now we don't have that. Without detection and reaction, how do you know that your prevention is effective?

    • by Flower ( 31351 ) on Friday January 31, 2003 @12:19AM (#5194323) Homepage
      systrace [onlamp.com] anyone?

      I'll leave that as a start.

    • The bigger problem is that the principle of least privilege is not adhered to in the world of Unix. Programmers will always write bugs and applications will always have vulnerabilities that can be manipulated. Manipulation of a service should only affect the service being manipulated, not the whole system. For example, services should have NO access to anything by default.

      That is very much the approach that OpenBSD is taking - e.g. with privilege-separated OpenSSH. If you exploit OpenSSH before authentication, you are unprivileged in a chroot that you can't write to. While this is not invulnerable (you may still abuse kernel bugs to escalate your privilege), it is a good deal better than before. OpenBSD also provides tools with which you may further protect yourself: systrace [umich.edu] - a system call policy checker.

    • by ctr2sprt ( 574731 ) on Friday January 31, 2003 @02:51AM (#5194777)
      The problem is simple: the controls we have right now are far too coarse for the things we want to do with them. Running daemons as root is like putting out a match with a lake. They never need all the privileges of root. We need a way for root to drop its privileges selectively - or, better yet, a way to confer specific privileges to regular users. "The user apache can bind to port 80 without being root." "The user ftpd can chroot without being root." And so on.

      It would also be lovely to see the tool which manages all this unify all the ACL-like aspects of the system into a single interface. Firewalls, filesystem ACLs, and privilege delegation are all remarkably similar; it'd be great to be able to define them selectively in groups (I think SELinux calls them "contexts") and confer them to various processes. Daemons can bind to ports, but not regular users? Easy: set the firewall ACL policy to deny, then grant the "daemon" context an exemption. Then tie in other useful privileges, like the ability to write to /var/run and send messages to syslogd. All these things are obviously related conceptually, but currently you'd need 3 tools to do it all (I don't think it's possible to regulate syslogd access this way, period, but I'm being optimistic and assuming it is).

      Unfortunately, from what I saw of SELinux when I last used it, all this is some time off... especially the part where it's easy to administer and use. But to end on a positive note, there are a bunch of people who recognize this problem and are working hard to correct it.
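
      As an aside, the "bind to port 80 without staying root" case already has a traditional (if coarse) workaround; here is a minimal sketch of it (my own illustration - the numeric uid/gid are made-up stand-ins for an unprivileged account): keep root just long enough to bind the privileged port, then drop it before doing any real work.

        /* Bind a privileged port as root, then shed root before serving. */
        #include <arpa/inet.h>
        #include <netinet/in.h>
        #include <string.h>
        #include <sys/socket.h>
        #include <sys/types.h>
        #include <unistd.h>

        int main(void)
        {
            struct sockaddr_in addr;
            int s = socket(AF_INET, SOCK_STREAM, 0);

            memset(&addr, 0, sizeof addr);
            addr.sin_family = AF_INET;
            addr.sin_port = htons(80);                /* privileged: needs root to bind */
            addr.sin_addr.s_addr = htonl(INADDR_ANY);

            if (s < 0 || bind(s, (struct sockaddr *)&addr, sizeof addr) != 0)
                return 1;

            /* Root is no longer needed; drop it (group first, then user).
             * 65534 stands in for an unprivileged "nobody" account. */
            if (setgid(65534) != 0 || setuid(65534) != 0)
                return 1;

            listen(s, 16);
            /* ... accept() and handle clients as the unprivileged user ... */
            return 0;
        }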


  • If they'll just get off of their butts and get SMP support, I'd switch all of my servers over to it in a week. Really. It's just too bad that they don't seem to want to support anything larger than a desktop PC.

    Wait, my desktop is a dual Athlon. I guess my DESKTOP machine is just too advanced for them. C'mon, Theo, get it together.

    steve
  • Hardware (Score:5, Informative)

    by octogen ( 540500 ) <g.bobby@gmx . a t> on Friday January 31, 2003 @03:24AM (#5194866)
    On Intel processors, read and execute permission for a memory page are controlled by a single flag. For this reason, if you make the stack non-exec on Intel machines, the CPU will have to do a lot of context switching, because all the protection faults that occur when an application tries to read from a non-exec page need to be handled by the OS kernel.

    On SPARC, Alpha or POWER CPUs there is one flag for the read-permission and another one for the execute-permission.

    To prevent exploitation of buffer overflows, I believe that we simply need much better hardware.

    For example, IBM's AS/400 has Hardware pointer protection. It is absolutely impossible to fake certain types of pointers on the AS/400, because the CPU will recognize when a pointer has been overwritten due to a buffer overflow.

    That's how REAL buffer overflow protection works. If you just make data regions non-executable, you can still supply parameters to some library function (how about execve()?) and make the vulnerable program jump into the code segment to execute the library function with the parameters specified by the attacker.

    AS/400s simply 'abuse' one bit in the ECC code as a flag for marking valid pointers.

    For every 64 bits in RAM there should be 1 flag bit, which tells the CPU whether the data in memory is a valid pointer or not.

    An instruction like LEA (Load effective address) should then implicitly set the flag, and instructions like MOV (Move Data, actually a copy instruction) should always clear the flag.

    If a RET (return from subroutine) instruction tries to load an unflagged (=invalid) pointer, the CPU should trap to the OS kernel.

    For legacy applications that are too damn stupid to use pointers correctly, a privilege could be added to the OS kernel to allow an application to use even invalid pointers.

    Furthermore, read- and execute-permission should be separate flags and all stack- and heap-pages should be nonexec by default.

  • by burbilog ( 92795 ) on Friday January 31, 2003 @04:47AM (#5195021) Homepage
    I can't install it on my main production servers. Why? Because it STILL does not have locales. And without locales, Cyrillic doesn't work in mysql, zope and other applications :( OpenBSD makes a great private net forwarder in remote locations, though.
  • by wowbagger ( 69688 ) on Friday January 31, 2003 @09:49AM (#5195974) Homepage Journal
    I was thinking about this on my drive into work this morning, and I came to the conclusion that what we need to do is to change the C style stack usage to use two or three stacks.

    Consider: C uses the stack for three types of information: call/return information, parameter passing, and local variable storage. I assert that this is the cause of most of the stack problems in C - you are using the same thing for three different needs.

    Let me discuss each of the three uses in turn. I will use x86 terminology for this discussion because that is the primary architecture used for Linux and because that is what I am most familiar with, but you should be able to generalize my points without much problem.

    First, you have the call/return stack. This needs to be the CPU stack so that normal CALL/RET operations work. The only things that should be stored here are the actual return addresses, as well as possibly register saves (esp. for interrupt routines). However, in a unified stack design it is possible for the bad guys to modify the return address. Thus, even in a non-executable stack environment I can still change the return address to point to a function, say exec().

    Second, you have the parameter passing stack. Ideally the only thing on this stack would be parameters passed down to the function from the caller. However, in a unified stack design, I can modify this stack to contain data - thus I can create a pointer to the string "/bin/bash" on the stack. With this and with the call/return modification above, I can cause the current function to "return" to exec(), with "/bin/bash" as the arg. Boom. Remote shell.

    Third, you have local variable storage. If this space were separate from the other two stacks, then overflowing a local buffer, while still bad, would not be able to modify the return address nor would it be able to create parameters. Ideally, every subroutine would be given its own sparsely mapped local space - thus boundary errors would likely throw a page fault (granted, the cost of setting such a mapping up for each subroutine call would be prohibitive, so it is unlikely to happen without some form of hardware acceleration. Perhaps a low/high limit register could be added to the various index functions, such that an access relative to EBP, ESI, or EDI would fault if it went out of range.)

    However, consider just separating the stack into three areas, widely separated. ESP would point to the hardware stack, EBP to the local storage area, and ESI to the parameter block (with EDI pointing to "this" for C++). Now, a bad programmer has a buffer in local storage and doesn't range-check it. A would-be exploiter still cannot modify the return address nor can he modify the parameter stack. The most he could do would be to hose the local variable storage (granted, that might still allow him to corrupt the local vars for some other function and perhaps get an exploit that way, but it would be even more difficult).

    Granted, to do what I just suggested means throwing out *all* standard libraries, tools, compilers, and so forth - I am not actually suggesting that the x86 family do this! However, for new architectures like Sledgehammer et al., this might be the time to make such a break.
