
33-Year-Old Unix Bug Fixed In OpenBSD

Ste sends along the cheery little story of Otto Moerbeek, one of the OpenBSD developers, who recently found and fixed a 33-year-old buffer overflow bug in Yacc. "But if the stack is at maximum size, this will overflow if an entry on the stack is larger than the 16 bytes leeway my malloc allows. In the case of C++ it is 24 bytes, so a SEGV occurred. Funny thing is that I traced this back to Sixth Edition UNIX, released in 1975."
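For illustration, here is a minimal C sketch of the kind of off-by-one the quote describes: a push routine that writes before it checks, so that on a stack already at its maximum size the write lands just past the allocation, and only the allocator's slack decides between silent corruption and a SEGV. The identifiers and sizes are invented for this sketch and are not taken from yacc's actual skeleton.

    /* Hypothetical sketch only -- not the actual yacc code. */
    #include <stddef.h>

    struct entry { char pad[24]; };     /* e.g. a 24-byte C++ stack entry */

    struct pstack {
        struct entry *base, *sp;        /* sp: next free slot             */
        size_t        cap;              /* capacity, already at its max   */
    };

    static void push(struct pstack *s, struct entry e)
    {
        *s->sp++ = e;                   /* write first...                 */
        if ((size_t)(s->sp - s->base) >= s->cap) {
            /* ...check afterwards. With the stack already at maximum
             * size there is nowhere left to grow, and the write above
             * has already spilled past the end of the allocation.
             * Whether that merely lands in malloc's spare bytes (16 in
             * the OpenBSD case) or SEGVs depends on the entry size.     */
        }
    }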
Comments Filter:
  • by Anonymous Coward on Tuesday July 08, 2008 @08:12PM (#24108651)
    Wouldn't want to let anyone take over your system with yacc. Seriously.
    • by slew ( 2918 ) on Tuesday July 08, 2008 @08:15PM (#24108701)

      Wouldn't want to let anyone take over your system with yacc. Seriously.

      But ./ is already taken over with yak. Seriously.

    • by Anonymous Coward on Tuesday July 08, 2008 @08:28PM (#24108811)

      Who cares about OpenBSD yacc? BSD is dying and Netcraft confirms it. The world has moved to GNU/Linux and Bison.

    • Wouldn't want to let anyone take over your system with yacc. Seriously.

      I think if you've installed yacc with setuid bit then you have other problems to worry about. Seriously.

      • What, so in the "Web 2.0" world, it would be inconceivable that somebody would provide a web-accessible yacc service to the world?
        • by setagllib ( 753300 ) on Wednesday July 09, 2008 @12:47AM (#24111833)

          Ah, but it would be written as a J2EE application. And the input wouldn't be .y, it'd be an XML document. And the output wouldn't be C, it'd be another XML, passing through a terabyte of XSLT. Then you pass this compiled parser XML, only a gigabyte in size, and your language file to a parser web service and it returns even more XML representing the parse tree.

          Ahh, progress.

    • by OttoM ( 467655 )
      I think you missed the point. The bug is in the code generated by yacc, which can end up anywhere.
  • by Yold ( 473518 ) on Tuesday July 08, 2008 @08:13PM (#24108669)

    Back in 1975, Unix beards were Unix stubble

  • bad omen (Score:5, Funny)

    by spir0 ( 319821 ) on Tuesday July 08, 2008 @08:17PM (#24108709) Homepage Journal

    a 33 year old bug, plus a 25 year old bug (http://it.slashdot.org/article.pl?sid=08/05/11/1339228)....

    if we keep going backwards, will the world implode? or will daemons start spewing out of cracks in time and space?

    • Re:bad omen (Score:5, Funny)

      by je ne sais quoi ( 987177 ) on Tuesday July 08, 2008 @08:30PM (#24108835)
      Nah! What this means is that they are fixing bugs faster than they're making new ones. If they weren't, they'd spend all their time chasing the newest ones. :)
      • That isn't necessarily true. It's just as possible people are wasting time fixing unimportant issues and ignoring more important ones.

        I'm not trying to disparage the OpenBSD team or anything. It's just that no development team is perfect.

        • Re:bad omen (Score:5, Funny)

          by Dunbal ( 464142 ) on Tuesday July 08, 2008 @10:05PM (#24110041)

          It's just as possible people are wasting time fixing unimportant issues and ignoring more important ones.

                We're talking programmers here, not politicians...

        • Re: (Score:3, Insightful)

          by incripshin ( 580256 )
          Well, they're not checking yacc for bugs for the hell of it. They're reimplementing malloc to be more efficient, but it broke buggy code. Is there any other option than to fix yacc?
        • by IcePic ( 23761 )

          Then again, the bug came up while failing to compile xulrunner, so it wasn't hunting for stupid 30+ year old code no one uses, but running a compile of something from this side of the millennium that in the end pointed to this bug.

      • Re: (Score:3, Insightful)

        by p0tat03 ( 985078 )
        Or we're so painfully slow with fixing bugs that we JUST got around to 1975 :P There are always multiple views :P
    • Re:bad omen (Score:5, Funny)

      by exley ( 221867 ) on Tuesday July 08, 2008 @08:41PM (#24108935) Homepage

      a 33 year old bug, plus a 25 year old bug (http://it.slashdot.org/article.pl?sid=08/05/11/1339228)....

      if we keep going backwards, will the world implode?

      Well since time began only 38.5 years ago we should find out the answer very soon!

    • Re:bad omen (Score:4, Funny)

      by Dunbal ( 464142 ) on Tuesday July 08, 2008 @10:03PM (#24110003)

      or will daemons start spewing out of cracks in time and space?

            I finally figured out what the UAC were doing on the Mars colony... and it had nothing to do with those artifacts!

            Thank god there's a division of Space Marines there...

    • Re:bad omen (Score:5, Interesting)

      by K. S. Kyosuke ( 729550 ) on Tuesday July 08, 2008 @10:17PM (#24110199)
      First it was a fourth of a century, then it was a third of a century. The only logical consequence is that the next bug they find will be a memory leak in McCarthy's Lisp interpreter from '59 or some strange corner case in the Fortran I compiler. (Oh, and after careful consideration, I am leaving the *next* bug as an exercise to the reader.)
    • Re: (Score:3, Funny)

      Well since bugs before the epoch [wikipedia.org] were actual insects, judging by past precedent they'll get super powers... like wall-climbing ability or maybe spidey senses ??

    • Re:bad omen (Score:5, Funny)

      by menace3society ( 768451 ) on Wednesday July 09, 2008 @12:02AM (#24111445)

      The next bug will be in Boolean logic. After that, OpenBSD devs will start fixing structural engineering errors in the Tower of Pisa.

    • by skeeto ( 1138903 )

      or will daemons start spewing out of cracks in time and space?

      Nope, they will just simply spew from our noses [catb.org].

  • Great! (Score:5, Interesting)

    by Anonymous Coward on Tuesday July 08, 2008 @08:18PM (#24108713)

    Any word on when they're going to fix the even older "Too many arguments" bug?

    Sorry, but any modern system where a command like "ls a*" may or may not work, based exclusively on the number of files in the directory, is broken.

    • Re:Great! (Score:5, Funny)

      by The Master Control P ( 655590 ) <ejkeever@nerdshacFREEBSDk.com minus bsd> on Tuesday July 08, 2008 @08:39PM (#24108927)
      I too was devastated to learn that my poor Linux box can only handle 128KB of command line arguments [in-ulm.de]. How can I possibly finish typing in that uncompressed bitmap...
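      For the curious, the actual ceiling is easy to query from C with POSIX sysconf; a small sketch (the number it prints varies by system and kernel version):

        /* Print the kernel's limit on argv + environment size (POSIX). */
        #include <stdio.h>
        #include <unistd.h>

        int main(void)
        {
            long arg_max = sysconf(_SC_ARG_MAX);
            printf("ARG_MAX here: %ld bytes\n", arg_max);
            return 0;
        }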
      • by Dunbal ( 464142 )

        128k should be enough for anyone.

      • Re: (Score:2, Interesting)

        by Anonymous Coward

        So, as an example, let's say I want to archive a bunch of files, then remove them from my system, to save space. I packed them up, using:

        tar cf archive.tar dir1 dir2 file1 file2 file3

        and, because I'm extremely paranoid, I only want to delete files I'm sure are in the archive. How would I do that? Could I use:

        rm `tar tf archive.tar`

        How about:

        tar tf archive.tar | xargs rm

        I'm pretty sure neither of those will work in all cases.

        • Have a script loop over your directories, adding them to the archive before firing the Are-Em Star at them.

          /The power to destroy an entire filesystem is insignificant next to the power of the Farce
        • Re:Great! (Score:5, Funny)

          by menace3society ( 768451 ) on Wednesday July 09, 2008 @12:10AM (#24111523)

          Burn the contents of the tar archive onto a CD. Mount the CD over the original directory structure. Use find(1)'s -fstype option to locate all the files that aren't on the CD, copy them to an empty disk image, then eject the CD. Remount the disk image over the original directory, delete all the files in the directory, then unmount the disk image. The files identical in name to those that were on the disk image (which are those that weren't on the CD) won't be deleted thanks to the peculiarities of mount(2).

          You're welcome.

        • Re: (Score:3, Informative)

          by evilviper ( 135110 )

          I only want to delete files I'm sure are in the archive. How would I do that?


          tar tf archive.tar | while read FILENAME ; do
              rm "$FILENAME"
          done

        • How about:

          for f in `tar tf archive.tar`; do
                rm $f
          done

        • Pseudocode, because my geekhood isn't threatened by petty things like the need for debugging and syntax checks:

          tar tf archive.tar > /tmp/archive.txt
          ls dir1 dir2 file1 file2 file3 > /tmp/tree.txt
          diff /tmp/archive.txt /tmp/tree.txt > /tmp/delta.txt
          rm /tmp/delta.txt

          • It is threatened by the lack of < signs, however. That's supposed to read:
            rm < /tmp/delta.txt

    • Re:Great! (Score:5, Informative)

      by Dadoo ( 899435 ) on Tuesday July 08, 2008 @09:16PM (#24109285) Journal

      While I'm sure you're trolling, I feel I should point out that 1) I agree with you, and 2) this has apparently been fixed on Linux:

              http://agnimidhun.blogspot.com/2007/08/vi-editor-causes-brain-damage-ha-ha-ha.html [blogspot.com]

    • Re:Great! (Score:5, Interesting)

      by Craig Davison ( 37723 ) on Tuesday July 08, 2008 @10:23PM (#24110285)

      If "ls a*" isn't working, it's because the shell is expanding a* into a command line >100kB in size. That's not the right way to do it.

      Try "find -name 'a*'", or if you want ls -l style output, "find -name 'a*' -exec ls -l {} \;"

      • Re:Great! (Score:5, Informative)

        by Just Some Guy ( 3352 ) <kirk+slashdot@strauser.com> on Wednesday July 09, 2008 @12:18AM (#24111601) Homepage Journal

        if you want ls -l style output, "find -name 'a*' -exec ls -l {} \;"

        Yeah, because nothing endears you to the greybeards like racing through the process table as fast as possible. Use something more sane like:

        $ find -name 'a*' -print0 | xargs -0 ls -l

        which only spawns a new process every few thousand entries or so.

        • Re: (Score:3, Informative)

          by QuoteMstr ( 55051 )

          On modern systems, find -name 'a*' -exec ls -l {} +

          Personally, however, I prefer find -name a\* -exec ls -l {} +

          Also, you probably want to add a -type f before the -exec, unless you also want to list directories.

          Either that, or make the command ls -ld to not list the contents of directories.

      • by billstewart ( 78916 ) on Wednesday July 09, 2008 @02:11AM (#24112601) Journal

        You're correct that it's not the right way to do it. The problem is *why* it's not the right way to do it. It's not the right way to do it because the arg mechanism chokes on it due to arbitrary limits, and/or because your favorite shell chokes on it first, forcing you to use workarounds. Choking on arbitrary limits is a bad behaviour, leading to buggy results and occasional security holes. That's separate from the question of whether it's more efficient to feed a list of names to xargs or use ugly syntax with find.

        Now, if you were running v7 on a PDP-11, there wasn't really enough memory around to do everything without arbitrary limits, so documenting them and raising error conditions when they get exceeded is excusable, and if you were running on a VAX 11/780 which had per-process memory limits around 6MB for some early operating systems, or small-model Xenix or Venix on a 286, it's similarly excusable to have some well-documented arbitrary limits. But certainly this stuff should have been fixed by around 1990.

        • In Defense of Limits (Score:3, Interesting)

          by QuoteMstr ( 55051 )

          Soft limits can actually mitigate bugs. If we limit processes by default to 1,024 file descriptors, and one of them hits the limit, that process probably has a bug, and would have brought the system to its knees had it continued to allocate file descriptors. Programs designed to use more descriptors could increase the limit.
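          As a sketch of that last point (POSIX getrlimit/setrlimit, error handling mostly omitted), a process can raise its own soft descriptor limit up to, but not past, the hard limit:

            #include <stdio.h>
            #include <sys/resource.h>

            int main(void)
            {
                struct rlimit rl;

                if (getrlimit(RLIMIT_NOFILE, &rl) != 0)
                    return 1;
                printf("soft: %llu  hard: %llu\n",
                       (unsigned long long)rl.rlim_cur,
                       (unsigned long long)rl.rlim_max);

                rl.rlim_cur = rl.rlim_max;    /* raise soft limit to the hard limit */
                if (setrlimit(RLIMIT_NOFILE, &rl) != 0)
                    perror("setrlimit");
                return 0;
            }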

          • While xargs is a great little workaround/workhorse, it is needed in far too many cases. Why on earth would it be so hard to increase the limits every once in a while? After all, the limit in question was probably perfectly acceptable back in the day when 20mb was a lot of space and 500 files was more files than you could imagine ever creating.

      • Yuck. Much more efficient: find . -name 'a*' -ls
    • it was fixed years ago....

      find . -name "a*" -prune -exec ls -ld {} \;

      (note: this command line was generated by reading the man page for gnu find - may not work on all unix/linux variants)

      • or an even shorter solution...

        ls -c1 | grep "^a"

        and if you wanted upper and lower-case a files,

        ls -c1 | grep -i "^a"

        • Re: (Score:2, Informative)

          Comment removed based on user account deletion
          • uhm - no...
            the ls entries listed above work perfectly well on Solaris, AIX, HP-UX and Linux.

            ksh is the only shell I use, although I'm sure it would work with bash.

            I primarily work on Solaris (SPARC/x86/x64) platforms - I won't go into any kind of flame wars over it, it's just what my company uses primarily.

    • "Sorry, but any modern system where a command like ls "a*" may or may not work, based exclusively on the number of files in the directory, is broken."

      It is not broken. The fact that it complains "too many arguments" is evidence that it is not broken, since the program (ls) is doing bounds checks on the input. If it was broken, you wouldn't get the message; there would be a buffer overflow because the programmer didn't do constraints checking.

      • ERRATA (Score:3, Insightful)

        I'll catch myself before someone else does. Everything I said above is true, except that ls isn't complaining. The OS, specifically exec() and friends, is complaining because the command line length when the shell expands the wildcard exceeds ARG_MAX. Increase ARG_MAX if you want to allow more files, or use a variation of find with the -exec option or xargs, etc.
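        A quick way to convince yourself it really is exec() complaining: build an argument list bigger than ARG_MAX and watch execv() come back with E2BIG. A rough sketch (the 4 KB chunk size and /bin/echo are arbitrary choices):

          #include <errno.h>
          #include <stdio.h>
          #include <stdlib.h>
          #include <string.h>
          #include <unistd.h>

          int main(void)
          {
              long   arg_max = sysconf(_SC_ARG_MAX);
              size_t chunk   = 4096;
              size_t count   = (size_t)arg_max / chunk + 16;  /* comfortably too many */
              char  *big     = malloc(chunk);
              char **argv    = calloc(count + 2, sizeof *argv);

              memset(big, 'a', chunk - 1);
              big[chunk - 1] = '\0';

              argv[0] = "/bin/echo";
              for (size_t i = 1; i <= count; i++)
                  argv[i] = big;
              argv[count + 1] = NULL;

              execv("/bin/echo", argv);           /* only returns on failure */
              fprintf(stderr, "execv: %s\n",
                      errno == E2BIG ? "E2BIG (argument list too long)"
                                     : strerror(errno));
              return 1;
          }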
    • Actually, a patch was recently added to Linux to dynamically allocate the command line, so your argument length is now bounded only by available memory.
    • by rakslice ( 90330 )

      heh... FWIW on Windows people are stuck with only a few kB of command line and no shell wildcard expansion at all, and they don't seem to be crying in their beers (... it's the market leader last time I checked)

      The (not-so-)secret is to not do things by passing big lists around using command line arguments. Back in unix land, you can do glob-filtered listings like the one you suggested with the find command. And even the basic commands like ls can take parameters via xargs instead of regular command line

    • foreach file ( a* )
      echo $file
      end

  • maybe it's just me (Score:1, Interesting)

    by Anonymous Coward

    But this code just seems wrong. What is C code doing referencing the stack pointer directly?

    • I bet you they're not talking about the system stack pointer. Remember, yacc is a parser generator; parsing algorithms always use some sort of stack data structure. So, the "stack pointer" in question is just a plain old pointer, pointing into a stack that yacc's generated code uses.

    • "What is C code doing referencing the stack pointer directly?"

      Because it is C, and C is designed to be able to do so? How do you think the Linux kernel gets implemented, though it also has Assembly to be sure. C was designed to allow implementation of Operating Systems. The capability to reference the Stack Pointer and do other assembly-level things via the asm keyword [gnu.org] is part of its charm ;-)
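      A tiny example of the latter, assuming GCC or Clang extended asm on x86-64 (completely non-portable by design):

        #include <stdio.h>

        int main(void)
        {
            void *sp;
            /* copy the machine stack pointer (%rsp) into a C variable */
            __asm__ volatile ("mov %%rsp, %0" : "=r"(sp));
            printf("stack pointer is around %p\n", sp);
            return 0;
        }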

  • by Just Some Guy ( 3352 ) <kirk+slashdot@strauser.com> on Tuesday July 08, 2008 @08:55PM (#24109057) Homepage Journal

    Was this a bug when it was originally written, or is it only because of recent developments that it could become exploitable? For instance, the summary mentions stack size. I could imagine that a system written in 1975 would be physically incapable of the process limits we use today, so maybe the program wasn't written to check for them.

    Does your software ensure that it doesn't use more than an exabyte of memory? If it doesn't, would you really call it a bug?

    • by QuantumG ( 50515 ) * <qg@biodome.org> on Tuesday July 08, 2008 @09:43PM (#24109711) Homepage Journal

      If you overflow a buffer then it's a bug, whether it is exploitable or not.

      • by russlar ( 1122455 ) on Tuesday July 08, 2008 @10:18PM (#24110217)

        If you overflow a buffer then it's a bug, whether it is exploitable or not.

        If you can overflow an exabyte-sized memory buffer, you deserve a fucking medal.

        • by JoshJ ( 1009085 )
          int *buffer; /*Pointer to exabyte-sized buffer*/

          while(1){
            *buffer=1;
            buffer++;
          }

          /*Where's my medal?*/
          • by AJWM ( 19027 ) on Wednesday July 09, 2008 @02:15AM (#24112621) Homepage

            /*Where's my medal?*/

            You'll get it when the buffer overflows. If you're running it on a system that processes a billion of those loops per second, that should be in a bit over 31 years. Scale accordingly for your processor and memory speed.

          • by Fzz ( 153115 )
            /*Where's my medal?*/

            Unless you're running on pretty rare 64-bit hardware, an int will still only be 32 bits. I can't remember what happens when you overflow 2^31 and try to de-reference a negative pointer (probably it just implicitly casts to unsigned), but you sure aren't going to overflow an exabyte buffer that way.

        • If you overflow a buffer then it's a bug, whether it is exploitable or not.

          If you can overflow an exabyte-sized memory buffer, you deserve a fucking medal.

          *insert emacs joke here*

      • Re: (Score:3, Interesting)

        by Just Some Guy ( 3352 )

        If you overflow a buffer then it's a bug, whether it is exploitable or not.

        It is today, but my question is whether it was even overflowable (is that a word?) when it was written. For example, suppose it was written for a 512KB machine and had buffers that could theoretically hold 16MB; then it wasn't really a bug. The OS itself was protecting the process by its inability to manage that much data, and it wouldn't have been considered buggy to not test for provably impossible conditions.

        I'm not saying that's what happened, and maybe it really was just a dumb oversight. However,

        • by QuantumG ( 50515 ) *

          Failure to check for a buffer overflow is an error. It doesn't matter if someone else will do it for you and, as such, the error will never result in a problem for someone. It's simply wrong.

    • Re: (Score:3, Informative)

      by jd ( 1658 )
      It would have been a bug, but not necessarily one that would have security implications, though that could be system-dependent. The summary mentions a specific malloc was used to get a segfault. Another malloc library may well not have faulted. That would only matter if it was possible via the buffer overflow to get yacc to do something (such as run your code) with privileges other than those you would ordinarily have had.

      Now, looking at it just as a bug, if the yacc script overflowed the buffer, yacc can

  • Other Unixes (Score:2, Interesting)

    by jasonmanley ( 921037 )
    Forgive me if this is obvious, but if the bug goes that far back, will it not affect all other Unixes based on the same source code, not just OpenBSD?
  • Hilarious! (Score:5, Funny)

    by BollocksToThis ( 595411 ) on Wednesday July 09, 2008 @01:06AM (#24112023) Journal

    Funny thing is that I traced this back to Sixth Edition UNIX, released in 1975

    My sides are completely split! Invite this guy to more parties.

  • I just finished reading the "The A-Z of Programming Languages" series on Computerworld (found out about it here [slashdot.org]), and now the next article in the series just came up, and it's a chat with the creator of Yacc.
    Coincidence?

    And for those that want to read the interview, it can be found here [computerworld.com.au].

  • Only two or three remote holes in the default install not from 33 years ago, in more than 10 years but not less than 33 years!
