33-Year-Old Unix Bug Fixed In OpenBSD
Ste sends along the cheery little story of Otto Moerbeek, one of the OpenBSD developers, who recently found and fixed a 33-year-old buffer overflow bug in Yacc. "But if the stack is at maximum size, this will overflow if an entry on the stack is larger than the 16 bytes leeway my malloc allows. In the case of C++ it is 24 bytes, so a SEGV occurred. Funny thing is that I traced this back to Sixth Edition UNIX, released in 1975."
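To illustrate the class of bug (a sketch only, not the actual yacc source): a fixed-capacity stack where the entry is written before the bounds check, so one push lands past the end of the buffer, and whatever slack the allocator left there decides whether anything visibly breaks.

#include <stdio.h>

#define STACKSIZE 4

int stack[STACKSIZE];
int *sp = stack;

/* buggy shape: write first, check afterwards -- by the time the test
 * fires, one entry has already landed past the end of the array */
void push_buggy(int v)
{
        *sp++ = v;
        if (sp > stack + STACKSIZE)
                fprintf(stderr, "overflow detected too late\n");
}

/* fixed shape: check (or grow the stack) before writing */
void push_checked(int v)
{
        if (sp >= stack + STACKSIZE) {
                fprintf(stderr, "stack full, push refused\n");
                return;
        }
        *sp++ = v;
}

int main(void)
{
        for (int i = 0; i < STACKSIZE + 1; i++)
                push_checked(i);        /* the fifth push is refused cleanly */
        return 0;
}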
Great! (Score:5, Interesting)
Any word on when they're going to fix the even older "Too many arguments" bug?
Sorry, but any modern system where a command like "ls a*" may or may not work, based exclusively on the number of files in the directory, is broken.
maybe it's just me (Score:1, Interesting)
But this code just seems wrong. What is C code doing referencing the stack pointer directly?
Was it really a bug back then? (Score:5, Interesting)
Was this a bug when it was originally written, or is it only because of recent developments that it could become exploitable? For instance, the summary mentions stack size. I could imagine that a system written in 1975 would be physically incapable of the process limits we use today, so maybe the program wasn't written to check for them.
Does your software ensure that it doesn't use more than an exabyte of memory? If it doesn't, would you really call it a bug?
Other Unixes (Score:2, Interesting)
Re:bad omen (Score:5, Interesting)
Re:Great! (Score:5, Interesting)
If "ls a*" isn't working, it's because the shell is expanding a* into a command line >100kB in size. That's not the right way to do it.
Try "find -name 'a*'", or if you want ls -l style output, "find -name 'a*' -exec ls -l {} \;"
Re:Great! (Score:2, Interesting)
So, as an example, let's say I want to archive a bunch of files, then remove them from my system, to save space. I packed them up, using:
tar cf archive.tar dir1 dir2 file1 file2 file3
and, because I'm extremely paranoid, I only want to delete files I'm sure are in the archive. How would I do that? Could I use:
rm `tar tf archive.tar`
How about:
tar tf archive.tar | xargs rm
I'm pretty sure neither of those will work in all cases. The first will fail if there are more than a few thousand files in the archive, and the second will fail if the files in the archive contain spaces or special characters. Can you give me one command that will work in all cases?
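The closest I can come, assuming GNU tar and an xargs that understands -0, still isn't bulletproof: tar's listing is newline-separated, so a name that itself contains a newline gets mis-split:

tar tf archive.tar | tr '\n' '\0' | xargs -0 rm --

The only really clean way I know of is to never re-parse the listing at all and let tar delete as it archives (the --remove-files option is GNU-specific):

tar cf archive.tar --remove-files dir1 dir2 file1 file2 file3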
(modern_system != infinite_memory) (Score:2, Interesting)
It is not broken. The fact that it complains "too many arguments" is evidence that it is not broken, since the program (ls) is doing bounds checks on the input. If it were broken, you wouldn't get the message; there would be a buffer overflow because the programmer didn't do bounds checking.
Re:Was it really a bug back then? (Score:3, Interesting)
If you overflow a buffer then it's a bug, whether it is exploitable or not.
It is today, but my question is whether it was even overflowable (is that a word?) when it was written. For example, suppose it was written for a 512KB machine and had buffers that could theoretically hold 16MB; then it wasn't really a bug. The OS itself was protecting the process by its inability to manage that much data, and it wouldn't have been considered buggy to not test for provably impossible conditions.
I'm not saying that's what happened, and maybe it really was just a dumb oversight. However, I think there's a pretty strong likelihood that it was safe to run in the environment where it was written, and the real bug was in not addressing that design characteristic when porting it to a newer platform.
See also: Ariane 5. Its software worked great in the Ariane 4, but had interesting behavior when dropped into a faster system.
Re:Time to patch (Score:4, Interesting)
Who cares? Like GCC versus TinyCC, being bloated means it can produce a more useful output. GNUware can be faulted for being heavy compared to traditional Unix tools, but the functionality and flexibility provided more than makes up for it.
Except for autotools. What the HELL were they thinking?
Re:You do realize.. (Score:4, Interesting)
The Problem is *why* it's the wrong way to do it (Score:4, Interesting)
You're correct that it's not the right way to do it. The problem is *why* it's not the right way to do it. It's not the right way to do it because the arg mechanism chokes on it due to arbitrary limits, and/or because your favorite shell chokes on it first, forcing you to use workarounds. Choking on arbitrary limits is bad behaviour, leading to buggy results and occasional security holes. That's separate from the question of whether it's more efficient to feed a list of names to xargs or use ugly syntax with find.
Now, if you were running v7 on a PDP-11, there wasn't really enough memory around to do everything without arbitrary limits, so documenting them and raising error conditions when they get exceeded is excusable, and if you were running on a VAX 11/780 which had per-process memory limits around 6MB for some early operating systems, or small-model Xenix or Venix on a 286, it's similarly excusable to have some well-documented arbitrary limits. But certainly this stuff should have been fixed by around 1990.
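For reference, you can see the limit your own kernel actually enforces with:

getconf ARG_MAX

Exact values vary, but anything recent reports somewhere from hundreds of kilobytes to a few megabytes.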
In Defense of Limits (Score:3, Interesting)
Soft limits can actually mitigate bugs. If we limit processes by default to 1,024 file descriptors, and one of them hits the limit, that process probably has a bug, and would have brought the system to its knees had it continued to allocate file descriptors. Programs designed to use more descriptors can increase the limit.
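A minimal sketch of what "increase the limit" looks like in practice, using the standard getrlimit/setrlimit(2) interface (a process may raise its soft limit as far as the hard limit without any special privilege):

#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
        struct rlimit rl;

        /* read the current soft/hard limits for open files */
        if (getrlimit(RLIMIT_NOFILE, &rl) == -1) {
                perror("getrlimit");
                return 1;
        }

        /* raise the soft limit as far as the hard limit allows */
        rl.rlim_cur = rl.rlim_max;
        if (setrlimit(RLIMIT_NOFILE, &rl) == -1) {
                perror("setrlimit");
                return 1;
        }

        printf("descriptor limit is now %llu\n",
            (unsigned long long)rl.rlim_cur);
        return 0;
}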
Re:bad omen (Score:1, Interesting)
Naw. But I was thinking that 25 and 33 years kind of puts Microsoft's End of Support Policy [microsoft.com] in perspective.
Re:Great! (Score:2, Interesting)