Fix the Bugs, Secure the System
LiquidPC writes: "OpenBSD's Louis Bertrand has put his MUSESS 2002 presentation online, entitled
Fix the Bugs, Secure the System. Does an overview of OpenBSD, then explains Format String Ugliness, Buffer Overflows, The Wrong Way to Fix Overflows, along with numerous other things."
Script Kitties (Score:4, Interesting)
Look very closely at that picture (Score:3, Funny)
That was a penguin.
Re:Look very closely at that picture (Score:3, Insightful)
Re:Script Kitties (Score:2)
Great logo, I love that damn fish.
Actually, after seeing the Fish logo a little while ago (5-6 months), I thought geez
Cheers OBSD team.
Re:Script Kitties (Score:2)
The others will represent the other BSDs: FreeBSD, NetBSD, BSD/OS, and Mac OS X.
Why not just mark the stack non-executable? (Score:2, Informative)
Re:Why not just mark the stack non-executable? (Score:4, Insightful)
granted, a non-executable stack makes it significantly harder to exploit a buffer vulnerability, but it's not impossible. you can also put your shellcode in environment variables, in the heap, or various other places. if you wanted to follow your line of reasoning to completion, you'd have to have an isolated code segment, marked read-only, and everything else marked non-executable. of course, then we have the issue of how to handle run-time dynamic loading, and programs like vmware--pretty much anything that gets machine code from a source outside of itself and the libraries that are linked in at compile time.
i do agree with the idea of a non-executable stack, though. it's just regarded far too often as a panacea for buffer overflows.
Re:Why not just mark the stack non-executable? (Score:5, Informative)
Re:Why not just mark the stack non-executable? (Score:2, Interesting)
Re:Why not just mark the stack non-executable? (Score:2)
Making the stack non-executable would make certain kinds of exploits much harder to write, but it also precludes some tricks that compilers use where they actually do need an executable stack.
Can't wait for this all to get sorted out (Score:2, Interesting)
Not to mention the added overhead of making the system secure from semantic errors. Yeesh, it's a good thing I get paid a lot for my C work.
But that's all okay, because (finally) technologies like Java, C# (okay, this one sucks, but whatever), etc. will help out and provide a truly _secure_ development platform.
I just hope they still pay me as much when this stuff finally gets easy, like it should be.
Re:Can't wait for this all to get sorted out (Score:2, Insightful)
Interestingly, there's a C dialect called Cyclone [att.com] by AT&T which tries to give the best of both worlds. It doesn't allow careless code (that becomes buffer overflows, etc.) but it doesn't sacrifice performance either.
Performance and C (Score:4, Insightful)
I don't agree with your assessment that safe high-level languages necessarily perform badly. (What is the difference between speed and performance?) But, let's forget about that.
What is "OS-level" about an ftp daemon? BIND? Mozilla? Gnutella? All sorts of network (and other) applications are written in C, even though there certainly isn't any need for performance or device-level bit manipulation. (At least, I would place security way above performance!)
Cyclone is actually from Cornell, by the way. It's a good project for moving systemsy people away from C, but there are already mature programming languages that are not slow, and yet are secure by default. (Try SML or O'Caml, for instance.)
Re:Performance and C (Score:2)
I've read many good things about O'Caml (nearly as fast as C!), so I tried to read an online book to learn the language, but I failed.
Why? It's too different from a "normal" C-like language: I find functional-style programs unreadable.
I know that with O'Caml you're not really restricted to functional-style programs, but the paper was pushing that style quite strongly.
I like using C, C++, Java (well, I don't like Java, but I have no problem using it), Perl, Ruby, Python.
But I can't grasp O'Caml.
So I'm not really sure that O'Caml (and the other functional languages) would have great success replacing the C-like languages; maybe Ada or Modula would be easier "replacement languages".
Learning functional programming (Score:3, Informative)
I prefer to write functional code in LISP or Scheme, but I won't sneer at someone who uses Python functionally. It might lessen the learning curve for you, let you experiment around with functional programming, and then use what you learn there in Scheme, LISP or Ocaml.
Re:Learning functional programming (Score:2)
So using functional style here and there is not difficult, and it is used even more in Ruby than it is in Python.
But it's reading "pure functional programs" that I find very, very hard.
Unfortunately, whereas O'Caml is nearly as fast as C (compiled O'Caml, of course), Ruby is much slower.
I tend to prefer Ruby to Python, but the two are really equivalent.
Re:Learning functional programming (Score:2)
I am glad to see where python is going; it seems to have a rather clean design, and they like to take good ideas out of the programming languages community. But it is still "just" a scripting language; its semantics preclude an efficient implementation (and make it harder to develop large programs).
Re:Performance and C (Score:2)
http://cvs.sourceforge.net/cgi-bin/viewcvs.cgi/
All the FTP parsing code is in ftp.sml.
Re:Performance and C (Score:2)
I guess my point is, high-level languages exist that make writing network applications easier. Even Java, since it has garbage collection and value-semantics strings, is certainly easier than C.
> But you need to get the data to C to do anything
> useful with it in most cases anyway.
Well, this is true, but you can encapsulate these in libraries and never think about C code. Still, wouldn't you agree that the more code written in a high-level safe language, the better? (At least for security reasons?)
Re:Performance and C (Score:3, Insightful)
I can saturate my 100 megabit link and not go above a few percent of processor usage using my SML FTP daemon. The bottlenecks are definitely the disk and network. (I am using the system call to copy file descriptors, anyway, so that part happens just as fast as the C version.) Honestly, I would estimate that my server uses at most 30% more processor time than wu_ftpd. If I actually thought that was slow, I'm sure I could bring it within 10% without much effort. If you think I'm wrong, you're going to have to give me some evidence.
The vast majority of users don't need anywhere close to 100 megabits. For the people running cdrom.com, or whatever, well, maybe a high performance ftp server is in order. These people hopefully have someone who can maintain it and keep up to date on patches. But for the 99.99% of users who install linux and the default FTP server on a sub-100mbit link (ESPECIALLY home users), the security liabilities of the C version far outweigh the imperceptible speed difference.
Re:Can't wait for this all to get sorted out (Score:2, Informative)
New coding styles, sure. I wish the obnoxious bug-prone coding style of ages past would just fade out. I wish they'd make the next rev of gcc not support calls to strcpy() (and other security abominations) unless you use the --tolerate-shitty-code flag. But C isn't going anywhere for system programming. There may be something that can replace it for that purpose, but it will be a language -designed- to be a system programming language (like C originally was). That language is not Java.
Re:Can't wait for this all to get sorted out (Score:2)
And of course that switch will never exist, that's why I called the thing --tolerate-shitty-code.
Re:Can't wait for this all to get sorted out (Score:2)
Re:Can't wait for this all to get sorted out (Score:2)
Why do you think any programming methods are truly secure? People are people and will make mistakes that cause security problems. But in few languages besides C/C++ will you ever have a buffer overflow. Languages are not panaceas; they will not solve every problem, but they are one step toward producing more secure code.
Re:Can't wait for this all to get sorted out (Score:2)
Re:Can't wait for this all to get sorted out (Score:2)
Certain C++ idioms can make things easier, it's true, but you are always subject to heap corruption vulnerabilities, double-frees, etc. C++ just isn't a safe language like Java is. (Even though I'm not so keen on Java, I sure do wish people would use it or something like it when they purport to be writing "secure" code.)
Re:Can't wait for this all to get sorted out (Score:2)
So as you see, my friend, it's not totally wrong unless the JVM kept track of the integrity of the byte code in memory.
Hard as fuck, but don't be surprised if something like this possibly happens.
Wrong!! (Score:3, Informative)
If a hacker exploited one process this way, then why would he bother to exploit the java program rather than just execute whatever code he plans on executing?
You are still totally wrong and I WILL be surprised if something like that happens.
Re:Can't wait for this all to get sorted out (Score:3, Insightful)
C is a great language for the thing it was designed for -- system level (ie OS) programming. It turns out that it is mostly decent (and at least functional) at other programming tasks, and this helped it become ubiquitous.
Most of the problems in C that arise at the system level are a function of the standard library that it came with. For example, strcpy() should have never existed.
As to whether or not C is any good anywhere else... *shrug*. Minus easy-to-use regexp's, I think once you learn how to write good C code it can be a "good" language for just about anything. Still, I'm not saying I wouldn't mind seeing Java become more common in applications... So long as I can get a good static compiler and libraries for it.
Re:Can't wait for this all to get sorted out (Score:2)
while (*s++ = *t++);
This idiom is so seductively terse that I wouldn't be surprised if it was one of the motivations behind what became C's syntax, similar to the influence that my Dog Spot has had in modern Perl. In contrast I believe a language such as say Java would frown on having an assignment being used as the conditional, especially in a syntax where equality is often expressed by ==.
Unix and C were coded as a reaction to the collapse of the Multics project, initially on quite underpowered machines. Terseness was good, sometimes essential to get things running on limited resources. And the initial environment inside a controlled corporate culture was a lot more secure than having to deal with the Internet. I suspect strcpy() was hardly an unfortunate accident, it was by design.
Re:Poor old strcpy (Score:2)
Like any computer operation, strcpy() is safe given a certain set of invariants. In this case, the invariants are that both src and dest are non-null buffers, and that the src is of at most equal size to the destination buffer. However, the only way to know this is to know the size of the src (either at compile time, or strlen()), and the size of the dest.
But since you already have to know the size of the dest, why not just include that as a parameter to the copy? You've eliminated the problematic invariant, and replaced it instead with the invariant that the length parameter you pass has to be correct. Since you have to know that anyway, this should clearly be better.
The only time strcpy() ever made sense was on machines so small that it was advantageous to -not- have to check the size. As soon as this was no longer the case (which I'd argue was as early as the C64), strcpy() should have become deprecated in favor of strncpy().
The only remaining wish... (Score:4, Interesting)
But then I guess producing a high quality operating system keeps them busy enough...
Re:The only remaining wish... (Score:2, Informative)
Which is my first argument for full disclosure of security issues. Not to mention security changelogs.
Re:The only remaining wish... (Score:2)
As if they'd pay attention. And before you mod that as flamebait, ask yourself why strlcpy() still isn't part of glibc..
Re:The only remaining wish... (Score:4, Funny)
Because if it isn't invented at GNU they won't use it?
Re:The only remaining wish... (Score:4, Interesting)
There's a few huge winding threads in libc-alpha <http://sources.redhat.com/ml/libc-alpha> on this. One answer is:
These words make sense. The problem with strlcat and strlcpy is that they
assume that it's okay to arbitrarily discard data for the sake of preventing a
buffer overflow. The buffer overflow may be prevented, but because data may
have been discarded, the program is still incorrect. This is roughly analogous
to clamping floating point overflow to DBL_MAX and merrily continuing
in the calculation.
Agree or disagree, the developers of glibc don't find strlcpy to be an appropriate function based on its merits. Trying to claim otherwise is just trying to stir up trouble.
Quite stupid reasons (Score:2, Insightful)
strncpy() does almost exactly the same thing; it just doesn't bother to always NUL-terminate the resulting string.
Data discarding can be detected by checking return values; you can't do much about people not checking the result of their call. The question is, which API is less troubling: strncpy() or strlcpy()?
It is appropriate! (Score:2)
buffer overflow.
A function should always throw out data that doesn't match its parameters. If a function expects an int and the user passes a double, it gets changed back to an int. The user's data gets lost, but that's his fault for using the program incorrectly. Every C compiler known to man behaves this way. Why should strings be any different?
Re:It is appropriate! (Score:2)
No, it should signal an exceptional condition. Checking the return value of strlcpy or strncpy for "actual bytes written" means checking it against strlen.
Re:The only remaining wish... (Score:2)
2) Companies with internet connected databases get what's coming to them.
Re:The only remaining wish... (Score:2)
Re:The only remaining wish... (Score:3, Informative)
Try the Secure Programming for Linux and Unix HOWTO [dwheeler.com]
It explains the basics of secure programming and common problems with a variety of programming languages including buffer overflow and many more tricky problems.
Re:The only remaining wish... (Score:2)
The real problem with OpenBSD (Score:3, Interesting)
What's the point of a rock-solid operating system if very few people are actually using it (and of course, that happens because of missing features)? For a server, security is always the second issue - the first being the service provided.
(I'm definitely exaggerating here, so flame me as you like)
The Raven.
Re:The real problem with OpenBSD (Score:2, Insightful)
I, for one, find that the "secure by default" policy is incredibly convenient for a drop-in firewall solution (and I've done this a few times for various companies).
Re:The real problem with OpenBSD (Score:5, Insightful)
OpenBSD will never show up on my networks - but every packet that gets to my FreeBSD webservers goes through an OpenBSD firewall. I imagine that a lot of packets are touched by OpenBSD - and we'll never know it.
Re:The real problem with OpenBSD (Score:2)
Having an OpenBSD box do everything is perfectly reasonable! I respect that arrangement.
However, FreeBSD has more of a 'performance' slant than OpenBSD does - just as I trust OpenBSD for security, I trust FreeBSD and its mature softupdates for file-serving performance. I love them both - and the differences between them in userland are minor, so keeping my skillset current in both is no big deal.
I understand the appeal of a unified operating system environment, but I've found that variety works well for me. I'm considering adding Apple OS X workstations as well - so you can see I'm a bit odd.
Secure programming (Score:4, Informative)
One of the problems with secure programming is the inertia in the computer industry; most of the operating systems in widespread use today (the *nix clones and DOS derivatives, these days) were developed in a time when security did not matter; *nix has a crude root-or-not security model, and MS-DOS has no conception of security at all.
Personally, I think the solution is a system with a real security model, such as EROS [eros-os.org]. The "audit the code so that it is perfect code without bugs" approach to security does not always work [monkey.org], even with OpenBSD.
- Sam
Re:Secure programming (Score:2)
OK, great. But how can we be assured that your string library doesn't have security problems? Somewhere, someplace, memory is getting allocated and bytes are getting written, string copies are being performed, and buffers await overrunning. Auditing code so it is perfect and without bugs does work for security, it just has to take place in the libraries rather than the applications.
Re:Secure programming (Score:3, Insightful)
But it doesn't really work. It's better than nothing, but if there's one thing that years of security bugs should have taught us, it's that there are always new classes of undiscovered bugs out there. You can eliminate every known bug, but that doesn't guarantee that there are no clever exploits that you haven't figured out but that somebody else has (or will) find out how to use. What you really need is a level of security that's orthogonal to code level security. That can be something like capabilities, mandatory access controls, or even just finer grained control over what the code is allowed to do than Unix's all or nothing model.
Right now, if I want my computer's CD burning software to be able to set itself at high priority to avoid buffer underruns, I have to run it SUID root. That's insane; it means that a single programming error in what reasonably should be a user-accessible program could give somebody complete access to the system. That isn't security, it's a nightmare. We need a system where I can assign that program only the right to reset its own priority, and not complete run of the system. Yes, it's better if the code is audited and potential bugs are eliminated, but a system in which a single bug can completely compromise the whole system is badly designed.
Re:Secure programming (Score:2)
Re:Secure programming (Score:3, Interesting)
Re:Secure programming (Score:2, Interesting)
Those aren't guaranteed to be any more safe. You still have to check the value of a pointer after new, and you still need to make sure you use delete.
Different syntax, same old problems. In and of themselves, C++'s stream objects are no safer than printf and scanf.
As for the rest, those have little bearing on writing secure code. In terms of security, C++ is no better than C. Both can be used to write secure code, but you do not get it by default simply because you use C++ over C. It's a design process.
Re:Secure programming (Score:2, Interesting)
That's half true. You still have to make sure you deallocate properly (call delete or delete[] appropriately, exactly once). But you don't necessarily need to check the return from new - it throws an exception instead of returning null.
(This might not be true on all platforms. I think the standard specifies this, but am not sure. You can make a test that ensures this by trying to allocate a ridiculous amount of memory and catching an exception. I actually do test this for a library of mine, but have only run it on Linux, FreeBSD, and HP-UX with default allocators.)
Different syntax, same old problems. In and of themselves, C++'s stream objects are no safer than printf and scanf.
Not true. There's an entire class of vulnerabilities that affects printf and scanf but not cin and cout: format string vulnerabilities. I think cin and cout suck, but they are unquestionably more secure than C-style format strings + varargs.
Mind you, cin with char[] stuff is still vulnerable to buffer overflows. Don't do that. Use a string class instead.
Re:Secure programming (Score:2)
Exceptions are a very handy tool when used correctly. Yes, having to handle an "exception" and a "non-exception" version of the same thing would suck. But when used consistently (and you can make operator new and such have consistent behavior even if you don't like the system's - override them), they eliminate a lot of code. You don't need to handle every error right where it could happen; errors just slide down the stack until they are handled. Programmers are lazy enough that when they have to handle every error right where it happens (many if statements after repeated calls), they don't. So anything that makes error handling easier makes better (yes, more secure) code.
Performance-wise, I've never had a problem with exceptions. Yes, they play games with the stack behind the scenes... probably not as efficient as your goto. But unless you can show me a situation in which exceptions actually cause a performance problem, I'll continue to think they are much, much preferable to goto (ESPECIALLY when the goto is hidden away in a macro - very bad spaghetti). Remember, exceptions are exceptional - if you are throwing them regularly, something is wrong. The only one I have that's thrown even remotely close to commonly is IOBlockError - basically EWOULDBLOCK/EAGAIN in exception form.
Where I don't like C++ exceptions is debugging. Java has very, very good support for following exceptions and analyzing the stack. It's excellent for debugging; I don't even need a separate debugger anymore - stack traces are all I use them for. C++ can't match that. g++/gdb are absolutely terrible about debugging exceptions. You can't catch one in the debugger. You have very little idea where one came from. If one reaches the top of the stack, the code does an abort without printing much useful diagnostic information.
In fact, you really should try writing Java code. You'll absolutely hate the performance (if you are doing gotos to get an extra nanosecond or whatever, you'll hate virtual machines). But it does exceptions extremely well, and you'll see they are a far superior way of handling errors correctly. And maybe it will teach you that a few nanoseconds here and there aren't as important as having proper algorithms - differences of seconds or minutes. Basically any little bit of code in Java will execute more slowly than C/C++, no matter what the Java advocates say. But if you do things properly, you can have a larger program that is not much slower - by spending time you would have spent on little things to improve the overall design.
My point was that simply switching from C to C++ is not enough to buy you security. You might get some things for free, but to truly be secure, you'll still have to code securely. There's no way around that (okay, there is, but it involves moving to languages other than C or C++).
C++ will be no more than secure than C if you treat it as C. But if you take advantage of the object-oriented constructs, you can (1) remove varargs (format string vulnerabilities gone) (2) reduce code that handles arrays (buffer overflows less likely). In other languages (Java, for example), you can completely eliminate both of these. There are still other kinds of problems - though not so common or so easy to fix.
Re:Exceptions are exceptional (Score:2)
Re:Exceptions are exceptional (Score:2)
ahde said: Not with java. Exceptions are a normal part of program flow. Not of necessity, but enough of the standard APIs and documentation relies on them to make it fairly standard.
I don't buy that. Yes, just about any function that can signal an error condition does so by an exception. But if your code is correct, that will not happen many times in an execution. I.e., if you've got an inner loop that throws/catches an exception at every iteration, you're doing something wrong. Exceptions are, by definition, not regular program flow.
Secure programming HOWTO for Linux and UNIX (Score:5, Informative)
Re:Secure programming HOWTO for Linux and UNIX (Score:5, Informative)
I've also just posted my presentation on how to write secure programs; it's the presentation I gave at FOSDEM 2002 last week. Note that these presentations have different (overlapping) goals; Louis Bertrand's presentation is primarily about OpenBSD (e.g., how it's developed), while my presentation is primarily about how developers can develop secure programs. My presentation, like the book, is at http://www.dwheeler.com/secure-programs [dwheeler.com].
Presentation... (Score:5, Insightful)
The only thing there was a long list of titles with no information, old or new.
Fix the bugs? (Score:3, Funny)
Re:Fix the bugs? (Score:3, Insightful)
In the end we will see if MS is able to actually execute their goal. OpenBSD already has.
Re:Fix the bugs? (Score:2)
"Why is it that when MSFT does something like stopping to fix bugs and secure systems, we make fun of them, but if it's BSD we look at it as something we can learn from? "
M$ doesn't generally try to fix bugs so much as they try to create the perception that they fix bugs. In the end, they are perfectly content to sell highly exploitable systems so long as the ignorant masses will buy them. Witness XP.
Security: start in education (Score:5, Insightful)
#define SZ 100
char buf[SZ];
cout << "Enter courses filename: ";
cin >> buf;
This is C++! We have the string datatype for this! There's absolutely no excuse for this--especially in code that will be referenced as "good" code by everyone else in the class.
So anyway, the point of this rant is that security will remain horrible until we start teaching people to write securely in the first place.
String is not a data type (Score:2)
That said, yeah, he should use cin.getline().
Hey, at least he used #define to set the array size. Wait until you get hit with a 100,000 line program to modify where the author didn't use #define...
Re:String is not a data type (Score:2)
string is a basic_string of char
and that's about all the STL I know.
Re:Security: start in education (Score:4, Funny)
Re:Security: start in education (Score:2)
If it was, they wouldn't have to squeak by on a professor's salary.
--saint
Re:Security: start in education (Score:2, Interesting)
That isn't the prof's responsibility. He (she?) is a computer scientist, not a software engineer, and certainly not a security wacko. The relationship between computer science and software engineering is kind of like the relationship between physics and mechanical engineering -- the scientists create the knowledge and the engineers put it to use. You can't expect a physicist to design a perfect bridge, any more than you can expect a computer scientist to write secure code. It isn't what we do, and we really don't care about it. Computer science is really more about mathematics than programming; if you want to learn good design practices, take a software engineering course.
Re:CS Vs SE (Score:2)
The head of the CS department at my old college is a friend of my father-in-law, and they don't see the problem - which is why they keep producing people with CS degrees who can't work in the real world.
Re:Security: start in education (Score:2)
Software is a strange beast. There is nothing else so abstract and yet so directly practical. It defies analogy with other fields.
Re:Security: start in education (Score:2, Insightful)
I'd like to see you say that to Bruce Schneier.
Security sucks IRL. Handing people insecure code that they assume is correct is not the way to fix it. If it is not the responsibility of the person writing the code to make it secure (at least against coding errors like string formatting and buffer overflows), whose is it?
Re:Security: start in education (Score:2)
But this is RUDIMENTARY sh-t we're talking about here. If I can't trust a professor to understand why buffer overruns and memory leaks are undesirable, how am I supposed to trust anything that the professor says?
Re:Security: start in education (Score:2)
Actually, I don't think that there should be a software engineering degree. I think that CS should include more courses on working in teams and designing code that is easy for other people to work with. Nobody programs in a box these days. Everyone must work with code that someone else wrote the API for (which is another thing: every student should have to take a class on API design). Fifty years ago, when all machines were huge and the program you wrote didn't use any shared libraries, this didn't matter. But today, it is almost impossible to write all the code your program executes yourself (I mean not using any libs, no STL, just your own ASM/C/C++ code). The major exception is embedded development. At least in my embedded dev classes, they made us write all the code ourselves; YMMV.
Re:Security: start in education (Score:2)
The CS professor mentioned at the beginning of the thread probably wasn't trying to teach how to program in a way that is robust to various kinds of errors, including security errors. He almost certainly was not claiming to.
The problem is when people take the limited knowledge they acquired from the CS professor and apply it in the real world without understanding all the implications of what they are doing. That ain't the professor's fault.
The world would probably be a better place if more computer scientists had actually written the code that is being used to do real world things, because they would have cared more about abstract properties like "correctness" rather than concrete properties like "compiles, doesn't crash." And they wouldn't be programming it in C.
Re:Security: start in education (Score:2)
The problem is the instruction. A degree in computer science should mean you know a bit about computers. ALL languages that hide this from the user still depend on the exact same constructs that C exposes.
Re:Security: start in education (Score:2)
What happens when it *is* bigger than expected?
32767 + 1 yields -32768
99 + 1 gives 00 (or
What happens if the string is bigger than the string datatype can hold?
At least the prof put the critical assumption up front.
Re:Security: start in education (Score:2)
Technically, it's the first project of the course, which is in the 200-level. However, the 100-levels are taught in Java, so this is the first C++ project for everyone who didn't transfer. (I only ended up this low on the pile because I had never even heard of OOP before college.)
If it were up to me to define the course description, I'd use Python. Delimiting blocks by indentation forces people to make more readable code, which is easier to grade
Secure the system: get rid of C (Score:3, Insightful)
I can't believe there is not one mention of using a language other than C. Is it the systems community? Is it because of BSD's history?
I don't know why this idea fails to even come up. Network servers are bandwidth-limited, not CPU-limited, and writing them in a safe high-level language is not only easier, but makes buffer overflows impossible. Being easier to write also, of course, allows more time for optimization and for other security fixes. (For those that need really high performance for their gigabit links, maybe a C version and very careful maintenance is possible. For home users, this prospect is ridiculous.)
C seems almost *designed* to allow for buffer overflow exploits. If we want secure programs, we should be starting from more secure foundations!
For more detail, check my previous rant, "C lang remains inappropriate for network daemons": http://slashdot.org/comments.pl?sid=24271&cid=2629013 [slashdot.org]
Re:Secure the system: get rid of C (Score:2)
Re:Secure the system: get rid of C (Score:3)
I disagree that buffer overflows can be easily avoided. If they are so easy to avoid, why do we continue to see them? Practically every popular network software written by anyone has fallen to a buffer overflow at one time. I also disagree that C code is easy to audit and verify. Wu_FTPD is over 24,000 lines long, and I can't imagine ever trying to think through the security of such a large system on pure willpower. Safe languages give you the benefit of computer checking, and this frees up your mind to think about more important things (such as the security problems that compilers can't check!)
Re:Secure the system: get rid of C (Score:3, Insightful)
Re:Secure the system: get rid of C (Score:2)
Not to detract from your main point, which is a good one and well made, but that particular statement is pretty dubious. Some network servers are bandwidth-bound, some are CPU-bound, some are memory-bound or disk-bound, some are crappy-API-bound, some are bound by complex synchronization/serialization requirements. Most are affected by more than one of these limitations, and by the tradeoffs that must be made between them.
None of this refutes your argument that C is not the best language for servers. Its lack of type safety, range/bounds checks, proper overflow handling (which requires exceptions), garbage collection and so on are all well known. Java is a much better language in these regards while still remaining fairly familiar, and even for completely CPU-bound programs there's a compelling argument for HotSpot-style JIT as an alternative to traditional compilation. If only Java supported true MI instead of the inadequate "interface" hack/substitute (and I do understand how the requirements for code mobility made that a reasonable choice at the time). Other, more "exotic", languages such as those in the Scheme or ML categories might appeal to purists, but their chances of achieving widespread adoption will remain almost nil until the "impedance mismatch" with declaratively-oriented system programming interfaces is lessened.
Re:Secure the system: get rid of C (Score:2)
> good one and well made, but that particular
> statement is pretty dubious.
OK, you're right. I guess what I really meant to say is, "For the home user with a powerful computer and relatively small bandwidth, CPU performance of network applications is nowhere near as important as security is."
I still think SML or O'Caml could take off with hackers, but I'm still trying to figure out a way to convince them to try something new. It's really not that hard, and the interfaces to system libraries are actually rather straightforward.
Re:Secure the system: get rid of C (Score:2)
Re:Secure the system: get rid of C (Score:2)
When I say that a language is safe, I mean that the definition of the language doesn't permit "undefined behavior" (where in the case of a C buffer overflow, this leads to running of arbitrary code by an attacker). Java falls into this category. So does SML. Perl and Python do, too, though it is hard to say because they are only defined by their implementations. (So it is hard to separate the language from implementation.)
Perl and Python, because of their ability to execute commands on the system so easily or interpret code sent by an attacker, are subject to a different class of security holes. Because Perl (especially) attempts to make so much functionality available to the user, it often leads to hack-job scripts that are difficult to reason about. It is true, though, that these languages are "safe" in the sense that they don't permit crashes (that could lead to execution of machine code).
Why is it unfair to mention that perl has had overflows? I merely want to challenge the notion that slashdot folks seem to have that buffer overflows are an easy thing to avoid, and that they're only made by "bad" programmers. How can you say that "apparently they're all gone now"?
Some safe languages (or, say, natively compiled Java) don't have virtual machines. As far as I know, it's difficult to have a bug in a compiler that leads to exploitable security holes. It is certainly possible, but would take a much stranger situation than the ones that typically cause buffer overflows in C programs! And of course, fixing the compiler would fix all of the programs written using it automatically. (Well, after recompiling.)
As for source code, I'm pretty sure there are open source implementations of the JVM. I am not a big java fan, myself. Yes, I have the source to my SML compiler.
Re:Secure the system: get rid of C (Score:2)
If you look at the number of security bugs in each software, he's probably right..
Ada or Modula3 would be interesting I think..
But overcoming the "network effect" to choose a language will be very hard..
Re:Secure the system: get rid of C (Score:2)
Well, I did it for FTP (in SML) at least:
http://cvs.sourceforge.net/cgi-bin/viewcvs.cgi/tom7misc/net/mlftpd/ [sourceforge.net]
In fact, it was very easy -- it took me about a weekend to get it working, and then a few days of tinkering to polish it off. That included writing an MD5 crypt library, and mechanisms for writing network daemons that I intend to re-use for later projects.
I don't know why it is, but there is this sort of gut reaction that the slashdot crowd has about this kind of stuff (maybe it comes from UNIX's history). Since it's not written in C, it is seen as somehow inferior. Even though it supports 99% of the RFC (I would finish it off if people cared..), is 100% buffer-overflow free, much shorter and more elegant than the C alternatives, open source, etc...
Fixing buffer overflows by *ptr EBP (Score:3, Informative)
For x86 with the standard stack-frame setup, there is an answer: the length _MUST_ be less than (EBP - *ptr) if the stack isn't to be trashed. Note that other local data may well get trashed, but at least the program doesn't lose control.
The wrapper could drop early chars or trailing chars, but should signal an error in the unlikely event the code has been written with error trapping. Of course, this wouldn't work if the code was compiled with -fomit-frame-pointer [or equivalent], but there is a price for security.
The fundamental problem... (Score:2, Insightful)
If you want to make sure people don't make a particular mistake, make it impossible for them to do so. That means you either 1) fix C to eliminate all buffer overflow issues (impossible, IMO), 2) enforce proper coding technique, possibly through a special string library and/or macros (very difficult on a project as large as an OS), or 3) ditch C completely (virtually impossible given the size of the Linux code base).
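Option 2 is roughly what OpenBSD itself did with strlcpy(3): it takes the full size of the destination buffer and always NUL-terminates. A minimal sketch of its behavior (my_strlcpy stands in for the libc version):

```c
#include <string.h>

/* Copy src into a buffer of dstsize bytes, truncating if needed and
 * always NUL-terminating.  Returns strlen(src), so the caller can
 * detect truncation by comparing the result against dstsize. */
size_t my_strlcpy(char *dst, const char *src, size_t dstsize)
{
    size_t srclen = strlen(src);
    if (dstsize != 0) {
        size_t n = srclen < dstsize - 1 ? srclen : dstsize - 1;
        memcpy(dst, src, n);
        dst[n] = '\0';            /* guaranteed termination */
    }
    return srclen;                /* >= dstsize means truncation */
}
```

The caller checks `if (my_strlcpy(buf, input, sizeof buf) >= sizeof buf)` to know the input didn't fit, instead of silently overrunning the way strcpy does.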
C/C++ should have a native string datatype. (Score:3, Informative)
In C++, there is the string class in the standard library, but it's not native to the language (almost native, okay, but not fully built in the way it is in C#).
C is a language where respect for the borders of a block of memory is in the hands of the developer. Clearly, that's too old-fashioned today, since language elements can prevent the mistakes C allows developers to make.
Secure by default (Score:2)
"Four years without a remote hole in the default install!"
is nothing compared to MS-DOS's twenty year safety track record. That, and thousands of "potential" buffer overflows in realistically safe code like this:
#include <stdlib.h>
#include <string.h>

void OtherFunc(char *foo);

void SomeFunc(void)
{
    char foo[6] = "Hello"; /* [5] would silently drop the terminating NUL */
    OtherFunc(foo);
}

void OtherFunc(char *foo)
{
    /* This function is only ever called from SomeFunc,
     * which passes a string literal. This is, of
     * course, completely undocumented. You never
     * read this comment.
     */
    char *bar = malloc(strlen(foo) + 1);
    strcpy(bar, foo);
    free(bar);
}
Yes, OpenBSD is a very nice OS, but no, it isn't a magic bullet.
Re:Buggy (Score:2)
Re:Buggy (Score:3, Interesting)
Maybe I've been trolled, but thought I'd clear that up. A bug is an error in that a piece of functionality isn't right. An exploitable program or process can be a subset of it... that is, if being exploitable isn't part of the original plan.
Re:Buggy (Score:5, Funny)
Searching for "Brian bug" on Google [google.com] shows 441,000 hits. Clearly you're 20 times buggier than OpenBSD, so I wouldn't be slinging implied accusations around.
Re:You [censored] moron. (Score:3, Informative)
with the same technique, searching for '"OpenBSD bug"' (note the quotes) returns only 93 results.
but this is only using the same yard stick.
beat yourself whichever way you want.
Note that this was google groups, by the way, not generic google search.
on the generic google search, with quotes, the total results are 352 for "openBSD bug"
Re:You [censored] moron. (Score:2)
Of course this is a hit on a newspost containing the quote "I did some OpenBSD bug research, and found that there are none". One reply states that "OpenBSD bugs are dying" and the other 91 results are AOL "me too" replies to the first post.
Re:*BSD is dying (Score:2, Informative)
Re:*BSD: the pallor of death (Score:2, Informative)
Linux is for the windows convert. FreeBSD is for the unix convert.
Linux continues to copy off FreeBSD - just look at the latest VM work being done to the kernels.
I don't care what's popular - if we went by popularity, we would be saying Linux was dead.
SCREW THE NUMBERS, BSD FOR EVER!
Re:Why is this code bad? (Score:3, Informative)
Here the code copies the input string to the destination, regardless of what size the input string is.
if (strlen(dest) >= MAXLEN) {}
Here the code checks to see if the input data is larger than the buffer that it is being copied to, which is great and all except that it is being done AFTER the copy took place. It's like drinking a bottle of clear liquid in a chemistry lab and THEN checking the label to see if it's sulfuric acid.
I'm no C expert either, so I may have missed something.
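For contrast, a sketch of the right ordering, with the length check before any bytes are copied (MAXLEN and copy_input are illustrative names, not from the presentation):

```c
#include <string.h>

#define MAXLEN 16

/* Reject oversized input before copying anything; returns -1 on
 * rejection, 0 on success.  dest must have room for MAXLEN bytes. */
int copy_input(char *dest, const char *src)
{
    if (strlen(src) >= MAXLEN)   /* read the label first... */
        return -1;
    strcpy(dest, src);           /* ...then drink: known to fit */
    return 0;
}
```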