OpenBSD Team Cleaning Up OpenSSL 304
First time accepted submitter Iarwain Ben-adar (2393286) writes "The OpenBSD project has started a cleanup of their in-tree OpenSSL library. Improvements include removing "exploit mitigation countermeasures", fixing bugs, removing questionable entropy additions, and more. If you support the effort of these guys, who are also responsible for the venerable OpenSSH suite, consider a donation to the OpenBSD Foundation. Maybe someday we'll see a 'portable' version of this new OpenSSL fork. Or not."
de Raadt (Score:4, Insightful)
Yes, Theo has a bad temper and doesn't filter what he says carefully enough, but his heart is in the right place, and he's a fucking great leader. I don't mind his bad temper one bit, because it usually hits those who really deserve it. And on the other hand, he's one of the most effective open source leaders.
Backport\Upstream? Seems unlikely (Score:4, Informative)
Removal of ancient MacOS, Netware, OS/2, VMS and Windows build junk
Removal of “bugs” directory, benchmarks, INSTALL files, and shared library goo for lame platforms
Ripping out some Windows-specific cruft
Removal of various wrappers for things like sockets, snprintf, opendir, etc. to actually expose real return values
There's no doubt that OpenSSL needs work, but they seem to be needlessly combining actual security review with "break every platform that I don't like." At a minimum, anyone else trying to benefit from this will need to unravel the worthwhile security changes from the petty OS wars crap.
"Ancient." "Cruft." (Score:5, Insightful)
Re:"Ancient." "Cruft." (Score:5, Funny)
Those should be avoided at all times as well if security is the main concern. Have you ever heard of a security breach on an OpenBSD system? You probably did, and that's because it is actually newsworthy! News of a new MS security breach is chucked into the same lame bin as 'Cat is stuck in tree', 'Small baby is born', 'MH370 is finally found', 'Cat still stuck', 'MH370 still not found', 'Is this the year for BitCoins?', 'Cat climbed down himself', and other nonsense that will surprise no one at all.
(P.S. This is not meant to be snarky, cynical or negative, just slightly blasé)
Re: (Score:3)
Re:"Ancient." "Cruft." (Score:5, Insightful)
For whatever unfathomable reason, Microsoft decided to make Winsock use the BSD socket API, *but* you need to use Windows-specific error handling mechanisms, Windows-specific constants, initialization and shutdown functions, etc.
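For anyone who hasn't had the pleasure, here's a rough sketch of the kind of divergence being described (a generic illustration, not OpenSSL's actual wrapper code; the sock_* names are made up): the socket calls themselves look the same, but startup, teardown and error reporting are all Windows-specific.

    /* Illustrative only: the BSD-style call sites stay the same, but
     * setup, teardown, and error handling differ per platform. */
    #ifdef _WIN32
    #include <winsock2.h>
    typedef SOCKET sock_t;
    #define sock_errno()  WSAGetLastError()
    #define sock_close(s) closesocket(s)
    static int sock_init(void) {              /* Windows needs explicit startup */
        WSADATA wsa;
        return WSAStartup(MAKEWORD(2, 2), &wsa);
    }
    #else
    #include <sys/socket.h>
    #include <unistd.h>
    #include <errno.h>
    typedef int sock_t;
    #define sock_errno()  errno
    #define sock_close(s) close(s)
    static int sock_init(void) { return 0; }  /* nothing to do on Unix */
    #endif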
While it may sound like that... (Score:5, Insightful)
The first step to cleaning up the code is getting it into a state where you're not leaving crap in place because 'It's for something I don't understand.'
That's what got us in the current OpenSSL mess in the first place.
Additionally, once the core code is cleaned up you can always follow the changelogs and merge the legacy stuff back in (assuming they're using git, or another VCS with a good git check(in/out) module.)
Honestly anyone still running any of those OSes is probably running a 0.9 series library and thus wasn't vulnerable to this bug to begin with. Who knows how many of those alternate paths even still worked anymore.
Re:Backport\Upstream? Seems unlikely (Score:5, Insightful)
It's a fork specifically for OpenBSD. Why would they keep support for other OSes?
I agree that if they were trying to create a general replacement fork of OpenSSL, that those would be bad things, but for what they're trying to do, these are good decisions. They're trying to improve OpenBSD's security - OpenSSL is a big attack surface, and they're trying to make it smaller by removing the things they don't need.
This will complicate things both ways, going forward. Updates to OpenSSL might be harder to integrate with OpenBSD's fork (if it becomes an actual independent product, can we call it OpenOpenSSL? Or Open^2SSL?), if it touches upon the altered parts. Likewise, anyone trying to merge an Open^2SSL fix into OpenSSL might have difficulty. I expect that if OpenBSD's fork of OpenSSL becomes a separate project, one or the other will die off, simply due to all that duplicated effort.
What I expect to happen in that case is that Open^2SSL will maintain compatibility with all the platforms OpenSSH or OpenSMTPD (which are OpenBSD projects) support - pretty much any Unix-like environment, including Linux, BSD, OS X, Cygwin, and most proprietary Unices. If there's enough desire for support for other platforms, a second fork might happen to maintain them, but I honestly doubt it (Mac OS 9? Really?).
Re: (Score:3, Insightful)
It's a fork specifically for OpenBSD. Why would they keep support for other OSes?
You only fork when you want to put distance between yourself and the original; there is nothing stopping them from making changes/"improvements" to the original OpenSSL project except for scope constraint (i.e. if they just want OpenBSD to be secure) or ego. Either one stinks of selfishness. I can't criticize them directly since they are still doing all of their work for "free" and are publishing it freely, but it has to be pointed out that they are choosing the greater of two evils.
Re: (Score:2)
Re:Backport\Upstream? Seems unlikely (Score:5, Insightful)
they are choosing the greater of two evils.
No.
Eventually supporting too many screwy and ancient systems starts to cause so many problems that it is really, really hard to write solid, well tested, clear code. The Heartbleed bug was exactly a result of this. Because of supporting so many screwy platforms, they couldn't even rely on having malloc() work well. That means they had their own malloc implementation working from internal memory pools. Had they not, they would have benefited from the modern mmap()-based implementations and you'd have gotten a segfault rather than a dump of the process memory.
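To illustrate the point, here is a minimal guard-page sketch (assuming Linux-style mmap/mprotect; this is not OpenBSD's actual malloc): if an allocation ends right before unmapped memory, an over-read faults instead of quietly returning whatever a recycled pool buffer happened to contain.

    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void) {
        long page = sysconf(_SC_PAGESIZE);
        char out[64];

        /* Two pages; the second is made inaccessible and acts as a guard. */
        char *region = mmap(NULL, 2 * page, PROT_READ | PROT_WRITE,
                            MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (region == MAP_FAILED) return 1;
        mprotect(region + page, page, PROT_NONE);

        char *buf = region + page - 16;   /* 16 usable bytes, guard right after */
        memcpy(out, buf, 16);             /* in bounds: fine */
        memcpy(out, buf, 64);             /* runs into the guard page: SIGSEGV,
                                             not a silent leak of pool contents */
        return 0;
    }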
Supporting especially really old systems means having reimplementations of things which ought to be outside the scope of OpenSSL. Then you have to decide whether to always use the reimplementation or switch on demand between the custom one and the system one and whether or not to have some sort of quirk/bug correction.
This sort of stuff starts to add up and lead to a maintenance nightmare.
What OpenBSD is doing, throwing out all the accumulated crud and keeping the good parts, is a fine way to proceed. It will almost certainly be portable to the other BSDs, OS X and Linux since they provide similar low-level facilities. I doubt a port back to Windows would be hard either, because modern Windows provides enough sane facilities that it's generally not too bad for heavily algorithmic code like this.
Basically there's no way for them to get started except to first rationalise the code base and then audit it.
Re:Backport\Upstream? Seems unlikely (Score:5, Interesting)
If they end up stripping it down to a minimal library with the core functionality, cleaning up the public interface (e.g. exported functions), and making it easy to create your own OS-specific wrapper around it, then they are actually doing something that should have been done in the first place. If they do it right, it will become much more popular (and most likely more light-weight and secure) than the current OpenSSL project.
Re: (Score:2, Insightful)
I think the point is to have as little code as possible, and to have no code that isn't covered by their tests. Both of which are excellent ideas if you want to write secure code.
Re: (Score:3)
Indeed. Most coders just add more code when they run into an issue. That is a problem for normal code, but it is death for anything security-critical.
Re:Backport\Upstream? Seems unlikely (Score:5, Insightful)
It's not remotely about petty OS wars. Complexity is bad for security, mmkay? If you want a newer version of OpenSSL for OS/2, Netware, or pre-OS X Mac OS, I'd really like to know what exactly you are doing. Dropping those platforms is the right thing.
Re: (Score:2)
OpenSSL can just backport anything OpenBSD fixes. (Score:2)
There's no doubt that OpenSSL needs work, but they seem to be needlessly combining actual security review with "break every platform that I don't like." At a minimum, anyone else trying to benefit from this will need to unravel the worthwhile security changes from the petty OS wars crap.
I don't see this as a problem. Since OpenBSD is working on their own, for-themselves, branch, they can fix it any way they want. If they do a good job (as expected), the OpenSSL project can then backport their fixes into their own tree.
Re: (Score:2)
Indeed. If there are still a significant-enough number of OS/2 users out there to warrant OpenSSL upgrades, then someone will fill the need. But we all know there aren't enough users of these old OSes to actually warrant ancient code being maintained.
Re:de Raadt (Score:5, Interesting)
He is technically incapable of evaluating what's actually happening, and likes to go off-list when he's angry and wrong [fbcdn.net].
The freelist is not an "exploit mitigation countermeasure", but rather standard allocation caching behavior that many high-rate allocation applications and algorithms implement--for example, ring buffers [wikipedia.org] are common as all hell. The comment even says that it's done because performance on allocators is slow.
Further, the only bug in Heartbleed was a READ OVERFLOW BUG caused by lack of input validation. It would actually read that a user said "This heartbeat is 65 thousand bytes long", allocate 65 thousand bytes plus room for instrumentation data, put instrumentation data in place, and then copy 65 thousand bytes from a request that was 1 byte long. While there are mitigation techniques, most allocators--anything that uses brk() to allocate the heap for allocations smaller than say 128KB (glibc's ptmalloc and FreeBSD's malloc both use brk() until you ask for something bigger than 128KB, then use mmap())--don't do that. That's how this flaw worked: it would just read 64KB, most likely from the brk() area, and send it back to you.
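For the curious, the shape of the flaw looks roughly like this. This is a simplified sketch, not the actual OpenSSL source; build_heartbeat_reply and its parameters are made up for illustration, and the commented-out check is roughly what the eventual fix added.

    #include <stdint.h>
    #include <stdlib.h>
    #include <string.h>

    /* record: what the peer actually sent; record_len: how long it really is. */
    unsigned char *build_heartbeat_reply(const unsigned char *record,
                                         size_t record_len, size_t *out_len) {
        uint16_t claimed = (record[1] << 8) | record[2];  /* attacker-controlled */
        (void)record_len;  /* the buggy code never consults the real length */

        /* The missing validation, roughly what the fix added:
         * if (1 + 2 + claimed + 16 > record_len) return NULL; */

        unsigned char *reply = malloc(1 + 2 + claimed + 16);
        if (reply == NULL) return NULL;
        reply[0] = 2;                             /* heartbeat response type */
        reply[1] = record[1];
        reply[2] = record[2];
        memcpy(reply + 3, record + 3, claimed);   /* reads far past a 1-byte payload */
        *out_len = 1 + 2 + claimed + 16;
        return reply;
    }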
Read overflows don't kill canaries, so you wouldn't detect it except with an unmapped page--a phenomenon that doesn't happen with individual allocations smaller than 128KB in an allocator that uses brk(), like the default allocator on Linux and FreeBSD. Write overflows would kill canaries, but they actually allocated enough space to copy the too-large read into. And the code is, of course, correct for invalid input.
Theo made a lot of noise about how all these other broken things were responsible for Heartbleed, when the reality is that one failed validation carries 100% of the weight for Heartbleed. If you perfectly cleaned up OpenSSL except for that single bug, slapped it on Linux with the default allocator, and ran it, it would still have the vulnerability. And it only behaves strangely when being exploited--and any test would have sent back a big packet, raising questions.
There was never really any hope that this was going to be caught before it was in the wild and "possibly had leaked your SSL keys". It may have happened sooner, maybe, maybe not; but it still would have been a post-apocalyptic shit storm. And all those technical mitigations Theo is prattling on about would have helped if OpenSSL were cleaned up... AND if those technical mitigations were in Linux, not just OpenBSD.
Re:de Raadt (Score:5, Informative)
He was being somewhat sarcastic, because OpenBSD's allocator is in contrast to that: it does try to separate allocations specifically to mitigate Heartbleed-style vulnerabilities.
In other words, the OpenBSD allocator does have exploit mitigation, and the OpenSSL freelist acts as a countermeasure to those mitigation capabilities, whether that was intended or not.
It says it's slow on "some platforms", yet they disabled it on all and then didn't test the alternative.
But of course everyone knows it's way better to quickly implement a dramatically awful security vulnerability than to do things slowly and correctly.
Re: (Score:3)
Yes, yes, of course, but you're missing the point. We'll ignore that sarcasm doesn't carry, and that the "exploit mitigation" stuff in OpenSSL has been repeated again and again without a hint of irony, so one may be led to believe such a thing exists in OpenSSL.
First off,
And all those technical mitigations Theo is prattling on about would have helped if OpenSSL were cleaned up... AND if those technical mitigations were in Linux, not just OpenBSD.
Here's the thing: OpenBSD is a hobby OS. It's like Linux with grsecurity: yes they've mitigated all this shit ages ago, yes people run grsecurity in production, yes anything that grsecurity "would have prevented" is effectively unpro
Re:de Raadt (Score:4, Interesting)
OpenBSD is a hobby OS
*every* community-driven operating system is a hobby OS. Is that relevant?
It's like Linux with grsecurity
Maybe for you. Not for me. And it is actually easier to audit and it has a smaller kernel. And a kernel debugger. Something that is quite handy to find and troubleshoot problems.
(...) I would avoid the bet.
There are also more Windows machines than *nix machines with an internet connection. Some little-known RTOSes are way more popular than Linux and BSD combined. Your point is?
OpenBSD's allocator is what we call "Proof of Concept".
Monolithic kernels are a proof of concept of monolithic designs. Every existing implementation of something is a proof of concept of the given concept. Again, what is the point?
It exists somewhere in real life, you can leverage it (I've leveraged proof-of-concept exploit code from Bugtraq in actual exploit kits), but it's not this ubiquitous thing that's out there enough to have an impact on the real world.
While OpenBSD itself is a niche product, its team is very well known for producing hugely popular products, including OpenSSH and PF. BSD server usage is low, but there are no real stats on middleware - routers, storage units, set-top boxes, closed devices, etc. FreeBSD is reportedly used in the PlayStation - that's more users than most Linux distros have. Is popularity relevant to the discussion? Not really.
Suntrust, Bank of America, slashdot, the NSA, Verisign, Microsoft, Google--is running a non-OpenBSD operating system with no such protections
I'd actually be very surprised if none of these companies use OpenBSD goodies - Either OpenBSD by itself, or middleware BSD products. And then you can add to this OpenSSH, OpenBGPD and a couple more interesting products. Microsoft used OpenBSD as a basis for the Microsoft Services for Unix. But again - is it relevant to the discussion? Not really.
And again, the concept of allocation caching is common. Freelists are used when allocations are all the same size; that gripe is essentially that a valid data object is not valid because they dislike it. Plenty of software uses freelists, and freelists are a generalization of the object pool software design pattern used for database connection caching in ORMs, token caching in security systems, and network buffers (ring buffer...). I would be surprised if OpenBSD's libc and kernel didn't make use of freelists or object pools somewhere.
So, you're saying that optimizing memory allocation in privileged space is the same as optimizing memory allocation on a userland library? That managing fixed-sized, out-of-the-userspace-address-pool structures is the same as trying to be smarter than the local malloc implementation? No system is perfect, but it generally sounds like a very bad idea.
In short: there's a lot of whanging on that OpenSSL made OpenBSD's security allocator feature go away, and that (implication) if OpenSSL had not done that, then an exploit attempt would have come across one of the 0.01% of interesting servers running OpenBSD, and a child Apache process would have crashed, and some alarms would have gone off, and someone would have looked into the logs despite the server humming along just fine as if nothing had happened, and they would have seen the crash, and investigated it with a debugger, and then reproduced the crash by somehow magically divining what just happened, and done so BEFORE THE WHOLE WORLD HAD DEPLOYED OPENSSL 1.0.1.
So, you're assuming there aren't compromised OpenBSD servers because of this. And that no one actually tried to exploit it in OpenBSD. The fact is that no one knows exactly the extent of the damage of this vulnerability, or if it could have been detected way earlier by using OpenBSD or Linux with grsecurity or whatnot. And
Re:de Raadt (Score:5, Insightful)
Actually, it is you who are wrong.
Theo's point from the beginning is that a custom allocator was used here, which removed any beneficial effects of both good platform allocators AND "evil" allocator tools.
His response was a specific circumstance of the poor software engineering practices behind openSSL.
Furthermore, at some point, openSSL became behaviorally dependent on its own allocator -- that is, when you tried to use a system allocator, it broke -- because it wasn't handing you back unmodified memory contents you had just freed.
This dependency was known and documented. And not fixed.
IMO, using a custom allocator is a bit like doing your own crypto. "Normal people" shouldn't do it.
If you look at what OpenSSL is:
1) crypto software
2) that is on by default
3) that listens to the public internet
4) that accepts data under the control of attackers
...you should already be squarely in the land of "doing every possible software engineering best practice possible". This is software that needs to be written differently than "normal" software; held to a higher standard, and correct for correctness' sake.
I would say that, "taking a hard dependence on my own custom allocator" and not investigating _why_ the platform allocator can no longer be used to give correct behavior is a _worst practice_. And it's especially damning given how critical and predisposed to exploitability something like openSSL is.
Yet that is what the openSSL team did. And they knew it. And they didn't care. And it caught up with them.
The point of Theo's remarks is not to say "using a system allocator would have prevented bad code from being exploitable". The point is "having an engineering culture that ran tests using a system allocator and a debugging allocator would have prevented this bad code from staying around as long as it did"
Let people swap the "fast" allocator back in at runtime, if you must. But make damn sure the code is correct enough to pass on "correctness checking" allocators.
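Even something as small as this kind of test-only wrapper would do: a sketch of the "correctness checking" idea, not OpenBSD's malloc or OpenSSL's debug allocator, with hypothetical dbg_malloc/dbg_free names. Code that depends on getting its freed contents back, or that scribbles past the end of a block, fails loudly in the test suite instead of silently in production.

    #include <assert.h>
    #include <stdint.h>
    #include <stdlib.h>
    #include <string.h>

    #define CANARY 0xA5A5A5A5A5A5A5A5ULL

    void *dbg_malloc(size_t n) {
        unsigned char *p = malloc(sizeof(size_t) + n + sizeof(uint64_t));
        if (!p) return NULL;
        memcpy(p, &n, sizeof n);                       /* remember the size       */
        uint64_t c = CANARY;
        memcpy(p + sizeof(size_t) + n, &c, sizeof c);  /* canary after the block  */
        return p + sizeof(size_t);
    }

    void dbg_free(void *q) {
        if (!q) return;
        unsigned char *p = (unsigned char *)q - sizeof(size_t);
        size_t n;
        memcpy(&n, p, sizeof n);
        uint64_t c;
        memcpy(&c, p + sizeof(size_t) + n, sizeof c);
        assert(c == CANARY);      /* catch writes past the end            */
        memset(q, 0xDD, n);       /* poison: catch naive use-after-free   */
        free(p);
    }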
Re: (Score:2)
OpenSSL is broken in many ways. I dispute that the specific citations are technically correct: that unbreaking use of the system allocator would have made Heartbleed not happen; and that heartbleed was an allocation bug (this was alleged as a use-after-free early in this whole theater, but it's not; it's a validation bug, it reads too much, and it allocates an appropriate buffer of appropriate size to write to).
Look up freelists and object pooling. These design patterns are common and considered impor
Re: (Score:2)
Bitch about this instead [regehr.org]. A fucking static checker found heartbleed.
No, it says, "Coverity did not find the heartbleed bug itself", which very clearly means that Coverity did not find Heartbleed. And Coverity themselves confirm that Coverity does not detect the problem [coverity.com] (though in response, they've added a new heuristic that does detect it, but no word on how the new heuristic affects the false positive rate).
Re: (Score:3)
Rigorous practices and robust design concepts universally reduce the likelihood of bad things. Trust me, I'm a risk management guy, I know how this shit works.
So then a bug shows up which leaks the content of memory mishandled by that layer. If the memory had been properly returned via free, it would likely have been handed to munmap, and triggered a daemon crash instead of leaking your keys.
The above quote by Theo spawned a whole lot of chatter about how Heartbleed was caused by double-free or use-after-free. The above says that memory would be handed to munmap(), which people have interpreted as to say, "Oh, OpenSSL was copying a buffer that it had already freed, that's heartbleed". No, not really. There are all kinds of tangenti
Re: (Score:2)
"not an "exploit mitigation countermeasure", but rather standard allocation caching behavior"
There is a bug where memory is used after being freed which is hidden by the custom non-clearing LIFO freelist, i.e. you could realloc and get your record back. If you tried to use a normal malloc it would fail. Clearing memory exposes other bugs, so no one was willing to clear memory.
"The comment even says that it's done because performance on allocators is slow. "
It was slow on some architectures, at the time, so t
Re: (Score:3)
There is a bug where memory is used after being freed which is hidden by the custom nonclearing LIFO freelist, i.e. you could realloc and get your record back.
Yeah, and that bug is unrelated to Heartbleed: heartbleed reads beyond the end of an allocation, and allocates a big enough allocation to store all that, and then sends it and frees the allocation. In its own little atom of horrible mishandling of program execution, it's fully consistent except for reading off the end of an allocation. There are no double-frees or use-after-frees causing heartbleed; the entire bug is a memcpy() that's too long.
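For readers who haven't seen one, a non-clearing LIFO freelist is roughly this (an illustrative sketch with made-up pool_alloc/pool_free names, not the OpenSSL implementation): the most recently freed buffer comes back on the next allocation, old contents and all, which is exactly why a use-after-free can appear to "work".

    #include <stdlib.h>

    struct node { struct node *next; };
    static struct node *freelist;         /* one size class, for illustration */
    #define BUF_SIZE 256

    void *pool_alloc(void) {
        if (freelist) {                   /* reuse most-recently-freed buffer */
            void *p = freelist;
            freelist = freelist->next;
            return p;                     /* old contents still there         */
        }
        return malloc(BUF_SIZE > sizeof(struct node) ? BUF_SIZE
                                                      : sizeof(struct node));
    }

    void pool_free(void *p) {
        struct node *n = p;               /* no clearing, no unmapping        */
        n->next = freelist;
        freelist = n;
    }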
Thus they subverted the OSS benefit of "many eyeballs" and did so ON SECURITY SOFTWARE.
You can read the code. Hell, a static checker found Heartbleed [regehr.org]
Re: (Score:2)
If Theo had a more constructive outlook, this would go a lot different and we'd all benefit.
Instead of screaming vitriol at someone's app architecture inadvertently defeating his platform-specific feature, he should be asking why they felt the need to go with that architecture (hint: it was a perfectly reasonable need), and perhaps if he can do something to make integrating his security feature easier for that type of architecture.
Like you say, freelists are an extremely common design choice when performanc
Re: (Score:3)
I don't think there's something broken about OpenBSD's allocator. Performance trade-off, yeah. Broken, no.
I think the target of all this ire is a lot of technically incorrect bullshit, like calling freelists irresponsible programming or claiming that something in the allocator broke OpenSSL (it didn't; the bug was wholly self-contained), or trying to claim that OpenBSD would have caught the bug for some odd reason or another when it can only be caught in that way (with secure allocators) when actively exploited.
Re: (Score:2)
Re: (Score:2)
Yeah. I don't agree with the conjectures about how fixing X problem would have caught this, because it would have to be tested/exploited to be caught even in the absolute most strict case (100% of all code except that one function correct). If somebody had thought to do a test for a bad payload length, they would have thought to fix it... or gotten a huge packet back and gone wtf? And if you tested it otherwise, you wouldn't trigger any invalid behavior.
A static checker did specifically identify he
Re: (Score:3)
The person who wrote the RFC also wrote the buggy code, so it may not be quite so clear.
I just donated (Score:4, Funny)
I think my CC number got stolen.
Re:I just donated (Score:5, Funny)
I know. Pay your bill, your card is being refused by everybody.
Anyone know if there are regression tests? (Score:3)
If they're doing a large-scale refactoring, a regression test suite is really advisable (in addition to static code analysis) to ensure that they don't create new, subtle bugs while removing things that might look like crud. Does anyone know how good their test coverage is?
Re: (Score:2)
... Does anyone know how good their test coverage is?
Not obvious to me if by "their" you mean OpenSSL or OpenBSD (*) but it seems to me the answer is "not sufficient." I'm sure it will be enhanced to cover Heartbleed.
(*) OpenSSL, OpenBSD ... phrased that way it sounds like a match made in heaven! ;)
Re: (Score:2)
Whatever they're using as the baseline of their fork. There are already patches that fix Heartbleed (the simplest being "don't support heartbeats", which are not mandatory in the spec anyway). If they're taking this as an opportunity to do radical cleanup, that's great -- but I'm sure we'd all feel better if regression tests were in place to reduce the risk of introducing another subtle bug. Major surgery on critical security infrastructure should not be rushed.
Re:Anyone know if there are regression tests? (Score:4, Informative)
Added to this, most of what they're doing is removing code and exposing the underlying code to the safeguards they already have in place at the OS level. Refactoring suddenly becomes a LOT easier, as there's less to test. They're pruning their tree, essentially.
The beauty is that the way the handlers are designed at the OS level (and have already been tested against all other packages) means that if there IS a failure, it'll immediately cause a hard fail in OpenSSL -- which might seem bad, but it means that it'll be immediately reported and fixed, and the actual problem will be easy to find. It also means that there's less likelihood of an attacker being able to leverage the bug other than to perform denial of service attacks.
Re:And they've already stopped (Score:5, Interesting)
$30,949 is how much the OpenBSD Foundation received in donations in 2013. [openbsdfoundation.org] That has to get fixed as their expenses were $54,914 and only a one-time transfer from an old account covered the deficit.
The community that depends on OpenSSH, OpenNTPD and the like needs to figure out how to support these projects.
Personally I'd like to see the Foundation offer targeted donations to specific projects with a percentage (~20% perhaps) going into the general operations fund. I bet there are a bunch of people who would throw a hundred bucks at OpenSSH but would be concerned that a general donation would go to some odd thing Theo is doing (whether that be fair or not).
And if "Fixing OpenSSL" were one of the donation options, then hold on to your hats - I think we're all in agreement on this. We do know that the folks currently working on the projects are paid by others but if the Foundation can get enough money to offset expenses then it could actually do some development work and possibly finally take care of some sorely-neglected tasks on a few of these codebases.
Re:And they've already stopped (Score:4, Informative)
Apparently you didn't read the second news item on the OpenBSD news site, where they reached their 2014 funding goal of $150,000 last week.
Re: (Score:3, Interesting)
Should they not be getting some tax funding? It's ironic: projects like this, if they fail, can take down every company that uses them, and every company that uses these projects pays taxes. Should tax money not be given, without any strings attached (like no NSA back doors or the sort), to make sure all bugs are taken care of? Too many companies profit from SSL, yet the project that maintains it is on very weak footing.
Re: (Score:2)
$30,949 is how much the OpenBSD Foundation received in donations in 2013. [openbsdfoundation.org]
That's about $29,000 more than OpenSSL receives every year, and still $22,000 more than they received this month when the entire world realized that they had been freeloading and scrambled to make themselves look good by making one-time donations.
Re: (Score:3)
$30,949 is how much the OpenBSD Foundation received in donations in 2013.
And yet... I heard OpenSSL itself gets at most $2000 [marc.info] in a typical year. Despite tens of thousands of banks, retailers, hardware manufacturers, software manufacturers, all relying on their code in a security critical fashion to support their business activities. The MOST the OpenSSL project gets in contributions is a mere shilling?
And no real support for high quality code review, maintenance, and release management. Just support
Wonderful! (Score:2, Insightful)
I have little doubt that the OpenBSD team will significantly improve the code.
"Please Put OpenSSL Out of Its Misery" (Score:5, Informative)
We also have a comment from the FreeBSD developer Poul-Henning Kamp [acm.org].
He starts by saying "The OpenSSL software package is around 300,000 lines of code, which means there are probably around 299 bugs still there, now that the Heartbleed bug — which allowed pretty much anybody to retrieve internal state to which they should normally not have access — has been fixed." After that he notes that we need to ensure that the compiler correctly translates the high-level language to machine instructions. Later Kamp rants a bit about the craziness of CAs in general — would you trust "TÜRKTRUST BİLGİ İLETİŞİM VE BİLİŞİM GÜVENLİĞİ HİZMETLERİ A.Ş."? Then he lists some bullet points about things that are wrong in OpenSSL:
- The code is a mess
- Documentation is misleading
- The defaults are deceptive
- No central architectural authority
- 6,740 goto statements
- Inline assembly code
- Multiple different coding styles
- Obscure use of macro preprocessors
- Inconsistent naming conventions
- Far too many selections and options
- Unexplained dead code
- Misleading and incoherent comments
"And it's nobody's fault. No one was ever truly in charge of OpenSSL, it just sort of became the default landfill for prototypes of cryptographic inventions, and since it had everything cryptographic under the sun (somewhere , if you could find out how to use it), it also became the default source of cryptographic functionality. [...] We need a well-designed API, as simple as possible to make it hard for people to use it incorrectly. And we need multiple independent quality implementations of that API, so that if one turns out to be crap, people can switch to a better one in a matter of hours."
Re:"Please Put OpenSSL Out of Its Misery" (Score:5, Informative)
I agree that the OpenSSL code base is very bad. (I was doing some work based on modifying the library recently and I had to hold my nose.) However, I take exception to some of this:
Otherwise known as "the only sane way to simulate exceptions in C". Seriously. Read up on how "goto" is used in low-level code bases such as OS kernels, instead of citing some vague memory of a 1960s paper without understanding its criticisms.
Otherwise known as "making the thing go fast". Yes, I want the bignum library, or hashing algorithms, to use assembly. Things like SIMD make these tasks really effing fast and that is a good thing...
Right on. (Score:5, Interesting)
Otherwise known as "the only sane way to simulate exceptions in C". Seriously. Read up on how "goto" is used in low-level code bases such as OS kernels, instead of citing some vague memory of a 1960s paper without understanding its criticisms.
People who don't use goto for error handling in C more often than not either have incorrect error handling or way too much error-prone duplication of resource cleanup code. It makes sense to very strictly warn newbies away from goto, much in the same sense that you warn them away from multithreading. You don't want them used as a universal hammer for every nail in the code. At some point, though, people need to jump off the bandwagon and learn to respect, not fear, these things that actually have some very compelling uses.
Re: (Score:3)
Heh. I've written many gotos in C. Almost universally, the label they go to is called "error_exit" which sits right before resource deallocators and the return statement. :-)
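For anyone who hasn't seen the pattern, it looks roughly like this (process_file and its resources are hypothetical, just to show the shape):

    #include <stdio.h>
    #include <stdlib.h>

    int process_file(const char *path) {
        int rc = -1;
        FILE *f = NULL;
        char *buf = NULL;

        f = fopen(path, "rb");
        if (f == NULL)
            goto error_exit;

        buf = malloc(4096);
        if (buf == NULL)
            goto error_exit;

        if (fread(buf, 1, 4096, f) == 0)
            goto error_exit;

        rc = 0;                      /* success */

    error_exit:
        free(buf);                   /* free(NULL) is a no-op */
        if (f != NULL)
            fclose(f);
        return rc;                   /* every path unwinds the same way */
    }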
Re: (Score:2, Interesting)
It's not just the library that's in shit shape, the openssl commandline tools themselves are annoying. Getting it to generate a UCC/SAN certificate with multiple hostnames is a hoot (you hardcode the list of alternate names into the openssl configuration file. Then when you want to create a different certificate you hardcode a new list of alternate names into the openssl configuration file), and just using it for its intended purpose basically requires that you either completely understand SSL certificate
Re: (Score:2)
You do know that you can use the SAN environment variable to create UCC certs without having to modify openssl.cnf, right? It's much easier to create multiple UCC certs that way.
Ted Unangst's article (Score:5, Informative)
Ted Unangst wrote a good article called "analysis of openssl freelist reuse" [tedunangst.com]
His analysis is a very good read.
Re: (Score:2)
Re: (Score:2)
The most concerning part comes at the end: "Here's a 4 year old bug report showing the issue, and another, and another". OpenSSL is supposed to be security software... this shouldn't happen.
OpenSSL OR... (Score:3, Interesting)
Or simply support and use GnuTLS!
Both have their own set of problems, but at least now you have an alternative.
Re: (Score:2)
OpenBSD tries extremely hard to make the entire system BSD-licensed. AFAIK the only non-BSD item in a default installation is GCC, and that is an optional-but-default item. There are a few optional, not-compiled-by-default and rarely-used kernel modules that are GPL (an FPU emulator for very early x86 systems is the only one I recall), and of course you can install non-BSD packages as you wish, but the base OS and all components are BSD-licensed.
GnuTLS, naturally, uses the LGPL, which is probably why they
What other dragons are out there? (Score:2)
The single Heartbleed vulnerability awakened people to actually take a deep look into the OpenSSL code. Where were all the "eyeballs" before that? Now we are in a position where suddenly the whole OpenSSL code needs massive rearchitecting. It just makes me cringe when I think what other OSS projects possibly have similar serious problems, possibly in mission critical components. It's starting to be quite obvious that there are not enough people to actually take a look under the hood and say, hey this quite
Theo de Raadt redeems himself (Score:2)
I've not liked some of the things I've heard from Mr. de Raadt in the past because they seemed to be less fact than emotion, but in this case Theo has redeemed himself in a big way.
Like it or not, OpenSSL is now one of the most important pieces of software in the world. OpenSSL protects people's bank account numbers, credit card numbers, medical records, and employment records. OpenSSL protects corporate and government secrets (hopefully in combination with other defensive tactics). OpenSSL is not used for
Rights and Wrongs of good code. (Score:2)
A while back, a programmer friend of mine was guiding me in programming. Back then I wrote some C code, and in an error handling routine, I used the dreaded GOTO statement.
Boy did this unleash his complete wrath. He could talk about nothing else afterwards.
I was berated, screamed at, told I will never be a good programmer. I should give up now. GOTO's are the tools of a lazy ignora
Re:Okay, Go! (Score:5, Informative)
Re:Okay, Go! (Score:5, Informative)
Obviously since OpenBSD is running their fork of OpenSSL 0.9.8 which essentially doesn't have this exploit, this is just a shameless plug.
OpenBSD 5.3 - 5.5 was affected: see their Security Advisories [openbsd.org]
Re: (Score:3)
OpenBSD 5.3 was running 1.0.1c which was affected by the bug. This is not PR. It is fixing bugs in a critical component of their OS.
Re: (Score:2)
Correction of myself: That should be 5.3 and 5.4 which both had 1.0.1c.
Re:Okay, Go! (Score:4, Informative)
Yeah, I just read their security advisory. I was basing my information on the original Heartbleed slashdot article which listed OpenBSD as unaffected.
(Note to self: Verify all thy claims before making a near-first comment on slashdot...)
Re:Okay, Go! (Score:4, Interesting)
Not necessarily. It looks like they're removing what they can't support, such as VMS, Netware and OS/2. The few people that care can still use the original OpenSSL code.
I'd expect them to ensure it supports the hardware platforms OpenBSD supports at the very least. Then, if they go the "portable" route like they did for OpenSSH, support for the other Unix and Unix-like systems.
http://www.openssh.com/portable.html [openssh.com]
More power to them.
Re:What about a re-implementation... (Score:4, Insightful)
Re: (Score:2, Troll)
C is a perfectly safe language if used properly.
As safe as juggling very long, very, very greasy sharp knives while skating on very thin ice...
Re: (Score:3)
And yet you'll trust languages implemented in it?
Re: (Score:2)
And yet you'll trust languages implemented in it?
Why do you presume that of me?
(Back in the day, languages bootstrapped themselves. Now GOML!)
Re:What about a re-implementation... (Score:4, Informative)
And that changes things, how? C++ allows all the same "unsafe" things as C does. Have you ever used C++ before?
Re: (Score:3)
But C++ gives you the tools to automatically catch various kinds of errors and memory leaks. If you use class destructors correctly, you can ensure that an object is automatically closed properly when it goes out of scope. There are a lot of standard classes such as smart pointers that are specifically designed with this kind of programming in mind. It's not 100% foolproof but it is a lot more reliable than having to remember to do it all manually in C (or C masquerading as C++).
None of these would have stopped the Heartbleed bug.
Re: (Score:2)
Only for those of small skill. Those people should stay away from any security-critical code anyways.
Re: (Score:2)
Given the number of bugs (security and otherwise) in so many applications, there must be 10 metric trainloads of "small skill" programmers out there.
Re: (Score:2)
I'll give you ubiquitous and portable, but C is not remotely a "perfectly safe" language. That's a ridiculous claim.
Re:What about a re-implementation... (Score:5, Insightful)
So if C is so bad why should we trust the languages that are implemented in it? You do realize that most of these "safe" languages are written in C, right?
Re: (Score:2)
First: Many languages are largely or even entirely self-hosted in terms of compiler and/or runtime. This means that if they provide, say, better type safety than C, those benefits carry over to the portions of the language that are self-hosted.
Second: the directness of the problem. It's easy for a C program to allow a very direct exploit, e.g. Heartbleed. I'm not saying easy to find, or that you'll necessarily get what you want to see every time, but the bug itself is about as simple as you can possibly get.
Re:What about a re-implementation... (Score:5, Insightful)
If your language runtime has a bug instead, it's much more likely to be a very indirect one, because now not only do you likely have to cause a specific behavior in the program itself, but that behavior has to trip up the runtime in a way that causes that bug to lead to something bad.
Yeah and? Has that stopped all the exploits of the Flash runtime and the Sun/Oracle JVM? Nope. In fact, those two are among the most exploited pieces of userspace software on the OS.
Re: (Score:2)
Coincidentally, they're also the two applications that are internet-facing the most. Oh wait, that's not a coincidence at all. If you put C into that role, and let your browser download and run C programs, the result would make Java and Flash look like Fort Knox.
(NaCl isn't C, I will point out, and is closer to a better Java implementation than it is to compiling and running C.)
Re: (Score:2)
I can't say for Flash, but most of the headline Java bugs related to the web start API's / DLL's which are actually outside of the core JVM sandbox (though there were a few in-sandbox flaws which were patched as well). You could say the same thing if there were gaping holes in Jlaunch, or Oracle's JVM API, etc.. The only difference is that web start for better or worse is included in the standard JRE release.
You don't hear about the countless exploits possible in java based server code, considering that bas
Re: (Score:2)
So if C is so bad why should we trust the languages that are implemented in it? You do realize that most of these "safe" languages are written in C, right?
I'm just going to link to a previous comment [slashdot.org] where I answered the same question.
TLDR: Languages aren't "written in" anything, and the language the compiler is written in has no bearing on the capabilities of the language it compiles. (We're all Turing complete here.)
Re: (Score:2)
If C-programmers make secure programs 85% of the time, it's pretty safe to make something with.
It's not safe to make everything with.
It's like how desks and other furniture will have rounded edges and corners instead of sharp points. Sure, you trust adults not to bump into inanimate objects around the office and hurt themselves, but would you bet on it never happening?
Re:What about a re-implementation... (Score:5, Interesting)
Re: (Score:3)
And all these vaunted "safer languages" are written in... C.
Re: (Score:2)
Every so called "safer" language (than C) is also less efficient.
Depending on what you're doing. Compiled languages without pointers (such as Fortran and Pascal) are usually much faster for mathematical tasks: since side effects are so few (essentially concerning globals only), the compiler can much more easily automatically vectorise, reorder, factorise and unroll code. They also happen to be safer, since you cannot simply tell the computer to write at a memory address, and buffers can be bounds-checked, so there are no overflows or underflows.
Problem is, they generally suck f
Re: (Score:2)
While it might be nice to use a safe(r) language, can't we at least have a compile option in C that adds bounds checking?
And while you're at it, how about making it impossible to execute code that isn't in the code segment and write protecting the code segment.
Re: (Score:3)
That is complete BS. Code is insecure because the coders suck. Language makes no difference. In a "safe" language, the bugs are just harder to find.
Re: (Score:2)
Actually, a certain amount of effort is required to understand code. Depending on how the language provides its interfaces, the abstract and logical ways in which people think and which programmatic processes normally flow based on how people envision them are either simple to interpret (for humans) or immensely complex to interpret.
Because of this, one language may have an advantage over another in terms of both speed of program implementation and frequency of defects. Fewer objects for programmers to tr
Re: (Score:2)
With C you have to be hypervigilant
And yet the vast majority of C programmers... are not hypervigilant.
Re: (Score:2)
Let's not single C out as if it is the exception to the rule.
While true, this thread is about whether or not C is or can be a safe language.
Re: (Score:2)
I understand that Karpeles (of Mt.Gox fame) wrote his own SSHD using PHP. Let's use that! :-)
http://falkvinge.net/2014/03/1... [falkvinge.net]
Re:What about a re-implementation... (Score:5, Informative)
As I understand it, one reason that security-related code is best done in low level languages is that the implementer has absolute control over sensitive data.
For example, consider a server which acquires a passphrase from the client for authentication purposes. If your implementation language is C, you can receive that passphrase into a char array on the stack, use it, and zero it out immediately. Poof, gone in microseconds.
But let's say you used some language which dynamically allocates memory for all strings and garbage-collects them when they go out of scope. It's "safer" in one respect, because it prevents the developer from having to do their own memory management. But auto-growing strings (and lists) often work via some invisible sleight-of-hand whereby the string's data is copied to new memory once it grows enough to fill its original underlying buffer. This can happen several times as you concatenate more characters onto the end of that string. So as you read a long passphrase into a dynamically growing string, little now-unused copies of the prefixes are being put back on the heap all the time, completely outside your control. If that daemon dumps core and you inspect the dumpfile, you might see something like "correct-horse-battery-sta". Marry that to the log of IP connections, and boom, you can make an educated guess at what Randall Munroe's passphrase is.
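For comparison, a minimal sketch of the C approach described above (check_passphrase and secure_wipe are made-up names; a plain memset can be optimized away, which is why the wipe goes through a volatile pointer, or explicit_bzero()/memset_s() where available):

    #include <string.h>
    #include <unistd.h>

    static void secure_wipe(void *p, size_t n) {
        volatile unsigned char *v = p;   /* volatile writes can't be elided */
        while (n--)
            *v++ = 0;
    }

    int check_passphrase(int fd) {
        char pass[256];                  /* lives on the stack, never copied */
        ssize_t len = read(fd, pass, sizeof(pass) - 1);
        int ok = 0;
        if (len > 0) {
            pass[len] = '\0';
            /* ... verify the passphrase here, setting ok ... */
        }
        secure_wipe(pass, sizeof(pass)); /* gone before the function returns */
        return ok;
    }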
Re: (Score:2)
For example, consider a server which acquires a passphrase from the client for authentication purposes. If your implementation language is C, you can receive that passphrase into a char array on the stack, use it, and zero it out immediately. Poof, gone in microseconds. But let's say you used some language which dynamically allocates memory for all strings and garbage-collects them when they go out of scope. (...)
That would be true if high level languages only offered the default implementation but usually they have a special implementation like SecureString in .NET, it'll let you do the exact same thing. For bonus points it'll also encrypt the data in memory in case you have to keep it around a little while, sure it's a bit of security through obscurity but it won't be trivial to find with a memory dump. The issue is more that people who aren't aware of the issues won't ever think to look for or use these classes,
Re: (Score:2)
one reason that security-related code is best done in low level languages is that the implementer has absolute control over sensitive data. For example, consider a server which acquires a passphrase from the client for authentication purposes. If your implementation language is C, you can receive that passphrase into a char array on the stack, use it, and zero it out immediately.
That scenario actually explains why security-related code is best done in MANAGED languages using something like SecureString
http://msdn.microsoft.com/en-u... [microsoft.com] -- this way, you still have API control to zero it out immediately, but you also benefit from the fact that you can make it ReadOnly, the fact that it's encrypted, the fact that it was authored by someone who's more expert in security than you and has had more eyes to review it than your ad-hoc solution.
Re: (Score:2)
...you can receive that passphrase into a char array on the stack, use it, and zero it out immediately. Poof, gone in microseconds.
Only if you've set that part of your stack to locked. Otherwise it could get paged out to disk. Thanks to the fun of timing on computers, the amount of actual time that passes between "receive [into memory]" and "zero out" is arbitrarily long.
Re: (Score:2)
Issues like that are why real, bulletproof security is incredibly hard. At least with low-level languages, you're close enough to the machine to at least be able to think about such things, and maybe even do something about them.
Re: (Score:2)
I only used Go as an example of a somewhat safer (type checking, bounds checking, etc) language.
I never said I want a Go implementation, in fact I alluded to the fact that Go can't even build shared libraries.
Re: (Score:3)
Re:Thanks! (Score:5, Interesting)
No. We all love to hate on OpenSSL because it's a pile of poo.
There are vested interests who make a living because they have write permissions to OpenSSL and they can charge companies to do it, and the barrier to entry for others is really high because it's an undocumented, overly complex pile of source.
Re: (Score:2)
I must have missed it. When did the OpenSSL team use illegal and immoral business practices to coerce or trick people into having to pay money for their software? You see, that's the difference in a nutshell. If you offer someone something for free they tend to be more forgiving of flaws than
Re: (Score:2)
You do know that the difference is that if this was closed source, we would just have heard about it a lot later, right?