
OpenBSD Team Cleaning Up OpenSSL

First time accepted submitter Iarwain Ben-adar (2393286) writes "The OpenBSD team has started a cleanup of their in-tree OpenSSL library. Improvements include the removal of "exploit mitigation countermeasures", bug fixes, the removal of questionable entropy additions, and more. If you support the effort of these guys, who are also responsible for the venerable OpenSSH suite, consider a donation to the OpenBSD Foundation. Maybe someday we'll see a 'portable' version of this new OpenSSL fork. Or not."
  • de Raadt (Score:4, Insightful)

    by Anonymous Coward on Tuesday April 15, 2014 @12:33PM (#46757899)

    Yes, Theo has a bad temper and doesn't filter what he says carefully enough, but his heart is in the right place, and he's a fucking great leader. I don't mind his bad temper one bit, because it usually hits those who really deserve it. And besides, he's one of the most effective open source leaders around.

    • by Anonymous Coward on Tuesday April 15, 2014 @12:48PM (#46758121)

      Removal of ancient MacOS, Netware, OS/2, VMS and Windows build junk
      Removal of “bugs” directory, benchmarks, INSTALL files, and shared library goo for lame platforms
      Ripping out some windows-specific cruft
      Removal of various wrappers for things like sockets, snprintf, opendir, etc. to actually expose real return values

      There's no doubt that OpenSSL needs work, but they seem to be needlessly combining actual security review with "break every platform that I don't like." At a minimum, anyone else trying to benefit from this will need to unravel the worthwhile security changes from the petty OS wars crap.

      • by jabberw0k ( 62554 ) on Tuesday April 15, 2014 @01:04PM (#46758363) Homepage Journal
        I read that as discarding stuff for Windows 98, 2000, and other ancient platforms that have fallen almost entirely from use, and certainly outside the pool of what's tested: A good thing.
        • by Razgorov Prikazka ( 1699498 ) on Tuesday April 15, 2014 @01:32PM (#46758761)
          You forgot Windows 1.x, 2.x, 3.x, 95, PocketPC 200x, Mobile 5, 6, 7, 8, RT, NT 3.1, 3.5, 4.0, XP, Server 20xx, Vista, 7, 8, 8.1 and finally CE 1.x to CE 7.x.

          Those should be avoided at all times as well if security is the main concern. Have you ever heard of a security breach on an OpenBSD system? You probably have, and that's because it is actually newsworthy! News of a new MS security breach gets chucked into the same lame bin as 'Cat is stuck in tree', 'Small baby is born', 'MH370 is finally found', 'Cat still stuck', 'MH370 still not found', 'Is this the year for Bitcoin?', 'Cat climbed down himself', and other nonsense that will surprise no one at all.

          (P.S. This is not meant to be snarky, cynical, or negative, just slightly blasé.)
        • Comment removed based on user account deletion
      • by Anonymous Coward on Tuesday April 15, 2014 @01:05PM (#46758381)

        The first step to cleaning up the code is getting it into a state where you're not leaving crap in place because 'It's for something I don't understand.'

        That's what got us in the current OpenSSL mess in the first place.

        Additionally, once the core code is cleaned up you can always follow the changelogs and merge the legacy stuff back in (assuming they're using git, or another VCS with decent check-in/check-out support).

        Honestly, anyone still running any of those OSes is probably running a 0.9-series library and thus wasn't vulnerable to this bug to begin with. Who knows how many of those alternate code paths even still worked.

      • by gman003 ( 1693318 ) on Tuesday April 15, 2014 @01:07PM (#46758403)

        It's a fork specifically for OpenBSD. Why would they keep support for other OSes?

        I agree that if they were trying to create a general replacement fork of OpenSSL, that those would be bad things, but for what they're trying to do, these are good decisions. They're trying to improve OpenBSD's security - OpenSSL is a big attack surface, and they're trying to make it smaller by removing the things they don't need.

        This will complicate things both ways, going forward. Updates to OpenSSL might be harder to integrate into OpenBSD's fork (if it becomes an actual independent product, can we call it OpenOpenSSL? Or Open^2SSL?) when they touch the altered parts. Likewise, anyone trying to merge an Open^2SSL fix into OpenSSL might have difficulty. I expect that if OpenBSD's fork of OpenSSL becomes a separate project, one or the other will die off, simply due to all that duplicated effort.

        What I expect to happen in that case is that Open^2SSL will maintain compatibility with all the platforms OpenSSH and OpenSMTPD (which are OpenBSD projects) support - pretty much any Unix-like environment, including Linux, BSD, OS X, Cygwin, and most proprietary Unices. If there's enough demand for other platforms, a second fork might happen to maintain them, but I honestly doubt it (Mac OS 9? Really?).

        • Re: (Score:3, Insightful)

          by jeffmeden ( 135043 )

          It's a fork specifically for OpenBSD. Why would they keep support for other OSes?

          You only fork when you want to put distance between yourself and the original; there is nothing stopping them from making changes/"improvements" to the original OpenSSL project except scope constraints (i.e. if they just want OpenBSD to be secure) or ego. Either one stinks of selfishness. I can't criticize them directly since they are still doing all of their work for "free" and are publishing it freely, but it has to be pointed out that they are choosing the greater of two evils.

          • With something as big, messy, and crufty as OpenSSL, there probably isn't a sane way to approach the problem of decrapifying it that doesn't involve first stripping it down to the minimum. The OpenBSD devs aren't Windows devs, Apple devs, or Linux devs. There is no "greater evil" in making something more secure in less time for your own platform, when contorting yourself to maintain compatibility keeps junk around that slows the task to the point where you never get to the clean, secure rewrite.
          • by serviscope_minor ( 664417 ) on Tuesday April 15, 2014 @02:11PM (#46759243) Journal

            they are choosing the greater of two evils.

            No.

            Eventually, supporting too many screwy and ancient systems starts to cause so many problems that it is really, really hard to write solid, well-tested, clear code. The Heartbleed bug was exactly a result of this. Because of supporting so many screwy platforms, they couldn't even rely on having malloc() work well. That meant they had their own malloc implementation working from internal memory pools. Had they not, they would have benefited from modern mmap()-based implementations, and you'd have gotten a segfault rather than a dump of the process memory.
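            To make that last point concrete, here is a minimal sketch of the guard-page idea behind mmap()-based and hardened allocators (a hypothetical helper, not OpenBSD's or glibc's actual code): the buffer is placed so it ends just before an unmapped page, so a read past the end faults immediately instead of quietly returning neighbouring heap contents.

                #include <stddef.h>
                #include <stdint.h>
                #include <sys/mman.h>
                #include <unistd.h>

                /* Illustrative only: return a buffer whose end abuts a PROT_NONE page,
                 * so any overread triggers SIGSEGV instead of leaking adjacent memory.
                 * (Assumes size > 0; ignores alignment and the matching free routine.) */
                static void *guarded_alloc(size_t size)
                {
                    size_t page = (size_t)sysconf(_SC_PAGESIZE);
                    size_t data_pages = (size + page - 1) / page;
                    size_t total = (data_pages + 1) * page;      /* +1 guard page */

                    uint8_t *base = mmap(NULL, total, PROT_READ | PROT_WRITE,
                                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
                    if (base == MAP_FAILED)
                        return NULL;

                    /* Make the page just past the buffer inaccessible. */
                    if (mprotect(base + data_pages * page, page, PROT_NONE) != 0) {
                        munmap(base, total);
                        return NULL;
                    }

                    /* Right-align the usable buffer against the guard page. */
                    return base + data_pages * page - size;
                }

            A library that recycles buffers through its own internal pools never gives the system allocator a chance to apply tricks like this.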

            Supporting especially really old systems means having reimplementations of things which ought to be outside the scope of OpenSSL. Then you have to decide whether to always use the reimplementation or switch on demand between the custom one and the system one and whether or not to have some sort of quirk/bug correction.

            This sort of stuff starts to add up and leads to a maintenance nightmare.

            What OpenBSD is doing (throwing out all the accumulated crud and keeping the good parts) is a fine way to proceed. It will almost certainly be portable to the other BSDs, OS X and Linux, since they provide similar low-level facilities. I doubt a port back to Windows would be hard either, because modern Windows provides enough sane facilities that it's generally not too bad for heavily algorithmic code like this.

            Basically there's no way for them to get started except to first rationalise the code base and then audit it.

          • by s_p_oneil ( 795792 ) on Tuesday April 15, 2014 @02:53PM (#46759697) Homepage

            If they end up stripping it down to a minimal library with the core functionality, cleaning up the public interface (e.g. exported functions), and making it easy to create your own OS-specific wrapper around it, then they are actually doing something that should have been done in the first place. If they do it right, it will become much more popular (and most likely more light-weight and secure) than the current OpenSSL project.

      • Re: (Score:2, Insightful)

        by Anonymous Coward

        I think the point is to have as little code as possible, and to have no code that isn't covered by their tests. Both of which are excellent ideas if you want to write secure code.

        • by gweihir ( 88907 )

          Indeed. Most coders just add more code when they run into an issue. That is a problem for normal code, but it is death for anything security-critical.

      • by dirtyhippie ( 259852 ) on Tuesday April 15, 2014 @01:15PM (#46758525) Homepage

        It's not remotely about petty OS wars. Complexity is bad for security, mmkay? If you want a newer version of openssl for OS/2, netware, or pre OSX MacOS, I'd really like to know what exactly you are doing. Dropping those platforms is the right thing.

      • Their theory is that they need an SSL system for OpenBSD. They're not trying to build it for other platforms, and the extra code adds complexity (and can contain vulnerabilities) so they're not going to maintain it. They're cutting out unnecessary cruft. That cruft may be needed for some users, but OpenBSD doesn't have any use for OS/2 support.
      • There's no doubt that OpenSSL needs work, but they seem to be needlessly combining actual security review with "break every platform that I don't like." At a minimum, anyone else trying to benefit from this will need to unravel the worthwhile security changes from the petty OS wars crap.

        I don't see this as a problem. Since OpenBSD is working on their own, for-themselves, branch, they can fix it any way they want. If they do a good job (as expected), the OpenSSL project can then backport their fixes into t

    • Re:de Raadt (Score:5, Interesting)

      by bluefoxlucid ( 723572 ) on Tuesday April 15, 2014 @01:37PM (#46758835) Homepage Journal

      He is technically incapable of evaluating what's actually happening, and likes to go off-list when he's angry and wrong [fbcdn.net].

      The freelist is not an "exploit mitigation countermeasure", but rather standard allocation caching behavior that many high-rate allocation applications and algorithms implement--for example, ring buffers [wikipedia.org] are common as all hell. The comment even says that it's done because performance on allocators is slow.
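      For what it's worth, the pattern being described is just the ordinary freelist/object-pool idiom. A minimal sketch (illustrative code, not OpenSSL's actual implementation, and not thread-safe) looks like this:

          #include <stdlib.h>

          /* A toy fixed-size record with an intrusive link for the freelist. */
          struct record {
              struct record *next;        /* only meaningful while on the freelist */
              unsigned char payload[512];
          };

          static struct record *freelist; /* LIFO: the last record freed is the first reused */

          static struct record *record_alloc(void)
          {
              if (freelist) {                       /* fast path: reuse a cached record */
                  struct record *r = freelist;
                  freelist = r->next;
                  return r;                         /* note: the old payload is NOT cleared */
              }
              return malloc(sizeof(struct record)); /* slow path: fall back to malloc() */
          }

          static void record_free(struct record *r)
          {
              r->next = freelist;                   /* push back onto the freelist; */
              freelist = r;                         /* memory never returns to the system */
          }

      The upside is fewer malloc()/free() calls; the downside is that recycled records come back with their old contents intact and never pass through whatever checks the system allocator would have applied.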

      Further, the only bug in Heartbleed was a READ OVERFLOW BUG caused by lack of input validation. The code would read that the user said "this heartbeat is 65 thousand bytes long", allocate 65 thousand bytes plus room for instrumentation data, put the instrumentation data in place, and then copy 65 thousand bytes from a request that was 1 byte long. While there are mitigation techniques, most allocators--anything that uses brk() to allocate the heap for allocations smaller than, say, 128KB (glibc's ptmalloc and FreeBSD's malloc both use brk() until you ask for something bigger than 128KB, then use mmap())--don't apply them. That's how this flaw worked: it would just read 64KB, most likely from the brk() area, and send it back to you.
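      In code terms, the flawed pattern was roughly the following (a simplified sketch with made-up names, not the actual OpenSSL source):

          #include <stdint.h>
          #include <stdlib.h>
          #include <string.h>
          #include <arpa/inet.h>   /* ntohs() */

          /* "len" comes straight from the attacker's record and is never
           * checked against how many bytes actually arrived. */
          static unsigned char *build_heartbeat_response(const unsigned char *req,
                                                         size_t req_len)
          {
              (void)req_len;                     /* the bug: this is never consulted */

              unsigned char type = req[0];
              uint16_t len;
              memcpy(&len, req + 1, 2);
              len = ntohs(len);                  /* attacker-controlled, e.g. 65535 */

              unsigned char *resp = malloc(1 + 2 + (size_t)len + 16); /* big enough to WRITE */
              if (!resp)
                  return NULL;
              resp[0] = type;
              memcpy(resp + 1, req + 1, 2);

              /* BUG: copies "len" bytes out of a request that may be only a few
               * bytes long, so the READ runs off the end of the request buffer.
               * The missing check is roughly: if (1 + 2 + len + 16 > req_len) discard. */
              memcpy(resp + 3, req + 3, len);
              return resp;
          }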

      Read overflows don't kill canaries, so you wouldn't detect it except with an unmapped page--a phenomenon that doesn't happen with individual allocations smaller than 128KB in an allocator that uses brk(), like the default allocator on Linux and FreeBSD. Write overflows would kill canaries, but they actually allocated enough space to copy the too-large read into. And the code is, of course, correct for invalid input.

      Theo made a lot of noise about how all these other broken things were responsible for Heartbleed, when the reality is that one failed validation carries 100% of the weight for Heartbleed. If you perfectly cleaned up OpenSSL except for that single bug, slapped it on Linux with the default allocator, and ran it, it would still have the vulnerability. And it only behaves strangely when being exploited--and any test of it would have sent back a big packet, raising questions.

      There was never really any hope that this was going to be caught before it was in the wild and had "possibly leaked your SSL keys". It might have been caught sooner, maybe, maybe not; but it still would have been a post-apocalyptic shit storm. And all those technical mitigations Theo is prattling on about would have helped if OpenSSL were cleaned up... AND if those technical mitigations were in Linux, not just OpenBSD.

      • Re:de Raadt (Score:5, Informative)

        by EvanED ( 569694 ) <evaned@NOspAM.gmail.com> on Tuesday April 15, 2014 @01:50PM (#46758971)

        The freelist is not an "exploit mitigation countermeasure",...

        He was being somewhat sarcastic, because OpenBSD's allocator is in contrast to

        Read overflows don't kill canaries, so you wouldn't detect it except with an unmapped page--a phenomenon that doesn't happen with individual allocations smaller than 128KB in an allocator that uses brk(), like the default allocator on Linux and FreeBSD

        and does try to separate allocations specifically to mitigate Heartbleed-style vulnerabilities.

        In other words, the OpenBSD allocator does have exploit mitigation, and the OpenSSL freelist acts as a countermeasure to those mitigations whether that was intended or not.

        The comment even says that it's done because performance on allocators is slow.

        It says it's slow on "some platforms", yet they bypassed the system allocator on all of them and then never tested that alternative.

        But of course everyone knows it's way better to quickly implement a dramatically awful security vulnerability than to do things slowly and correctly.

        • Yes, yes, of course, but you're missing the point. We'll ignore that sarcasm doesn't carry, and that the "exploit mitigation" line about OpenSSL has been repeated again and again without a hint of irony, so one may be led to believe such a thing exists in OpenSSL.

          First off,

          And all those technical mitigations Theo is prattling on about would have helped if OpenSSL were cleaned up... AND if those technical mitigations were in Linux, not just OpenBSD.

          Here's the thing: OpenBSD is a hobby OS. It's like Linux with grsecurity: yes they've mitigated all this shit ages ago, yes people run grsecurity in production, yes anything that grsecurity "would have prevented" is effectively unpro

          • Re:de Raadt (Score:4, Interesting)

            by rev0lt ( 1950662 ) on Tuesday April 15, 2014 @05:32PM (#46761233)

            OpenBSD is a hobby OS

            *every* community-driven operating system is a hobby OS. Is that relevant?

            It's like Linux with grsecurity

            Maybe for you. Not for me. And it is actually easier to audit, and it has a smaller kernel. And a kernel debugger, something that is quite handy for finding and troubleshooting problems.

            (...) I would avoid the bet.

            There are also more Windows machines than *nix machines with an internet connection. Some little-known RTOS are way more popular than Linux and BSD combined. Your point is?

            OpenBSD's allocator is what we call "Proof of Concept".

            Monolithic kernels are a proof of concept of monolithic designs. Every existing implementation of something is a proof of concept of the given concept. Again, what is the point?

            It exists somewhere in real life, you can leverage it (I've leveraged proof-of-concept exploit code from Bugtraq in actual exploit kits), but it's not this ubiquitous thing that's out there enough to have an impact on the real world.

            While OpenBSD itself is a niche product, its team is very well known for producing hugely popular products, including OpenSSH and PF. BSD server usage is low, but there are no real stats on middleware - routers, storage units, set-top boxes, closed devices, etc. FreeBSD is reportedly used in the PlayStation - that's more users than most Linux distros have. Is popularity relevant to the discussion? Not really.

            Suntrust, Bank of America, slashdot, the NSA, Verisign, Microsoft, Google--is running a non-OpenBSD operating system with no such protections

            I'd actually be very surprised if none of these companies used OpenBSD goodies - either OpenBSD by itself, or middleware BSD products. And then you can add to this OpenSSH, OpenBGPD and a couple more interesting products. Microsoft used OpenBSD as a basis for Microsoft Services for Unix. But again - is it relevant to the discussion? Not really.

            And again, the concept of allocation caching is common. Freelists are used when allocations are all the same size; that gripe is essentially that a valid data object is not valid because they dislike it. Plenty of software uses freelists, and freelists are a generalization of the object pool software design pattern used for database connection caching in ORMs, token caching in security systems, and network buffers (ring buffer...). I would be surprised if OpenBSD's libc and kernel didn't make use of freelists or object pools somewhere.

            So, you're saying that optimizing memory allocation in privileged space is the same as optimizing memory allocation in a userland library? That managing fixed-size, out-of-the-userspace-address-pool structures is the same as trying to be smarter than the local malloc implementation? No system is perfect, but it generally sounds like a very bad idea.

            In short: there's a lot of whanging on that OpenSSL made OpenBSD's security allocator feature go away, and that (implication) if OpenSSL had not done that, then an exploit attempt would have come across one of the 0.01% of interesting servers running OpenBSD, and a child Apache process would have crashed, and some alarms would have gone off, and someone would have looked into the logs despite the server humming along just fine as if nothing had happened, and they would have seen the crash, and investigated it with a debugger, and then reproduced the crash by somehow magically divining what just happened, and done so BEFORE THE WHOLE WORLD HAD DEPLOYED OPENSSL 1.0.1.

            So, you're assuming there aren't compromised OpenBSD servers because of this, and that no one actually tried to exploit it on OpenBSD. The fact is that no one knows exactly the extent of the damage of this vulnerability, or whether it could have been detected much earlier by using OpenBSD or Linux with grsecurity or whatnot. And

      • Re:de Raadt (Score:5, Insightful)

        by bmajik ( 96670 ) <matt@mattevans.org> on Tuesday April 15, 2014 @02:36PM (#46759527) Homepage Journal

        Actually, it is you who are wrong.

        Theo's point from the beginning is that a custom allocator was used here, which removed any beneficial effects of both good platform allocators AND "evil" allocator tools.

        His response called out a specific instance of the poor software engineering practices behind OpenSSL.

        Furthermore, at some point OpenSSL became behaviorally dependent on its own allocator -- that is, when you tried to use a system allocator, it broke -- because it wasn't handing you back the unmodified memory contents you had just freed.

        This dependency was known and documented. And not fixed.

        IMO, using a custom allocator is a bit like doing your own crypto. "Normal people" shouldn't do it.

        If you look at what OpenSSL is:

        1) crypto software
        2) that is on by default
        3) that listens to the public internet
        4) that accepts data under the control of attackers

        ...you should already be squarely in the land of "doing every software engineering best practice possible". This is software that needs to be written differently than "normal" software: held to a higher standard, and correct for correctness' sake.

        I would say that, "taking a hard dependence on my own custom allocator" and not investigating _why_ the platform allocator can no longer be used to give correct behavior is a _worst practice_. And its especially damning given how critical and predisposed to exploitability something like openSSL is.

        Yet that is what the openSSL team did. And they knew it. And they didn't care. And it caught up with them.

        The point of Theo's remarks is not to say "using a system allocator would have prevented bad code from being exploitable". The point is "having an engineering culture that ran tests using a system allocator and a debugging allocator would have prevented this bad code from staying around as long as it did"

        Let people swap the "fast" allocator back in at runtime, if you must. But make damn sure the code is correct enough to pass on "correctness checking" allocators.
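        One cheap way to get that, sketched below with illustrative names (not OpenSSL's actual hooks), is to route allocations through a pair of function pointers so a test build can swap in a poisoning "correctness checking" allocator without touching the rest of the code:

            #include <stdlib.h>
            #include <string.h>

            /* Indirection point: production uses the fast path, tests can override it. */
            static void *(*lib_malloc)(size_t) = malloc;
            static void  (*lib_free)(void *)   = free;

            void lib_set_allocators(void *(*m)(size_t), void (*f)(void *))
            {
                lib_malloc = m;
                lib_free   = f;
            }

            /* Example debugging allocator for test runs: poison fresh memory so any
             * code that relies on getting back recycled contents fails loudly. */
            void *poison_malloc(size_t n)
            {
                void *p = malloc(n);
                if (p)
                    memset(p, 0xA5, n);
                return p;
            }

            /* In a test harness: lib_set_allocators(poison_malloc, free); */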

        • OpenSSL is broken in many ways. I dispute that the specific citations are technically correct: that unbreaking use of the system allocator would have made Heartbleed not happen, and that Heartbleed was an allocation bug (it was alleged to be a use-after-free early in this whole theater, but it's not; it's a validation bug: it reads too much, and it allocates a buffer of the appropriate size to write to).

          Look up freelists and object pooling. These design patterns are common and considered impor

          • by Dahan ( 130247 )

            Bitch about this instead [regehr.org]. A fucking static checker found heartbleed.

            No, it says, "Coverity did not find the heartbleed bug itself", which very clearly means that Coverity did not find Heartbleed. And Coverity themselves confim that Coverity does not detect the problem [coverity.com] (though in response, they've added a new heuristic that does detect it, but no word on how the new heuristic affects the false positive rate).

      • "not an "exploit mitigation countermeasure", but rather standard allocation caching behavior"

        There is a bug where memory is used after being freed which is hidden by the custom nonclearing LIFO freelist, i.e. you could realloc and get your record back. If you tried to use a normal malloc it would fail. Clearing memory exposes other bugs, so no one was willing to clear memory.

        "The comment even says that it's done because performance on allocators is slow. "

        It was slow on some architectures, at the time, so t

        • There is a bug where memory is used after being freed which is hidden by the custom nonclearing LIFO freelist, i.e. you could realloc and get your record back.

          Yeah, and that bug is unrelated to Heartbleed: Heartbleed reads beyond the end of an allocation, allocates a buffer big enough to store all of that, and then sends it and frees the allocation. In its own little atom of horrible mishandling of program execution, it's fully consistent except for reading off the end of an allocation. There are no double-frees or use-after-frees causing Heartbleed; the entire bug is a memcpy() that's too long.

          Thus they subverted the OSS benefit of 'many eyeballs' and did so ON SECURITY SOFTWARE.

          You can read the code. Hell, a static checker found Heart [regehr.org]

      • If Theo had a more constructive outlook, this would go a lot different and we'd all benefit.

        Instead of screaming vitriol at someone's app architecture inadvertently defeating his platform-specific feature, he should be asking why they felt the need to go with that architecture (hint: it was a perfectly reasonable need), and perhaps if he can do something to make integrating his security feature easier for that type of architecture.

        Like you say, freelists are an extremely common design choice when performanc

        • I don't think there's something broken about OpenBSD's allocator. Performance trade-off, yeah. Broken, no.

          I think the target of all this ire is a lot of technically incorrect bullshit, like calling freelists irresponsible programming or claiming that something in the allocator broke OpenSSL (it didn't; the bug was wholly self-contained), or trying to claim that OpenBSD would have caught the bug for some odd reason or another when it can only be caught in that way (with secure allocators) when actively ex

      • I disagree that there was no way to catch this. From the code I saw, at its core it was a simple case of using memcpy with the size of the destination buffer rather than the source buffer. Any automated bounds checker would have caught this. But, in addition, there should have been a compliance test verifying that a packet with a specified size bigger than its payload went unanswered, since anything else is noncompliant with the RFC. Clearly the person who wrote the RFC understood that answering a heartbeat request with a size different than its payload was a potential problem since the behavior was specified.
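        For reference, the missing check amounts to only a couple of lines. Roughly, with illustrative names rather than the real OpenSSL structures:

            #include <stddef.h>

            /* RFC 6520: a heartbeat whose declared payload does not fit inside the
             * bytes that actually arrived (1 type byte + 2 length bytes + payload
             * + at least 16 bytes of padding) MUST be discarded silently -- no
             * response at all, which is also what a compliance test would assert. */
            static int heartbeat_request_ok(size_t payload_len, size_t record_len)
            {
                return 1 + 2 + payload_len + 16 <= record_len;
            }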
        • Yeah. I don't agree with the conjectures about how fixing X problem would have caught this, because it would have to be tested/exploited to be caught even in the absolute strictest case (100% of all code except that one function correct). If somebody had thought to test for a bad payload length, they would have thought to fix it... or gotten a huge packet back and gone "wtf?" And if you tested it otherwise, you wouldn't trigger any invalid behavior.

          A static checker did specifically identify he

        • by pjt33 ( 739471 )

          Clearly the person who wrote the RFC understood that answering a heartbeat request with a size different than its payload was a potential problem since the behavior was specified.

          The person who wrote the RFC also wrote the buggy code, so it may not be quite so clear.

  • by Anonymous Coward on Tuesday April 15, 2014 @12:37PM (#46757943)

    I think my CC number got stolen.

  • by QilessQi ( 2044624 ) on Tuesday April 15, 2014 @12:38PM (#46757957)

    If they're doing a large-scale refactoring, a regression test suite is really advisable (in addition to static code analysis) to ensure that they don't create new, subtle bugs while removing things that might look like crud. Does anyone know how good their test coverage is?

    • ... Does anyone know how good their test coverage is?

      Not obvious to me if by "their" you mean OpenSSL or OpenBSD (*) but it seems to me the answer is "not sufficient." I'm sure it will be enhanced to cover Heartbleed.

      (*) OpenSSL, OpenBSD ... phrased that way it sounds like a match made in heaven! ;)

      • Whatever they're using as the baseline of their fork. There are already patches that fix Heartbleed (the simplest being "don't support heartbeats", which are not mandatory in the spec anyway). If they're taking this as an opportunity to do radical cleanup, that's great -- but I'm sure we'd all feel better if regression tests were in place to reduce the risk of introducing another subtle bug. Major surgery on critical security infrastructure should not be rushed.

  • by bill_mcgonigle ( 4333 ) * on Tuesday April 15, 2014 @12:40PM (#46757985) Homepage Journal

    $30,949 is how much the OpenBSD Foundation received in donations in 2013. [openbsdfoundation.org] That has to get fixed as their expenses were $54,914 and only a one-time transfer from an old account covered the deficit.

    The community that depends on OpenSSH, OpenNTPD and the like needs to figure out how to support these projects.

    Personally I'd like to see the Foundation offer targeted donations to specific projects with a percentage (~20% perhaps) going into the general operations fund. I bet there are a bunch of people who would throw a hundred bucks at OpenSSH but would be concerned that a general donation would go to some odd thing Theo is doing (whether that be fair or not).

    And if "Fixing OpenSSL" were one of the donation options, then hold on to your hats - I think we're all in agreement on this. We do know that the folks currently working on the projects are paid by others but if the Foundation can get enough money to offset expenses then it could actually do some development work and possibly finally take care of some sorely-neglected tasks on a few of these codebases.

    • by Anonymous Coward on Tuesday April 15, 2014 @01:00PM (#46758285)

      Apparently you didn't read the second news item on the OpenBSD news site, where they reached their 2014 funding goal of $150,000 last week.

    • Re: (Score:3, Interesting)

      by Anonymous Coward

      Should they not be getting some tax funding? It's ironic: projects like this can take down the net if they fail, and every company that uses these projects pays taxes. Shouldn't some of that tax money be given, without any strings attached (no NSA back doors or the like), just to make sure all the bugs are taken care of? Too many companies profit from SSL, yet the project that maintains it is on very weak footing.

    • by Minwee ( 522556 )

      $30,949 is how much the OpenBSD Foundation received in donations in 2013. [openbsdfoundation.org]

      That's about $29,000 more than OpenSSL receives every year, and still $22,000 more than they received this month when the entire world realized that they had been freeloading and scrambled to make themselves look good by making one-time donations.

    • by mysidia ( 191772 )

      $30,949 is how much the OpenBSD Foundation received in donations in 2013.

      And yet... I heard OpenSSL itself gets at most $2000 [marc.info] in a typical year. Despite tens of thousands of banks, retailers, hardware manufacturers, software manufacturers, all relying on their code in a security critical fashion to support their business activities. The MOST the OpenSSL project gets in contributions is a mere shilling?

      And no real support for high quality code review, maintenance, and release management. Just support

  • Wonderful! (Score:2, Insightful)

    by cold fjord ( 826450 )

    I have little doubt that the OpenBSD team will significantly improve the code.

  • by Anonymous Coward on Tuesday April 15, 2014 @12:42PM (#46758023)

    We also have a comment from the FreeBSD developer Poul-Henning Kamp [acm.org].

    He starts by saying "The OpenSSL software package is around 300,000 lines of code, which means there are probably around 299 bugs still there, now that the Heartbleed bug — which allowed pretty much anybody to retrieve internal state to which they should normally not have access — has been fixed." After that he notes that we need to ensure that the compiler correctly translates the high-level language to machine instructions. Later Kamp rants a bit about the craziness of CAs in general — would you trust "TÜRKTRUST BİLGİ İLETİŞİM VE BİLİŞİM GÜVENLİĞİ HİZMETLERİ A.Ş."? Then he lists some bullet points about things that are wrong in OpenSSL:

    - The code is a mess
    - Documentation is misleading
    - The defaults are deceptive
    - No central architectural authority
    - 6,740 goto statements
    - Inline assembly code
    - Multiple different coding styles
    - Obscure use of macro preprocessors
    - Inconsistent naming conventions
    - Far too many selections and options
    - Unexplained dead code
    - Misleading and incoherent comments

    "And it's nobody's fault. No one was ever truly in charge of OpenSSL, it just sort of became the default landfill for prototypes of cryptographic inventions, and since it had everything cryptographic under the sun (somewhere , if you could find out how to use it), it also became the default source of cryptographic functionality. [...] We need a well-designed API, as simple as possible to make it hard for people to use it incorrectly. And we need multiple independent quality implementations of that API, so that if one turns out to be crap, people can switch to a better one in a matter of hours."

    • by Anonymous Coward on Tuesday April 15, 2014 @01:19PM (#46758597)

      I agree that the OpenSSL code base is very bad. (I was doing some work based on modifying the library recently and I had to hold my nose.) However, I take issue with some of this:

      - 6,740 goto statements

      Otherwise known as "the only sane way to simulate exceptions in C". Seriously. Read up on how "goto" is used in low-level code bases such as OS kernels, instead of citing some vague memory of a 1960s paper without understanding its criticisms.

      - Inline assembly code

      Otherwise known as "making the thing go fast". Yes, I want the bignum library, or hashing algorithms, to use assembly. Things like SIMD make these tasks really effing fast and that is a good thing...

      • Right on. (Score:5, Interesting)

        by PhrostyMcByte ( 589271 ) <phrosty@gmail.com> on Tuesday April 15, 2014 @02:13PM (#46759259) Homepage

        Otherwise known as "the only sane way to simulate exceptions in C". Seriously. Read up on how "goto" is used in low-level code bases such as OS kernels, instead of citing some vague memory of a 1960s paper without understanding its criticisms.

        People who don't use goto for error handling in C more often than not either have incorrect error handling or way too much error-prone duplication of resource cleanup code. It makes sense to very strictly warn newbies away from goto, much in the same sense that you warn them away from multithreading. You don't want it used as a universal hammer for every nail in the code. At some point, though, people need to jump off the bandwagon and learn to respect, not fear, these things that actually have some very compelling uses.

        • by rk ( 6314 )

          Heh. I've written many gotos in C. Almost universally, the label they go to is called "error_exit", which sits right before the resource deallocators and the return statement. :-)
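          That pattern looks roughly like this (a minimal sketch with made-up resource names):

              #include <stdio.h>
              #include <stdlib.h>

              int process_file(const char *path)
              {
                  int rc = -1;            /* assume failure until proven otherwise */
                  FILE *f = NULL;
                  char *buf = NULL;

                  f = fopen(path, "rb");
                  if (!f)
                      goto error_exit;

                  buf = malloc(4096);
                  if (!buf)
                      goto error_exit;

                  if (fread(buf, 1, 4096, f) == 0)
                      goto error_exit;

                  /* ... real work would happen here ... */
                  rc = 0;                 /* success */

              error_exit:
                  /* Single cleanup path: safe to run no matter how far we got. */
                  free(buf);
                  if (f)
                      fclose(f);
                  return rc;
              }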

    • Re: (Score:2, Interesting)

      by Anonymous Coward

      It's not just the library that's in shit shape; the openssl command-line tools themselves are annoying. Getting it to generate a UCC/SAN certificate with multiple hostnames is a hoot (you hardcode the list of alternate names into the openssl configuration file; then when you want to create a different certificate, you hardcode a new list of alternate names into the openssl configuration file), and just using it for its intended purpose basically requires that you either completely understand SSL certificate

      • You do know that you can use the SAN environment variable to create UCC certs without having to modify openssl.cnf, right? It's much easier to create multiple UCC certs that way.

  • by grub ( 11606 ) <slashdot@grub.net> on Tuesday April 15, 2014 @12:45PM (#46758065) Homepage Journal

    Ted Unangst wrote a good article called "analysis of openssl freelist reuse" [tedunangst.com]

    His analysis:

    This bug would have been utterly trivial to detect when introduced had the OpenSSL developers bothered testing with a normal malloc (not even a security focused malloc, just one that frees memory every now and again). Instead, it lay dormant for years until I went looking for a way to disable their Heartbleed accelerating custom allocator.

    It's a very good read.

    • by grub ( 11606 )
      heh and I see it was linked to in TFA. Sorry.
    • The most concerning part comes at the end: "Here's a 4 year old bug report showing the issue, and another, and another." OpenSSL is supposed to be security software... this shouldn't happen.

  • OpenSSL OR... (Score:3, Interesting)

    by higuita ( 129722 ) on Tuesday April 15, 2014 @01:22PM (#46758643) Homepage

    Or simply support and use GnuTLS!

    Both have their own set of problems, but at least now you have an alternative.

    • OpenBSD tries extremely hard to make the entire system BSD-licensed. AFAIK the only non-BSD item in a default installation is GCC, and that is an optional-but-default item. There are a few optional, not-compiled-by-default and rarely-used kernel modules that are GPL (an FPU emulator for very early x86 systems is the only one I recall), and of course you can install non-BSD packages as you wish, but the base OS and all its components are BSD-licensed.

      GnuTLS, naturally, uses the LGPL, which is probably why they

  • The single Heartbleed vulnerability awakened people to actually take a deep look into the OpenSSL code. Where were all the "eyeballs" before that? Now we are in a position where suddenly the whole OpenSSL code base needs massive re-architecting. It just makes me cringe when I think what other OSS projects possibly have similar serious problems, possibly in mission-critical components. It's starting to be quite obvious that there are not enough people to actually take a look under the hood and say, hey, this quite

  • I've not liked some of the things I've heard from Mr. de Raadt in the past because they seemed to be less fact than emotion, but in this case Theo has redeemed himself in a big way.

    Like it or not, OpenSSL is now one of the most important pieces of software in the world. OpenSSL protects people's bank account numbers, credit card numbers, medical records, and employment records. OpenSSL protects corporate and government secrets (hopefully in combination with other defensive tactics). OpenSSL is not used for

  • For starters, I am by no means an expert in programming. I know enough to be dangerous, and to serve my own personal projects.
    A while back a programmer friend of mine was guiding me in programming. Back then I wrote some C code, and in an error-handling routine I used the dreaded GOTO statement.
    Boy, did this unleash his complete wrath. He could talk about nothing else afterwards.
    I was berated, screamed at, told I would never be a good programmer. I should give up now. GOTOs are the tools of a lazy ignora
