
De Raadt Doubts Alleged Backdoors Made It Into OpenBSD

itwbennett writes "In a follow-up to last week's controversy over allegations that the FBI installed a number of back doors into the encryption software used by the OpenBSD operating system, OpenBSD lead developer Theo de Raadt said on a discussion list Tuesday that he believes a government contracting firm that contributed code to his project 'was probably contracted to write backdoors,' which would grant secret access to encrypted communications, but that he doesn't think any of this software made it into the OpenBSD code base."
This discussion has been archived. No new comments can be posted.

  • Audit necessary (Score:5, Insightful)

    by dewarrn1 ( 985887 ) on Wednesday December 22, 2010 @08:10AM (#34640146)
    I hope that he's right, but without a thorough audit, who can say?
    • Re:Audit necessary (Score:5, Informative)

      by CAPSLOCK2000 ( 27149 ) on Wednesday December 22, 2010 @08:30AM (#34640304) Homepage

      Even with a thorough audit you will never be sure. That's the beauty of these kinds of accusations: no matter what you do, you can never be 100% sure.
      OpenBSD is among the best audited code in the world. People have been looking for this backdoor specifically for an entire week and nothing fishy has been found yet.

      • Re:Audit necessary (Score:5, Interesting)

        by Anonymous Coward on Wednesday December 22, 2010 @08:56AM (#34640460)

        Well, great way to halt the actual development, right?

        Remember how Microsoft accused ReactOS of copying NT code?

        They spent LOTS of time auditing.

        • by dougmc ( 70836 )

          They spent LOTS of time auditing.

          Looking for code taken from somewhere else is relatively simple when you have access to both sets of code -- all it takes is a program that looks for the same code in each set. (It's not trivial, mind you, but it's not terribly difficult.)

          Looking for backdoors or cryptographic weaknesses (intentional or otherwise) -- that's MUCH harder.

          • Re:Audit necessary (Score:4, Insightful)

            by drinkypoo ( 153816 ) <drink@hyperlogos.org> on Wednesday December 22, 2010 @10:41AM (#34641596) Homepage Journal

            And while you are entirely correct, the differentiating factor between OpenBSD and basically any other operating system is that it is under continual code review for things that might cause security problems, which has famously rendered OpenBSD immune to a number of attacks to which other systems are vulnerable, including systems that started from the same common codebase. As such, OpenBSD seems the least likely of all possible projects to have absorbed this code.

          • Re:Audit necessary (Score:4, Insightful)

            by jc42 ( 318812 ) on Wednesday December 22, 2010 @02:03PM (#34643970) Homepage Journal

            They spent LOTS of time auditing.

            Looking for code taken from somewhere else is relatively simple when you have access to both sets of code

            So did MS actually show the ReactOS people the supposedly stolen code? A few years ago, when MS made similar accusations of stolen Windows code in linux, there were lots of calls for MS to tell us exactly what code they were talking about. MS simply stonewalled those requests, and continued to make vague, non-specific public accusations that couldn't be validated. It was widely understood to be a marketing ploy, to put the fear of Microsoft's lawyers into potential linux customers' minds.

            If a company is serious about infringements, the laws generally require that the accusers state explicitly what is being infringed where, and give the culprits a chance to remove the offending infringement. An accusation without the specifics is legally worthless, since nobody can stop doing something if they don't know what the something is.

            There was also the suspicion that, if there was common code in both OSs, it was because MS "stole" the publicly-published linux code rather than the other way around. But, while that's more credible (due to the difficulty in getting a copy of MS's source code), it's a different story than we're talking about here.

            There was at least one bit of humor in the "linux stole from Windows" story. At one point, a MS rep mentioned a line count for the stolen code. Someone did a count, and said that the number matched the number of "/*" and "*/" lines in the linux kernel source. This might sound frivolous, but it goes along with the famous story of the Sys/V version of /bin/true, which was a shell script consisting solely of a blank line and an AT&T copyright notice. MS claiming copyright ownership of comment delimiters would be roughly similar to AT&T claiming copyright ownership of a blank line.

            • I believe you are confusing Microsoft with SCO Group (which took a lot of money from Microsoft for a "licence" that it is unclear why MS actually needed).

              SCO Group said that Linux had stolen a lot of Unix SysV code, but refused to state what that code was (because then the Linux developers would take it out... WTF?). They did show some alleged parts, and the open source community essentially shredded them.

              Microsoft, on the other hand, has continually said that Linux infringes on its patents.

            • by youn ( 1516637 )

              It took a while but they actually audited the whole code, and they documented how they came up with most of it... they really did a good job... they are moving slowly but doing phenomenal work

      • If they can get a backdoor built into the compiler used to build the binaries for the general releases, the backdoor doesn't have to be anywhere in the source.

        So, yeah, an audit isn't foolproof.

        • So audit the compiler. ;) And then audit the compiler that compiled the compiler. In the end I suppose you need to build a compiler by hand to make sure no backdoors are present.

          • In the end I suppose you need to build a compiler by hand to make sure no backdoors are present.

            In the end, you'd have to build the computer and all its components by hand, at least from the standpoint of Thompson's "Reflections on Trusting Trust".

          • Can't you just compare your compiler binary with a known-good one? If they're different when they should be the same, the warning bells go off. And borrowing a gcc binary from someplace far away from your codebase and toolchain, one that you trust, ought to be a simple way to bootstrap back into safe territory?

            • Comparing binaries isn't a good idea. Different C++ compilers have different name-mangling schemes; the binary output of a C/C++ compiler isn't standardized.

            • Because you can't effectively compare a binary to source code.

              You could compare a binary to another binary compiled from known good source.

              But that presumes that the compiler used to compile the known good source doesn't contain the backdoor.

              There are other variations of this, such as decompiling the binary and comparing the output to the original source. But that presumes that the decompiler doesn't know about the backdoor.

              The rabbit hole goes pretty deep on this one.

          • by PaulBu ( 473180 ) on Wednesday December 22, 2010 @04:50PM (#34646072) Homepage

            "Reflections on trusting trust", by Ken Thompson:

            http://cm.bell-labs.com/who/ken/trust.html [bell-labs.com]

            Paul B.

        • by mysidia ( 191772 )

          If they can get a backdoor built into the compiler used to build the binaries for the general releases, the backdoor doesn't have to be anywhere in the source.

          This is why they should rebuild the compiler from source for every release, and make sure to publish the source code to that compiler, as well as the low-level code used to bootstrap that compiler, and always use a boot CD from the previous release to verify that the bootstrap compiler binary has not changed from the original version.

          The initia

          • by Minwee ( 522556 )

            This is why they should rebuild the compiler from source for every release, and make sure to publish the source code to that compiler,

            Okay then. What should they use to rebuild the compiler? Do they need to rebuild the compiler compiler? And what happens if the compiler compiler compiler compiler compiler has been compromised?

            It's turtles all the way down.

            • by mysidia ( 191772 )

              Okay then. What should they use to rebuild the compiler?

              This is called the 'bootstrap': a couple of pieces of software written in machine language whose sole purpose is to compile the compiler.

              The alternative is to hand-compile the compiler, by having a human read off the source code and manually translate each function into machine language. A very time-consuming process, and one only feasible with fairly simple compilers.

              However, you can use a very simple compiler to compile a more complex com

            • by jc42 ( 318812 ) on Wednesday December 22, 2010 @02:34PM (#34644318) Homepage Journal

              This is why they should rebuild the compiler from source for every release, and make sure to publish the source code to that compiler,

              Okay then. What should they use to rebuild the compiler? Do they need to rebuild the compiler compiler? And what happens if the compiler compiler compiler compiler compiler has been compromised?

              It's turtles all the way down.

              Nah, probably not. The techniques for doing this tend to be variants of the famous example that Ken Thompson published back in 1983: a compiler routine that recognizes a specific chunk of code somewhere in the victim software and adds the "backdoor" to the output. The meta stage consists of the compiler also recognizing the section of its own source code where this is done, and inserting the backdoor-insertion code there. This then allows you to remove the actual backdoor code from all the software, and recompiles will continue to insert it even though the code to do this no longer exists in source form anywhere.

              The conventional scheme to defeat this is to use multiple compilers to compile each other. The more compilers the better, since if you have N compilers, the insertion code has to be developed for each compiler, and each of the N compilers must recognize the appropriate insertion point in all N compilers. If you randomize the use of compilers, a single instance of compiler i not correctly inserting the backdoor-insertion code into compiler j will break the loop, and after a few compiles, the backdoors will all evaporate.

              This is actually a case where non-open code has a use. If you have one or more tightly-held compilers that you use as part of the random rotation, you can make it effectively impossible for an outside agency to successfully insert a backdoor-insertion routine into your other compilers, or into your system's binaries. This is most effective if you can keep these internal compilers a secret, of course, because the outside agency will attempt to bribe your people to get the backdoor-insertion code into those compilers, too.

              But each independently-developed compiler makes the intruder's job exponentially more difficult. Even a few compilers would suffice to defeat most existing "outside agencies", especially since it would be very difficult to hide the massive communication and bribery needed to keep the backdoor code installed and functional. And it's especially difficult with open-source compilers, where the hacker community has a strong motive (reputation) to find and expose any mysterious, undocumented code in the code base.

      • Re: (Score:2, Flamebait)

        OpenBSD is among the best audited code in the world.

        Citation needed. I'm not necessarily thinking the opposite, but is OpenBSD really that much audited? Are we talking about the kernel? The network stack? Or the encryption protocols?

        • by Anonymous Coward

          Do your own goddamn research, you lazy git. Stop shouting "citation needed". It wasn't funny when XKCD did it, and it got stale fast.

        • by Plunky ( 929104 )

          Citation needed.

          Ok then, the first hit from "openbsd auditing" [google.com] leads to an OpenBSD Security [openbsd.org] page which has a section claiming that OpenBSD has a continual audit process and that it is successful.

          I'm not necessarily thinking the opposite, but is OpenBSD really that much audited? Are we talking about the kernel? The network stack? Or the encryption protocols?

          As I understand it, OpenBSD refers to the whole release, everything they ship.

          Now, I'm not sure if claims from the OpenBSD marketing department actually

        • Flamebait indeed. It doesn't take much baiting for OpenBSD fans to fire up the torches.

          But here: http://www.openbsd.org/security.html#process [openbsd.org]

          A description of the auditing process. It's not as awesome as your average "they're among the best" proclaimers would think, but it's healthy. It in no way substantiates the claim, though.

          You're right to question. And it makes the OpenBSD fans look bad that you got modded flamebait.

      • by Nadaka ( 224565 )

        If you know who is suspect and have good version control, you can identify any changes they made and start looking for problems there. It won't be 100% sure, because there is always a possibility that you don't suspect the right people.

      • True, you cannot be 100% sure, but you are almost guaranteed to be safer using OpenBSD to secure your systems than any other operating system. Imagine using Windows to secure your infrastructure. I am not stupid enough to do that. Even Cisco and Linux have been compromised, but it is much, much harder. Despite these allegations, I would put an OpenBSD box in front of my infrastructure any day.
    • by Anonymous Coward on Wednesday December 22, 2010 @09:00AM (#34640496)

      I hope that he's right, but without a thorough audit, who can say?

      It is physically impossible that a backdoor makes it past De Raadt's ego into the kernel.

    • Re:Audit necessary (Score:5, Informative)

      by milonssecretsn ( 1392667 ) on Wednesday December 22, 2010 @09:04AM (#34640544)

      OpenBSD does have an ongoing code audit [openbsd.org]

      Perhaps not as thorough as you were suggesting. However, I think for others who are not familiar with OpenBSD's ongoing code audit, the above link will be essential for fully understanding these stories.

    • Re:Audit necessary (Score:5, Insightful)

      by Eil ( 82413 ) on Wednesday December 22, 2010 @09:17AM (#34640658) Homepage Journal

      As unlikely as it is that any backdoors have made it into OpenBSD, even an audit cannot conclusively prove that there are no backdoors in the code. Witness the Underhanded C Code Contest [xcott.com]. The goal of the contest is to write a chunk of code that does something, well, underhanded that is difficult to detect even upon close examination of the code. The winners have been quite successful. Even with only 15-20 lines of code, it's a challenge to locate the underhandedness even when you know exactly what you're looking for. The phrase "microscopic needle in a galactic haystack" comes to mind when imagining the challenge of finding malicious code that may or may not even be there, in a code base thousands or millions of lines long.

      • Re:Audit necessary (Score:4, Interesting)

        by ThePhilips ( 752041 ) on Wednesday December 22, 2010 @10:12AM (#34641212) Homepage Journal

        The goal of the contest is to write a chunk of code that does something, well, underhanded that is difficult to detect even upon close examination of the code.

        The first two examples on the front page wouldn't have passed even my shallow code review.

        The third sample failed at readability (ambiguous operator precedence) and I would have immediately subjected it to re-factoring.

        It is not that difficult to detect the problems.

        My first, the most generic rule of code review: code works much like the way it looks. And I know for a fact that OpenBSD folks use that rule too.

        P.S. The 3 samples I looked at are the winners from the year 2008.

        • by Rhaban ( 987410 )

          The third sample failed at readability (ambiguous operator precedence) and I would have immediately subjected it to re-factoring.

          The operator precedence has nothing to do with the maliciousness of the code (if the third sample is the code from John Meacham). This part could have been refactored any way you'd like; the exploit is in the algorithm itself.

          • Re: (Score:2, Interesting)

            by ThePhilips ( 752041 )

            To me, it doesn't matter where in the implementation the bug is, since it has to be rewritten anyway for readability reasons.

            It also BTW would trigger another alarm in the eyes of seasoned code reviewers: in the "isdigit() == true" branch it loses the read character, printing '0' instead.

            • by Anonymous Coward

              It's SUPPOSED to lose the character. The whole point is to censor an image, and it does that by replacing the to-be-censored region with black pixels (value 0). The evil part is the information it leaves behind in the resulting image. I don't think you understand how it works.

            • Re:Audit necessary (Score:5, Insightful)

              by Chris Burke ( 6130 ) on Wednesday December 22, 2010 @12:56PM (#34643246) Homepage

              To me, it doesn't matter where in the implementation the bug is, since it has to be rewritten anyway for readability reasons.

              Which is a fallacious viewpoint, because when you reject the patch, the author could easily recode it within the appropriate coding guidelines yet the bug would remain. In fact, you could have refactored the code yourself and yet still kept the malicious payload.

              Code style is important, and it's right to reject a patch over it. It's wrong to say this negates the need to actually find the bug. Which you didn't.

              It also BTW would trigger another alarm in the eyes of seasoned code reviewers: in the "isdigit() == true" branch it looses the read character, printing '0' instead.

              And then someone would say "No, that's the [intended, benign] purpose of the routine".

              So with the style issues resolved, and the thing you thought was the bug not being a bug at all, on what basis would this "seasoned code reviewer" reject the patch? At this point the only reason is because you know it's malicious. But if you didn't, it looks like this would have passed your review.

              Don't feel bad about that, though. Feel bad about thinking finding flaws in deliberately crafted malicious code is so easy when real seasoned code reviewers know it isn't.

            • by Anonymous Coward

              "seasoned" code reviewers who can't understand what the program is supposed to do in the first place? You are overestimating your competence.

              It's supposed to write a '0'! The sneaky bit is it writes extra zeroes depending on what is being "censored".

              Good luck with a rewrite when your resulting program has better readability but doesn't even work.

              See: http://underhanded.xcott.com/?p=8 [xcott.com]

              The object of this year's contest: write a short, simple C program that redacts (blocks out) rectangles in an image. The user

        • by jhol13 ( 1087781 )

          I call bullshit.

          I am certain that a few thousand lines of code like that, which is about the average C I have seen during the last ~20 years, would pass your audit.

          There is exactly zero chance that you would fix every single missing parenthesis.
          Besides, as pointed out, the missing parentheses have nothing to do with the problem.

        • It is not that difficult to detect the problems...

          Maybe not for an experienced code reviewer who's examining 20 lines of code for an extremely simple security need. In the real world it takes extraordinary resources (talent, discipline, passion) at both the individual and organizational level to produce "logically" secure software. Even then, it usually takes academic/hacker security research to find subtle, indirect attacks that depend on power consumption, network behavior, and other such complexities

      • As unlikely as it is that any backdoors have made it into OpenBSD, even an audit cannot conclusively prove that there are no backdoors in the code. Witness the Underhanded C Code Contest.

        Except, of course, they know who these contributors were, and they have a source control system. Scrutinizing their changes would be trivial.

        Of course, it's always possible they worked through third-party intermediates, or broke into the SCM, but if that's the case, the OpenBSD team has far bigger problems, IMO.

      • Unfortunately it seems that the contest is no more, as they still haven't announced the winners from last year's contest, which ended 9 months ago.
    • Re:Audit necessary (Score:5, Insightful)

      by mysidia ( 191772 ) on Wednesday December 22, 2010 @10:01AM (#34641108)

      I hope that he's right, but without a thorough audit, who can say?

      The whole scare behind crypto backdoors is that they can include side-channel leakage, and they can include subtle leakage through the underlying drivers, which can amount to elaborate timing vulnerabilities and other intentionally introduced vulnerabilities that are poorly understood by developers in general.

      Remember... even though the crypto in the SSH protocol was perfectly sound, as you were typing a password in SSH a timing attack could be used to assist an attacker in guessing the password typed. For example, the minute timing between keystrokes can identify some passwords that are much more likely to have been typed than others, reducing the attack required to something much easier than brute force.

      You can have a backdoor without even revealing the key material or having an obvious vulnerability; all the 3 letter agencies need is a mechanism of reducing the work to crack the key to something much less than brute force. If the operation of the cryptosystem in any way makes the key easier to get than brute force, then the attacker's work is massively reduced.

      In other words, it's so subtle that even a thorough audit cannot say, and a complete rewrite of the code would be required to guarantee no intentional backdoors by the original authors (though it won't guarantee no backdoors by the new authors, and it definitely won't guarantee no subtle vulnerabilities).

      It's possible for there to be no visible error for an audit to discover, and yet, because of the way the code is structured, information could still be vulnerable through essentially a form of compromising virtual emissions.

      • by Ecks ( 52930 )
        Anyone using ssh to its maximum security potential isn't sending a password across the channel more than once. On new systems I use some variation of the following to push my key onto the remote system.

        $ ssh-add
        $ ssh -A myUser@remoteSystem "ssh-add -L >> ~/.ssh/authorized_keys; chmod 644 ~/.ssh/authorized_keys"

        Anyone using ssh with passwords would do well to read up on public key authentication in ssh.

        -- Ecks
        • $ man ssh-copy-id
          • by pixr99 ( 560799 )

            $ man ssh-copy-id

            This doesn't appear to exist on FreeBSD systems. I did check out a linux system though and found that I had it. It's a shell script that basically performs exactly what Ecks and I do to install our keys.

      • The problem of perfect security in general is that you need to secure the whole system. That's an impossibility, which is why three-letter agencies also secure the premises the systems are housed in.

        Of course, you can compete on complexity (e.g. fighting timing attacks with one-time pads or otherwise inserting bits of entropy into the system), but there's a point when things just get too complex to be usable.

        The only thing the masses have against a determined attacker is to be one among the herd.

        • by mysidia ( 191772 )

          The only thing the masses have against a determined attacker is to be one among the herd.

          The problem is the "herd" doesn't use encryption.

          Using encryption separates you from the herd.

          The herd sends sensitive documents over unencrypted e-mail. The herd sends passwords over unencrypted IM.

          The herd uses FTP and WebDAV over HTTP to store/collaborate on documents.

          Can't win, I am afraid. Unless your encryption is completely passive and undetectable.

      • by jthill ( 303417 )
        You're talking about covert channels, which not even the highest of the old Orange Book security ratings contemplates forbidding. The most they required, even for an A1 rating, was formal bandwidth analysis and mitigation, and an explanation of why the remaining channels couldn't be reduced. They've been superseded, but I don't think any commercial system ever got an A1 rating.

        You don't have to try to put such weaknesses into cryptographic code, and they (mostly) are vulnerable only to very, very high-p

    • by lysdexia ( 897 )

      From TFA:

      Since Perry's allegations were made public, developers have found two new bugs in OpenBSD, but de Raadt said Tuesday that he thinks that neither of them is a back door.

      In fact, de Raadt seems to think that the whole incident has helped OpenBSD. "I am happy that people are taking the opportunity to audit an important part of the tree which many had assumed -- for far too long -- to be safe as it is," he said.

  • ...can be made over something so obvious. OpenBSD's code has been screened again and again. If something was amiss somebody would have noticed it. Even now that such allegations have been made, anybody could go over the code and check for such backdoors. Yet nothing has been reported. What the f..k. I'll continue putting my trust in OpenBSD for security in data communication.
    • by tomz16 ( 992375 ) on Wednesday December 22, 2010 @08:23AM (#34640240)

      ...can be made over something so obvious. OpenBSD's code has been screened again and again. If something was amiss somebody would have noticed it . . .

      Yeah just look for the parts commented //super secret FBI backdoor, shhh!

      You obviously have not seen things like this http://underhanded.xcott.com/ [xcott.com]

    • Re: (Score:2, Insightful)

      The OpenBSD source, as is well known, is the best for security, as everything is screened and checked; this would have been discovered in that process. Microsoft, who checks their source? They could have any backdoors installed and how would you know? The Open Source way is the best way.

    • by Anonymous Coward

      First, most "open source" code is written by employees working for a corporation.

      Second, nobody reviews it outside a very small number of people. It's easy to miss things like well-hidden back doors. And that's not even getting into the politics of open source review and the insular cliques of developers - just try and get anyone to listen to you when you start saying you found a back door.

      Third, it's cryptographic code. There are probably an uncountable number of "back doors" that could be incorporated

    • by Anonymous Coward

      There are already admissions made by Theo and others that there *are* some security problems with the code in question, which have been addressed with commits on 12/15 and 12/16 of this year. Whether or not these are the "backdoors" originally referred to is unknown. Here's validation of my statement -- read the entire post from De Raadt, as it includes admissions as well as the commits themselves. And don't forget to read the very last paragraph of his post too.

      http://marc.info/?l=openbsd-tech&m=129 [marc.info]

  • Please pardon my likely sheer ignorance (or even misunderstanding) on this topic, but how is it possible for someone to code a backdoor into encryption software in an open source project..? I mean, wouldn't someone notice..? Isn't that like someone just making another entrance to your house and then painting it over to match the brickwork..?

    Unless, of course, all code is accepted in general good faith and there are very few eyes that are looking at this sort of thing.

    Or it's open source code talking to clos

    • by AccUser ( 191555 )

      The backdoor in question might simply be a guaranteed or determinable byte-sequence in a stream, which could aid in the decoding of said stream. It need not be a simple --with-backdoor option passed on the command line... ;)

      • by 0123456 ( 636235 )

        The backdoor in question might simply be a guaranteed or determinable byte-sequence in a stream, which could aid in the decoding of said stream. It need not be a simple --with-backdoor option passed on the command line... ;)

        Except the output of the IPSEC stack has to interoperate with other IPSEC stacks. IPSEC basically takes TCP/IP data, encrypts it and sticks on some headers.... if it doesn't do that the correct way then it's not going to be able to talk to machines using a different stack. Even if it only corrupts a small number of packets, someone's eventually likely to notice that some are getting dropped.

        Certainly it could generate poor random keys, or somehow leak private key bits into the key or random padding so that

    • Comment removed (Score:5, Insightful)

      by account_deleted ( 4530225 ) on Wednesday December 22, 2010 @08:51AM (#34640430)
      Comment removed based on user account deletion
    • by Knuckles ( 8964 )

      Bugs are often not obvious. As someone else pointed out above, the code may even look perfectly fine but can still exploit compiler quirks. Also, look at http://www.ioccc.org/ [ioccc.org]

    • Isn't that like someone just making another entrance to your house and then painting it over to match the brickwork..?

      Check out the Underhanded C Contest [xcott.com]. Sure, a patch containing, "if(packet_csum=SEKRUT_FBI_BACKD00R_P4KT) { /* d0 3v1Lz */ }" would get noticed pretty quickly. But good security is really subtle; it's probably difficult, but not impossible, to make proper-looking code that actually screws up in just the right places. The main problem is that anything that subtle is as likely to get broken

      • Check out the Underhanded C Contest

        I have checked it out, and yes, all the code there has trivial coding bugs which are very easy to spot for professional coders.

        • Not to mention the fact that most of the bugs are hidden in idioms that OpenBSD's style(7) explicitly prohibits. These would be refactored before being committed, and the hidden bugs would probably be fixed without anyone noticing that they were there...
    • by Anonymous Coward

      Please pardon my likely sheer ignorance (or even misunderstanding) on this topic, but how is it possible for someone to code a backdoor into encryption software in an open source project..? I mean, wouldn't someone notice..?

      Like how everyone saw the UnrealIRCD trojan [linux.com] as soon as it was inserted in the source? Oh wait...

      • by m50d ( 797211 )
        You mean the one that wasn't actually inserted into the source, but rather the binary was replaced on their server? The one that could never have happened on a system that uses signed packages (as I believe Debian does by default now)?
    • by gsslay ( 807818 )

      Step 1. Contribute lots of shoddy, obfuscated code that no-one will follow without a lot of effort. Comment the code minimally, with suggestions that what it's doing is so obvious to anyone who knows anything that it's hardly worth mentioning.

      Step 2. Conceal deep in your shoddy obfuscated code your backdoor. Do not call it Back_door_func.

      Step 3. As long as your code works, no-one will touch it or try to pick it apart. Hope no other coder is brave enough to suggest your code is beyond all understanding.

      • by johneee ( 626549 )

        Except that with a project as high profile as OpenBSD, code that shoddy would never be accepted.

        I hope.

        Right? Anyone?

        • by arivanov ( 12034 )

          Every commit needs to be signed off by at least several more people as far as I know, and a lot of commits to key parts need to be signed off by Mr Ego himself to this day.

          Obfuscated? Shoddy? Forget that.

          The same is the story with all BSDs. They can be used as a textbook on how to write code and in a lot of places you do not even need the (otherwise excellent) documentation to determine what is going on. It is just readable (TM).

    • by Andy Dodd ( 701 )

      The original email sort of gave hints to this, referencing side channel/key leaking vulnerabilities. Side channel attacks can be VERY esoteric and difficult to identify - Look at Adi Shamir's work with abusing the Pentium 4's HyperThreading implementation.

      However, I believe within the first days of the audit, some of the code contributions from Netsec appeared to, if anything, be an attempt at eliminating a potential timing-based side channel attack.

      Honestly, I still can't figure out why Theo even believes

  • by brunes69 ( 86786 ) <slashdot@nOSpam.keirstead.org> on Wednesday December 22, 2010 @08:40AM (#34640360)

    Since the useless summary did not include one

    http://marc.info/?l=openbsd-tech&m=129296046123471&w=2 [marc.info]

  • Don't know why everyone's so concerned? If the FBI put backdoors into BSD or any other operating system, then it's for a good purpose - to protect us. "Sure there are some problem but they are doing the best they can, and we should not criticize them." - B5 chick

    • I sincerely hope that was sarcasm.

      Sad thing is I know an Art Teacher just like that. "Stop criticizing the cops." And I reply, "But the video shows them beating a citizen who had done nothing wrong (just walking his dog)." "I'm sure if the cops were beating him, they had a justified reason to do so." "....."

      Anyway the FBI should not be spying on us via backdoors in our OSes (or phones or thermostats or cars) - it violates the LEGAL requirement to obtain a search warrant from a judge.

  • by DoofusOfDeath ( 636671 ) on Wednesday December 22, 2010 @08:54AM (#34640444)

    If the FBI did this without a court order, wouldn't they have been in breach of laws regarding attempted wiretapping and/or unauthorized computer access?

    If so, have we just accepted that the FBI, CIA, and NSA break laws with impunity, and that there's nothing we can do about it?

    • Yup. Pretty much. Welcome to Amerika! It's been that way at least since Hoover, and it's only getting worse.
    • But I think it would only be against the law for law enforcement to use such backdoors. I don't know that any existing law prohibits law enforcement agencies from the creation of such backdoors for possible future use.

      • Well, it shows that you are not a lawyer because the DoJ can invoke a national security interests clause to be able to utilize these back doors. It is legal but very, very shady and unethical.
    • by Andy Dodd ( 701 )

      Even more difficult to believe, the claim is they did it in order to spy on another organization within the DOJ.

      Legal or not, any truth to this would have ignited a political shitstorm within the DOJ.

      Also note: If there were anything possibly illegal about this, the fact that the alleged target organization of the backdooring (EOUSA) is FULL OF LAWYERS, you can bet someone would've been torn a new legal asshole over this.

    • If the FBI did this without a court order, wouldn't they have been in breech of laws regarding attempted wiretapping and/or unauthorized computer access?

      Probably not; AFAIK -- even assuming wiretapping laws would apply -- there is no law prohibiting the FBI from contracting others to build in a capacity that could be used for wiretapping. The only time they would need a warrant is to actually make use of the facility.

  • What's the difference between a spy and a criminal? A government badge. One of the mandates for the existence of organizations like the CIA is to break laws. Granted, usually laws in other countries, but to break laws nonetheless. The NSA to a lesser extent, as their job is primarily to make and break codes.

  • by martyros ( 588782 ) on Wednesday December 22, 2010 @09:01AM (#34640510)

    A link to Theo's post [marc.info] on the subject is much more informative.

    Highlights:

    • Two of the guys named in the original allegation did work on the security stack, but
    • Almost certainly didn't check in any malicious code, and
    • "wrote much code in many areas that we all rely on. Daily. Outside the ipsec stack."

    Also:

    I believe that NETSEC was probably contracted to write backdoors as alleged. If those were written, I don't believe they made it into our tree. They might have been deployed as their own product.

  • by Giant Ape Skeleton ( 638834 ) on Wednesday December 22, 2010 @09:08AM (#34640572) Homepage
    "I doubt it, therefore it's not true": Security through incredulity!
  • by Dcnjoe60 ( 682885 ) on Wednesday December 22, 2010 @09:48AM (#34640958)

    Hah, that's just like a government contractor -- write a backdoor into a system that doesn't actually work. The source has been available ever since the so-called announcement; if this back door were real, wouldn't a patch have been issued for it by now?

    Personally, I think that the leak got it wrong, it's not about making OpenBSD insecure, it was to openly create the BSoD in another well known operating system.

    • by 0123456 ( 636235 )

      Hah, that's just like the government contractor -- write a backdoor into a system that doesn't actually work.

      Does this mean that the government can demand their money back?

  • by curious.corn ( 167387 ) on Wednesday December 22, 2010 @10:51AM (#34641712)

    Backdoors, who needs backdoors?

    Forgetting to close an attack vulnerability in all but the software encryption implementation is a much more dramatic and questionable error. Anyone who has taken the trouble to add hardware acceleration to their encryption stands a good chance of having something to protect from undesired access.

    But, by doing so they have exposed themselves to the vulnerability itself. Brilliant!

  • by DaMattster ( 977781 ) on Wednesday December 22, 2010 @11:28AM (#34642150)
    FUD is already getting spread around about OpenBSD; see the article "Allegations of OpenBSD back doors may be true" [bit.ly], which turned up as a comment within a Linux Journal piece. The journalist rambles on about far-reaching impacts and a doomsday scenario for the project.

    Okay, reality check. If backdoors are found, (a) Theo and company immediately release patches closing them, and (b) the FBI gets another black eye for being caught in a major public lie. Far reaching, my ass. In the end, the only change made will be to close off core commit privileges to US-based OpenBSD contributors; only certain trusted individuals will keep core commit access.

    Say what you want about Theo, the man sticks to his principles like cement. Even if back doors are found, I'll still continue to trust OpenBSD as the most secure OS in the world. Why? For every security hole found in OpenBSD, I'll bet there are several hundred in other operating systems. A 1/~250 ratio is not bad at all!
    • by durdur ( 252098 )

      Certainly OpenBSD has a good track record at finding and fixing security flaws. But in this case, I wouldn't assume the flaw, if any, can be found quickly and fixed. The post alleging it certainly wasn't very detailed.

      • I never mentioned or assumed anything about any speed of finding the flaws. I simply wrote that, if found, the OpenBSD team will immediately disclose and offer patches.
    • by AHuxley ( 892839 )
      A 1/~250 ratio is not bad at all!
      After Enigma during WW2 and its 'safe' use after WW2, the Libyan and Iranian embassy leaks, weakened banking security, weakened telco security, etc., the lack of any attempt at a back door would be the strange thing.
      The one safe hobby OS that the US gov let slip away?
  • Was there someone behind him showing today's newspaper headlines when he made the statement? We just want to make sure...

  • Not convinced .. (Score:3, Interesting)

    by 0dugo0 ( 735093 ) on Wednesday December 22, 2010 @12:16PM (#34642774)

    Paranoid mickey's take on it .. Interesting read.
    http://mickey.lucifier.net/b4ckd00r.html [lucifier.net]

  • Has anyone else considered the timing of this?

    Just as Wikileaks has made it fashionable to expose government wrongdoings and showed how feasible it is to get and handle information that government agencies are interested in, comes the allegation that the most secure system in the world isn't secure.

    The vulnerability would specifically be one that the U.S. agencies can exploit. In other words, the agencies that serve the government that is most embarrassed by recent leaks seem to have more teeth now. At the

  • What is the process for vetting developers who contribute to an open source project?

    I know the answer may be that in most cases there isn't any. Contributors are no doubt judged by their code alone; nobody bothers to find out what ties the individual has.

    Open source is great at peer review; the resulting code quality has to be good due to the sheer brute force of eyes looking it over. But you have to wonder, since it's perfectly possible to hide malicious code in plain sight, code that actually does
  • It's not like he's going to admit there could be a gaping hole in the code. But it would be a lot more comforting to the people who rely on it if they did a code audit, like, yesterday, so he doesn't have to use the word 'doubt'.
