
De Raadt Doubts Alleged Backdoors Made It Into OpenBSD 136

itwbennett writes "In a follow-up to last week's controversy over allegations that the FBI installed a number of back doors in the encryption software used by the OpenBSD operating system, OpenBSD lead developer Theo de Raadt said on a discussion list Tuesday that he believes a government contracting firm that contributed code to his project 'was probably contracted to write backdoors,' which would grant secret access to encrypted communications, but that he doesn't think any of this software made it into the OpenBSD code base."
  • Audit necessary (Score:5, Insightful)

    by dewarrn1 ( 985887 ) on Wednesday December 22, 2010 @09:10AM (#34640146)
    I hope that he's right, but without a thorough audit, who can say?
  • by bejiitas_wrath ( 825021 ) <johncartwright302@gmail.com> on Wednesday December 22, 2010 @09:24AM (#34640248) Homepage Journal

    The OpenBSD source is, as is well known, the best for security, since everything is screened and checked; a backdoor would have been discovered in that process. Microsoft, on the other hand: who checks their source? They could have any number of backdoors installed, and how would you know? The Open Source way is the best way.

  • by Anonymous Coward on Wednesday December 22, 2010 @09:28AM (#34640280)

    First, most "open source" code is written by employees working for a corporation.

    Second, nobody reviews it outside a very small number of people. It's easy to miss things like well-hidden back doors. And that's not even getting into the politics of open source review and the insular cliques of developers - just try and get anyone to listen to you when you start saying you found a back door.

    Third, it's cryptographic code. There are probably an uncountable number of "back doors" that could be incorporated into the code that would get by almost all very experienced and very good cryptographic programmers. Just write the code in such a way that you remove a little bit of randomness. Hell, maybe you can write what looks like perfect code but rely on a quirky compiler optimization to do your work for you. It won't matter how many times you screen the source code for something like that. And how many good, experienced cryptographic coders spend their spare time reviewing BSD code in detail anyway?

  • by vbraga ( 228124 ) on Wednesday December 22, 2010 @09:51AM (#34640430) Journal

    One of the problems is the lack of people with enough knowledge and time to review, for free, something like cryptographic code.

  • by DoofusOfDeath ( 636671 ) on Wednesday December 22, 2010 @09:54AM (#34640444)

    If the FBI did this without a court order, wouldn't they have been in breach of laws regarding attempted wiretapping and/or unauthorized computer access?

    If so, have we just accepted that the FBI, CIA, and NSA break laws with impunity, and that there's nothing we can do about it?

  • Re:Audit necessary (Score:5, Insightful)

    by Eil ( 82413 ) on Wednesday December 22, 2010 @10:17AM (#34640658) Homepage Journal

    As unlikely as it is that any backdoors have made it into OpenBSD, even an audit cannot conclusively prove that there are no backdoors in the code. Witness the Underhanded C Code Contest [xcott.com]. The goal of the contest is to write a chunk of code that does something, well, underhanded that is difficult to detect even upon close examination of the code. The winners have been quite successful. Even with only 15-20 lines of code, it's a challenge to locate the underhandedness even when you know exactly what you're looking for. The phrase "microscopic needle in a galactic haystack" comes to mind when imagining the challenge of finding malicious code that may or may not even be there, in a code base thousands or millions of lines long.

  • by Dcnjoe60 ( 682885 ) on Wednesday December 22, 2010 @10:48AM (#34640958)

    Hah, that's just like a government contractor: write a backdoor into a system that doesn't actually work. Since the so-called announcement, the source has been available. If this back door were real, wouldn't a patch have been issued for it by now?

    Personally, I think that the leak got it wrong, it's not about making OpenBSD insecure, it was to openly create the BSoD in another well known operating system.

  • Re:Audit necessary (Score:5, Insightful)

    by mysidia ( 191772 ) on Wednesday December 22, 2010 @11:01AM (#34641108)

    I hope that he's right, but without a thorough audit, who can say?

    The whole scare behind crypto backdoors is that they can include side-channel leakage, including subtle leakage through the underlying drivers, which can amount to elaborate timing vulnerabilities and other intentionally introduced flaws that are poorly understood by developers in general.

    Remember: even though the crypto in the SSH protocol was perfectly sound, as you were typing a password over SSH, a timing attack could be used to help an attacker guess the password typed. For example, the minute timing between keystrokes can identify some passwords as much more likely to have been typed than others, reducing the attack to something much easier than brute force.

    You can have a backdoor without revealing any key material or having an obvious vulnerability; all the three-letter agencies need is a mechanism for reducing the work to crack the key to something much less than brute force. If the operation of the cryptosystem in any way makes the key easier to recover than by brute force, the attacker's work is massively reduced.

    In other words, it can be so subtle that even a thorough audit cannot say, and a complete rewrite of the code would be required to guarantee no intentional backdoors by the original authors (though it won't guarantee no backdoors by the new authors, and it definitely won't guarantee no subtle vulnerabilities).

    There can be no visible error for an audit to discover, and yet the way the code is structured could still leave information vulnerable through what is essentially a form of compromising virtual emissions.

  • Re:Audit necessary (Score:4, Insightful)

    by drinkypoo ( 153816 ) <martin.espinoza@gmail.com> on Wednesday December 22, 2010 @11:41AM (#34641596) Homepage Journal

    And while you are entirely correct, the differentiating factor between OpenBSD and basically any other operating system is that it is under continual code review for things that might cause security problems, which has famously rendered OpenBSD immune to a number of attacks to which other systems are vulnerable, including systems that started from the same common codebase. As such, OpenBSD seems the least likely of all the projects that could have absorbed this code.

  • Re:Audit necessary (Score:4, Insightful)

    by Anonymous Coward on Wednesday December 22, 2010 @01:06PM (#34642664)

    That's not the point. The point is that every discussion these days ends in "citation needed" when there are no more arguments.

    The _fact_ that BSD gets audited constantly can be found easily; it's not obscure knowledge.

    But, let me give you an example why this is annoying: You say that the burden of proof lies on the guy making the bold statement. Well, is that a fact? Can you cite some references for that? How are you so sure? Then you state that OpenBSD is an irrelevant niche OS. Well, that's your opinion, I think, unless you can point to some peer-reviewed research on the matter. And I could go on.

    See how you can't have a normal discussion when one party doesn't bring arguments, but only shouts "citation needed"?

  • Re:Audit necessary (Score:5, Insightful)

    by Chris Burke ( 6130 ) on Wednesday December 22, 2010 @01:56PM (#34643246) Homepage

    To me, it doesn't matter where in the implementation the bug is, since it has to be rewritten anyway for readability reasons.

    Which is a fallacious viewpoint, because when you reject the patch, the author could easily recode it within the appropriate coding guidelines yet the bug would remain. In fact, you could have refactored the code yourself and yet still kept the malicious payload.

    Code style is important, and it's right to reject a patch over it. But it's wrong to say this negates the need to actually find the bug. Which you didn't.

    It also, BTW, would trigger another alarm in the eyes of seasoned code reviewers: in the "isdigit() == true" branch it loses the read character, printing '0' instead.

    And then someone would say "No, that's the [intended, benign] purpose of the routine".

    So with the style issues resolved, and the thing you thought was the bug not being a bug at all, on what basis would this "seasoned code reviewer" reject the patch? At this point the only reason is because you know it's malicious. But if you didn't, it looks like this would have passed your review.

    Don't feel bad about that, though. Feel bad about thinking finding flaws in deliberately crafted malicious code is so easy when real seasoned code reviewers know it isn't.

  • Re:Audit necessary (Score:4, Insightful)

    by jc42 ( 318812 ) on Wednesday December 22, 2010 @03:03PM (#34643970) Homepage Journal

    They spent LOTS of time auditing.

    Looking for code taken from somewhere else is relatively simple when you have access to both sets of code

    So did MS actually show the ReactOS people the supposedly stolen code? A few years ago, when MS made similar accusations of stolen Windows code in linux, there were lots of calls for MS to tell us exactly what code they were talking about. MS simply stonewalled those requests, and continued to make vague, non-specific public accusations that couldn't be validated. It was widely understood to be a marketing ploy, to put the fear of Microsoft's lawyers into potential linux customers' minds.

    If a company is serious about infringements, the laws generally require that the accusers state explicitly what is being infringed where, and give the culprits a chance to remove the offending infringement. An accusation without the specifics is legally worthless, since nobody can stop doing something if they don't know what the something is.

    There was also the suspicion that, if there was common code in both OSs, it was because MS "stole" the publicly-published linux code rather than the other way around. But, while that's more credible (due to the difficulty in getting a copy of MS's source code), it's a different story than we're talking about here.

    There was at least one bit of humor in the "linux stole from Windows" story. At one point, a MS rep mentioned a line count for the stolen code. Someone did a count, and said that the number matched the number of "/*" and "*/" lines in the linux kernel source. This might sound frivolous, but it goes along with the famous story of the Sys/V version of /bin/true, which was a shell script consisting solely of a blank line and an AT&T copyright notice. MS claiming copyright ownership of comment delimiters would be roughly similar to AT&T claiming copyright ownership of a blank line.

  • by jc42 ( 318812 ) on Wednesday December 22, 2010 @03:34PM (#34644318) Homepage Journal

    This is why they should rebuild the compiler from source for every release, and make sure to publish the source code to that compiler,

    Okay then. What should they use to rebuild the compiler? Do they need to rebuild the compiler compiler? And what happens if the compiler compiler compiler compiler compiler has been compromised?

    It's turtles all the way down.

    Nah, probably not. The techniques for doing this tend to be variants of the famous example that Ken Thompson published back in 1983: a compiler routine that recognizes a specific chunk of code somewhere in the victim software and adds the "backdoor" to the output. The meta stage consists of the compiler also recognizing the section of its own source code where this is done, and inserting the backdoor-insertion code there. This then allows you to remove the actual backdoor code from all the software, and recompiles will continue to insert it even though the code to do this no longer exists in source form anywhere.

    The conventional scheme to defeat this is to use multiple compilers to compile each other. The more compilers the better, since if you have N compilers, the insertion code has to be developed for each compiler, and each of the N compilers must recognize the appropriate insertion point in all N compilers. If you randomize the use of compilers, a single instance of compiler i not correctly inserting the backdoor-insertion code into compiler j will break the loop, and after a few compiles, the backdoors will all evaporate.

    This is actually a case where non-open code has a use. If you have one or more tightly-held compilers that you use as part of the random rotation, you can make it effectively impossible for an outside agency to successfully insert a backdoor-insertion routine into your other compilers, or into your system's binaries. This is most effective if you can keep these internal compilers a secret, of course, because the outside agency will attempt to bribe your people to get the backdoor-insertion code into those compilers, too.

    But each independently-developed compiler makes the intruder's job exponentially more difficult. Even a few compilers would suffice to defeat most existing "outside agencies", especially since it would be very difficult to hide the massive communication and bribery needed to keep the backdoor code installed and functional. And it's especially difficult with open-source compilers, where the hacker community has a strong motive (reputation) to find and expose any mysterious, undocumented code in the code base.
