De Raadt Doubts Alleged Backdoors Made It Into OpenBSD 136
itwbennett writes "In a follow-up to last week's controversy over allegations that the FBI installed a number of back doors into the encryption software used by the OpenBSD operating system, OpenBSD lead developer Theo de Raadt said on a discussion list Tuesday that he believes a government contracting firm that contributed code to his project 'was probably contracted to write backdoors,' which would grant secret access to encrypted communications, but that he doesn't think any of this software made it into the OpenBSD code base."
Audit necessary (Score:5, Insightful)
Re:Audit necessary (Score:5, Informative)
Even with a thorough audit you will never be sure. That's the beauty of these kinds of accusations: no matter what you do, you can never be 100% sure.
OpenBSD is among the best audited code in the world. People have been looking for this backdoor specifically for an entire week and nothing fishy has been found yet.
Re:Audit necessary (Score:5, Interesting)
Well, great way to halt the actual development, right?
Remember how Microsoft accused ReactOS of copying NT code?
They spent LOTS of time auditing.
Re: (Score:2)
They spent LOTS of time auditing.
Looking for code taken from somewhere else is relatively simple when you have access to both sets of code -- all it takes is a program that looks for the same code in each set. (It's not trivial, mind you, but it's not terribly difficult.)
Looking for backdoors or cryptographic weaknesses (intentional or otherwise) -- that's MUCH harder.
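The "same code in each set" search described above can be sketched as a toy line-window matcher. Everything here (the function name, the k-line window idea) is illustrative only; real clone detectors normalize whitespace, identifiers, and tokens before comparing, which this sketch skips entirely:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Report whether any run of `k` consecutive lines from file A also
 * occurs, in order, in file B. Quadratic and naive on purpose: the
 * point is that the basic mechanism is simple, as the comment above
 * says, not that this is how production tools do it. */
int window_match(const char **a, size_t na,
                 const char **b, size_t nb, size_t k)
{
    if (k == 0 || na < k || nb < k)
        return 0;
    for (size_t i = 0; i + k <= na; i++)
        for (size_t j = 0; j + k <= nb; j++) {
            size_t m = 0;
            while (m < k && strcmp(a[i + m], b[j + m]) == 0)
                m++;
            if (m == k)
                return 1;   /* found a k-line run present in both */
        }
    return 0;
}
```

Finding an identical run this way is mechanical; deciding whether a shared fragment is copied, coincidental, or boilerplate is the hard (and legal) part.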
Re:Audit necessary (Score:4, Insightful)
And while you are entirely correct, the differentiating factor between OpenBSD and basically any other operating system is that it is under continual code review for things that might cause security problems, which has famously rendered OpenBSD immune to a number of attacks to which other systems are vulnerable, including systems which started with the same common codebase. As such OpenBSD seems least likely of all possible projects which could have absorbed this code.
Re:Audit necessary (Score:4, Insightful)
They spent LOTS of time auditing.
Looking for code taken from somewhere else is relatively simple when you have access to both sets of code
So did MS actually show the ReactOS people the supposedly stolen code? A few years ago, when MS made similar accusations of stolen Windows code in linux, there were lots of calls for MS to tell us exactly what code they were talking about. MS simply stonewalled those requests, and continued to make vague, non-specific public accusations that couldn't be validated. It was widely understood to be a marketing ploy, to put the fear of Microsoft's lawyers into potential linux customers' minds.
If a company is serious about infringements, the laws generally require that the accusers state explicitly what is being infringed where, and give the culprits a chance to remove the offending infringement. An accusation without the specifics is legally worthless, since nobody can stop doing something if they don't know what the something is.
There was also the suspicion that, if there was common code in both OSs, it was because MS "stole" the publicly-published linux code rather than the other way around. But, while that's more credible (due to the difficulty in getting a copy of MS's source code), it's a different story than we're talking about here.
There was at least one bit of humor in the "linux stole from Windows" story. At one point, a MS rep mentioned a line count for the stolen code. Someone did a count, and said that the number matched the number of "/*" and "*/" lines in the linux kernel source. This might sound frivolous, but it goes along with the famous story of the Sys/V version of /bin/true, which was a shell script consisting solely of a blank line and an AT&T copyright notice. MS claiming copyright ownership of comment delimiters would be roughly similar to AT&T claiming copyright ownership of a blank line.
Re: (Score:2)
I believe you are confusing Microsoft with the SCO Group (which took a lot of money from Microsoft for a "licence", though it's unclear why MS actually needed one).
The SCO Group said that Linux had stolen a lot of Unix SysV code, but refused to state what that code was (because then the Linux developers would take it out... WTF?). They did show some alleged parts, and the Open Source community essentially shredded them.
Microsoft, on the other hand, has continually said that Linux infringes on its patents.
Re: (Score:2)
It took a while, but they actually audited the whole code base, and they documented how they came up with most of it... they really did a good job... they are moving slowly but doing phenomenal work
The code doesn't even have to be in the source (Score:3, Interesting)
If they can get a backdoor built into the compiler used to build the binaries for the general releases, the backdoor doesn't have to be anywhere in the source.
So, yeah, an audit isn't foolproof.
Re: (Score:2)
So audit the compiler. ;) And then audit the compiler that compiled the compiler. In the end I suppose you need to build a compiler by hand to make sure no backdoors are present.
Re: (Score:1)
In the end, you'd have to build the computer and all its components by hand, at least from the standpoint of Thompson's "Reflections on Trusting Trust".
Re: (Score:2)
Can't you just compare your compiler binary with a known-good one? If they're different when they should be the same, then the warning bells go off. And borrowing a gcc binary from someplace far away from your codebase and toolchain, and trusted, ought to be a simple way to bootstrap back into safe territory?
Re: (Score:2)
Comparing binaries isn't a good idea. Different C++ compilers have different name-mangling schemes. The binary output of a C/C++ compiler isn't standardized.
Re: (Score:1)
Because you can't effectively compare a binary to source code.
You could compare a binary to another binary compiled from known good source.
But that presumes that the compiler used to compile the known good source doesn't contain the backdoor.
There are other variations of this, such as decompiling the binary and comparing the output to the original source. But that presumes that the decompiler doesn't know about the backdoor.
The rabbit hole goes pretty deep on this one.
Yes, you are right... (Score:5, Informative)
"Reflections on trusting trust", by Ken Thompson:
http://cm.bell-labs.com/who/ken/trust.html [bell-labs.com]
Paul B.
Re: (Score:1)
If they can get a backdoor built into the compiler used to build the binaries for the general releases, the backdoor doesn't have to be anywhere in the source.
This is why they should rebuild the compiler from source for every release, and make sure to publish the source code to that compiler, as well as the low-level code used to bootstrap that compiler, and always use a boot CD from the previous release to verify that the bootstrap compiler binary has not changed from the original version.
The initia
Re: (Score:2)
Okay then. What should they use to rebuild the compiler? Do they need to rebuild the compiler compiler? And what happens if the compiler compiler compiler compiler compiler has been compromised?
It's turtles all the way down.
Re: (Score:1)
Okay then. What should they use to rebuild the compiler?
This is called the 'bootstrap'. A couple pieces of software written in machine language whose sole purpose is to compile the compiler.
The alternative is to hand-compile the compiler, by having a human read off the source code and manually translate by hand each function into machine language. A very time consuming process, and only feasible with fairly simple compilers.
However, you can use a very simple compiler to compile a more complex com
Re:The code doesn't even have to be in the source (Score:4, Insightful)
Okay then. What should they use to rebuild the compiler? Do they need to rebuild the compiler compiler? And what happens if the compiler compiler compiler compiler compiler has been compromised?
It's turtles all the way down.
Nah, probably not. The techniques for doing this tend to be variants of the famous example that Ken Thompson published back in 1983, and consist of a compiler routine that recognizes a specific chunk of code somewhere in the victim software and adds the "backdoor" to the output. The meta stage consists of the compiler also recognizing the section of its own source code where this is done, and inserting the backdoor-insertion code there. This then allows you to remove the actual backdoor code from all the software, and recompiles will continue to insert it even though the code to do this no longer exists in source form anywhere.
The conventional scheme to defeat this is to use multiple compilers to compile each other. The more compilers the better, since if you have N compilers, the insertion code has to be developed for each compiler, and each of the N compilers must recognize the appropriate insertion point in all N compilers. If you randomize the use of compilers, a single instance of compiler i not correctly inserting the backdoor-insertion code into compiler j will break the loop, and after a few compiles, the backdoors will all evaporate.
This is actually a case where non-open code has a use. If you have one or more tightly-held compilers that you use as part of the random rotation, you can make it effectively impossible for an outside agency to successfully insert a backdoor-insertion routine into your other compilers, or into your system's binaries. This is most effective if you can keep these internal compilers a secret, of course, because the outside agency will attempt to bribe your people to get the backdoor-insertion code into those compilers, too.
But each independently-developed compiler makes the intruder's job exponentially more difficult. Even a few compilers would suffice to defeat most existing "outside agencies", especially since it would be very difficult to hide the massive communication and bribery needed to keep the backdoor code installed and functional. And it's especially difficult with open-source compilers, where the hacker community has a strong motive (reputation) to find and expose any mysterious, undocumented code in the code base.
Re: (Score:2, Flamebait)
OpenBSD is among the best audited code in the world.
Citation needed. I'm not necessarily thinking the opposite, but is OpenBSD really that much audited? Are we talking about the kernel? The network stack? Or the encryption protocols?
Re: (Score:1)
Do your own goddamn research, you lazy git. Stop shouting "citation needed". It wasn't funny when XKCD did it, and it got stale fast.
Re:Audit necessary (Score:4, Insightful)
That's not the point. The point is that every discussion these days ends in "citation needed" when there are no more arguments.
The _fact_ that BSD gets audited constantly can be found easily; it's not obscure knowledge.
But, let me give you an example why this is annoying: You say that the burden of proof lies on the guy making the bold statement. Well, is that a fact? Can you cite some references for that? How are you so sure? Then you state that OpenBSD is an irrelevant niche OS. Well, that's your opinion, I think, unless you can point to some peer-reviewed research on the matter. And I could go on.
See how you can't have a normal discussion when one party doesn't bring arguments, but only shouts "citation needed"?
Re: (Score:3)
Ok then, the first hit from "openbsd auditing" [google.com] leads to an OpenBSD Security [openbsd.org] page which has a section claiming that OpenBSD has a continual audit process and that it is successful.
As I understand it, OpenBSD refers to the whole release, everything they ship.
Now, I'm not sure if claims from the OpenBSD marketing department actually
Re: (Score:2)
Flamebait indeed. It doesn't take much baiting for OpenBSD fans to fire up the torches.
But here: http://www.openbsd.org/security.html#process [openbsd.org]
A description of the auditing process. It's not as awesome as your average "they're among the best" proclaimers would think, but it's healthy. This in no way substantiates the claim, though.
You're right to question. And it makes the OpenBSD fans look bad that you got modded flamebait.
Re: (Score:2)
If you know who is suspect and have good version control, you can identify any changes that they made and start looking for problems there. It won't be 100% sure, because there is always a possibility that you don't suspect the right people.
Re: (Score:2)
Re:Audit necessary (Score:5, Funny)
I hope that he's right, but without a thorough audit, who can say?
It is physically impossible that a backdoor makes it past De Raadt's ego into the kernel.
Re:Audit necessary (Score:5, Informative)
OpenBSD does have an ongoing code audit [openbsd.org]
Perhaps not as thorough as you were suggesting. However, I think for others who are not familiar with OpenBSD's ongoing code audit, the above link will be essential for fully understanding these stories.
Re:Audit necessary (Score:5, Insightful)
As unlikely as it is that any backdoors have made it into OpenBSD, even an audit cannot conclusively prove that there are no backdoors in the code. Witness the Underhanded C Code Contest [xcott.com]. The goal of the contest is to write a chunk of code that does something, well, underhanded that is difficult to detect even upon close examination of the code. The winners have been quite successful. Even with only 15-20 lines of code, it's a challenge to locate the underhandedness even when you know exactly what you're looking for. The phrase "microscopic needle in a galactic haystack" comes to mind when imagining the challenge of finding malicious code that may or may not even be there, in a code base thousands or millions of lines long.
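A hedged illustration of the genre (this is not an actual contest entry, just a classic pattern): a bounds check that reads as correct but accepts a negative length, via the signed/unsigned conversion trap. The function names and buffer size are made up:

```c
#include <assert.h>

enum { BUFSZ = 16 };

/* Returns 1 when the (deliberately flawed) bounds check would let a
 * copy proceed. The trick is that `len` is signed: -1 is not greater
 * than BUFSZ, so it passes the check, and a later memcpy(buf, src, len)
 * would receive (size_t)-1, a catastrophic overflow hiding behind a
 * check that looks perfectly reasonable in review. */
int flawed_check_accepts(int len)
{
    if (len > BUFSZ)
        return 0;   /* "reject oversized input" */
    return 1;       /* real code would do the memcpy here */
}

/* The fix is one extra comparison, which is exactly why the bug is so
 * easy to slip past a reviewer. */
int fixed_check_accepts(int len)
{
    if (len < 0 || len > BUFSZ)
        return 0;
    return 1;
}
```

The underhanded versions in the contest are subtler still; the point is that "does this check look right?" and "is this check right?" are different questions.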
Re:Audit necessary (Score:4, Interesting)
The goal of the contest is to write a chunk of code that does something, well, underhanded that is difficult to detect even upon close examination of the code.
The first two examples on the front page wouldn't have made it even through my shallow code review.
The third sample failed at readability (ambiguous operator precedence) and I would have immediately subjected it to re-factoring.
It is not that difficult to detect the problems.
My first, the most generic rule of code review: code works much like the way it looks. And I know for a fact that OpenBSD folks use that rule too.
P.S. The 3 samples I looked at are the winners from the year 2008.
Re: (Score:3)
The third sample failed at readability (ambiguous operator precedence) and I would have immediately subjected it to re-factoring.
The operator precedence has nothing to do with the maliciousness of the code (if the third sample is the code from John Meacham). That part could have been refactored any way you'd like; the exploit is in the algorithm itself.
Re: (Score:2, Interesting)
To me, it doesn't matter where in the implementation the bug is, since it has to be rewritten anyway for readability reasons.
It also, BTW, would trigger another alarm in the eyes of seasoned code reviewers: in the "isdigit() == true" branch it loses the read character, printing '0' instead.
Re: (Score:1)
It's SUPPOSED to lose the character. The whole point is to censor an image, and it does that by replacing the to-be-censored region with black pixels (value 0). The evil part is the information it leaves behind in the resulting image. I don't think you understand how it works.
Re:Audit necessary (Score:5, Insightful)
To me, it doesn't matter where in the implementation the bug is, since it has to be rewritten anyway for readability reasons.
Which is a fallacious viewpoint, because when you reject the patch, the author could easily recode it within the appropriate coding guidelines yet the bug would remain. In fact, you could have refactored the code yourself and yet still kept the malicious payload.
Code style is important and it's right to reject a patch with it. It's wrong to say this negates the need to actually find the bug. Which you didn't.
It also, BTW, would trigger another alarm in the eyes of seasoned code reviewers: in the "isdigit() == true" branch it loses the read character, printing '0' instead.
And then someone would say "No, that's the [intended, benign] purpose of the routine".
So with the style issues resolved, and the thing you thought was the bug not being a bug at all, on what basis would this "seasoned code reviewer" reject the patch? At this point the only reason is because you know it's malicious. But if you didn't, it looks like this would have passed your review.
Don't feel bad about that, though. Feel bad about thinking finding flaws in deliberately crafted malicious code is so easy when real seasoned code reviewers know it isn't.
Re: (Score:1)
"seasoned" code reviewers who can't understand what the program is supposed to do in the first place? You are overestimating your competence.
It's supposed to write a '0'! The sneaky bit is it writes extra zeroes depending on what is being "censored".
Good luck with a rewrite when your resulting program has better readability but doesn't even work.
See: http://underhanded.xcott.com/?p=8 [xcott.com]
The object of this year's contest: write a short, simple C program that redacts (blocks out) rectangles in an image. The user
Re: (Score:2)
I call bullshit.
I am certain that a few thousand lines of code like that, which is about the average quality of C I have seen during the last ~20 years, would pass your audit.
There is exactly zero chance that you would fix every single missing parenthesis.
Besides, as pointed out, the missing parentheses have nothing to do with the problem.
Re: (Score:3)
Maybe not for an experienced code reviewer who's examining 20 lines of code for an extremely simple security need. In the real world it takes extraordinary resources (talent, discipline, passion) at both the individual and organizational level to produce "logically" secure software. Even then, it usually takes academic/hacker security research to find subtle, indirect attacks that depend on power consumption, network behavior, and other such complexities
Re:Audit necessary (Score:4, Interesting)
All of these would be sore thumbs in a code review. Getting this into production code would have to rely on your co-contributors being nitwits.
Working at a Very Highly Notable Computer Operating System Producing Company, I was hit by a number of reviews that likely would not have caught any of this code, because no one I worked with cared particularly hard about code-reviews at all. I would constantly get code reviews back that state: "looks good", and then after performing my own code review, I would pull up some crazy stupid easy-to-catch bug that anyone should have seen if they actually looked over my code review.
And when I gave these code reviews to others, what I received back were people being pissed and upset that I would nitpick their code.
As for using totally stupid and inefficient algorithms, I hit a number of those while working on a bug, and I attempted to refactor them when I could. One of the most egregious issues I dealt with was a config file reader that listed magic constants THREE TIMES throughout the code: once for a validity check, again for a more robust check, and then in a huge IF-THEN-ELSE block to define behavior. After refactoring this to use ONE set of definitions for these magic constants, the code was so altered that it could not be backported at all.
I absolutely don't trust code reviews from anyone outside of the open source community anymore. They have their own job to do, and they rarely consider code reviews of someone else's code to be their "job". They view it as being non-productive and non-work... like going to meetings. Volunteers at least take the code-reviews seriously, as it is their own time that they're spending on it.
Re: (Score:1)
I posted the comment you are replying to. I have a feeling I work at that nameless company you mention. :-)
Your comment rings true for the most part. It is seen by most as unglamorous. But the good people are the exceptions, and they understand better. Hence, I write: getting this into production code requires your co-contributors to be nitwits. Read into that whatever you want about the nature of most co-contributors. :-)
Re: (Score:2)
As unlikely as it is that any backdoors have made it into OpenBSD, even an audit cannot conclusively prove that there are no backdoors in the code. Witness the Underhanded C Code Contest.
Except, of course, they know who these contributors were, and they have a source control system. Scrutinizing their changes would be trivial.
Of course, it's always possible they worked through third-party intermediates, or broke into the SCM, but if that's the case, the OpenBSD team has far bigger problems, IMO.
Re: (Score:2)
I think that may be arrogance.
Oh, no no, I think you misunderstand what I said ('course, I did communicate it poorly).
What I meant was, it's trivial to identify any changes these people made. You are right in that any weaknesses or backdoors introduced may be challenging to spot. But at least the OBSD folks don't have to sift through the entire codebase to find them.
Re: (Score:1)
Re:Audit necessary (Score:5, Insightful)
I hope that he's right, but without a thorough audit, who can say?
The whole scare behind crypto backdoors is that they can include side-channel leakage, and subtle leakage through the underlying drivers, which can amount to elaborate timing vulnerabilities and other kinds of intentionally introduced flaws that are poorly understood by developers in general.
Remember... even though the crypto in the SSH protocol was perfectly sound, as you were typing a password in SSH; a timing attack could be used to assist an attacker in guessing the password typed. For example, the minute timing between keystrokes can identify some passwords that are much more likely to have been typed than others, reducing the attack required to something much easier than brute force.
You can have a backdoor without even revealing the key material or having an obvious vulnerability; all the 3 letter agencies need is a mechanism of reducing the work to crack the key to something much less than brute force. If the operation of the cryptosystem in any way makes the key easier to get than brute force, then the attacker's work is massively reduced.
In other words, it's so subtle that even a thorough audit cannot say, and a complete rewrite of the code would be required to guarantee no intentional backdoors by the original authors (though it won't guarantee no backdoors by the new authors, and it definitely won't guarantee no subtle vulnerabilities).
There can be no visible error for an audit to discover, and yet the way the code is structured could still leave information vulnerable through what is essentially a form of compromising virtual emissions.
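One concrete shape such a subtle leak can take is a comparison whose running time depends on secret data. A sketch with illustrative function names (this is the standard textbook pattern, not code from any particular crypto stack):

```c
#include <assert.h>
#include <stddef.h>

/* Early-exit comparison: the loop stops at the first mismatch, so the
 * running time reveals how many leading bytes of the secret the
 * attacker has guessed correctly -- the raw material of a timing
 * side channel, and invisible to an audit that only checks the
 * function's return value for correctness. */
int leaky_cmp(const unsigned char *a, const unsigned char *b, size_t n)
{
    for (size_t i = 0; i < n; i++)
        if (a[i] != b[i])
            return 0;
    return 1;
}

/* Constant-time version: always touches every byte and accumulates
 * differences, so timing no longer depends on where (or whether) a
 * mismatch occurs. */
int ct_cmp(const unsigned char *a, const unsigned char *b, size_t n)
{
    unsigned char diff = 0;
    for (size_t i = 0; i < n; i++)
        diff |= (unsigned char)(a[i] ^ b[i]);
    return diff == 0;
}
```

Both functions return identical results for identical inputs, which is exactly why a functional audit can't tell them apart; only reasoning about execution time does.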
Re: (Score:2)
$ ssh-add
$ ssh -A myUser@remoteSystem "ssh-add -L >> ~/.ssh/authorized_keys; chmod 644 ~/.ssh/authorized_keys"
Anyone using ssh with passwords would do well to read up on public key authentication in ssh.
-- Ecks
Re: (Score:2)
Re: (Score:1)
$ man ssh-copy-id
This doesn't appear to exist on FreeBSD systems. I did check out a linux system though and found that I had it. It's a shell script that basically performs exactly what Ecks and I do to install our keys.
Re: (Score:2)
The problem of perfect security in general is that you need to secure the whole system. That's an impossibility, which is why three-letter agencies also secure the premises the systems are housed in.
Of course, you can compete on complexity (e.g. fighting timing attacks with one-time pads or otherwise inserting bits of entropy into the system), but there's a point when things just get too complex to be usable.
The only thing the masses have against a determined attacker is to be one among the herd.
Re: (Score:1)
The only thing the masses have against a determined attacker is to be one among the herd.
The problem is the "herd" doesn't use encryption.
Using encryption separates you from the herd.
The herd sends sensitive documents over unencrypted e-mail. The herd sends passwords over unencrypted IM.
The herd uses FTP and WebDAV over HTTP to store/collaborate on documents.
Can't win, I am afraid. Unless your encryption is completely passive and undetectable.
Re: (Score:2)
You don't have to try to put such weaknesses into cryptographic code, and they (mostly) are vulnerable only to very, very high-p
Re: (Score:1)
From TFA:
Since Perry's allegations were made public, developers have found two new bugs in OpenBSD, but de Raadt said Tuesday that he thinks that neither of them is a back door.
In fact, de Raadt seems to think that the whole incident has helped OpenBSD. "I am happy that people are taking the opportunity to audit an important part of the tree which many had assumed -- for far too long -- to be safe as it is," he said.
Strange how much fuss... (Score:1)
Re:Strange how much fuss... (Score:4, Interesting)
...can be made over something so obvious. OpenBSD's code has been screened again and again. If something was amiss somebody would have noticed it . . .
Yeah just look for the parts commented //super secret FBI backdoor, shhh!
You obviously have not seen things like this http://underhanded.xcott.com/ [xcott.com]
Re: (Score:2)
Well, if you have a few minutes I think that Theo deRaadt has a few lines of code [openbsd.org] that could use your expert review.
Re: (Score:2, Insightful)
The OpenBSD source is, as everyone knows, the best for security, since everything is screened and checked; this would have been discovered in that process. Microsoft? Who checks their source? They could have any number of backdoors installed, and how would you know? The Open Source way is the best way.
Re: (Score:2)
Microsoft code is audited by Microsoft developers.
How do you know?
So who do you trust: the guys you gave money to for their OS, or the guys whose product you took for free?
I'll take the commit logs and src history I can review.
The Windows religion requires faith that comprehensive audits are done. Give money and hope you get returns.
The science of open source allows you to verify for yourself provided you have the skill and time. There is still cost involved with the use of such products, but it's at least fully accountable.
Lose that misconception, please (Score:3, Insightful)
First, most "open source" code is written by employees working for a corporation.
Second, nobody reviews it outside a very small number of people. It's easy to miss things like well-hidden back doors. And that's not even getting into the politics of open source review and the insular cliques of developers - just try and get anyone to listen to you when you start saying you found a back door.
Third, it's cryptographic code. There are probably an uncountable number of "back doors" that could be incorporated
Re: (Score:1)
There are already admissions made by Theo and others that there *are* some security problems with the code in question, which have been addressed with commits on 12/15 and 12/16 of this year. Whether or not these are the "backdoors" originally referred to is unknown. Here's validation of my statement -- read the entire post from De Raadt, as it includes admissions as well as the commits themselves. And don't forget to read the very last paragraph of his post too.
http://marc.info/?l=openbsd-tech&m=129 [marc.info]
Sorry, but how..? (Score:1)
Please pardon my likely sheer ignorance (or even misunderstanding) on this topic, but how is it possible for someone to code a backdoor into encryption software in an open source project..? I mean, wouldn't someone notice..? Isn't that like someone just making another entrance to your house and then painting it over to match the brickwork..?
Unless, of course, all code is accepted in general good faith and there are very few eyes that are looking at this sort of thing.
Or it's open source code talking to clos
Re: (Score:2)
The backdoor in question might simply be a guaranteed or determinable byte-sequence in a stream, which could aid in the decoding of said stream. It need not be a simple --with-backdoor option passed on the command line... ;)
Re: (Score:2)
The backdoor in question might simply be a guaranteed or determinable byte-sequence in a stream, which could aid in the decoding of said stream. It need not be a simple --with-backdoor option passed on the command line... ;)
Except the output of the IPSEC stack has to interoperate with other IPSEC stacks. IPSEC basically takes TCP/IP data, encrypts it and sticks on some headers.... if it doesn't do that the correct way then it's not going to be able to talk to machines using a different stack. Even if it only corrupts a small number of packets, someone's eventually likely to notice that some are getting dropped.
Certainly it could generate poor random keys, or somehow leak private key bits into the key or random padding so that
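The interoperability constraint is concrete: ESP's wire format is fixed by RFC 4303, so a stack that mangled it would simply fail to talk to peers. A rough sketch of the fixed portion of the packet (field comments are descriptive; this is not taken from any particular stack's headers):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Fixed leading fields of an ESP packet per RFC 4303. Everything
 * after them -- encrypted payload, padding, pad-length byte, next-
 * header byte, and the integrity check value (ICV) -- is variable
 * length. A backdoor can't plausibly hide in these fields without
 * breaking interop, which is why the speculation above points at key
 * generation and the opaque padding bytes instead. */
struct esp_header {
    uint32_t spi;   /* Security Parameters Index: selects the SA */
    uint32_t seq;   /* monotonically increasing anti-replay counter */
};
```

The layout is eight bytes with no compiler padding on common platforms, which the assertions below check.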
Comment removed (Score:5, Insightful)
Re: (Score:1)
Bugs are often not obvious. As someone else pointed out above, the code may even look perfectly fine but can still exploit compiler quirks. Also, look at http://www.ioccc.org/ [ioccc.org]
Re: (Score:2)
Check out the Underhanded C Contest [xcott.com]. Sure, a patch containing, "if(packet_csum=SEKRUT_FBI_BACKD00R_P4KT) { /* d0 3v1Lz */ }" would get noticed pretty quickly. But good security is really subtle; it's probably difficult, but not impossible, to make proper-looking code that actually screws up in just the right places. The main problem is that anything that subtle is as likely to get broken
Re: (Score:1)
Check out the Underhanded C Contest
I have checked it, and yes, all the code there has trivial coding bugs which are very easy for professional coders to spot.
Re: (Score:3)
Re: (Score:1)
Please pardon my likely sheer ignorance (or even misunderstanding) on this topic, but how is it possible for someone to code a backdoor into encryption software in an open source project..? I mean, wouldn't someone notice..?
Like how everyone saw the UnrealIRCD trojan [linux.com] as soon as it was inserted in the source? Oh wait...
Re: (Score:1)
Re: (Score:2)
Step 1. Contribute lots of shoddy obfuscated code that no-one will follow without a lot of effort. Comment code minimally with suggestions that what it's doing is so obvious to anyone who knows anything, it's hardly worth mentioning
Step 2. Conceal deep in your shoddy obfuscated code your backdoor. Do not call it Back_door_func.
Step 3. As long as your code works, no-one will touch it or try to pick it apart. Hope no other coder is brave enough to suggest your code is beyond all understanding.
Re: (Score:2)
Except that with a project as high profile as OpenBSD, code that shoddy would never be accepted.
I hope.
Right? Anyone?
Re: (Score:2)
Every commit needs to be signed off by at least several more people as far as I know, and a lot of commits to key parts need to be signed off by Mr Ego himself to this day.
Obfuscated? Shoddy? Forget that.
The same is the story with all BSDs. They can be used as a textbook on how to write code and in a lot of places you do not even need the (otherwise excellent) documentation to determine what is going on. It is just readable (TM).
Re: (Score:2)
The original email sort of gave hints to this, referencing side channel/key leaking vulnerabilities. Side channel attacks can be VERY esoteric and difficult to identify - Look at Adi Shamir's work with abusing the Pentium 4's HyperThreading implementation.
However, I believe within the first days of the audit, some of the code contributions from Netsec appeared to, if anything, be an attempt at eliminating a potential timing-based side channel attack.
Honestly, I still can't figure out why Theo even believes
Re:Sorry, but how..? (Score:4, Informative)
Read this for an idea, someone hacked in some well crafted code that appeared innocent, had the machine not been hacked it probably would have stayed
That code is neither innocent nor well-crafted. Setting uid to zero is not 'innocent' and using '&& (x = 0)' is not well-crafted since it will always evaluate to false. I don't know whether the compiler will generate a warning in that case, but it should, and while a brief look through the code might miss that it's using = instead of ==, any kind of code review worthy of the name would spot it and flame the developer who wrote it.
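For readers who want to see the pattern being argued about, here is a minimal, self-contained reconstruction of that style of backdoor (names and the magic flag value are invented, not the original code). The single `=` where `==` belongs assigns privilege as a side effect, while the visible branch body is dead code because the assignment evaluates to zero, i.e. false.

```c
/* Hypothetical sketch of an assignment-in-condition backdoor. */
struct task { int uid; };

static int check_options(struct task *t, unsigned opts)
{
    /* 0x81 is an arbitrary "magic" flag combination for this sketch.
     * BUG on purpose: '=' instead of '=='.  When opts == 0x81, uid is
     * silently set to 0 (root), and since (t->uid = 0) evaluates to 0,
     * the "error" branch never runs -- the call appears to succeed. */
    if ((opts == 0x81) && (t->uid = 0))
        return -1;   /* dead code: the condition is always false */
    return 0;
}
```

On the grandparent's compiler-warning question: GCC and Clang do warn about an assignment used as a truth value (`-Wparentheses`), but the extra parentheses around `(t->uid = 0)` are the idiom that suppresses exactly that warning, which is presumably why it was written that way.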
Link to the ACTUAL FREAKING POST (Score:5, Informative)
Since the useless summary did not include one
http://marc.info/?l=openbsd-tech&m=129296046123471&w=2 [marc.info]
You can trust the government (Score:1, Funny)
Don't know why everyone's so concerned. If the FBI put backdoors into BSD or any other operating system, then it's for a good purpose - to protect us. "Sure there are some problems, but they are doing the best they can, and we should not criticize them." - B5 chick
Re: (Score:1)
I sincerely hope that was sarcasm.
Sad thing is I know an Art Teacher just like that. "Stop criticizing the cops." And I reply, "But the video shows them beating a citizen who had done nothing wrong (just walking his dog)." "I'm sure if the cops were beating him, they had a justified reason to do so." "....."
Anyway the FBI should not be spying on us via backdoors in our OSes (or phones or thermostats or cars) - it violates the LEGAL requirement to obtain a search warrant from a judge.
Re: (Score:1)
Actual Slashdot responses in the related article generally centered around the idea that this demonstrates that the police need tighter controls to ensure they don't abuse their powers, and that recording the police doing their job is a sign of a healthy society where the government isn't afraid of the scrutiny of the people who consent to be governed.
What about the law? (Score:5, Insightful)
If the FBI did this without a court order, wouldn't they have been in breach of laws regarding attempted wiretapping and/or unauthorized computer access?
If so, have we just accepted that the FBI, CIA, and NSA break laws with impunity, and that there's nothing we can do about it?
Re: (Score:2)
I'm not a lawyer (Score:1)
But I think it would only be against the law for law enforcement to use such backdoors. I don't know that any existing law prohibits law enforcement agencies from the creation of such backdoors for possible future use.
Re: (Score:2)
Re: (Score:2)
Even more difficult to believe, the claim is they did it in order to spy on another organization within the DOJ.
Legal or not, any truth to this would have ignited a political shitstorm within the DOJ.
Also note: if there were anything possibly illegal about this, then given that the alleged target of the backdooring (EOUSA) is FULL OF LAWYERS, you can bet someone would've been torn a new legal asshole over it.
Re: (Score:3)
Probably not; AFAIK -- even assuming wiretapping laws would apply -- there is no law prohibiting the FBI from contracting others to build in a capacity that could be used for wiretapping. The only time they would need a warrant is to actually make use of the facility.
Re: (Score:2)
What's the difference between a spy and a criminal? A government badge. One of the mandates for the existence of organizations like the CIA is to break laws. Granted, usually laws in other countries, but to break laws nonetheless. The NSA to a lesser extent, as their job is primarily to make and break codes.
Link directly to Theo's post (Score:5, Informative)
A link to Theo's post [marc.info] on the subject is much more informative.
Highlights:
Also:
Interesting approach to security,,, (Score:5, Funny)
You get what you pay for. (Score:4, Insightful)
Hah, that's just like a government contractor -- write a backdoor into a system, and it doesn't actually work. The source has been available ever since the so-called announcement. If this backdoor were real, wouldn't a patch have been issued for it by now?
Personally, I think that the leak got it wrong, it's not about making OpenBSD insecure, it was to openly create the BSoD in another well known operating system.
Re: (Score:3)
Hah, that's just like the government contractor -- write a backdoor into a system that doesn't actually work.
Does this mean that the government can demand their money back?
Yeah, right... (Score:3)
Backdoors, who needs backdoors?
Forgetting to close an attack vulnerability in all but the software encryption implementation is a much more dramatic and questionable error. Anyone who has taken the trouble to add hardware acceleration to their encryption stands a good chance of having something worth protecting from undesired access.
But, by doing so they have exposed themselves to the vulnerability itself. Brilliant!
Thoughts (Score:3)
Re: (Score:3)
Certainly OpenBSD has a good track record at finding and fixing security flaws. But in this case, I wouldn't assume the flaw, if any, can be found quickly and fixed. The post alleging it was certainly not very detailed about it.
Re: (Score:2)
Re: (Score:2)
After Enigma during WW2 and its 'safe' use after WW2, the Libyan and Iranian embassy leaks, weakened banking security, telco security, etc., the lack of seeking a back door would be strange.
The one safe hobby OS that the US gov let slip away?
I don't know man but (Score:2)
was there someone behind him showing today's newspaper headlines when he made the statement? We just want to make sure...
Not convinced .. (Score:3, Interesting)
Paranoid mickey's take on it .. Interesting read.
http://mickey.lucifier.net/b4ckd00r.html [lucifier.net]
Suspicious timing (Score:1)
Has anyone else considered the timing of this?
Just as Wikileaks has made it fashionable to expose government wrongdoings and showed how feasible it is to get and handle information that government agencies are interested in, comes the allegation that the most secure system in the world isn't secure.
The vulnerability would specifically be one that the U.S. agencies can exploit. In other words, the agencies that serve the government that is most embarrassed by recent leaks seem to have more teeth now. At the
A disturbing question. (Score:2)
I know what the answer may be: in most cases there aren't any. Contributors are judged by their code alone, no doubt, but nobody bothers to find out what ties the individual has.
Open-source is great at peer-review, resulting code quality has to be good due to sheer brute force of eyes looking it over. But you have to wonder, since it's perfectly possible to hide malicious code in plain sight, code that actually does
Of course he does (Score:2)
It's not like he's going to admit there could be a gaping hole in the code. But it would be a lot more comforting to the people who rely on it if they did a code audit, like, yesterday, so he doesn't have to use the word 'doubt'.
The Spine Defense (Score:5, Funny)
I think you must really have no spine if you accept money from the FBI to backdoor crypto software.
"I needed the money to pay for my prosthetic spine!"
Re: (Score:2)
Until then, refrain from using any other programs and operating systems because the best anyone can say is that they think their code is secure.
Re: (Score:1)
You'll "wait"? You're going to stop using IPSEC until it's all been re-audited?