Encryption Government Security BSD

FBI Alleged To Have Backdoored OpenBSD's IPSEC Stack 536

Aggrajag and Mortimer.CA, among others, wrote to inform us that Theo de Raadt has made public an email sent to him by Gregory Perry, who worked on the OpenBSD crypto framework a decade ago. The claim is that the FBI paid contractors to insert backdoors into OpenBSD's IPSEC stack. Mr. Perry is coming forward now that his NDA with the FBI has expired. The code was originally added ten years ago, and over that time has changed quite a bit, "so it is unclear what the true impact of these allegations are" says Mr. de Raadt. He added: "Since we had the first IPSEC stack available for free, large parts of the code are now found in many other projects/products." (Freeswan and Openswan are not based on this code.)
  • Re:But but but (Score:5, Interesting)

    by snowraver1 ( 1052510 ) on Tuesday December 14, 2010 @08:45PM (#34555166)
    I wonder if Linux has a similar backdoor. I think that it would be quite likely that MS products have one.
  • by chill ( 34294 ) on Tuesday December 14, 2010 @08:51PM (#34555232) Journal

    Considering OpenBSD has performed extensive code audits and this is part of the core code, this is going to bring the argument about the importance of security code audits to the forefront.

    They have their place, but...10 years and by one of the most anal-retentive, paranoid coding groups out there. Ouch.

  • by Anonymous Coward on Tuesday December 14, 2010 @09:00PM (#34555318)


    I often like to point out an inexplicable weakness in the protocol concerning its "padding" (a covert channel): in both version 1 and version 2, packets have a length that is a multiple of 64 bits and are padded with random bytes. This is quite unusual, and it revives a classic, well-known flaw in encryption products: a "hidden" (or "subliminal") channel. Usually one pads with a verifiable sequence, for example giving the byte at rank n the value n (self-describing padding). In SSH the padding is, by definition, random, so it cannot be checked. Consequently, one of the communicating parties could subvert the channel, for example to leak data to a third party who is listening. One can also imagine a corrupted implementation unknown to both parties (easy to achieve in a product shipped only as binaries, as commercial products generally are); in that case one only needs to "infect" the client or the server. Leaving such an incredible flaw in the protocol, when it is universally known that installing a covert channel in an encryption product is THE classic, basic way to corrupt communications, seems unbelievable to me. It is worth reading Bruce Schneier's remarks about the introduction of such elements into products influenced by government agencies.

    I will end this topic with the last bug I found while porting SSH to SSF (the French version of SSH); it is in the Unix versions before 1.2.25. The consequence was that the random generator produced... predictable... results (a regrettable situation in a cryptographic product; I won't go into the technical details, but one could compromise a communication simply by eavesdropping). SSH's development team corrected the problem at the time (only one line to modify), but curiously without issuing any alert, not even a mention in the product's changelog... if one had wanted to keep it quiet, one would not have acted differently. Of course there is no relationship with the article above.

  • Re:But but but (Score:3, Interesting)

    by gman003 ( 1693318 ) on Tuesday December 14, 2010 @09:01PM (#34555346)
    They're still not even sure if the backdoor still works - the code gets edited often, and the subtle tricks that backdoors rely on can break quite easily that way.

    And it's not like closed-source would be any better - then, the FBI can just pay the company to slip one in. I'm not worried about my OpenBSD box - it's already far more secure than my Windows rigs are. Hell, I haven't even bothered updating it in years - it's still running 3.6.
  • by chill ( 34294 ) on Tuesday December 14, 2010 @09:08PM (#34555418) Journal

    No, but it was part of the post-Wassenaar agreement (Dec. 1998) that de-weaponized open source crypto. 10 years ago would have been around OpenBSD 2.8 (12/1/2000) which introduced AES and was the first release after the expiration of the RSA patent.

    v2.7 saw the introduction of hardware-accelerated IPSec only 6 months before.

    They were moving fast and furious on IPSec. This would have been an opportune time to spike them.

  • Re:But but but (Score:5, Interesting)

    by ratboy666 ( 104074 ) <> on Tuesday December 14, 2010 @09:25PM (#34555572) Journal

    It isn't necessarily obvious.

    Basically, the idea is that bits of the key leak. And how is this accomplished?

    For example - if a key bit is 0, you take one code path, if 1, another. Make the two paths different lengths. It may be possible to affect packet timing. Or... A function may end with "x - y" and then return "z". No leak? Not so clear, the carry/borrow may be leaking information to the caller (on x86 style hardware).

    Anyway, it probably isn't a "back door" in the obvious sense; some means of leaking enough key bits to make brute force practical is enough. And this sort of thing can be subtle. It can even depend on the machine code a particular compiler generates for certain sequences (the "x - y; return z" example above).

  • Smear Campaign? (Score:5, Interesting)

    by nurb432 ( 527695 ) on Tuesday December 14, 2010 @09:31PM (#34555624) Homepage Journal

    Good way to kill a project. Give the paranoids something to be paranoid about.

  • Re:But but but (Score:5, Interesting)

    by jon787 ( 512497 ) on Tuesday December 14, 2010 @11:44PM (#34556576) Homepage Journal

    Ah, the old NSA DES conspiracy theory. The NSA suggested two changes to DES: 1) shortening the key, and 2) changing the S-boxes. They gave no public explanation for the latter, and for years the story was that this had somehow introduced a backdoor into the algorithm. The truth came out over a decade later:

    "Some of the suspicions about hidden weaknesses in the S-boxes were allayed in 1990, with the independent discovery and open publication by Eli Biham and Adi Shamir of differential cryptanalysis, a general method for breaking block ciphers. The S-boxes of DES were much more resistant to the attack than if they had been chosen at random, strongly suggesting that IBM knew about the technique in the 1970s. This was indeed the case; in 1994, Don Coppersmith published some of the original design criteria for the S-boxes. According to Steven Levy, IBM Watson researchers discovered differential cryptanalytic attacks in 1974 and were asked by the NSA to keep the technique secret."

    Of course, they could still be lying, better keep the tinfoil hat on.

  • by Yvanhoe ( 564877 ) on Wednesday December 15, 2010 @04:01AM (#34557838) Journal
    They didn't, but they wanted to. Secret foreign relations were something they thought characterized European autocracies. Later, President Wilson's Fourteen Points singled out secret diplomacy as a practice dangerous to peace.
  • by ( 447981 ) <> on Wednesday December 15, 2010 @04:27AM (#34557966) Homepage
    I'd be more than a little surprised if any part of the US government would in fact agree to let non-disclosure agreements expire automatically. That alone makes me suspicious that the truth content of these allegations is a little thin.

    For those of you who are interested in finding out the facts, start by reading the whole thread on openbsd-tech (eg [] ), it's only a handful of messages so far and I find Damien Miller's response at [] particularly enlightening. (You're using Damien's code right now, in some other window -- he's been a major OpenSSH developer for quite a while).

    Then again, I have to agree with Bob Beck (see [] ) that this is fairly likely to be part of a personal vendetta of some sort, possibly against the OpenBSD project or even something totally unrelated, with the OpenBSD project used only as the attention-grabber in contexts such as /.

    At this point we have only allegations and some finger-pointing; I for one am looking forward to real information surfacing. The best way to draw out the real story is to do what Theo did: publish the allegations and let the involved parties explain themselves in public.

  • by Anonymous Coward on Wednesday December 15, 2010 @04:39AM (#34558036)

    1. Why is UDP port 4500 enabled by default inside the kernel (above 1023)?
    2. Why does "#if NPF > 0 ... pf_pkt_addr_changed(m); ... #endif" work against the packet filter's audit trail?

    The suspected FBI change is in ipsec_output.c (you can ignore the IPv6 / INET6 changes):
    ipsec_output.c rev1.25 vs rev1.41 []

    "triggers decapsulation"? what is it?

    The revlog says "UDP encapsulation for ESP in transport mode (draft-ietf-ipsec-udp-encaps-XX.txt)"

    ipsec_output.c rev1.28 vs rev1.29 []
    If udpencap_port = 4500 then "!udpencap_port" is false, so it does not m_freem(m); but it always does mi = m_inject(m, sizeof(struct ip), sizeof(struct udphdr), sizeof(struct udphdr), M_DONTWAIT);

    ipsec_output.c rev1.30 vs rev1.31 []
    then it does udpencap_enable = 1; /* enabled by default */ [] []
    says "XXX It is assumed that siz is less than the size of an mbuf at the moment."

    Assumptions are unsafe.

    ipsec_output.c rev1.40 vs rev 1.41 []
    pf_pkt_addr_changed(m) against NPF (against the packet filter, I thought). []
    It erases the header when NPF(ilter) is enabled.
    Recommended [don't touch the PF filter]: void pf_pkt_addr_changed(struct mbuf *m) { /* m-> = NULL; */ } [] Its working group includes F-Secure Corporation, Microsoft, Cisco Systems and Nortel Networks.

    3.3./3.5 (Transport or Tunnel) Mode ESP Decapsulation: 1. The UDP header is removed from the packet. <-- imagine that the UDP packet is from an intruder, xD
    If the intruder's UDP header is removed, then the intruder's information is removed :)
    so OpenBSD removes the intruder's audit trail

    That was my trick: search rfc3948.txt for "remove" (on the assumption that "remove" marks something unauthorized).

    1. The UDP header is removed from the packet. <-- to be correct it should read "The UDP header must be CHECKED during the decapsulation process."
    Never REMOVED!!!

    2.3. NAT-Keepalive Packet Format: "The receiver SHOULD ignore a received NAT-keepalive packet." <-- another unauthorized step.
    Don't remove things, don't ignore things, don't hide things, don't discard things.

    ipsec_output.c IPsec comment []
    says "Called by the IPsec output transform callbacks, to transmit the packet or do further processing, as necessary." <-- what "further processing"? xD

    ipcomps_minlen comment []
    u_int32_t ipcomps_minlen; /* packets too short for compress */ from struct ipcompstat /* IP payload compression protocol (IPComp), see RFC 2393 */ []
    says "The IPComp header is removed from the IP datagram and the decompressed payload immediately follows the IP header." <-- again: it should be CHECKED, not just removed!!!

  • by ArsenneLupin ( 766289 ) on Wednesday December 15, 2010 @05:23AM (#34558286)

    So what he was saying is that they are padding with a potentially unencrypted random number that could be used to guess earlier and later random numbers, and thus break SSH. The random number is a hint for crackers / PRNG guessers.

    No; rather, a deliberately "broken" implementation of ssh (on either the server or the client) could use the padding to leak the session key, and without access to the code there would be no way to tell (... because the padding is "supposed" to be random...).

    Quite clever actually, and reminiscent of the way the French subverted the Luxembourgish Luxtrust system.

    Luxtrust [] tokens are hardware crypto tokens containing a private key. The key is (supposedly) generated randomly by the token at initialization and never leaves it; it can only be used to establish session keys and sign messages, with the critical computation happening on the token. The key is used to secure banking transactions, so that, for example, the French tax administration cannot spy on the communication between French citizens and their Luxembourgish bank.

    That's the theory. The catch is that the tokens are manufactured by the French company Gemalto [], and each token's random number generator will only ever "generate" private keys from a limited set (a different set for each token, of course). So the French tax administration can trivially infer the private key by looking up the public key in a table provided by Gemalto.

    The scheme is virtually undetectable, because:

    • The keyset is different for each token
    • Each token can only be initialized a very limited number of times (much smaller than the number of possible keys for that token)
    • The tokens supplied to BSI [] for audit didn't have this weakness. And moreover, the German tax authorities would be quite happy to listen in too :-)

    Result: Luxembourg spent millions on an inconvenient crypto scheme that works neither on modern 64-bit computers nor on mobiles, and is useless for its purpose.
