FreeBSD's Close Call: How Flawed Code Almost Made It Into the Kernel (arstechnica.com)

"40,000 lines of flawed code almost made it into FreeBSD's kernel," writes Ars Technica, reporting on what happened when the CEO of Netgate, which makes FreeBSD-powered routers, decided it was time for FreeBSD to enjoy the same level of in-kernel WireGuard support that Linux does. The issue arose after Netgate offered a burned-out developer a contract to port WireGuard into the FreeBSD kernel (where Netgate could then use it in the company's popular pfSense router distribution): [The developer] committed his port — largely unreviewed and inadequately tested — directly into the HEAD section of FreeBSD's code repository, where it was scheduled for incorporation into FreeBSD 13.0-RELEASE. This unexpected commit raised the stakes for WireGuard founding developer Jason Donenfeld, whose project would ultimately be judged on the quality of any production release under the WireGuard name. Donenfeld identified numerous problems...but rather than object to the port's release, Donenfeld decided to fix the issues. He collaborated with FreeBSD developer Kyle Evans and with Matt Dunwoodie, an OpenBSD developer who had worked on WireGuard for that operating system...

How did so much sub-par code make it so far into a major open source operating system? Where was the code review which should have stopped it? And why did both the FreeBSD core team and Netgate seem more focused on the fact that the code was being disparaged than its actual quality?

There's more to the story, but ultimately Ars Technica confirmed the presence of multiple buffer overflows, printf statements still being triggered in production, and even an empty validation function that always "returns true" rather than actually validating the data. The original developer argued the real issue is an absence of quality reviewers, but Ars Technica sees a larger problem: "There seems to be an absence of process to ensure quality code review." Several FreeBSD community members would only speak off the record. In essence, most seem to agree, you either have a commit bit (enabling you to commit code to FreeBSD's repositories) or you don't. It's hard to find code reviews, and there generally isn't a fixed process ensuring that vitally important code gets reviewed prior to inclusion. This system thus relies heavily on the ability and collegiality of individual code creators.
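To make the "empty validation function" concrete, here is a minimal hypothetical sketch of the anti-pattern being described -- illustrative only, not the actual FreeBSD WireGuard code:

    /*
     * Hypothetical sketch of the anti-pattern Ars describes; not the
     * actual FreeBSD commit. Callers assume the buffer was vetted,
     * but nothing is ever checked.
     */
    #include <stdbool.h>
    #include <stddef.h>

    static bool
    validate_packet(const void *buf, size_t len)
    {
        (void)buf;      /* never inspected */
        (void)len;      /* never compared against any expected size */
        return (true);  /* unconditionally "valid" */
    }

A validator like this passes every test that feeds it well-formed data, which is exactly why it can survive until someone reads the code -- or until an attacker supplies the malformed case.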
Ars Technica published this statement from the FreeBSD Core Team: Core unconditionally values the work of all contributors, and seeks a culture of cooperation, respect, and collaboration. The public discourse over WireGuard in the past week does not meet these standards and is damaging to our community if not checked. As such, WireGuard development for FreeBSD will now proceed outside of the base system. For those who wish to evaluate, test, or experiment with WireGuard, snapshots will be available via the ports and package systems.

As a project, we remain committed to continually improving our development process. We'll also continue to refine our tooling to make code reviews and continuous integration easier and more effective. The Core Team asks that the community use these tools and work together to improve FreeBSD.

Ars Technica applauds the efforts — while remaining concerned about the need for them. "FreeBSD is an important project that deserves to be taken seriously. Its downstream consumers include industry giants such as Cisco, Juniper, NetApp, Netflix, Sony, Sophos, and more. The difference in licensing between FreeBSD and Linux gives FreeBSD a reach into many projects and spaces where the Linux kernel would be a difficult or impossible fit."

Comments:
  • by QuietLagoon ( 813062 ) on Saturday March 27, 2021 @12:40PM (#61205668)
    From the article --- "But Netgate argued that Donenfeld had gone overboard with his negative assessment. " Really?
  • Good thing a thousand eyes were on the job.

    • by nagora ( 177841 )

      Good thing a thousand eyes were on the job.

      Unfortunately, after all the finger-pointing about this there's now only 999 eyes on the job.

    • The code was developed in private until it was made public at the last moment.

      But sure, make a strawman argument without bothering to understand the details, as is the Slashdot way.
      • by sjames ( 1099 ) on Saturday March 27, 2021 @02:58PM (#61206002) Homepage Journal

        There is some reason for concern. It was made public by posting it into the main repo. Surely it should have gone into a branch until reviewed and merged.

        Of course, this could be a matter of a once-good developer gone bad. But since this hasn't been a recurring problem, and even this time it was stopped before it got into the wild, a review of procedures rather than a major restructuring may be in order (along with a reminder to be more vigilant).

    • Maybe 1,000 eyes should have been on the Arse Technica job; it's a garbled mishmash of accusations and half-truths from various online forums. For example, the guy that got charged with various criminal actions is Kip Macy [go.com], not Nicholas Macy. And it goes downhill from there. I followed this when it broke about a week ago and shit, how hard is it to read a few followup posts rather than just skimming off the sensationalist initial claims and turning them into an article? There's plenty there to report on...
      • In a followup, they kinda make a veiled reference to the Kip vs. Nicholas stuff right at the end, but it's pretty unclear what's going on unless you've read the original material that the article is built from.
  • by UnknowingFool ( 672806 ) on Saturday March 27, 2021 @12:51PM (#61205696)
    As codebases get larger, what is being done to ensure that code reviews are done? I mean, I hear secondhand about Linus Torvalds's blunt comments on RC code quality in Linux all the time. To paraphrase him, RC code should not have as many kernel-breaking bugs as it does. And this is in Linux, which has a lot of eyes. What about smaller teams?
    • by Anonymous Coward on Saturday March 27, 2021 @01:05PM (#61205750)
      Code reviews are exactly the problem. Most developers just slap shit together to meet a deadline and assume that any errors will be fixed by the review. They don't have the luxury of spending a lot of time doing things "by the book", because if they do, the other developers will look much more productive than they are.
      • Seriously? What a shitty argument.

        If that's what happens when you do a code review, then that's the fault of whoever dreamt up such a ridiculous code review process - and an even more ridiculous productivity metric. Ever consider looking at how productivity is (or isn't even) measured and figuring out what is wrong with it? If code reviews identify tons of errors that should have been fixed before review, then how about taking that into consideration when looking at productivity?

        Or is Slashdot now so far up...
      • by uncqual ( 836337 )

        In proprietary environments where I managed a dev team, my guidelines were along the lines of:

        1. You, the developer, should not commit code except to private environments until you think it's ready for the end customer -- but with the understanding that extensive stress testing, which isn't practical to do for every commit to the general developer branches, may find bugs that are not identifiable by simple code review and unit testing because they are architectural in nature -- esp. if you didn't do the architecture...

      • by organgtool ( 966989 ) on Saturday March 27, 2021 @05:31PM (#61206366)
        I've been developing software for nearly two decades and I don't know a single developer who cuts corners because they expect things to be found during the code review. The vast majority of poor-quality code comes from managers promising features to customers or upper-management before ever consulting with the developers to see if the promised deadline is feasible. In those cases, the time to perform a code review is so short, the situation is basically just managers breathing down your neck to rubber-stamp an approval. You know this is happening because the manager will often slip up and ask you to "approve" the pull request rather than "review" it.
    • by AmiMoJo ( 196126 )

      Lack of automated testing? If passing some static tests were mandatory for commits to be accepted, they could easily have picked up things like leftover debug output or functions that always return true.

      • AFAIK this was all new code which would not have been exercised by existing functional tests. Best case, a static analysis tool might have flagged something but in that case a human would have had to intervene to interpret the results.

        From what I can tell as an outsider who runs an open source project that is pretty old at this point, this was a breakdown in procedure by a single developer. The code should have been on a branch or made available for public review by some means. It never should have been committed straight to HEAD...

  • 1. FreeBSD is like a quarter century old. How many times has a problem like this arisen? Gates, of any kind, don't come for free (not even in free... BSD), so maybe the cost/benefit of the current situation is still positive. Now they discovered a lousy developer? They retire his "commit bit" and life just goes on. On the other side of the equation, I don't think the core team gives that "commit bit" to the first passer-by. Remember: "almost" is not anything like "there".
    2. If you don't like it, and still pre...

    • > I don't think the core team gives that "commit bit" to the first passer-by

      No, that would be too fair -- it's actually some elitist and corrupt mechanism through which incompetents get their commit bits and push their pet peeves and broken code.

      I still remember, like 15 years ago, when some little cunt decided to disable sysvipc, breaking a lot of programs. Just because he didn't like it. That was fortunately reverted soon afterwards.

      Or when another genius (who was kicked out soon afterwards and created...

      • So exactly my point.

        Quite "cheaper" to counter-act than prevent, as you had to resort to something happening 15 years ago that was "reverted soon afterwards" and, if you don't like how things go on FreeBSD and you think you can do it better, you always can fork and demonstrate it.

        • Well, the "cat" kludge is still there, and it also made its way into MacOS.

          > if you don't like how things go on FreeBSD

          Is there anybody who likes it? Do you like it?

  • by TheNameOfNick ( 7286618 ) on Saturday March 27, 2021 @01:05PM (#61205752)

    Core unconditionally values the work of all contributors, and seeks a culture of cooperation, respect, and collaboration. The public discourse over WireGuard in the past week does not meet these standards and is damaging to our community if not checked.

    That statement has got to be their April Fools joke leaking early. The FreeBSD kernel is a participation trophy and the mean kids who insist on code quality are not allowed near it anymore?

  • by Stolpskott ( 2422670 ) on Saturday March 27, 2021 @01:16PM (#61205772)

    So the original developer writes crap code (whether as a result of burn-out and exhaustion, or just because that is the level of talent they have), and the fault for that code almost making it into the kernel is a lack of review, without any responsibility on the coder?
    And then, once the issue has been identified and the process criticised, the people who gave that person the ability to post such crap code are more focussed on being upset about the criticism than they are about addressing the code and quality issues or the lack of a review process?

    Is this just crap management, or a sign of what can happen to a high-profile complex open source project when the system owner does not publicly chew out the people around them when they either screw up themselves or when they allow others to screw up?
    tl;dr: Would this have happened if someone with authority took the Linus Torvalds approach, publicly shaming the teams responsible, ruffling a few feathers and bruising egos in the process?

    • Self-discipline is not natural but an outgrowth of imposed discipline.

      Humanity has known this for countless thousands of years. Other animals impose discipline too. It's basic to advanced life forms.

      • Re: (Score:2, Insightful)

        by Ostracus ( 1354233 )

        Try suggesting abstinence as a form of birth control. Self-discipline suddenly becomes impossible.

    • by sjames ( 1099 )

      The coder's failure is a given. The concern is that it shouldn't be possible for a single failing coder to get code that close to being released.

    • by raymorris ( 2726007 ) on Saturday March 27, 2021 @05:08PM (#61206320) Journal

      > So the original developer writes crap code (whether as a result of burn-out and exhaustion, or just that is the level of talent they have), and the fault for that code almost making it into the kernel is a lack of review, without any responsibility on the coder?

      Yes. The problem is that crap code gets into the OS.
      If your system is dependent on every programmer being really good, all the time, and never making any mistakes, your system sucks. You need to fix the system.

      You could say this particular person can't contribute code anymore because their code sucks and you will have fixed 0.01% of the problem. All the other programmers, all of the other humans, are going to make mistakes too, because that's pretty much what we humans do all the time.

      Software development teams / systems are frequently analyzed on a four-level scale. 1-4 or 0-3, depending on which system you use to do the analysis. No systematic code review means the team is at the bottom of the scale, maturity 1 on a 1-4 scale.

      Moving up a notch, you have teams that routinely and systematically review code.
      A notch up from that, you do code reviews and keep track of the results so you can improve over time. For example, you might note that 35% of the bugs found are type foo, so you add a check for foo bugs directly in the IDE.
      A notch up from that, you're helping the industry get better - your tracking is good enough that other organizations learn from your stats or processes.

      So anyway yeah, their process is apparently maturity level zero, on a 0-3 scale.

      • by ljw1004 ( 764174 )

        Software development teams / systems are frequently analyzed on a four-level scale. 1-4 or 0-3, depending on which system you use to do the analysis. No systematic code review means the team is at the bottom of the scale, maturity 1 on a 1-4 scale. Moving up a notch, you have teams that routinely and systematically review code. A notch up from that, you do code reviews and keep track of the results so you can improve over time. A notch up from that, you're helping the industry get better - your tracking is good enough that other organizations learn from your stats or processes.

        That was interesting. I've not come across that before. Thanks for posting.

        • Thank you for your comment. A couple of commonly used systems are CMMI and SAMM, if you want to check them out further. I've found them useful for helping to a) find the areas that most need improvement and b) report on that in a way management responds to. You can tell management that there is "technical debt" and they might kinda understand that, but looking at the development process through a maturity model lets you communicate in a way that is clearly well-thought-out. It's also referencing...

      • CMMI and similar are quite passé, for many good reasons. But the general concept of having an auditable design control process is not. It needs to be possible to understand what a piece of code is intended to do and how it was tested, without a lot of effort. There needs to be a process for assessing the readiness of code before it is allowed into a release that is more robust than relying on the competence and good intentions of any individual.

    • Unless you have a proper process in place to ensure code quality, your entire codebase becomes as bad as your worst developer (one buffer overflow in the kernel and the attacker owns it, no matter how well written the other parts are). Publicly shaming the developer is not going to accomplish much beyond perhaps deterring them from participating in the project, which might sound like a good thing, but there will be more, even worse developers, not yet publicly shamed, who will take their place.

      I am...

  • Lack of code review (especially for security-sensitive code) is unfortunately nothing new for FreeBSD: https://vez.mrsk.me/freebsd-de... [vez.mrsk.me]
  • Let's see... (Score:5, Interesting)

    by bferrell ( 253291 ) on Saturday March 27, 2021 @01:57PM (#61205882) Homepage Journal

    The protocol burst onto the scene a while back as the great hope for secure communications/VPNs, better than ANYTHING that came before... And it only runs on a VERY few OSes, with almost no review other than the designer's.

    A somewhat testy sponsor of VPN/firewall appliances gets tired of waiting for the protocol to appear and hastily commissions an implementation of the protocol for a platform of their choosing that has NO SUPPORT from the designer. Said implementation is arguably "not good" (and it's not, don't get me wrong -- jumbo frames cause a kernel panic?! Really?).

    Pissing contest ensues.

    Explain why this is surprising?

    Yes, I left out all of the tabloid stuff about the personality of the dev commissioned. It's not relevant to the process.

    • And only runs on a VERY few OSes, with almost no review other than the designer.

      Actually, it came onto the scene with Windows, Linux, and Mac support, and went through significant analysis before even getting to the code reviews that were done for inclusion directly into the Linux kernel.

      I think the bigger problem is BSD lacks an active developer base.

    • I'm fairly convinced that someone could come up with a weird twist on an existing protocol and, with some coordination, tout it as the latest and greatest elite protocol, and suddenly every basement warrior would have to be running it because it's obviously better than everything else ever, and by god they have 1 Gbps fiber at home and look at the speeds they get on their VPNs or whatever.

      I use OpenVPN regularly and have never felt like it was a huge performance bottleneck, but to watch the chatter on Reddit in...

  • bigger issue (Score:1, Interesting)

    by gillbates ( 106458 )
    With this, and the RMS FSF controversy, it seems that someone, somewhere, wants free software to become a dirty word.
    • I imagine something like this happens all the time. Free software is well and good, but the FreeBSD project expects some type of quality control process beyond what may be the standard for most contributors, talent notwithstanding. You have your own project, open source is out of the question, so you keep your tree private. If you don't know people -- the "right" people who are on the project or can just help out -- then the first real review to FreeBSD's standards is by FreeBSD itself?

      With this, and the RMS FSF controversy, it seems that someone, somewhere, wants free software to become a dirty word.

      No, I think the exact opposite is the effect...

    • "Open Source" code payed for by corporations has always been utter, absolute crap -- as opposed to real Free Software written by "itch a scratch" or enthusiast volunteers or to closed source code which was later released under some free licence.

      Just days ago I had a look at some SANE (scanner) drivers from the hplip package (HP Linux Printing bla bla) -- absolutely disgusting crap, zero error checking, naive mistakes you only see in student homework. It's absolutely clear that nobody ever tested or reviewed...

  • Why is WireGuard cramming itself into the kernel wherever it can? We don't have other VPN software like OpenVPN in the kernel.
    • by Anonymous Coward

      It belongs in the kernel because 1 Gbit/s is over 80000 individual packets per second. You don't want to switch from kernel to user space and back that often. Context switches are expensive. It's a bit like reading files one byte at a time. Wireguard is designed to handle the packets with only a small amount of code in the kernel. User space code deals with the complicated things.
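      Back-of-the-envelope math for that claim, assuming full 1500-byte frames (smaller packets push the rate far higher):

          /*
           * Rough packets-per-second estimate for a saturated 1 Gbit/s
           * link, assuming MTU-sized frames and ignoring Ethernet
           * preamble and inter-frame gap.
           */
          #include <stdio.h>

          int main(void)
          {
              const double link_bps   = 1e9;           /* 1 Gbit/s */
              const double frame_bits = 1500.0 * 8.0;  /* bits per full-size frame */

              /* prints roughly 83333 */
              printf("packets/s: %.0f\n", link_bps / frame_bits);
              return 0;
          }

      And that's the gentle case: at minimum-size 64-byte packets, gigabit line rate is roughly 1.5 million pps. At two user/kernel crossings per packet, even a fraction of a microsecond per crossing eats a meaningful share of the per-packet time budget.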

      • Interesting, because other software can transfer gigabit+ without being in the kernel.
        • by carton ( 105671 )

          Yup, "faster VPN in kernel" is an outdated bias.

          Intel DPDK handles ~50x more pps per core partly by running in userland instead of kernel. This is the basis of Netgate's high-throughput VPN product, TNSR [tnsr.com], and a competing (more open) project, DANOS [danosproject.org].

          High-throughput VPN is an especially targeted use-case of this new type of userland networking. It's not quite normal userland like /dev/tap or BSD sockets, but I'm not sure it's really different in any fundamental or abstract way rather than simply more modern.
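          For the curious, a minimal sketch of what that userland model looks like in DPDK terms -- hypothetical glue that assumes rte_eal_init() and port/queue setup have already been done, not code from TNSR or DANOS:

              /*
               * Hypothetical DPDK-style receive loop. The NIC queue is
               * polled directly from userland, so there are no syscalls
               * or context switches on the per-packet path.
               */
              #include <rte_ethdev.h>
              #include <rte_mbuf.h>

              #define BURST_SIZE 32

              static void
              rx_loop(uint16_t port_id)
              {
                  struct rte_mbuf *bufs[BURST_SIZE];

                  for (;;) {
                      /* pull up to BURST_SIZE packets off the NIC queue */
                      uint16_t nb_rx = rte_eth_rx_burst(port_id, 0, bufs, BURST_SIZE);

                      for (uint16_t i = 0; i < nb_rx; i++) {
                          /* decrypt/route/forward would happen here */
                          rte_pktmbuf_free(bufs[i]);
                      }
                  }
              }

          The trade-off is that a polling core is pinned at 100% whether or not traffic is flowing -- fine for an appliance, less so for a laptop.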

        • Of course it can. With additional overhead. And that's really the problem. Userland VPNs perform poorly.

          Also, even userland VPNs rely on some form of kernel support for their interfaces. Good luck getting OpenVPN to work without TAP/TUN compiled into the kernel at the very least. Beyond those slower protocols you can look to the classics: the entire L2TP protocol is embedded in the kernel, as is PPTP. Actually, it would be a more interesting exercise to find a high-performance VPN protocol which isn't implemented at least partially in the kernel.
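          To make the TUN/TAP dependence concrete, a minimal Linux-flavored sketch -- BSD uses /dev/tunN and different ioctls, so treat this as illustrative rather than portable:

              /*
               * Minimal sketch: opening a TUN device on Linux. Every
               * packet the VPN handles afterwards is a read()/write()
               * on this fd, i.e. a user/kernel crossing -- which is
               * where the userland overhead comes from.
               */
              #include <fcntl.h>
              #include <string.h>
              #include <sys/ioctl.h>
              #include <unistd.h>
              #include <linux/if.h>
              #include <linux/if_tun.h>

              int
              tun_open(const char *ifname)
              {
                  struct ifreq ifr;
                  int fd = open("/dev/net/tun", O_RDWR);

                  if (fd < 0)
                      return -1;

                  memset(&ifr, 0, sizeof(ifr));
                  ifr.ifr_flags = IFF_TUN | IFF_NO_PI;  /* raw IP, no extra header */
                  strncpy(ifr.ifr_name, ifname, IFNAMSIZ - 1);

                  if (ioctl(fd, TUNSETIFF, &ifr) < 0) {
                      close(fd);
                      return -1;
                  }
                  return fd;  /* read() = outbound packets, write() = inject inbound */
              }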

  • by bill_mcgonigle ( 4333 ) * on Saturday March 27, 2021 @04:05PM (#61206178) Homepage Journal

    It's no secret that Netgate is *very* late with WireGuard. Some random guy on the forums had it working nearly a year ago, which is what I'm running on my pfSense boxes (and why I'm still using pfSense, frankly).

    It's also the audited userland version, so I'm glad I didn't hurry up and upgrade to the 2.5.0 release with in-kernel Wireguard. What's worse is that the Netgate staff has been shittalking the userland version, yet for me it's been totally stable, if a bit of a pain in the ass to get set up the first time.

    The real thorn is that Opnsense, the community fork of pfSense that has gone its own way, has had WireGuard for, what, a couple of years now? And they take patches in a timely manner (one of my pfSense fixes took > 3 years to commit). But there's no easy way to migrate in either direction, so it's not like the userbases are entirely interchangeable.

    And, yes, I send Netgate >$1000 orders every so often, contribute code, and file decent bugs, so it's not like I have it out for those guys. I just wish they would listen to their users a bit more.

    At least they backed down from deprecating their own hardware from a few years ago which was "the plan" for some bullshit reason centering on AES-NI-ish things. Cache-timing attacks made their theoretical attack seem esoteric and difficult by comparison.

    I get that it's not easy to make money in this business but it would be better to build a robust community and fandom to increase sales.

  • Who cares? At least it doesn't have systemd on it - so the problem isn't a problem at all. Quit complaining.
  • by Anonymous Coward
    When Linus Torvalds pulled WireGuard into the linux kernel [slashdot.org] it was called out for excellence for being "just 4,000 lines of code, versus OpenVPN's 100,000 lines of code." How the heck has it increased by an order of magnitude?
  • If you don't know your history of how Netgate hijacked the pfSense project for profit, or how sinister the shills running Netgate are, then wake up and do some homework. If you believe in the values of a community-driven distribution, with integrity, then start avoiding the insidious trolling of the Netgate team and move onward and upward. Leave it to Jim Thompson to become so desperately insecure that he attacks the integrity of the FreeBSD project rather than consulting his changelogs in a mirror. Live fr...

  • What's it going to take to turn coding into a regulated profession, where someone's license can get yanked and they can be fined for egregious violations of documented standards?

