
FreeBSD 10 To Use Clang Compiler, Deprecate GCC

An anonymous reader writes "Shared in last quarter's FreeBSD status report are developer plans to have LLVM/Clang become the default compiler and to deprecate GCC. Clang can now build most packages and suits their BSD needs well. They also plan to have a full BSD-licensed C++11 stack in FreeBSD 10." The article adds: "Some vendors have also been playing around with the idea of using Clang to build the Linux kernel (it's possible to do with certain kernel configurations, patches, and other headaches)."
  • by jps25 ( 1286898 ) on Sunday May 13, 2012 @01:16PM (#39986869)

    The GPL.

  • by vlm ( 69642 ) on Sunday May 13, 2012 @01:22PM (#39986931)

    Aside from the more or less irrelevant licensing issue, clang is all about the source analysis tools, refactoring, and rewriting support. It also uses less memory and compile time, though both are partly a consequence of doing less optimization.

  • by bonch ( 38532 ) * on Sunday May 13, 2012 @01:23PM (#39986939)

    Clang and FreeBSD aren't proprietary software. They're BSD-licensed open source. That code doesn't magically disappear when a company uses it.

  • by BasilBrush ( 643681 ) on Sunday May 13, 2012 @01:24PM (#39986947)

    Avoiding the GPL is the main reason. But Clang also has many technical advantages over GCC. Wikipedia gives a quick outline of them.
    http://en.wikipedia.org/wiki/Clang [wikipedia.org]

  • by perpenso ( 1613749 ) on Sunday May 13, 2012 @01:30PM (#39986997)

    What's wrong with GCC?

    Some people argue that LLVM/Clang offers better code generation, compile time warnings, and code analysis. Some compiler developers think the gcc code has become too bloated and complicated. Even gcc devs have described the gcc code as "cumbersome".

    There are various efforts to get Linux building under LLVM/Clang, especially for embedded environments.

  • by beelsebob ( 529313 ) on Sunday May 13, 2012 @01:41PM (#39987097)

    1) It compiles slower than clang at -O0
    2) It produces slower code than clang at -O3 and -Os
    3) Its error and warning messages are not as good
    4) It's not as modular as clang, which can be used in parts, to produce useful tools like CSA
    5) The GPL.

  • by Alex Belits ( 437 ) * on Sunday May 13, 2012 @01:43PM (#39987121) Homepage

    One example would be integrating the compiler with its own custom tools.

    The only valid way of integrating compiler with custom tools is calling the compiler from them (everything else is shit design made by shit developers). That was done with gcc for as long as gcc exists.

  • by rubycodez ( 864176 ) on Sunday May 13, 2012 @02:03PM (#39987309)
    the topic of this thread is "What's Wrong With GCC?", not "What's Wrong with the GPL"
  • by ommerson ( 1485487 ) on Sunday May 13, 2012 @02:03PM (#39987313)

    One of the key design objectives of Clang is that it is highly modular, and implemented in such a way that various compilation stages are self-contained, with clean APIs and data structures. This allows development tools such as IDEs to link directly against the stages of the compilation pipeline they need in order to implement syntax highlighting, code completion, refactoring tools and so on.

    Apple's Xcode does precisely this, and licensing and the lack of modularity in the GCC source tree would have been major factors in their choice to support Clang and LLVM development.

    The traditional way of implementing these functions in IDEs has been to effectively re-implement the front-end of the compiler (often not completely). This is a big deal when developing in C++ against the STL/Boost/TR1, when you find that code completion can't grok templates properly. This is something that Xcode and Visual Studio (which takes a similar approach) are both capable of doing.

  • by jedidiah ( 1196 ) on Sunday May 13, 2012 @02:14PM (#39987387) Homepage

    Of course the GPL will be brought up sooner or later as a sort of bogeyman.

    I never did get why any sort of freeware community would shill for corporate abusers.

    If you need to care about something like the GPL then you are violating basic modularity principles taught in the first year of any CS program.

  • by mister_playboy ( 1474163 ) on Sunday May 13, 2012 @02:28PM (#39987473)

    There's a reason even the shining monument of GPL (Linux) uses GPLv2...

    Even if Linus did want to move the kernel to GPLv3 (he doesn't), he would have to get every kernel contributor to agree to the license change, AFAIK.

  • by 0100010001010011 ( 652467 ) on Sunday May 13, 2012 @02:54PM (#39987701)

    iXsystems [ixsystems.com]. Juniper Networks [juniper.net]. Apple [apple.com]

    I'm willing to bet that all three have some proprietary stuff that they're not feeding back. It doesn't mean that they completely ignore the community. Apple owns CUPS now. iXsystems picked up FreeNAS development.

    GPLv3 probably wouldn't make it anywhere into these companies.

  • by Anonymous Coward on Sunday May 13, 2012 @03:14PM (#39987851)

    Not really, he'd just deprecate everything prior to however many years he could verify change history on, relicense the new stuff, and either remove or rewrite what was 'contested'. It wouldn't be the first time, and given the talk about dropping 'legacy isa' driver support, it seems probable this will happen at some point.

    This particular item is annoying, since most software won't run on legacy Linux versions due to gcc/glibc/kernel/userspace hell, and there is no real upgrade path: the kernel bloat has gotten ridiculous, the constant API changes have made driver verification impossible, and feature breakage has made it a crapshoot for both non-x86 and x86 hardware to function stably on any particular set of hardware. As an example, I've been having hardlocks on both RV and RS200 class hardware, both of which worked stably about 4-6 years ago, but both of which now randomly freeze on some subset of operations. And unless you're a dev yourself, good luck getting someone to trace it down on hardware that old.

    Everything other than Radeons, NV03 and above GeForces, and i830 class hardware (in the i945/* bridge driver) has had 3D support dropped due to the removal of DRI1 support. Kernel modesetting is now required for either most or all of the drivers that are left, which means using a framebuffer that on most hardware is *SLOW* if you do a lot of console-based applications, especially on the legacy video cards. And just to top things off, 8-bit pseudocolor has been broken for 5 years, only showing greyscale, which means anybody with 8-bit color hardware (say, the SPARC crowd?) can't see what they're doing if they're running an up-to-date X install. Not that that's a problem, since Linux support for 32-bit SPARC hardware has been broken since 2.2.19 for SMP boxes and 2.4/2.6.something for UP boxes on the rest. Not counting LEON hardware.

  • by unixisc ( 2429386 ) on Sunday May 13, 2012 @03:18PM (#39987883)

    Of course, it's a pity, because even if you Tivoized GPLv2 code you still had to share your source so people could learn from it, or use and modify it on other (or jailbroken) hardware, whereas now people are moving to BSD-style licenses with no such benefits... but if the FSF want to let the perfect be the enemy of the good, declare jihad on Tivoization and have a tilt at the patent windmill, that is their right.

    This is absolutely the case! When TiVo was complying w/ GPLv2, the FSF suddenly discovered a major objection to their practice - namely, that they were putting the code in read-only devices - and declared a jihad on the company. However, even GPLv3 doesn't explicitly say that GPL software cannot be put in read-only memory (which would again violate GNU Freedom #3) or copy-protected memory (which could prevent the device that contains the software from getting copied), or anything else about the devices that the software can reside on.

    As you very well put it, it's one more of those cases of the perfect being the enemy of the good, and in the process, the FSF waging a war on its own licensees, namely TiVo. Given that track record, which company in its right mind, even if they endorsed the liberation of software, would want to get into bed w/ the FSF?

  • by zixxt ( 1547061 ) on Sunday May 13, 2012 @03:36PM (#39987995)

    1) It compiles slower than clang at -O0
    2) It produces slower code than clang at -O3 and -Os
    3) Its error and warning messages are not as good
    4) It's not as modular as clang, which can be used in parts, to produce useful tools like CSA
    5) The GPL.

    Got facts to back this up? Every benchmark I have seen has shown GCC producing faster code than Clang about 90% of the time, and Phoronix benchmarks in the last week have shown this to be true.

  • by ZorroXXX ( 610877 ) <[hlovdal] [at] [gmail.com]> on Sunday May 13, 2012 @03:38PM (#39988029)
    Declaring functions or variables static at file scope gives them internal linkage, making them "private" to that file. E.g. in this case, several source files can define global variables named world_type without collisions, provided all declare them static. One of the files might omit the static keyword, but if two or more source files declare non-static global variables named world_type then the linker will (correctly) complain when linking.
  • by TheRaven64 ( 641858 ) on Sunday May 13, 2012 @03:51PM (#39988147) Journal
    I'm just on my way back from BSDCan and the FreeBSD DevSummit. At the DevSummit, there was a Vendor Summit, for companies that use FreeBSD in their products. Not all were there (Sony, for example, was absent), but companies like Fusion IO, Yahoo, IX Systems, Juniper, Apple, and so on all sent people. There were about 40 companies represented in total, for a developer meeting with about 70 attendees.
  • by TheRaven64 ( 641858 ) on Sunday May 13, 2012 @03:56PM (#39988193) Journal
    There are companies shipping FreeBSD-based products using MIPS, ARM, and PowerPC, as well as x86[-64]. ARM support in LLVM is good (ARM and Apple both work on it). MIPS support is mostly there if you use an external assembler, but the integrated assembler is broken - some MIPS people are working on it. PowerPC just has three guys working on it, but everything except some thread-local storage models and position-independent code on 32-bit works. We're probably going to flip the switch for x86[-64] to default to compiling the base system with clang this week (I meant to do it yesterday, but I was only near my computer at the same time as I was near beer, and thought that this was a commit that should be done sober). For other architectures, it may take a little bit longer.
  • Re:Not a bad idea (Score:5, Informative)

    by rubycodez ( 864176 ) on Sunday May 13, 2012 @04:01PM (#39988257)
    Linux depends on gcc-specific extensions, and not just typedefs but exact layout in memory (which C doesn't specify), in-line assembly syntax http://www.ibm.com/developerworks/linux/library/l-gcc-hacks/ [ibm.com]
  • by colinrichardday ( 768814 ) <colin.day.6@hotmail.com> on Sunday May 13, 2012 @04:14PM (#39988413)

    According to Kernighan and Ritchie, the static modifier restricts the scope of externally declared variables to the rest of the source file. AC might not want to extend the GPL and BSD definitions of free/unencumbered to non-software contexts.

  • by TheRaven64 ( 641858 ) on Sunday May 13, 2012 @04:15PM (#39988431) Journal
    At -O2 and -O3, clang and gcc are within 10% for the vast majority of code, with no overall winner. There are a few corner cases, however:
    • The autovectorisation support in LLVM is a very elegant design, but is very new code and so still performs worse than GCC in a lot of cases (about 70% of the autovectorisation test suite is faster with GCC).
    • Clang has no in-tree support for OpenMP, so anything using OpenMP (vaguely competently) will be faster with gcc because clang will fall back to the single-threaded version.
    • GCC's Objective-C support is just embarrassing, and (on non-Apple platforms) performance can be an order of magnitude better with clang, with a 20-50% speedup being pretty common.
  • by russotto ( 537200 ) on Sunday May 13, 2012 @04:40PM (#39988629) Journal

    Here is what I personally don't get, maybe someone can explain it to me, but WTF was it with RMS and the TiVo?

    Tivo did exactly what RMS started the Free Software Foundation to prevent (The Printer Story [gnu.org]). What did you expect would happen?

  • by smash ( 1351 ) on Monday May 14, 2012 @12:26AM (#39991557) Homepage Journal

    One reason for that is because the GCC team won't accept Apple's patches for new versions of Objective-C. Apple want to move Objective-C forward, GCC has become a barrier to that, so they support CLANG/LLVM development. The version of GCC included is simply for legacy support and will be removed in due course once CLANG support for C++ is good enough.

    Clang/LLVM also gives you nifty stuff for interfacing with the IDE, far better compilation errors/warnings, faster compile times, etc.

  • by adri ( 173121 ) on Monday May 14, 2012 @02:38AM (#39992099) Homepage Journal

    Really? CDDL and GPL software have their own directories. You can build totally functional kernels, using only the tools given to you by the build system, without:

    * any binary blob drivers (and yes, ath(4) doesn't need a binary blob, so you can get wireless support!);
    * any GPL or CDDL code (so no ZFS and no ext2fs, for example.)

    Please re-inform yourself of the reality of FreeBSD's distribution and structure before making such uninformed comments.

    -adrian (@freebsd.org)

  • by jbolden ( 176878 ) on Monday May 14, 2012 @07:05AM (#39993079) Homepage

    We don't have to guess which model works best; at this point we have historical data. Your model failed with respect to X. MIT created and maintained an X that they released via the MIT license. All the UNIX vendors then took this MIT code and intermixed it with their custom code, creating value-add X's that were specific to their platform, and closed source. The effect was that the X that existed in the public domain was worthless for end users, and the X's that were worthwhile were closed. X itself couldn't progress because it fragmented, so all the interesting stuff existed in other layers. Years later, when there was a desire for a workable open X, the XFree86 project had to start essentially from scratch, and this took years. We still haven't gotten all the features that existed in those proprietary X's two decades ago.

    That is the classic example of why BSD style licensing doesn't work. The primary maintainer is not unchanging.

    Conversely the GPL has a long history of successful multi corporate contributions over time. The historical data simply refutes your theory of what should work.
