Apple's Grand Central Dispatch Ported To FreeBSD

bonch writes "Apple's Grand Central Dispatch, which was recently open-sourced, has been ported to FreeBSD and is planned to be included by default in FreeBSD 8.1. Also known as libdispatch, the API allows the use of function-based callbacks and will also support blocks if built using FreeBSD's clang compiler package. There's already discussion of modifying BSD's system tools to use the new technology." The port was originally unveiled last month at the 2009 Developer Summit in Cambridge. Slides from that presentation are available via the Dev Summit wiki.
This discussion has been archived. No new comments can be posted.

  • by jpedlow ( 1154099 ) on Friday October 16, 2009 @05:59PM (#29773495)
    My first question was "So... what does this do?" Apparently it is a more efficient way of scheduling threads on multi-core systems. Apple's site says this: "Grand Central Dispatch (GCD) in Mac OS X Snow Leopard addresses this pressing need. It’s a set of first-of-their-kind technologies that makes it much easier for developers to squeeze every last drop of power from multicore systems. With GCD, threads are handled by the operating system, not by individual applications. GCD-enabled programs can automatically distribute their work across all available cores, resulting in the best possible performance whether they’re running on a dual-core Mac mini, an 8-core Mac Pro, or anything in between. Once developers start using GCD for their applications, you’ll start noticing significant improvements in performance." So this seems good, then.
  • by turgid ( 580780 ) on Friday October 16, 2009 @06:09PM (#29773589) Journal

    That sounds like marketing-speak. That's the whole point of preemptively scheduled native threads.

    GCD-enabled programs can automatically distribute their work across all available cores

    Been there, done that on Solaris and Linux 10 years ago in plain old C. No magic required, just

    #include <pthread.h>

    and away you go. In fact, I spent an hour one day writing some C to automatically multi-thread those embarrassingly parallel array operations.

    man 3 pthread_create - the world is your lobster.

  • Re:OpenMP (Score:3, Interesting)

    by fred fleenblat ( 463628 ) on Friday October 16, 2009 @06:34PM (#29773793) Homepage

    It seems like the ability to share work across machines, not just cores, would be the critical difference.

  • by norton_I ( 64015 ) on Friday October 16, 2009 @07:32PM (#29774275)

    GCD is a mechanism to let one central authority dispatch threads across multiple cores, for all running applications (including the OS).

    This is what most people talk about, and what is most obvious from the name, but it is not the interesting part of GCD.

    The interesting part of GCD is blocks and tasks, and it is useful to the extent to which it makes expressing parallelism more convenient for the programmer.

    The "central management of OS threads" is marketing speak for an N:M scheduler with an OS-wide limit on the number of heavyweight threads. This is only useful because OS X has horrendous per-thread overhead. On Linux, for instance, the correct answer is usually to create as many threads as you have parallel tasks and let the OS scheduler sort it out. Other operating systems (Solaris, Windows) have caught up to Linux on this front, but apparently not OS X. If you can get the overhead of OS threads down to an acceptable level, it is always better to avoid multiple layers of scheduling.

  • Re:No. Really? (Score:2, Interesting)

    by phantomcircuit ( 938963 ) on Friday October 16, 2009 @08:23PM (#29774575) Homepage

    So any modifications to ZFS that they included in their shipped product had to be distributed? Gee, thanks for that, Apple. And isn't WebKit based on KHTML?

  • by bill_mcgonigle ( 4333 ) * on Friday October 16, 2009 @09:50PM (#29775075) Homepage Journal

    Apple maintains their own gcc fork which supports blocks/closures.

    why won't gcc take the patches?

  • by SuperKendall ( 25149 ) on Saturday October 17, 2009 @12:39AM (#29775681)

    It's a win/win, but the mini-Stallmans will never see it that way.

    To the contrary: I am a huge fan of Stallman's philosophy and see Apple's work as win/win.

    The people who are complaining are the loony Apple haters, who try to find any point possible by which to attack Apple, never realizing until it is too late that the latest position they are attacking from makes no sense.

    Please do not tar all of us who respect the GPL with the same brush the haters use to paint themselves into corners.

  • by jcr ( 53032 ) on Saturday October 17, 2009 @02:21AM (#29775939) Journal

    The probability that Apple migrates away from gcc is approaching 1 at great speed.

    That writing's been on the wall for quite a long time now. GCC has been a severe limitation on what they could do with Xcode for far too long. With LLVM, I'm expecting shortly to get away from the edit/compile/debug cycle and have a pause/edit/resume cycle instead.

    Right now in Quartz Composer, you can write GL SLANG shaders that are compiled on the fly as you type. It's amazing to see the effects of changes immediately.


  • by Anonymous Coward on Saturday October 17, 2009 @06:25AM (#29776495)

    It is not the context switch that is the bottleneck for performance. Dependencies are. Here is an easy example. Assume you have 2 cores and 4 tasks. Two tasks, let's call them A and B, are trivially parallel: they don't depend on anything and nothing depends on them. Then there is task D, which depends on task C. Assume all tasks need the same time t and context switches are free.

    You would just throw A, B and C at the processor at the same time, because you say extra threads don't cost much. Because D depends on C, you start it after C is finished (probably in the same thread, but that is not important here). Because there are only 2 cores, the 3 tasks started together are finished after 3/2 t. After that D is started and needs another t. So the total time is:

    5/2 t

    Now we take a slightly smarter approach. We use a central manager, and we tell it all the tasks at the beginning along with their dependencies. The manager also knows the number of cores. Let's assume we have implemented a quite trivial heuristic: the priority of a task equals the number of tasks that depend on it (higher priority means it is scheduled earlier).

    So the manager schedules task C first, and because another core is available it also schedules task A, but not B, because there is no capacity for it (w.l.o.g.). After time t, C and A are finished and the manager schedules B and D. Their computation takes another t. So the total processing time is:

    2 t < 2.5 t

  • by Shinobi ( 19308 ) on Saturday October 17, 2009 @11:43AM (#29777883)

    It goes further and deeper (and uglier) than that. Remember when Apple released the G5s? Apple actually submitted patches to GCC, but they were declined, with GCC's official stance being that they would reduce GCC's portability. However, a few weeks later, IBM submitted some patches to GCC that were promptly accepted. IBM's patch package contained many of Apple's patches.
