FreeBSD 11.0 Released (freebsdfoundation.org)

Long-time Slashdot reader basscomm writes, "After a couple of delays, FreeBSD 11 has been released. Check out the release notes here." The FreeBSD Foundation writes: The latest release continues to pioneer the field of copyfree-licensed, open source operating systems by including new architecture support, performance improvements, toolchain enhancements and support for contemporary wireless chipsets. The new features and improvements bring about an even more robust operating system that both companies and end users alike benefit greatly from using.
FreeBSD 11 supports both the ARMv8 and RISC-V architectures, and also supports the 802.11n wireless networking standard. In addition, OpenSSH has been updated to 7.2p2, and OpenSSH DSA key generation has been disabled by default, so "It is important to update OpenSSH keys prior to upgrading."
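For readers planning the upgrade, a minimal sketch of that key migration (the key type and file paths are OpenSSH defaults; the server address is a placeholder):

    # generate a replacement Ed25519 key pair (DSA generation is disabled in OpenSSH 7.x)
    ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519
    # install the new public key on each host you log in to
    ssh-copy-id -i ~/.ssh/id_ed25519.pub user@server.example.com
    # once the new key is confirmed working, retire the old DSA pair
    rm ~/.ssh/id_dsa ~/.ssh/id_dsa.pub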
  • Best wait a few weeks to make sure it's really released for real this time. This has been an embarrassing episode.

    • by Anonymous Coward

      The FreeBSD project never released version 11.0 before. Some early images were uploaded, tested and replaced. Anyone who is at all familiar with operating systems knows not to install and use unannounced ISO files. What is embarrassing is that apparently some people are too impatient to wait for a release to be tested and announced before trying to use it. I have zero sympathy for them.

      • by Anonymous Coward

        /second. FreeBSD is deliberate. A few people jumped the gun. Hell even I did, when I saw the new images show up last night. I took a chance and figured they were official. Quite frankly, there are operations that would have just pushed the new release out, this late in the game. You have to commend these guys for doing it right. The guys who want to jump the gun and get burned are just whining that they guessed wrong.

      • It's the open-source equivalent of downloading and burning a "Golden Master".

  • Focus (Score:2, Insightful)

    by Anonymous Coward

    The new features and improvements bring about an even more robust operating system that both companies and end users alike benefit greatly from using.

    I like these guys. They know what's important and where to focus their attention.

    Others, well, let's say they are more concerned with bells and whistles and eye candy.

    • by arth1 ( 260657 )

      With that-which-should-not-be-mentioned killing Linux, jumping ship to BSD becomes more and more attractive.

      Unfortunately, there are still some pieces missing, like the lack of high-performance file systems (ZFS is high-functionality, not high-performance) and the lack of software for many RAID controllers, or kernel-hooked utilities we take for granted these days like inotify, SELinux and cgroups (when used by the admin to control resources, not as a crutch for that-which-should-not-be-mentioned).

      • by dbIII ( 701233 )

        and lack of software for many RAID controllers

        Not such a huge deal now since ZFS exists to do the RAID calculations.
        RAID controllers just do not have much processing power or memory, so getting the system's CPU to do the work almost always gives you better performance once you are talking about RAID6 or raidz2. I'm not even sure that they will give you better performance with mirroring.
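        As a concrete sketch of that approach, a double-parity pool on FreeBSD is one command; the pool name and the da0-da5 device names here are placeholders for disks presented in JBOD mode:

          # create a raidz2 pool (double parity, comparable to RAID6) across six disks
          zpool create tank raidz2 da0 da1 da2 da3 da4 da5
          # verify layout and health
          zpool status tank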

        • I told my IT department something similar and they were on the floor laughing at me for my supposed stupidity.

          They use RAID cards even for their home media servers, as apparently it is still believed that the CPU handles an IRQ for each read or write. I explained that PATA DMA solved that in 1997 and a modern CPU has I/O on the die.

          My co-worker's RAID card at home was in a Windows 7 system too. Oh, and he had SSDs, so no TRIM on that either! .... Sorry for the rant, but software RAID has a very, very bad rap. People still feel i

          • by dbIII ( 701233 )

            I told my IT department something similar and they were on the floor laughing at me for my supposed stupidity

            They are very much out of date. Ask them what processors are in the RAID cards; they will not have a clue, but they may look things up and get up to date.

            A current Xeon can do a bit more than a PowerPC CPU that came out eight years ago. ZFS (and several other things) does exactly the same RAID calculations as those cards, only on significantly faster hardware.

            • Add to that, recent x86 chips can do XOR in the DMA engine, so you actually have hardware RAID built into your CPU with a direct connection to the DRAM. Replacing it with one that's dangled off the PCIe bus doesn't sound too sensible.
              • by swb ( 14022 )

                I generally agree that software RAID is fine, but will counter that even on generic Dell boxes with PERC RAID-6 controllers and SATA disks in RAID6, it's pretty easy to see writes do 250 MB/sec.

                The reality is your disks are off the PCIe bus no matter what.

                I don't think that it's controller performance per se that's the limitation anymore; it's that software-defined RAID usually has a much richer feature set, whether it's SSD caching, tiering, block-level striping, or the ability to serve storage off the net

                • by dbIII ( 701233 )

                  The reality is your disks are off the PCIe bus no matter what.

                  The reality is that something has to do the RAID calculations, and a single-core 1GHz 32-bit PowerPC is not as good at it as a Xeon with several cores, each at least three times faster. You have to do those calcs before shovelling data to the disks, and if you can do them even before shoving the data through the PCIe bus, even better.
                  With RAID6/raidz2 it's not going to take a lot before that single-core 1GHz 32-bit PowerPC can't keep the disks fed at full sp

                  • by swb ( 14022 )

                    Current SAS-12 LSI cards have a dual-core 1.2GHz CPU, which leads me to believe the bottleneck may have been eased and, for conventional in-server applications, probably isn't a real bottleneck.

                    Mind you, I'm not disagreeing that there's more value in software raid, especially as disk counts go high, but for many in-server storage setups, especially spinning rust, you're going to exhaust disk throughput way before you exhaust the controller throughput.

                    • by dbIII ( 701233 )

                      which leads me to believe the bottleneck may have been eased

                      So long as you don't have to replace a failed disk and do a rebuild of the array to repopulate that disk.
                      In that situation beyond a small array it's going to want as much CPU as it can get and not be bound by the disk speed.

                    • by swb ( 14022 )

                      But any array rebuild is inherently bound by the speed of the replaced disk, as the remaining members can provide the data needed to reconstitute it far faster than the disk itself can write it. The performance of the card in calculating parity info to do this hasn't been a significant issue that I can remember since the days of 5 x 1GB RAID 5 arrays some 20 years ago; in fact, most cards have options to reduce rebuild rates below 100% to reduce performance drags on ongoing access. The drag isn

                    • by dbIII ( 701233 )
                      For what it's worth, I've seen plenty of "resilvers" take around a quarter of the time that an array rebuild with a replaced drive used to take on the same hardware, back before it ran ZFS and used the CPU instead.
                      Maybe those LSI RAID cards are crap compared with others, but they definitely choke on doing the calculations.

                      nobody really builds out single server storage anyway

                      Apart from the people who do. There seem to be a lot of those "nobodies" in scientific computing. There seem to be a lot in other areas too.

                • On the plus side for RAID cards is battery-backed write caches and freeing the OS from a certain level of overhead -- you can just unload your write on the card in very few cycles and let it deal with sorting out the parity and actually managing disk writes.

                  This is not an issue for ZFS. RAID-Z doesn't have the RAID-5 write hole.

                  You also get the advantage of a redundant OS boot LUN.

                  FreeBSD can happily boot from a RAID-Z pool (though you do need to duplicate the bootloader on all disks; that's under 1MB).
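                  As a hedged illustration, that duplication is one gpart invocation per member disk, assuming a GPT layout with the freebsd-boot partition at index 1:

                    # write the protective MBR and the ZFS-aware loader to each disk's boot partition
                    gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 da0
                    gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 da1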

                • I will say a RAID card is ESSENTIAL for any type of database, financial processing, or warehouse operation because of the reliability and the battery backup in case of a failure.

                  When the power goes off on a computer, not all of the data can be flushed to disk in time.

                  But for a home media server on Windows 7? Come on, 2001 was a long time ago :-)

                  I would say these days, with LUNs, people use eSAN for anything really important anyway and use just the CPU to boot the server. TRIM support is important for anyone using S

                  • by dbIII ( 701233 )
                    It would be essential if you don't have a file system designed to cope with pulling the plug. However, there are several that are.
                    I have not bothered to replace the batteries in my older RAID cards because there is no longer any need, just as it's now better to run them in JBOD mode.
                    A lot of those SAN devices use ZFS and do the heavy lifting in software.
        • by arth1 ( 260657 )

          Not such a huge deal now since ZFS exists to do the RAID calculations.

          That doesn't help much if you have to go through a RAID controller, because that's the only way you can get a huge number of drives from particular vendors.

          ZFS is great, but a hardware controller will often support things like:

          - Boot partition is also on a RAID.

          - Ability to add a drive to a RAID, expanding it.

          - Global and local hot-spares.

          - No system load choke during rebuild. That's probably the #2 issue with ZFS: systems are near unusably slow during rebuilds. Granted, the rebuilds take less time (at l

          • by dbIII ( 701233 )
            Several errors, but here is the HUGE one.

            Sure, you can still use a hardware RAID in BSD, but if you have to shut the system off to do maintenance because there is no RAID controller software that can run from userspace

            There is "mfiutils" for the LSI stuff, even the older 3ware stuff has RAID controller software that can run from userspace on FreeBSD - via a web browser or via the 3dm2 tool. There is other stuff for other vendors RAID cards but I'm not familiar with it.

          • by epine ( 68316 )

            I'm late to this party, but just so anyone who stumbles upon this thread by some quirk of Google in the future knows: the views expressed above are not reliable. It's not apparent that the author knows much of anything about the ZIL or the SLOG. There are trade-offs involved with ZFS, no question. But none of these are anywhere near as inane as this post would seem to have it.

            If the vast majority of your workload is synchronous write, you do have to provide a SLOG with as much write bandwidth as the rest of your pool. E
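            For anyone following along, attaching a dedicated SLOG to an existing pool is a one-liner; a mirrored pair is commonly recommended so a single log-device failure is survivable (the pool name and ada devices are placeholders):

              # add a mirrored pair of fast SSDs as a separate ZFS intent log (SLOG)
              zpool add tank log mirror ada4 ada5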

  • Tell me that there's actually been a way to do it all along, but now there's just a better way.

    • by Anonymous Coward

      The FreeBSD slogan/motto/something is "the power to serve". Wireless standards just aren't a high priority for server operating systems.

      • In which case, they should have a 'downstream' team of PC-BSD/TrueOS driver writers who make sure that the OS works with laptops and AIOs that use WiFi and other desktop peripherals, such as printers.
        • by adri ( 173121 )

          No, it's been in there since FreeBSD 9. I just taught it more things about 11n and bugfixed what was there.

    • What I'm wondering is whether they support WiFi out of the box for Intel WiFi chipsets. I have a Dell Inspiron i7, and PC-BSD was unable to recognize the WiFi. Hopefully, 'TrueOS 11.0' would not have that shortcoming - although I'd have to get that DVD to do a new install
    • Nope, a mistake. They meant to say 802.11n support was added for additional WiFi drivers [freebsd.org]; see their errata.

      This reminds me of the media mistakenly reporting that kernel 2.6.0 introduced SMP support.

      Boy, the MCSEs at work were laughing, saying Server 2003 rules and had had that for years! Grrr

      • Nope, a mistake. They meant to say 802.11n support was added for additional WiFi drivers; see their errata.

        Thanks for that. I was sure that this was not the whole story. *BSD might be behind Linux in many things, but not that far behind.

    • I was a bit surprised about that, as my FreeBSD 10 machine has had working 802.11n since 10.0 (support was in head for a long time, but Adrian doesn't like to do MFCs so it took a while to make it into a release).
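      For reference, bringing up an 802.11n interface by hand looks roughly like this (iwm0 stands in for an Intel chipset; the actual driver depends on your hardware, and normally this would all go in rc.conf):

        # create a wlan interface on top of the hardware device
        ifconfig wlan0 create wlandev iwm0
        # bring it up and scan for networks
        ifconfig wlan0 up scan
        # associate using the networks defined in /etc/wpa_supplicant.conf
        wpa_supplicant -B -i wlan0 -c /etc/wpa_supplicant.conf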
  • by FunkSoulBrother ( 140893 ) on Monday October 10, 2016 @04:35PM (#53050377)

    Anyone seeing these? Any adblock/noscript rules that defeat them?

    • I think I saw those a while back. uBlock's element picker worked just fine on it, I think, so two clicks and it's gone.

      uBlock is more or less a replacement for ABP, Noscript and Request Policy, although I think there are a few areas that NS covers that it doesn't (like anti-XSS). GPL, lighter memory footprint than ABP and it comes configured with Easylist out of the box. Very easy to use once you understand what the boxes in the GUI represent and you can effortlessly switch and combine whitelisting
  • by Billly Gates ( 198444 ) on Monday October 10, 2016 @04:49PM (#53050493) Journal

    I know Hyper-V support has been improved since 10.3, as Azure runs that custom port that MS contributed back.

    But KMS/QEMU interests me, as any 2016 IT professional uses virtualization, and VMware Workstation is discontinued, in life-support mode, and sucks greatly.

    • If you don't have specific requirements necessitating another solution (a longer support window, perhaps), I really can't recommend Qubes highly enough. You can run conventional Xen HVMs alongside Qubes AppVMs which 1. are damned fast, 2. can utilize shared templates for disk space efficiency and easier updates, 3. have tools for quickly and securely sharing files or clipboard contents and 4. allow you to intelligently mix and manage color-coded windows in a single task bar. These features are available wi
      • I am talking about KMS and FreeBSD as a host :-)

        Yes, part of KVM was ported to the FreeBSD kernel. For now I'm stuck on Windows 10 with Hyper-V for games. KMS provided GPU and hardware passthrough.

        • Ok, yeah GPU passthrough is still a no-go for Qubes right now. It sounded like you might've been in the market for a general workhorse hypervisor, but 3d gaming is the one area in which it's definitely lacking.

          On the other hand, I should protest that Qubes does make for a superior casual gaming and retro/oldschool gaming experience. There's something to be said for running 2d Steam games (ones that don't have Linux compatibility), Fallout 2 and other oldschool Windows games in their own window sitting in
          • I may take a look. I want to leave Windows but am so dependent on it. I do not want to deal with Wine, and I need nested virtualization for my MCSE Hyper-V labs to learn clustering. Yeah, laugh at me on that last bit.

            VMware Workstation, which is slow on my PC, is being discontinued, leaving me with just Hyper-V on Windows 10 ... But hey, MS contributed that awesome Hyper-V guest support from Azure, and it made it into FreeBSD 11 :-)

            • I switched over from VirtualBox (I only ever tried VMware Player, which I found pretty underwhelming), so I can't provide you with any useful comparisons. It's all Xen under the hood, if you've any experience with that.

              AppVMs, which use PV drivers (maybe PVH? I forget) and various performance tricks, are stupid fast to the point where you'll regularly forget that they're not native. Or maybe ~5 second boot times are the norm on most platforms now, but they certainly weren't on Virtualbox (even with PV dr
    • I'm a bit confused by KMS/QEMU. Do you mean KVM or KMS? Kernel Mode Setting (KMS) has been in FreeBSD since 10.0, and the graphics drivers are now very close to parity with Linux. If you mean KVM, FreeBSD ships with bhyve, the BSD Hypervisor, which is a legacy-free Type II hypervisor. It happily runs Linux and FreeBSD as guests and will run Windows Vista or newer with a little bit of fiddling. VirtualBox also works well on FreeBSD, if you want something more desktop friendly, and FreeBSD works as a Xen
      • I meant KMS. From what I see it is no longer actively developed and quite behind the Linux version (correct me if I am wrong).

        This is just for desktop use at home as a workstation. I need Type 1 speed and guest support, and embedded or nested virtualization to learn some labs with clustering with other solutions like Hyper-V and VMware ESX. Last, I want to run my Steam games too with full hardware acceleration.

        Linus Tech Tips got me interested in this with Unraid, which is a Linux distro of KMS/QEMU with rea [youtube.com]

    • For FreeBSD, they have a different VM solution called bhyve (pronounced "beehive"). On top of that, they have jails for Debian and Gentoo distros of Linux.
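      For the curious, a rough sketch of booting a FreeBSD guest with the example wrapper that ships in /usr/share/examples/bhyve (the VM name, sizes, tap device, and image names are placeholders):

        # load the hypervisor module and create a network tap
        kldload vmm
        ifconfig tap0 create
        # boot a guest with 2 vCPUs and 2GB RAM from an install ISO
        sh /usr/share/examples/bhyve/vmrun.sh -c 2 -m 2G -t tap0 \
            -d guest.img -i -I FreeBSD-11.0-RELEASE-amd64-disc1.iso vm0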
  • I've been a long-time Linux user, but I'm not religious about it, and I've always been curious about the BSDs.

    Can someone give me an elevator pitch, especially about FreeBSD, seemingly the most popular of the BSDs? All the (server) software I use on a regular basis runs on FreeBSD.

    Before someone says "just try it," there's sooo much cool stuff to try (currently learning Clojure and Raspberry Pi stuff), so I need a reason to try it.

    Gimme some.

    • Personally, I am a big fan of jails over VMs. It's a much smaller footprint and essentially as good a sandbox. I haven't run it in a production environment, but by logically breaking out my jails into more discrete functions, like I would with VMs (in my home environment), I feel like I have excellent control and reasonable security I would otherwise not have if all the apps were on the same platform.
      • One thing - does v11 now have SteamOS jails? This was supposed to be there - previous versions had jails for Debian and Gentoo (but not Fedora)
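      To make the jails point concrete, a minimal /etc/jail.conf entry and its lifecycle might look like this (the path, hostname, and address are placeholders):

        # /etc/jail.conf -- one self-contained service per jail
        web {
            path = "/usr/jail/web";
            host.hostname = "web.example.com";
            ip4.addr = "192.168.1.50";
            exec.start = "/bin/sh /etc/rc";
            exec.stop = "/bin/sh /etc/rc.shutdown";
            mount.devfs;
        }

        # start it, get a shell inside it, stop it
        jail -c web
        jexec web /bin/sh
        jail -r web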
    • by Anonymous Coward

      It does what Linux does, just slightly more conveniently and elegantly, with fewer problems at the margins. ZFS on FreeBSD has somewhat fewer problems than on Linux; it's also better integrated with other OS pieces. Jails on FreeBSD are easier to use and work "better" (in my experience and in somewhat limited cases) than LXC containers. It will allegedly do high-volume service faster than Linux (haven't tried it, have no need) and it is MUCH better documented than Linux. I think for someone with motivation to

    • Comment removed based on user account deletion
    • Kernel and Network stacks aside, I have found (some) enlightenment comparing BSD and GNU coreutils:

      cat, for example:
      • http://git.savannah.gnu.org/cgit/coreutils.git/plain/src/cat.c
      • https://github.com/freebsd/freebsd/blob/master/bin/cat/cat.c

      ls is interesting:

      • http://git.savannah.gnu.org/cgit/coreutils.git/plain/src/ls.c
      • https://github.com/freebsd/freebsd/blob/master/bin/ls/ls.c

      Then again, this comparison has also driven me closer to suckless. [suckless.org]

      • http://git.suckless.org/sbase/tree/cat.c
      • http://git.suckle
    • by dbIII ( 701233 )

      Can someone give me an elevator pitch

      If you want something like ZFS, it's a couple of years ahead of Linux.
      Otherwise it's fairly similar, and a slightly better fit on older hardware than current versions of Linux.
      Not much point in trying it without one of those two reasons, since it's really very similar to what you are used to.

      As a final note, the ports collection on FreeBSD appears to be the Gentoo linux dream achieved. Just tick boxes instead of choosing compile flags.

      • As a final note, the ports collection on FreeBSD appears to be the Gentoo linux dream achieved. Just tick boxes instead of choosing compile flags.

        Note that, if you want to compile ports yourself, it's now *strongly* recommended to use Poudriere rather than compiling individual ports stand-alone. Poudriere can compile any subset of the ports tree you want and give you a consistent package set. Poudriere builds ports in a jail so they won't ever accidentally be affected by other things on your system and will always only have the dependencies that are explicitly set. You can then install the packages alongside the upstream ones or, if you want to do
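        As a sketch of that workflow (the jail name, ports tree name, and port origin below are arbitrary examples):

          # create a build jail matching the release, plus a ports tree
          poudriere jail -c -j 110amd64 -v 11.0-RELEASE
          poudriere ports -c -p default
          # set port options interactively (the "tick boxes"), then build packages
          poudriere options -j 110amd64 editors/vim
          poudriere bulk -j 110amd64 -p default editors/vim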

    • Any specific answer would be very point-in-time (hardware and software support etc changes).

      Philosophy-wise, though, there is a difference that is a constant. In FreeBSD, the people who write the kernel are the same people who write libc and the rest of the base system (or at least package and test it). So the core OS feels less like something assembled with glue and duct tape, and more like a single polished product. It's even more obvious when you look at the source - same coding standards everywhere, sam
