
When VMware Performance Fails, Try BSD Jails

Siker writes in to tell us about the experience of email transfer service YippieMove, which ditched VMware and switched to FreeBSD jails. "We doubled the amount of memory per server, we quadrupled SQLite's internal buffers, we turned off SQLite auto-vacuuming, we turned off synchronization, we added more database indexes. We were confused. Certainly we had expected a performance difference between running our software in a VM compared to running on the metal, but that it could be as much as 10X was a wake-up call."
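For readers who want to picture what that tuning looks like, here is a minimal sketch (in Python) of the kind of SQLite knobs the summary mentions; the file name, table, and cache-size values are illustrative assumptions, not YippieMove's actual configuration.

    import sqlite3

    # Rough sketch of the tuning described above: bigger internal buffers,
    # auto-vacuum off, synchronization off, extra indexes. All names and
    # values here are made up for illustration.
    conn = sqlite3.connect("transfers.db")         # hypothetical database file
    conn.execute("PRAGMA cache_size = -262144")    # ~256 MiB page cache (a negative value is in KiB)
    conn.execute("PRAGMA auto_vacuum = NONE")      # disable auto-vacuuming (takes effect on new or re-VACUUMed files)
    conn.execute("PRAGMA synchronous = OFF")       # stop waiting for fsync on every commit
    conn.execute("CREATE TABLE IF NOT EXISTS transfers(id INTEGER PRIMARY KEY, user_id INTEGER, state TEXT)")
    conn.execute("CREATE INDEX IF NOT EXISTS idx_transfers_user ON transfers(user_id)")  # one of the "more indexes"
    conn.commit()

None of this is free: synchronous = OFF in particular trades durability for speed, which is why it reads as a last-resort measure before the switch to jails.
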
  • by OrangeTide ( 124937 ) on Monday June 01, 2009 @09:54PM (#28176677) Homepage Journal

    Virtualization is an excellent story to sell. It is a process that can be applied to a wide range of problems.

    When applied to a problem it often seems to create more performance issues than it solves. But it can make managing lots of services easier, and I think that's the primary goal of these VMware-like products.

    Things like Xen take a different approach and seem to have better performance for I/O intensive applications. But a Xen hypervisor VM is in some ways more similar to a BSD jail than it is to VMware's monitor.

    VMware is more like the way the mainframe world has been slicing up mainframes into little pieces to provide highly isolated applications for various services. VMware has not caught up to the capabilities and scalability of what IBM has been offering for decades, though, even if the raw CPU performance of a PC beats a mid-range mainframe at 1% of the cost (or less). But scalability and performance are two separate things, even though we would like both.

  • by gbr ( 31010 ) on Monday June 01, 2009 @10:03PM (#28176731) Homepage

    We had performance issues with VMware Server as well, especially in the disk I/O area. Converting to XenServer from Citrix solved the issues for us. We have great speed, can virtualize other OSes, and management is significantly better.

  • by xzvf ( 924443 ) on Monday June 01, 2009 @10:11PM (#28176799)
    This is slightly off the server virtualization topic, but I had a similar experience with LTSP and some costly competitors. Using LTSP we were able to put up 5X the number of stable Linux desktops on the same hardware. I'd tell every organization out there to do a pilot bake-off as often as possible. It won't happen all the time, but I suspect that more often than not, the free open solution, properly set up, will beat the slickly marketed, closed proprietary solution.
  • by kriston ( 7886 ) on Monday June 01, 2009 @10:14PM (#28176825) Homepage Journal

    The new buzzword of virtualization has reached all corners of the US Government IT realm. Blinded by the marketing hype of "consolidation" and "power savings," agencies of the three-letter variety are falling over themselves awarding contracts to "virtualize" the infrastructure. Cross-domain security be damned, VMware and Microsoft SoftGrid/Hyper-V/Softricity whatevers will solve all their problems and help us go green at the very same time, for every application, in every environment, for no reason.

    This is the recovery from the client-server binge-and-purge of the 1990s.

    Here we go again.

  • by xzvf ( 924443 ) on Monday June 01, 2009 @10:20PM (#28176891)
    XenServer is a great product and has many skilled developers. The "from Citrix" part really gives me a queasy feeling. I know the products are solid and innovative, but so many people I hear out in the wild scream and run from Citrix. It might be part of the reason Ubuntu and Red Hat are backing KVM for virtualization, even to the point where Red Hat bought Qumranet (the KVM "owners").
  • by ErMaC ( 131019 ) <ermac@@@ermacstudios...org> on Monday June 01, 2009 @10:23PM (#28176913) Homepage

    So I would love to RTFA to make sure about this, but their high-performance web servers running on FreeBSD jails are down, so I can't...

    But here's what I do know. FreeBSD wasn't a supported OS on ESX Server until vSphere came out less than two weeks ago. That means that either:
    A) They were running on the Hosted VMware Server product, whose performance is NOT that impressive (it is a Hosted Virtualization product, not a true Hypervisor)
    or B) They were running the unsupported OS on ESX Server, which means there were no VMware Tools available. The drivers included in the Tools package vastly improve things like storage and network performance, so no wonder their performance stunk.

    But moreover, Jails (and other OS-virtualization schemes) are different tools entirely - comparing them to VMware is an apples-to-oranges comparison. Parallels Virtuozzo would be a much more apt comparison.

    OS-Virtualization has some performance advantages, for sure. But do you want to run Windows and Linux on the same physical server? Sorry, no luck there; you're virtualizing the OS, not whole machines. Do you want features like live migration, high availability, and now Fault Tolerance? Those don't exist yet. I'm sure they will one day, but today they don't, or at least not with the same level of support that VMware has (or Citrix, Oracle or MS).

    If you're a company that's trying to do web hosting, or that runs lots of very similar systems doing the same performance-centric task, then yes! OS virtualization is for you! If you're like the 95% of datacenters out there that have mixed workloads, mixed OS versions, and require deep features that are provided by a real system-level virtualization platform, use those.

    Disclosure: I work for a VMware and Microsoft reseller, but I also run Parallels Virtuozzo in our lab, where it does an excellent job of OS-Virtualization on Itanium for multiple SQL servers...

  • by _merlin ( 160982 ) on Monday June 01, 2009 @10:25PM (#28176931) Homepage Journal

    FreeBSD jails are the same thing as Solaris Zones, just on FreeBSD. Since FreeBSD is about evil daemons, they need an evil-sounding marketing name for it. More seriously, they probably just didn't want to bring on the wrath of lawyers for trademark infringement.

  • by QuoteMstr ( 55051 ) <dan.colascione@gmail.com> on Monday June 01, 2009 @10:27PM (#28176945)

    Well, in one case it does: when you're trying to run a different operating system simultaneously on the same machine. But in most "enterprise" scenarios, you just want to set up several isolated environments on the same machine, all running the same operating system. In that case, virtualization is absofuckinglutely insane.

    Operating systems have been multi-user for a long, long time now. The original use case for Unix involved several users sharing a large box. Embedded in the unix design is 30 years of experience in allowing multiple users to share a machine --- so why throw that away and virtualize the whole operating system anyway?

    Hypervisors have become more and more complex, and a plethora of APIs for virtualization-aware guests has appeared. We're reinventing the kernel-userland split, and for no good reason.

    Technically, virtualization is insane for a number of reasons:

    • Each guest needs its own kernel, so you need to allocate memory and disk space for all these kernels that are in fact identical
    • TLB flushes kill performance. Recent x86 CPUs address the problem to some degree, but it's still a problem.
    • A guest's filesystem is on a virtual block device, so it's hard to get at it without running some kind of fileserver on the guest
    • Memory management is an absolute clusterfuck. From the point of view of the host, each guest's memory is an opaque blob, and from the point of view of the guest, it has the machine to itself. This mutual myopia renders the usual page-cache algorithms absolutely useless. Each guest blithely performs memory management and caching on its own, resulting in severely suboptimal decisions.

      In having to set aside memory for each guest, we're returning to the OS 9 memory management model. Not only are we reinventing the wheel, but we're reinventing a square one covered in jelly.

    FreeBSD's jails make a whole lot of sense. They allow several users to have their own userland while running under the same kernel, which vastly improves, well, pretty much everything. Linux's containers will eventually provide even better support.

  • by Anonymous Coward on Monday June 01, 2009 @10:38PM (#28177033)

    Virtualization is an excellent story to sell. It is a process that can be applied to a wide range of problems.

    Virtualization makes many things a lot easier. Testing, rollback, provisioning, portability & backup.

    The success of virtualization is due to the software industry's failure to maintain good separation between applications & operating systems. The one-application-per-server trend is the result, and it leads to a lot of idle capacity.

  • by ckaminski ( 82854 ) <slashdot-nospam.darthcoder@com> on Monday June 01, 2009 @10:42PM (#28177063) Homepage
    What are you talking about? ESX has supported REAL SANs since almost day one. I've been able to do GREAT things on a single VMware server: in one instance I managed 25 2 GB J2EE app VMs on a quad-core Xeon (2005 era). In another I managed 168 sparsely used testing VMs (2x quad-core). But I've ALWAYS had trouble with databases and Citrix, in particular, on VMware.

    Storage is only part of the issue. Having to run 10-160 schedulers *IS* the issue. VMware can't exploit efficiencies in this arena the way Xen, jails, OpenVZ, or Solaris Containers can.
  • by ckaminski ( 82854 ) <slashdot-nospam.darthcoder@com> on Monday June 01, 2009 @10:50PM (#28177097) Homepage
    I broke VMware ESX's upper CPU limit of 168 vCPUs with 104 running VMs, about 20 of which were under any significant load, on 24 GHz of CPU and 32 GB of memory. Pretty damn impressive, if you ask me.
  • by MichaelSmith ( 789609 ) on Monday June 01, 2009 @11:12PM (#28177237) Homepage Journal
    I really should have kept a copy of those "don't feed the trolls" ASCII art pictures people used to post on Usenet. It would have come in handy here.
  • by wrench turner ( 725017 ) on Monday June 01, 2009 @11:13PM (#28177243)

    Running multiple services on one OS means that when you must reboot a server because of an OS bug or misconfiguration, all of the services are brought down... same if it crashes or hangs. As compelling as that is, I've never used a hypervisor in 30 years on tens of thousands of servers.

    I do routinely use chroot jails on thousands of servers to isolate the application from the host OS. This way I do not need to re-qualify any tools when we apply an OS patch (a rough sketch of the idea follows below).

    Check it out: http://sourceforge.net/projects/vesta/ [sourceforge.net] :-)
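    For illustration, here is a minimal Python sketch of that kind of chroot isolation; it assumes root privileges, and the /srv/app-root tree and the /bin/app binary inside it are hypothetical names, not anything from the parent post.

        import os

        CHROOT_DIR = "/srv/app-root"   # hypothetical directory pre-populated with the app and its libraries

        os.chroot(CHROOT_DIR)          # confine this process's view of the filesystem to CHROOT_DIR
        os.chdir("/")                  # step inside the new root so relative paths cannot escape it
        # From here on, this process (and anything it execs) only sees files under
        # /srv/app-root, so patching the host OS does not change the tools the
        # application was qualified against.
        os.execv("/bin/app", ["app"])  # hypothetical application binary located inside the chroot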

  • by Mr. Flibble ( 12943 ) on Monday June 01, 2009 @11:20PM (#28177277) Homepage

    "So we go back to where we started from: chroot and jails. What really is the benefit of extended virtualization? I haven't 'embraced' it as I am supposed to do."

    "I can see where it makes sense if you want to merge several servers that do absolutely nothing all day into a single machine, but a decent migration plan will run all those services on a single 'non-virtual' server. Especially when those machines are getting loaded, the benefits of virtualization quickly break down and you'll have to pay for more capacity anyway."

    This is exactly what VMware lists as best practice for using virtualization. If a server is maxing out, it should not be virtualized, as it is not a good candidate. However, if you have a number of servers that are under-utilized, then the advantage of turning them into VMs becomes clear. VMware has a neat feature called Transparent Page Sharing, where identical pages of memory across identical images are condensed down into single shared pages in the ESX server. This means that your 10 (or more) Windows 2003 server images "share" the same section of RAM, which frees up the "duplicate" RAM across those images. I have seen 20% of RAM saved by this; IIRC it can go above 40%. (A toy sketch of the page-sharing idea follows at the end of this comment.)

    "As far as high availability goes: again, low-cost HA doesn't work that well. I guess it's beneficial to management types that count the costs of, but don't see the benefit in, leaving a few idle machines running."

    If you mean VMware HA, I find it works quite well; granted, the new version in vSphere (aka Virtual Center 4) is much better, as it supports full redundancy.

    "Then you have virtualized your whole rack of servers into a quarter-rack single blade solution and a SAN that costs about the same as just a rack of single servers, but you can't fill the rack because the density is too high. And like something that recently happened at my place: the redundant SAN system stops communicating with the blades because of a driver issue and the whole thing comes crashing down."

    You are assuming that people don't have this already. I have been to a number of data centers that have racks and racks of under-utilized machines that also have SAN storage. VMware consolidation is a way of consolidating the hardware you already have to run your ESX hosts. You use a program called VMware Converter to do P2V (physical-to-virtual) conversion of the real hardware machines to VMs, then you reclaim that hardware and install ESX on it, freeing up more resources. You don't always have to run out and buy new hardware!

    VMs are great when the hardware is under-utilized; I do not recommend VMs that max out, and neither does VMware.
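    To make the page-sharing idea concrete, here is a toy Python sketch of content-based deduplication; real Transparent Page Sharing works on 4 KiB machine pages inside the ESX hypervisor (and byte-compares candidates before sharing them), so this only illustrates the concept, not the implementation.

        import hashlib

        # Toy model: pages with identical contents are stored once and reference-counted.
        def share_pages(vm_pages):
            store, refs = {}, {}
            for vm, pages in vm_pages.items():
                for page in pages:
                    key = hashlib.sha256(page).hexdigest()
                    store.setdefault(key, page)          # keep one copy per unique content
                    refs[key] = refs.get(key, 0) + 1     # count how many guest pages map to it
            return store, refs

        # Two "VMs" that share mostly identical OS pages collapse to three stored pages.
        vms = {
            "vm1": [b"oskernel" * 512, b"libc" * 1024],
            "vm2": [b"oskernel" * 512, b"appdata" * 256],
        }
        store, refs = share_pages(vms)
        print(len(store), "unique pages backing", sum(refs.values()), "guest pages")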

  • by Thundersnatch ( 671481 ) on Monday June 01, 2009 @11:21PM (#28177283) Journal

    "It would be nice to see some sort of virtual SAN integrated into the VMs"

    Something like this [hp.com], you mean? It turns the local storage on any VMware host into part of a full-featured, clustered iSCSI SAN. Not cheap, though (about $2500 per TB).

  • by billybob_jcv ( 967047 ) on Monday June 01, 2009 @11:24PM (#28177305)
    Sorry, but I think you're missing several important points. In a company with several hundred physical servers and limited human resources, no one has the time to fool around with tuning a kernel and several apps to all run together in the same OS instance. We need to build standard images and deploy them very quickly, and then we need a way to easily manage all of the applications. We also need to be able to very quickly move applications to different HW when they grow beyond their current resources, when we refresh server HW, or when there is a HW failure. High availability is expensive, and it is just not feasible for many midrange applications that are running on physical boxes. Does all of this lead to less than optimal memory & I/O performance? Sure - but if my choice is hiring 2 more high-priced server engineers or buying a pile of blades and ESX licenses, I will bet buying more HW & SW will end up being the better overall solution.
  • by bertok ( 226922 ) on Monday June 01, 2009 @11:33PM (#28177375)

    I've seen similar hideous slowdowns on ESX before for database workloads, and it's not VMware's fault.

    This kind of slowdown is almost always because of badly written chatty applications that use the database one-row-at-a-time, instead of simply executing a query.

    I once benchmarked a Microsoft reporting tool on bare metal compared to ESX, and it ran 3x slower on ESX. The fault was that it was reading a 10M row database one row at a time, and performing a table join in the client VB code instead of the server. I tried running the exact same query as a pure T-SQL join, and it was something like 1000x faster - except now the ESX box was only 5% slower instead of 3x slower.

    The issue is that ESX has a small overhead for switching between VMs, and also a small overhead for establishing a TCP connection. The throughput is good, but it does add a few hundred microseconds of latency all up. You get similar latency if your physical servers are in a datacenter environment and are separated by a couple of switches or a firewall. If you can't handle sub-millisecond latencies, it's time to revisit your application architecture! (A rough sketch of the row-at-a-time pattern follows below.)
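    To make the row-at-a-time point above concrete, here is a rough sketch in Python using an in-memory SQLite database; the table names are made up, and the original case involved SQL Server and VB, but the access pattern is what matters.

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
            CREATE TABLE customers(id INTEGER PRIMARY KEY, name TEXT);
            CREATE TABLE orders(id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL);
        """)

        def report_chatty(conn):
            # Client-side join: one query per row, so any per-round-trip latency
            # (hypervisor scheduling, network hops, firewalls) is paid N times.
            rows = conn.execute("SELECT id, customer_id, total FROM orders").fetchall()
            result = []
            for _id, customer_id, total in rows:
                (name,) = conn.execute("SELECT name FROM customers WHERE id = ?", (customer_id,)).fetchone()
                result.append((name, total))
            return result

        def report_joined(conn):
            # Server-side join: one query, one round trip, so the latency is paid once.
            return conn.execute("""
                SELECT c.name, o.total
                FROM orders AS o JOIN customers AS c ON c.id = o.customer_id
            """).fetchall()

    With a few hundred microseconds of extra latency per round trip, report_chatty pays that cost once per row while report_joined pays it once per report, which is enough to turn a small virtualization overhead into the kind of 3x slowdown described above.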

  • by Anonymous Coward on Monday June 01, 2009 @11:41PM (#28177439)

    "Each guest needs its own kernel, so you need to allocate memory and disk space for all these kernels that are in fact identical"

    Wrong - transparent page sharing and linked cloning address both of these "problems," which BTW also exist in a physical world. Keeping the kernels separate is a good thing when dealing with the typical shit applications that get installed in the average datacenter. (Yes, I know TPS and linked clones are only available on one product.)

    "TLB flushes kill performance. Recent x86 CPUs address the problem to some degree, but it's still a problem."

    Wrong - Hardware virtualization (AMD-V and Intel VT) address this nicely. (And also paravirt to a lesser extent.)

    "A guest's filesystem is on a virtual block device, so it's hard to get at it without running some kind of fileserver on the guest"

    WTF are you even talking about there? Get at it from where?

    "From the point of view of the host, each guest's memory is an opaque blob, and from the point of view of the guest, it has the machine to itself."

    Wrong - tools installed in the guest give the host a window into the VM, which the hypervisor can use to make smart decisions about memory allocation.

    "FreeBSD's jails make a whole lot of sense."

    Maybe for FreeBSD apps, but what percentage of datacenter apps run on FreeBSD? Maybe 10 percent? (Probably far less.)

    "Operating systems have been multi-user for a long, long time now. The original use case for Unix involved several users sharing a large box. Embedded in the unix design is 30 years of experience in allowing multiple users to share a machine --- so why throw that away and virtualize the whole operating system anyway?"

    Virtualization is not about users sharing the box, it's about applications co-existing on the box, even if those applications require 50 different operating systems. Jails and virtualization solve very different problems. Besides, nobody says that you can't use jails where appropriate and virtualization where appropriate.

  • by mlts ( 1038732 ) * on Tuesday June 02, 2009 @12:33AM (#28177737)

    Virtualization isn't just about performance:

    If you have physical machines connected to the same SAN, both VMware's products and Microsoft's Hyper-V support failover clustering. This way, if one of the machines goes down, the VM and its services keep running with perhaps a small delay (in milliseconds) while the handoff to the other machine takes place.

    The advantage of failover clustering at this level, as opposed to the application level, is that not all applications have the ability to use clustering. A lot of companies may have database utilities that support clustering, but the app running on top of them may have issues, and writing code for a handover at the app level might be impossible, especially if the utility is a niche item.

    Another advantage of virtualization is the ability to do a hardware upgrade without affecting anything in the VM. For example, I have had a VM running on my Linux box that was a DHCP server and DNS cache. When it became time to move to a new machine because the 1999-vintage Linux box used too much power, I just copied the VM disk files to the new box, upgraded the client-side OS drivers, and called it done. All that stuff running in the VM would notice is that it runs faster; everything else would be exactly the same. Similar with filesystem manipulation: if I want to move the VM to a new disk, I just turn it off, move the files to the new volume, and turn it back on. The VM doesn't care one bit that its virtual disks are on a new SATA drive rather than an old IDE one.

    OS snapshotting comes to mind too. One of the things I use a VM for is snapshots and rollbacks. For example, I tend to browse the Web in a VM under Virtual PC. This way, if some malicious code makes it past Firefox/Adblock/NoScript, it will only affect the VM (barring a hole that allows code to affect the hypervisor executable), and a simple click on the close-window button dumps all changes. Another use is system upgrades. Say I do a major upgrade or patch of an application and the app won't start. I can roll back to a previous snapshot, turn the VM on, and resume production with that machine without running out of time in the service window.

    Filesystem manipulation. If you have the software, backing up VMs becomes easy. You have either a tape drive, a disk to hold stuff on, or both. The VM can be happily running, and at the same time, its filesystem can be snapshotted outside the OS and backed up.

    There is a penalty for using a VM, and that is performance. However, this is mitigated significantly by having drivers in the client OS. For example, Hyper-V has a very fast virtual network switch, and it supports a virtual 100baseT adapter. So, just by installing the client drivers, a VM can communicate with others on the same virtual switch a lot faster than without.

    Another penalty is unknown security issues. There is always the concern that a hypervisor can be compromised, and malicious code that is running in a VM can get the machine access of the hypervisor (whose drivers for a lot of tasks might be running with root or kernel level authority). This can be mitigated by making sure the guest operating systems are secure.

  • by moon3 ( 1530265 ) on Tuesday June 02, 2009 @01:59AM (#28178121)
    Virtualization came to life not to solve developers' woes, but to enable firms to squeeze even more servers into a single rack (this is where the money is made on VMs). You can run 8 or more virtual servers on one piece of metal. The performance is terrible, of course, but developers hardly even use those resources; most of these servers are idling. Hosting companies are very happy with the results.
  • by Bert64 ( 520050 ) <bert AT slashdot DOT firenzee DOT com> on Tuesday June 02, 2009 @04:42AM (#28179041) Homepage

    More so in the Windows world than the Unix one...

    I have always run a large number of services on a single Unix system, sometimes splitting them up through the use of chroot, always isolating them from each other by running them as different users... The only benefit I see from virtualization would be having each machine as a simple container that can be moved around different physical hardware, but then again, copying a full Linux install from one disk to another is not that hard, and unless you have a heavily customized kernel it should boot up just fine on a different machine.

    Having individual apps isolated so they could be moved to dedicated machines if the load increased would be useful, but chroot could buy that; on the other hand, nothing I'm doing right now stresses 5-year-old hardware, so I could move the whole install to a newer, more powerful machine.

  • by Znork ( 31774 ) on Tuesday June 02, 2009 @08:09AM (#28180285)

    I used to use ESX, but the built-in virtualization in RHEL does it better these days. ESX performance is nice enough, but paravirt Xen tech outperforms it by 3x on some things (scripts, exec, syscall-intensive stuff).

    It's also much, much cheaper.

    Then again, I don't run any virtualized Windows, so your mileage may vary.

  • by Anonymous Coward on Tuesday June 02, 2009 @09:21AM (#28180989)

    The article doesn't mention (that I could find) what the guest OS under VMware was. If it was Linux, there is a longstanding NFS bug in Linux that is resolved by setting NFS to run over TCP rather than UDP. I've had to argue many times with engineers who insist that UDP always provides better performance than TCP for NFS - sure, but not for Linux. Sniff the NFS traffic and watch the lockup as retransmits take over the stream.

    Not saying that VMware is not part of the performance issue - just that I find the SQL-over-NFS setup, if the OS was Linux, very suspicious.

  • by jra ( 5600 ) on Tuesday June 02, 2009 @05:11PM (#28187939)

    They sound a fair amount like what I understand OpenVZ to be about as well; does the comparison hold there, too?
