Operating Systems Software BSD

When VMware Performance Fails, Try BSD Jails

Siker writes in to tell us about the experience of email transfer service YippieMove, which ditched VMware and switched to FreeBSD jails. "We doubled the amount of memory per server, we quadrupled SQLite's internal buffers, we turned off SQLite auto-vacuuming, we turned off synchronization, we added more database indexes. We were confused. Certainly we had expected a performance difference between running our software in a VM compared to running on the metal, but that it could be as much as 10X was a wake-up call."
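
The tuning steps mentioned in the summary correspond to standard SQLite settings. Below is a minimal sketch of what that kind of tuning looks like from Python's built-in sqlite3 module; the file name, table, cache size, and index are illustrative assumptions, not YippieMove's actual configuration.

```python
import sqlite3

conn = sqlite3.connect("yippiemove.db")  # hypothetical database file

# Larger internal buffers: cache_size is in pages, or KiB when negative.
conn.execute("PRAGMA cache_size = -65536")   # roughly 64 MiB of page cache

# Turn off auto-vacuuming and synchronization, as the summary describes.
conn.execute("PRAGMA auto_vacuum = NONE")    # only applies before tables exist or after VACUUM
conn.execute("PRAGMA synchronous = OFF")     # faster, but risks corruption on power loss

# Add an index for a frequently queried column (table and column names are made up).
conn.execute("CREATE TABLE IF NOT EXISTS messages (id INTEGER PRIMARY KEY, user_id INTEGER, body TEXT)")
conn.execute("CREATE INDEX IF NOT EXISTS idx_messages_user ON messages(user_id)")
conn.commit()
```

As the summary notes, none of this tuning closed the roughly 10X gap they saw between the VM and bare metal.
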
  • by gmuslera ( 3436 ) on Monday June 01, 2009 @10:15PM (#28176841) Homepage Journal
    If you really need all the performance you can get for a service, don't virtualize it, or at least check that what you can get is enough. Virtualization has a lot of advantages, but it doesn't give you the full resources of the real machine it's running on (how much you lose depends on the kind of virtualization you use, but it will never be the full amount). Maybe the 10x number is VMware's fault, or just a reasonable consequence of how it does virtualization (taking disk I/O performance into account could explain a good percentage of that number).
  • by diamondsw ( 685967 ) on Monday June 01, 2009 @10:20PM (#28176889)

    Amazing! Not running several additional copies of an operating system with all of the needless overhead involved is faster! Who would have guessed?

    Sometimes a virtual machine is far more "solution" than you need. If you really want the same OS with lots of separated services and resource management... then run a single copy of the OS and implement some resource management. Jails are just one example - I find Solaris Containers to be much more elegant. Of course, then you have to be running Solaris...

  • by Anonymous Coward on Monday June 01, 2009 @10:29PM (#28176965)

    Virtualization, as far as it is applied to server software, is a kludge. It is a way of making software work on one machine which would otherwise have conflicting OS and security requirements. The virtualization layer provides the abstraction and isolation which should be provided by the OS but isn't. The reason is API complexity: Virtualization deals with a relatively small and low level API. The operating system has a much broader API with much more complex dependencies, so it is much harder to secure and test for incompatibilities.

    It should not surprise anyone that removing redundancies (to save money) likely also decreases fault tolerance. On the other hand, it is often beneficial to remove "accidental" redundancy and add redundancy back in with a plan.

  • by ckaminski ( 82854 ) <slashdot-nospam.darthcoder@com> on Monday June 01, 2009 @10:45PM (#28177079) Homepage
    Consolidate several lightly used, different services onto ONE server? Have you ever managed multiple applications in a heterogeneous environment? Consolidating applications causes operational complexity that is inappropriate in a lot of instances. While service isolation is easy on Unix platforms, it's not on Windows.
  • by jgtg32a ( 1173373 ) on Monday June 01, 2009 @10:57PM (#28177149)
    Your sig isn't logically sound: just because the Jews win because you lost doesn't mean you win when the Jews lose.
    Just something I thought I'd point out.

    On another note, what happened to the fine art of trolling? People these days just throw a bunch of racial slurs together and think they're all that. In my day it took a certain finesse to troll properly: you had to be well informed on the issue, speak truths about it, and then interpret those truths in a way that would set people off.

    oh, well
    get off of my lawn
  • by syousef ( 465911 ) on Monday June 01, 2009 @11:07PM (#28177203) Journal

    Virtualization DOES make sense, when you're trying to solve the right problem. Do not blame the tool for the incompetence of those using it. It's no good using a screwdriver to shovel dirt and then blaming the screwdriver.

    Virtualization is good for many things:
    - Low performance apps. Install once, run many copies
    - Excellent for multiple test environments where tests are not hardware dependent
    - Infrequently used environments, like dev environments, especially where the alternate solution is to provide physical access to multiple machines
    - Demos and teaching where multiple operating systems are required
    - Occasionally running small apps that don't run on your OS of choice

    Virtualization is NOT good for:
    - High performance applications
    - Performance test environments
    - Removing all dependence on physical hardware
    - Moving your entire business to

    Your specific concerns:
    # Each guest needs its own kernel, so you need to allocate memory and disk space for all these kernels that are in fact identical

    Actually, this depends on your virtualization solution.

    # TLB flushes kill performance. Recent x86 CPUs address the problem to some degree, but it's still a problem.

    So is hard disk access from multiple virtual operating systems contending for the same disk (unless you're going to have one disk per guest OS...even then are you going through one controller?) Resource contention is a trade-off. If all your systems are going to be running flat out simultaneously virtualization is a bad solution.

    # A guest's filesystem is on a virtual block device, so it's hard to get at it without running some kind of fileserver on the guest

    You can often mount the virtual disks in a HOST OS. No different to needing software to access multiple partitions. As long as the software is available, it's not as big an issue.

    # Memory management is an absolute clusterfuck. From the point of view of the host, each guest's memory is an opaque blob, and from the point of view of the guest, it has the machine to itself. This mutual myopia renders the usual page-cache algorithms absolutely useless. Each guest blithely performs memory management and caching on its own resulting in severely suboptimal decisions being made

    A lot of operating systems are becoming virtualization aware, and can be scheduled cooperatively to some degree. That doesn't mean your concern isn't valid, but there is hope that the problems will be reduced. However once again if all your virtual environments are running flat out, you're using virtualization for the wrong thing.

  • by syousef ( 465911 ) on Monday June 01, 2009 @11:10PM (#28177225) Journal

    Virtualization is an excellent story to sell. It is a process that can be applied to a wide range of problems.

    Screwdrivers are an excellent tool. However, if you're in a position to buy tools for your company, you should know enough to show me the door if I try to sell you a screwdriver to shovel dirt.

    Right tool. Right job.

    In any industry:

    Poor management + slick marketing = Disaster

  • by larry bagina ( 561269 ) on Monday June 01, 2009 @11:18PM (#28177271) Journal
    sysjail is vulnerable to race conditions
  • by Kjella ( 173770 ) on Monday June 01, 2009 @11:35PM (#28177393) Homepage

    It's CYA in practice. Here's the usual chain of events:

    1. Business makes requirements to vendor: We want X capacity/response time/whatever
    2. Vendor to business side: Well, what will you do with it?
    3. Business makes requirements to vendor: Maybe A, maybe B with maybe N or N^2 users
    4. Vendor to business side: That was a lot of maybes. But with $CONFIG you'll be sure

    Particularly if the required hardware upgrades aren't part of the negotiations with the vendor, it's almost a certainty.

  • by gdtau ( 1053676 ) on Monday June 01, 2009 @11:41PM (#28177431)

    "What really is the benefit of extended virtualization?

    1) The ability to deploy a system image without deploying physical hardware. All those platforms you are meant to have, but don't: a build machine, an acceptance test machine, a pre-production test machine. And if you've done all the development and testing on a VM, then changing the machine when it moves to production, from a VM to real hardware, doesn't seem worth the risk.

    2) IT as a territorial dispute. You are the IT Director for a large enterprise. You want everything in good facilities, especially after the last time a cleaner unplugged the server that generates customer quotes, bringing revenue to a screaming halt. The owner of the quotes server will barely go along with that. They certainly won't hand over sysadmin control. Their sysadmins like whitebox machines (the sysadmin's brother assembles them), but you'll never have parts on the shelf for those if one breaks. So get them to hand over a VM image, which you run on hardware of your choice, and which you can back up and restore for them.

    3) Single hardware image. No more getting a "revised" model server and finding that the driver your OS needs isn't available yet (or better still, won't ever be available for that OS, since the manufacturer really only supports new hardware in their forthcoming releases). And yeah, the server manufacturer has none of the previous model in stock.

    And of course there's minor stuff. Like being able to pull up a shiny clean enterprise image to replicate faults.

    You'll notice the lack of the word "silver bullet" above. Because virtualisation isn't. But it does have a useful role, so the naysayers aren't right either.

    I'm waiting for the realisation that merely combining images onto one physical machine does not do much to lower costs. For a directly-administered Windows OS, the sysadmin's time was costing you more than the hardware. Now that the hardware is gone, can you really justify maybe $50k pa / 5 = $10k pa per image for sysadmin overhead? This is particularly a problem for point (2) above, as they are exactly the people likely to resist the rigorous automation needed to get sysadmin overhead per image down to an acceptable point (the best-practice point is about $100 per image -- the marginal cost of centrally-administered Linux servers. You'll notice that's some hundreds of times less than worst-practice sysadmin overhead).

    I'll also be a bit controversial and note that many sysadmins aren't doing themselves any favours here. How often do you read on Slashdot of time-consuming activities undertaken just to get a 5% improvement? If that 5% less runtime costs you 5% more sysadmin time, then you've already increased costs by a factor of ten, because that sysadmin time costs roughly ten times what the hardware does.
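
    A back-of-the-envelope version of that arithmetic, with purely illustrative figures: the $50k/yr sysadmin and five images come from the post above, while the per-image hardware share is an assumption.

```python
# Illustrative numbers only: one $50k/yr sysadmin spread over 5 images, and an
# assumed $1k/yr hardware share per image (i.e. "sysadmin time costs more than
# the hardware", as the post says).
sysadmin_per_image = 50_000 / 5      # $10k/yr of admin overhead per image
hardware_per_image = 1_000           # assumed $/yr of hardware per image

saved = 0.05 * hardware_per_image    # a 5% runtime/hardware improvement
spent = 0.05 * sysadmin_per_image    # ...bought with 5% more sysadmin time

print(f"saved ${saved:.0f}/yr, spent ${spent:.0f}/yr ({spent / saved:.0f}x the saving)")
```

    With those figures you save $50/yr and spend $500/yr, which is the "factor of ten" the post refers to.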

  • by mysidia ( 191772 ) on Monday June 01, 2009 @11:47PM (#28177485)

    ESXi is free, and they could have used that. The overhead for most I/O is a fraction of VMware Server's.

    If they did this so long ago that ESXi wasn't available for free, then their basis for discussing problems with VMware is way outdated too, a lot changes in 14 months....

    VMware Server simply has many issues: layering the VM filesystem on top of a bulky host filesystem, relying on a general-purpose OS to schedule VM execution, memory fragmentation, slow memory ops, contention for memory and disk (vs. inappropriate host OS caching/swapping), etc.

    And it bears repeating: Virtualization is not a magic pill.

    You can't deploy the technology and have it just work. You have to understand the technology, make good design decisions starting at the lowest level (your hardware, your network, storage design, etc), configure, and deploy it properly.

    It's not incredibly hard to deploy virtualization properly, but it still takes expertise, and it's not going to work correctly if you don't do it right.

    Your FreeBSD jail mail server might not work that well either, if you chose a notoriously-inefficient MTA written in Java that only runs on top of XWindows.

  • by funwithBSD ( 245349 ) on Monday June 01, 2009 @11:52PM (#28177521)

    The company I work for has just about every midrange VM solution you can imagine: Citrix, ESX (separate Windows and Linux clusters), Solaris Containers, and AIX VIO/LPARs. That is more or less the order of stability, btw.

    Of all the solutions, AIX is the most consistent and stable. Cheap is what they are not, but in our case they are Blue Dollars. It does exactly what it is billed to do, day in, day out.

    Solaris 10 Zones are a royal bastard to patch, but otherwise perfectly stable. (Quite frankly, they are really just jails, just a little more configurable, I suppose.)

    ESX is stable enough, depending on hardware. Certainly easier than anything but perhaps the HMC.

    Citrix is the worst of the lot. But with so much invested, they don't want to do anything else.

  • by QuoteMstr ( 55051 ) <dan.colascione@gmail.com> on Monday June 01, 2009 @11:54PM (#28177523)

    Wrong - transparent page sharing and linked cloning address both of these "problems,"

    Inefficiently as fuck, by the way

    Keeping the kernels separate is a good thing when dealing with the typical shit applications

    Uh, why? Even shit applications don't replace or extend the kernel

    Maybe for FreeBSD apps

    FreeBSD runs Linux apps just fine, last time I checked

  • by rachit ( 163465 ) on Monday June 01, 2009 @11:59PM (#28177551)

    Can we ask you then, why are you running it on VMware Server? Use ESXi. It's free. VMware Server's I/O performance is nowhere near as good as ESXi's.

  • by Just Some Guy ( 3352 ) <kirk+slashdot@strauser.com> on Tuesday June 02, 2009 @12:21AM (#28177679) Homepage Journal
    Oh, I forgot to mention another much-loved jail use: giving applications their own customized execution environment. Suppose you have some legacy app that requires, say, some ancient version of Perl and a database connector from 1998. Jails are a great way to sandbox that crufty old environment without forcing those limitations onto the rest of your apps.
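    For anyone who hasn't done it, here is a rough sketch of starting that kind of sandboxed legacy environment with FreeBSD's jail(8), driven from Python purely for illustration. The jail name, paths, address, and daemon are hypothetical, and it assumes the jail root has already been populated with a userland plus the ancient Perl and database connector.

```python
import subprocess

# Hypothetical jail for the crufty legacy app, using jail(8)'s parameter syntax.
# Assumes /jails/legacyapp already contains a FreeBSD userland plus the old
# Perl and the 1998-era database connector the app depends on.
jail_cmd = [
    "jail", "-c",                            # create and start a new jail
    "name=legacyapp",
    "path=/jails/legacyapp",                 # the jail's root directory
    "host.hostname=legacyapp.example.com",
    "ip4.addr=192.0.2.10",                   # address the jail is allowed to use
    "command=/usr/local/bin/legacy-daemon",  # the legacy app, started inside the jail
]

subprocess.run(jail_cmd, check=True)

# The rest of the system never sees the jail's old Perl or libraries;
# they exist only under /jails/legacyapp.
```

    The point is the same as the parent's: the isolation is at the filesystem and process level, so there is no second kernel or virtual block device to pay for.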
  • by QuoteMstr ( 55051 ) <dan.colascione@gmail.com> on Tuesday June 02, 2009 @12:27AM (#28177709)

    VMware virtual machines share the memory that is identical

    Inefficient as fuck. Whereas if they'd just been processes running under the same OS, the kernel would already know they were sharing the same page.

  • by BitZtream ( 692029 ) on Tuesday June 02, 2009 @12:50AM (#28177809)

    Wow, how about you make it more obvious that you have no clue what you're talking about.

    ESX and ESXi are bare metal hypervisors that run on the hardware directly. They do not require any OS.

    Management of the system can be done with the VMWare infrastructure client GUI which runs on Windows.

    The management interface is a SOAP service and the API is public, you can admin via a perl script if you want, and indeed VMware has made command line tools (written in perl IIRC) that access the soap interface. These tools are all available as pre-packaged virtual machines if you want, based on Linux VMs and can be downloaded directly from the web server on the ESX or ESXi server.

    Now if you want to bitch about the fact that you can't use FreeBSD as a host for a virtual machine, then by all means do, but your complaints are just those of ignorance.

    I've used FreeBSD since 2.2, and I'm guessing from your post that you're one of those people who still tries to use FreeBSD as a desktop machine. While it obviously can be done, and with enough effort it can be rather usable, FreeBSD really isn't intended for the desktop PC role. You may want to consider using an OS more suited to the task: let FreeBSD remain the badass server that it is, and let OS X and Windows be the desktop OSes that they are.

  • by OrangeTide ( 124937 ) on Tuesday June 02, 2009 @01:37AM (#28178059) Homepage Journal

    I disagree. I consider Xen to be a kernel which other kernels are modified to run inside of: a guest kernel makes requests (read: system calls) to a hypervisor (a special sort of kernel), which then translates them into requests to the host kernel. But mostly I feel this way because the way I/O is handled in Xen is very much unlike the way VMware does it (go find my resume, I used to be an ESX developer at VMware).

    Because Xen was originally designed to function without special hardware extensions to support virtualization, it is a virtual machine in the same sense that Unix is a virtual machine (processes were literally virtual machines from day 1 in Unix). Xen just jams one more layer above processes.

    BSD Jails are just a more Unix way of virtualizing a set of processes than Xen is. Xen requires an entire kernel to encapsulate the virtualization, BSD jails do not. In my opinion that is where they differ the most, but that difference is almost unimportant.

  • Re:-1, Flamebait (Score:1, Insightful)

    by Anonymous Coward on Tuesday June 02, 2009 @02:03AM (#28178141)

    You really seem to like being right (or at least telling other people they're wrong).

    I'd like to point out that ridiculing people and being obnoxious actually lowers the quality of discussion instead of improving it. If you still want to do that, please at least make sure you don't start the ranting in totally inappropriate places like you did now.

  • by aarggh ( 806617 ) on Tuesday June 02, 2009 @02:10AM (#28178169)

    I would actually say that the day ESXi became free, it made VMware Server completely obsolete for ANYTHING other than initial testing or building.

    As you stated, what this article describes is, on every level, a ridiculously poorly designed implementation. I don't get into flame wars about which OS is better, etc.; as far as I'm concerned, whatever is best at doing what I need it to do is the solution I aim for, and with ESX I must admit I have been extremely happy with the time and resource savings, as well as the GREATLY reduced management overhead. Throw in the HA, DRS, vMotion, and disaster recovery, and I now sleep a lot better at night and get far fewer calls!

  • by jdfox ( 74524 ) on Tuesday June 02, 2009 @07:39AM (#28180055)
    I'm curious: if you're not interested in something as "low end" as systems administration, then why would you be interested in a Slashdot discussion on VMware and BSD jails? :-)

    And nobody's asking you to memorize what LTSP stands for. Just double-click the text in Firefox, right-click and choose search. So much quicker and more effective than asking everyone to spell out abbreviations. It's a win-win!
  • by Salamander ( 33735 ) <jeff@ p l . a t y p.us> on Tuesday June 02, 2009 @08:01AM (#28180215) Homepage Journal

    We're reinventing the kernel-userland split, and for no good reason.

    Thank you for saying that. The purpose of a multi-tasking multi-user OS is to allow running multiple applications with full isolation from one another. If we need some other piece of software - like a VM hypervisor - to do that, then the OS has failed in its duty. But wait, some people say, it's not just about multiplexing hardware, it's about migration and HA and deploying system images easily. These are also facilities the OS should be providing. Again, if we need some other piece of software then the OS has failed.

    One could argue that we've evolved to a point where the functions of an OS have been separated into two layers. One layer takes care of multiplexing the hardware; the other takes care of providing an API and an environment for things like filesystems. Better still, you get to mix and match instances of each layer. OK, fine. Given the Linux community's traditional attitude toward layering (and microkernels, which this approach also resembles) it's a bit inconsistent, but fine. That interpretation does raise some interesting questions, though. People are putting a lot of thought into where drivers should live, and since some drivers are part of "multiplexing the hardware" then it would make sense for them to live in the hypervisor with a stub in the guest OS - just as is being done, I know. But what about the virtual-memory system? That's also a form of hardware multiplexing, arguably the most important. If virtualization is your primary means of isolating users and applications from one another, why not put practically all of the virtual-memory functionality into the hypervisor and run a faster, simpler single-address-space OS on top of that?

    If we're going to delegate operating-system functionality to hypervisors, let's at least think about the implications and develop a coherent model of how the two interact instead of the disorganized and piecemeal approaches we see now.

  • by Bigmilt8 ( 843256 ) on Tuesday June 02, 2009 @08:38AM (#28180563)
    You wasted your time. I'm a DBA with a programming background. Virtualization is not suitable for mid-size to large database environments. Database software is designed to handle all I/O and memory issues internally; the virtualization software just gets in the way.
  • I hate to sound condescending.. but system administration is considered the lower end of the technology community.

    You don't sound condescending; you sound ignorant. Routine system maintenance is low end, getting to play with new (to commodity hardware) virtualization techniques and ZFS and SANs and HA systems isn't quite the same as staring blankly at a re-purposed desktop.

    Put another way: it's cool that you like writing drivers, but if they suck, I'm the one who gets to blackball your company on purchase orders.

  • by Mysticalfruit ( 533341 ) on Tuesday June 02, 2009 @11:08AM (#28182645) Homepage Journal
    I hear ya...

    We've deployed a number of ESXi boxes with mixed results. In a nutshell, if you have an underutilized server that's lightly loaded, it's a great candidate to be a VM.

    Anything that needs performance should be considered off the list for a VM, unless you can convince the consumers that the speed penalty is worth the ease of management.

    * Web server that serves your lunch menu and maybe your HR vacation scheduling system: VM
    * Build machine that pounds through 7 GB of source and takes 10hrs of solid compiling to produce output: not a good choice for a VM.
    * Domain Controller: VM

    I like VMs because they're portable and machine agnostic. The whole virtual infrastructure stuff is polished and works.

    However, people tend to think that virtualization is the right path for everything, which it is not.

    It has its place in the IT toolbox with everything else. Who knows, maybe Intel and VMware will hatch an offspring that looks like ESXi but with hardware provisioning... Though I'm sure IBM's lawyers would shit kittens!
