
When VMware Performance Fails, Try BSD Jails

Siker writes in to tell us about the experience of email transfer service YippieMove, which ditched VMware and switched to FreeBSD jails. "We doubled the amount of memory per server, we quadrupled SQLite's internal buffers, we turned off SQLite auto-vacuuming, we turned off synchronization, we added more database indexes. We were confused. Certainly we had expected a performance difference between running our software in a VM compared to running on the metal, but that it could be as much as 10X was a wake-up call."
  • Back to the Future? (Score:5, Informative)

    by guruevi ( 827432 ) on Monday June 01, 2009 @09:55PM (#28176689)

    So we go back to where we started from: chroot and jails. What really is the benefit of extended virtualization? I haven't "embraced" it as I am supposed to do.

    I can see where it makes sense if you want to merge several servers that do absolutely nothing all day into a single machine but a decent migration plan will run all those services on a single 'non-virtual' server. Especially when those machines are getting loaded, the benefits of virtualization quickly break down and you'll have to pay for more capacity anyway.

    As far as high availability goes: again, low cost HA doesn't work that well. I guess it's beneficial to management types that count the costs but don't see the benefit of leaving a few idle machines running.

    Then you have virtualized your whole rack of servers into a quarter-rack single blade solution and a SAN that costs about the same as just a rack of single servers, but you can't fill the rack because the density is too high. And like something that recently happened at my place: the redundant SAN system stops communicating with the blades because of a driver issue and the whole thing comes crashing down.

  • Re:-1, Flamebait (Score:5, Informative)

    by eosp ( 885380 ) on Monday June 01, 2009 @10:04PM (#28176747) Homepage
    Well, the BSDs all have chroot as well. However, jails have their own sets of users (you can have root in one jail but not in the system at large), and the kernel keeps more separation between the data structures of jails (and the host system) than chroot does. In addition, inside a jail ps(1) can only show in-jail processes, network configuration changes are impossible, and kernel modifications (modules and securelevel changes) are banned.
  • by mvip ( 1060000 ) on Monday June 01, 2009 @10:05PM (#28176753)
    We're working on it. The irony is that this is the only server that is still running as a VM (because it is a hosted VPS).
  • Sounds about right (Score:5, Informative)

    by Just Some Guy ( 3352 ) <kirk+slashdot@strauser.com> on Monday June 01, 2009 @10:05PM (#28176755) Homepage Journal

    We use jails a lot at my work. We have a few pretty beefy "jail servers", and use FreeBSD's ezjail [erdgeist.org] port to manage as many instances as we need. Need a new spamfilter, say? sudo ezjail-admin create spam1.example.com 192.168.0.5 and wait for 3 seconds while it creates a brand new empty system. It uses FreeBSD's "nullfs" filesystem to mount a partially populated base system read-only, so your actual jail directory only contains the files that you'd install on top of a new system. This saves drive space, makes it trivially easy to upgrade the OS image on all jails at once (sudo ezjail-admin update -i), and saves RAM because each jail shares the same copy of all the base system's shared libraries.

    For extra fun, park each jail on its own ZFS filesystem and take a snapshot of the whole system before doing major upgrades. Want to migrate a jail onto a different server? Use zfs send and zfs receive to move the jail directory onto the other machine and start it.
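
    A rough sketch of that migration (dataset, jail, and host names here are just examples, not from the post):

        # snapshot the jail's ZFS dataset and stream it to the new jail server
        zfs snapshot tank/jails/spam1.example.com@migrate
        zfs send tank/jails/spam1.example.com@migrate | ssh newhost zfs receive tank/jails/spam1.example.com
        # then start the jail on the new host (assuming ezjail is set up there too)
        ssh newhost sudo ezjail-admin start spam1.example.com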

    The regular FreeBSD 7.2 jails already support multiple IP addresses and any combination of IPv4 and IPv6, and each jail can have its own routing table. FreeBSD 8-CURRENT jails also get their own firewall if I understand correctly. You could conceivably have each jail server host its own firewall server that protects and NATs all of the other images on that host. Imagine one machine running 20 services, all totally isolated and each running on an IP not routable outside of the machine itself - with no performance penalty.

    Jails might not be the solution to every problem (you can't virtualize Windows this way, although quite a few Linux distros should run perfectly), but it's astoundingly good at the problems it does address. Now that I'm thoroughly spoiled, I'd never want to virtualize Unix any other way.

  • Solaris Zones also (Score:4, Informative)

    by ltmon ( 729486 ) on Monday June 01, 2009 @10:16PM (#28176853)

    Zones are the same concept, with the same benefit.

    An added advantage Solaris zones have is flavoured zones: Make a Solaris 9 zone on a Solaris 10 host, a Linux zone on a Solaris 10 host and soon a Solaris 10 zone on an OpenSolaris host.

    This has turned out to be much more stable, easier, and simply more efficient than our VMware servers, which we now keep only for Windows and other random OSes.

  • by gfody ( 514448 ) on Monday June 01, 2009 @10:22PM (#28176905)

    Most of the performance issues, and I think also the issue faced in TFA, have to do with I/O performance when using virtual hard drives, especially of the sparse-file, auto-growing variety. If they configured their VMs to have direct access to a dedicated volume, they would probably get their 10x performance back in DB applications.

    It would be nice to see some sort of virtual SAN integrated into the VMs.
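
    For what it's worth, on ESX one way to give a VM a dedicated volume is a raw device mapping created with vmkfstools; the device and datastore names below are made up, and the exact syntax may vary by version:

        # create a virtual-compatibility RDM that maps a whole LUN into the VM
        vmkfstools -r /vmfs/devices/disks/naa.60a9800043346530 /vmfs/volumes/datastore1/dbvm/dbdata-rdm.vmdk
        # attach dbdata-rdm.vmdk to the VM as an extra disk; the guest then talks (almost) directly to the LUN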

  • by mrbill ( 4993 ) <mrbill@mrbill.net> on Monday June 01, 2009 @10:26PM (#28176937) Homepage
    The I/O performance on the free "VMWare Server" product *sucks* - because it's running on top of a host OS, and not on the bare metal.
    I'm not surprised that FreeBSD Jails had better performance. VMWare Server is great for test environments and such, but I wouldn't ever use it in production.
    It's not at all near the same class of product as the VMWare Infrastructure stuff (ESX, ESXi, etc.)

    VMWare offers VMWare ESXi as a free download, and I/O performance under it would have been orders of magnitude better.
    However, it does have the drawback of requiring a Windows machine (or a Windows VM) to run the VMWare Infrastructure management client.
  • by BagOBones ( 574735 ) on Monday June 01, 2009 @10:29PM (#28176971)

    If you are separating similar workloads like web apps and databases, you are probably better off running them within the same OS and database server and separating them via security, as the poster realized.

    However if you have a variety of services that do not do the same thing you can really benefit from separating them in virtual machines and have them share common hardware.

    Virtualization also gives you some amazing fault tolerance options that are consistent across different OSes and services, and that are much easier to manage than individual OS and service clustering options.

  • > they probably just didn't want to bring on the wrath of lawyers for trademark infringement.

    FreeBSD jails predate Solaris zones by five years.

  • by 00dave99 ( 631424 ) on Monday June 01, 2009 @10:33PM (#28176997)
    XenServer has some good features, but you really can't compare VMware Server with XenServer. I have many customers that were impressed to be able to run 4 or 5 VMs on VMware Server. Once we got them moved to ESX on the same hardware, they couldn't believe that they were running 20 to 25 VMs on the same hardware. That being said, back-end disk configuration is the most important design consideration for any virtualization product.
  • by zonky ( 1153039 ) on Monday June 01, 2009 @10:37PM (#28177025)
    ESXi does also have many limitations around supported hardware. That said, there are some good resources around running ESXi on 'white box' hardware.

    http://www.vm-help.com//esx40i/esx40_whitebox_HCL.php [vm-help.com]

  • by Eil ( 82413 ) on Monday June 01, 2009 @10:39PM (#28177047) Homepage Journal

    But a Xen hypervisor VM is in some ways more similar to a BSD jail than it is to VMware's monitor.

    Actually, Xen is not at all similar to a BSD jail, no matter how you look at it. Xen does full OS virtualization from the kernel and drivers on down to userland. A FreeBSD jail is basically chroot on steroids. The "virtualized" processes run exactly the same as "native" ones, they just have some restrictions on their system calls, that's all.

    I guess the thing that bugged me the most about TFA was the fact that they were using VMware Server and actually expecting to get decent performance out of it. Somebody should have gotten fired for that. VMware Server is great for a number of things, but performance certainly isn't one of them. If they wanted to go with VMware, they should have shelled out for ESX in the beginning instead of continually trying to go the cheap route.

  • Re:UML FTW! (Score:4, Informative)

    by solafide ( 845228 ) on Monday June 01, 2009 @10:43PM (#28177067) Homepage
    UML is possibly the worst-maintained part of the Linux kernel. Don't try building it in any recent kernel. It won't compile.
  • by Gothmolly ( 148874 ) on Monday June 01, 2009 @10:44PM (#28177075)

    I work for $LARGE_US_BANK in the performance and capacity management group, and we constantly see the business side of the house buy servers that end up running at 10-15% utilization. Why? Lots of reasons - the vendor said so, they want "redundancy", they want "failover" and they want "to make sure there's enough". Given the load, if you lose 10-20% overhead due to VM, who cares?

  • by BagOBones ( 574735 ) on Monday June 01, 2009 @10:46PM (#28177081)

    After looking more closely at the article, it sounds like they were trying to use VMware Server instead of ESX, which explains a lot. If that was the case, they were then carrying the overhead of the host OS, the VM layer, and the multiple guest OSes. Not something you do with high performance apps.

  • by Anonymous Coward on Monday June 01, 2009 @10:51PM (#28177105)

    But here's what I do know. FreeBSD hasn't been a supported OS on ESX Server until vSphere came out less than two weeks ago.

    Really? VMware tools for freebsd have been available for years. You can even run them on openbsd (with freebsd compatibility mode enabled).

    There's even this slashdot [slashdot.org] story from 2004 about freebsd 4.9 being supported as an esx guest.

  • Re:Well, duh! (Score:3, Informative)

    by Gothmolly ( 148874 ) on Monday June 01, 2009 @11:01PM (#28177171)

    A real hypervisor, like the one used by IBM on their p-series frames, doesn't impose this penalty. You're thinking of an emulator.

  • by Just Some Guy ( 3352 ) <kirk+slashdot@strauser.com> on Monday June 01, 2009 @11:07PM (#28177207) Homepage Journal

    I'm not too up on sysjail, but it looks like it's implemented on top of systrace while jails are explicitly coded into the kernel. That probably made sysjail easier to write, but the FreeBSD work has paid off now that they're starting to virtualize the whole network stack so that each jail can have its own firewall and routing.

    More to the point: the sysjail project is no longer maintained [sysjail.bsd.lv].

  • by mysidia ( 191772 ) on Monday June 01, 2009 @11:14PM (#28177251)

    Totally unnecessary. If you want a 'virtual SAN', you can of course create one using various techniques. The author's biggest problem is he's running VMware Server 1, probably on top of Windows, and then tried VMware Server 1 on top of Ubuntu.

    Running one OS on top of another full-blown OS, with several layers of filesystem virtualization, no wonder it's slow; a hypervisor like ESX would be more appropriate.

    VMware Server is great for small-scale implementation and testing. VMware server is NOT suitable for mid to large-scale production grade consolidation loads.

    ESX or ESXi is VMware's solution for such loads. And by the way, a free standalone license for ESXi is available, just like a free license is available for running standalone VMware server.

    And the I/O performance is near-native. With ESX 4, on platforms that support I/O virtualization (VT-d/IOMMU), the virtualization is in fact hardware-assisted.

    The VMware environment should be designed and configured by someone who is familiar with the technology. A simple configuration error can totally screw your performance. In VMware Server, you really need to disable memory overcommit and shut off page trimming, or you'll be sorry -- and there are definitely other aspects of VMware server that make it not suitable at all (at least by default) for anything large scale.
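
    A hedged sketch of the kind of .vmx / host config tweaks people mean by that (these settings are commonly cited for the hosted VMware products; double-check them against your version's documentation):

        # per-VM .vmx settings
        MemTrimRate = "0"                  # stop the host from trimming guest memory pages
        sched.mem.pshare.enable = "FALSE"  # turn off page-sharing scans
        mainMem.useNamedFile = "FALSE"     # don't back guest RAM with a named .vmem file
        # host-wide (e.g. /etc/vmware/config on a Linux host): keep all VM memory in RAM
        prefvmx.minVmMemPct = "100"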

    It's more than "how much memory and CPU" you have. Other considerations also matter, many of them are the same considerations for all server workloads... e.g. how many drive spindles do you have at what access latency, what's your total IOPs?

    In my humble opinion, someone who would want to apply a production load on VMware Server (instead of ESX) is not suitably briefed on the technology, doesn't understand how piss-poor VMware Server's I/O performance is compared to ESXi, or just didn't bother to read all the documentation and other materials freely available.

    Virtualization isn't a magic pill that lets you avoid properly understanding the technology you're deploying, make bad decisions, and still always get good results.

    You can get FreeBSD jails up and running, but you basically need to be skilled at FreeBSD and understand how to properly deploy that OS in order to do it.

    Otherwise, your jails might not work correctly, and someone else could conclude that FreeBSD jails suck and stick with OpenVZ VPSes or Solaris logical domains.

  • by Anonymous Coward on Monday June 01, 2009 @11:24PM (#28177299)

    Zones are just the operating system partitioned, so it doesn't make sense to run linux in a zone. You can however, run a linux branded zone, which emulates a linux environment, but it's not the same as running linux in a zone. It's running linux apps in solaris.

    LDOMs are hardware virtualization, so you can run Linux in them. Only some servers are supported, though.

    Just thought I'd better clarify.

  • by Feyr ( 449684 ) on Monday June 01, 2009 @11:25PM (#28177317) Journal

    Seconded. Last time I tried, VMware Server couldn't handle a single instance of a lightly loaded DB server. Moving to ESX, we're running 6 VMs on that same hardware and the initial server has near-native performance.

    In short: use the right tool for the right job, or you have no right to complain.

  • by aarggh ( 806617 ) on Monday June 01, 2009 @11:43PM (#28177459)

    In my opinion it always comes down to the fact that shelling out some money for a good product always beats trying to stuff around with a "free" one that's hard to configure and maintain. I run 4 ESX farms, and have NO problem rolling out virtually any type of server from Oracle/RHEL, to Win2k3/2k8, and everything inbetween. I simply make sure I allocate enough resources, and NEVER over commit. I did a cost analysis ages back trying to convince management we needed to go down the virtualisation path to guarantee business continuity.

    In the end it took our most critical CRM server crashing, and me importing an Acronis backup of it into ESX, to convince them beyond a shadow of a doubt.

    I would say to anyone, something for $15-20K that gives:

    Fault-tolerance
    Fail-over
    Easy server roll-outs
    Simple network re-configuration
    Almost instant recoverability of machines

    Is more than worth the cost! The true cost of NOT doing it can be the end of a business, or as I have seen, several days of data/productivity lost!

    Performance issues? Reliability issues? I have none at all; the only times I've had issues are poorly developed .NET apps, IIS, etc., for which I then dump the stats and give them to the developers to get them to clean up their own code. And more than once I've had to restore an entire server because someone's scripts deleted or screwed entire data structures, and in a case like that, being able to restore a 120GB virtual in around 30 minutes from the comfort of my desk or home really beats locating tapes, cataloging them, restoring, etc., etc.

    I have Fibre SANs (with a mix of F/C, SAS, and SATA disks) and switches, so the SAN just shrugs off any attempt to I/O bind it! The only limitation I can think of is the 4 virtual NICs; it would be good for some of our products to be able to provide a much higher number.

    No comparison in my opinion.

  • by DaemonDazz ( 785920 ) on Monday June 01, 2009 @11:51PM (#28177513)

    Actually, Xen is not at all similar to a BSD jail, no matter how you look at it. Xen does full OS virtualization from the kernel and drivers on down to userland. A FreeBSD jail is basically chroot on steroids. The "virtualized" processes run exactly the same as "native" ones, they just have some restrictions on their system calls, that's all.

    Precisely.

    Similar products in the Linux space are Linux Vserver [linux-vserver.org] (which I use) and OpenVZ [openvz.org].

  • by coffee_bouzu ( 1037062 ) on Tuesday June 02, 2009 @12:00AM (#28177559)
    Comparing XenServer and VMware Server is like comparing apples and oranges. While VMware Server is impressive, it is very much like an emulator: it runs on top of another operating system and has to work harder to execute privileged commands. VMware ESX is a bare-metal hypervisor that is better optimized to do virtualization. While it is still doing "emulation", it is a much better comparison to XenServer than VMware Server is.

    TFA is slashdotted at the moment, so I don't know if VMware Server or ESX is being compared. Either way, the advantage of virtualization is not performance, it is flexibility. The raw performance may be less, but it gives you the ability to do things that just aren't possible with a physical machine. The ability to hot migrate from one physical machine to another in the event of hardware failure or replacement and the ability to have entire "machines" dedicated to single purposes without needing an equal number of physical machines are, at best, more difficult if not impossible when not using virtualization.

    Don't get me wrong, I'm no VMware fanboy. It certainly has its rough edges and is certainly not perfect. However, virtualization as a technology has undeniable benefits in certain situations. Absolute performance just isn't one of them right now.
  • by JakFrost ( 139885 ) on Tuesday June 02, 2009 @12:55AM (#28177831)

    I've worked for many of the Fortune 10 (DB, GS, CS, JP, MS, etc.) banks on the Windows server side and they are all going full steam ahead for virtualization with VMWare or Xen exactly because they have been buying way too much hardware for their backend applications for the last decade. The utilization on all of these servers hardly hits 5-10% and the vast majority of time these systems sit idle. The standard has always been rackmount servers with multiple processor/core systems with gigs of memory all sitting around being unused, mostly Compaq/HP systems with IBM xSeries servers and some Dells thrown in for good measure.

    The reason for this over-capitalization has been the requirement that the business line departments choose from only four or five server models for their backend applications. These standard configs usually come in 1U, 2U, 3U, and 4U rackmount sizes, with nearly maxed-out specs for each size, and the size of the server determines the performance you get. You have a light web server, you get a blade or a pizza box; you have a light backend application, you get a 2U server with two processors or four cores even though you might have a single-threaded app that was ported from MS-DOS a few years ago; you want something beefier, you get the 4U server with 4 processors, 8 cores and 16 GB of RAM even though your application only runs two threads and allocates 512MB of RAM maximum. I've monitored thousands of these servers through IBM Director, Insight Manager, and NetIQ for performance, and 99% of the time these servers are at 2% processor and memory utilization; only once in a while, for a short amount of time, do one or two of the cores get hit with a low-mid work load for processing and then go back to doing nothing. These were the Production servers.

    Now consider the Development servers, where a bank has 500 servers dedicated for developer usage with the same specs as the production boxes and at any one time maybe a few of those servers get used for testing while the other few hundred sit around doing nothing while the developers get a new release ready for weeks at a time. The first systems to get virtualized were the development servers because they were so underutilized that it was unthinkable.

    (Off topic: Funny and sad story from my days in 2007 at a top bank (CS) helping with VMware virtualization onto HP Blades and 3Par SAN storage for ~500 development servers. The 3Par hardware and firmware was in such a shitty state that it crashed the entire SAN frame multiple times, crashing hundreds of development servers at the same time during heavy I/O load. 3Par would play the blame game against other vendors, accusing Brocade of faulty SAN fibre switches, Emulex of faulty hardware and drivers, HP Blade and IBM Blade of faulty servers, and the Windows admins of incompetence. Only to find that it was their own SAN interface firmware causing the crashes.)

    VMware solves the problem of running commercial backend applications on Windows servers: each application is so specific about OS version, service pack, hotfixes, patches, and configuration that the standard is always one server to one application, and nobody ever wanted to mix them because any issue would always be blamed on the other vendor's application on the server. There were always talks from management about providing businesses with scalable capacity instead of single servers with a single OS. That was five years ago; people wanted to use Windows capacity management features, but they were a joke since they were based on per-process usage quotas, and of course nobody wanted to mix two different apps on the same box, so those talks went nowhere.

    That is, until VMware showed up and showed a real way to isolate each OS instance from another, while it also allowed us to configure capacity requirements on each instance and package all those shitty single-threaded backend applications, each running on a separate server, onto one box.

  • by rachit ( 163465 ) on Tuesday June 02, 2009 @01:06AM (#28177895)

    The only limitation I can think of is the 4 virtual NIC's, it would be good for some of our products to be able to provide a much higher number.

    ESX 4 (very recently released) supports 10 NICs.

  • by syousef ( 465911 ) on Tuesday June 02, 2009 @02:25AM (#28178215) Journal

    No it doesn't. The parent is clearly talking/complaining about VMware, Xen, KVM type virtualization, and guest OS instances for all of those require their own kernel. He isn't talking about jails/container solutions (FreeBSD Jails, OpenVZ, Solaris Containers, etc.), or none of his points would make any sense.

    So the parent is using the wrong kind of virtualisation, then blaming the tool. My screw driver won't shovel dirt very well. Bad screw driver. He found a solution that better fitted what he was trying to do and is implying that VMWare is therefore bad. When you misuse a tool then conclude the tool is bad, it's quite valid to point out there are tools out there better suited to the job.

    Not without shutting the guest down first. If you mount a filesystem on a disk/partition twice and that filesystem is not a specially designed cluster filesystem, and the two OS instances are not part of the same cluster, then you WILL get data corruption. The parent's point is valid!

    One of us is misreading. I thought the parent was complaining that he couldn't access files on the virtualized OS without starting up his guest. I was pointing out there are solutions to mount virtual partitions. (I've personally only done it on MS Virtual server as a workaround when Vista restore didn't work).

    If he wants to access the file system read-write both through guest and host at the same time what he's doing is silly. Any 2 systems accessing the same partition read-write, virtualized or not, will cause problems unless the file system is written specifically to accommodate that. (I'm not aware of any that do off the top of my head).

    You should have stopped at your list of what virtualization is good and not good for. You let yourself down after that.

    I respectfully disagree.

  • by raju1kabir ( 251972 ) on Tuesday June 02, 2009 @02:27AM (#28178223) Homepage

    Management of the system can be done with the VMWare infrastructure client GUI which runs on Windows.

    This is one of the many reasons I gave up on VMWare - the management tools are primitive in that they only really work via the GUI and can't easily be scripted or accessed in an efficient manner. Trying to do significant management via the CLI was nigh unto impossible.

    For people who have a Windows machine and excellent connectivity to the server room at all times, maybe that's okay.

    For someone who is frequently out and about, it's not - using a prepaid SIM card on a Moroccan train I can easily reconfigure my Xen servers in the USA. With VMWare I haven't a prayer.
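
    For example, with Xen's xm toolstack a low-bandwidth SSH session is all you need (domain and host names below are made up):

        xm list                            # show running domains and their memory/CPU use
        xm mem-set webvm 1024              # resize a guest's memory allocation on the fly
        xm migrate --live webvm otherhost  # live-migrate a guest to another Xen box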

  • by mvip ( 1060000 ) on Tuesday June 02, 2009 @03:34AM (#28178707)
    Simple. It's a hosted VPS. We don't own or operate the core server.
  • by HateBreeder ( 656491 ) on Tuesday June 02, 2009 @04:11AM (#28178909)

    Great... but what's LTSP?

    Why do sysadmins assume that everyone else is also a sysadmin who bothers to memorize all these stupid acronyms?

    Sure, I googled it, and I hope you meant "Linux Terminal Server Project". But why not just say so immediately?! Most people won't bother listening to what you have to say if they need to use a search engine to figure out key pieces of information just to understand the context of your words!

  • by RivieraKid ( 994682 ) on Tuesday June 02, 2009 @05:58AM (#28179463)
    Actually, they are bare-metal hypervisors. In the latest versions and probably the last couple of versions too, the Linux management console you speak of is in fact a VM running on the ESX hypervisor.
  • by Junta ( 36770 ) on Tuesday June 02, 2009 @08:04AM (#28180235)

    Windows *is* required for many ESX/ESXi environments. Specifically, if you want many of the features, you must run VirtualCenter, which requires a Windows server. Live migration is a feature they currently tie to that product and *don't* expose via a public API straight to an ESX(i) hypervisor.

    In terms of the 'perils' of a full blown OS 'over' another OS, that may not be as big of a deal. Xen and VMware ESX have similar strategies of a management OS that runs as a privileged guest, true. They feel they best serve by being in full control. However, if configured right, Linux-hosted virtualization guests, for example, can achieve very good I/O with 'normal' looking guest devices. They can achieve great I/O performance with paravirtualized adapters. A 'full-blown' OS can be optimized to fulfill the hypervisor role just fine; there is no theoretical reason why not.

    In terms of the dismissal of FreeBSD as a desktop platform, I think that's unwise. I personally don't do FreeBSD, but I do use Linux and I get great personal benefit from it.

  • by mysidia ( 191772 ) on Tuesday June 02, 2009 @08:29AM (#28180455)

    Bad reason. Switch to a hosting provider with a clue, or help them get a clue. (Also, In modern versions of VMware Server, at least, the license specifically in section (9).1(b) forbids selling the use of a Virtual machine, and copies of VMware Server aren't sold for that purpose.)

    From the VMware server EULA:

    ...provided such services may not consist of services to a third party that provide primarily computing or processing power (such as utility computing or grid computing) or any computer application-based service that is traded, rented, leased or sold on a Virtual Machine basis...

    Anyways, this is just like other misconfigs a hosting provider can make. What would you do if you learned they were using RAID0 for your data storage, leaving their facility unlocked with no password security on the VM console, or if you learned that their data center workers were regularly using the server's CD-ROM tray as a cup holder and leaving pitchers of kool-aid on top of the server?

    Just about any hypervisor, Xenserver, Hyper-V, etc, all blow the doors off (non-ESX) VMware Server.

    And running on brand new, highly beefy hardware won't save you; in fact, the performance penalty gets bigger. VMware Server does not scale.

  • by MyDixieWrecked ( 548719 ) on Tuesday June 02, 2009 @08:40AM (#28180607) Homepage Journal

    Use ESXi. It's free.

    Since ESXi became free, I've installed it on several servers at work. The problem is that its hardware requirements are pretty specific. It won't install on just any PC. It would be nice if I could install it on some of the older servers we have kicking around (DL-140s) or some decommissioned desktops, but it just doesn't support those pieces of hardware.

    The primary reason that Server's disk I/O performance is so horrid is that your VM's disks are stored as files on the host OS's filesystem. That adds extra layers of system calls to access files in the guest OS. Access to the virtual disk has to be translated to the physical disk, and the guest OS's filesystem, which appears to the guest as a contiguous physical disk, may itself become fragmented on the host OS; this double fragmentation causes SERIOUS performance penalties. ESX has a specialized filesystem (VMFS) that it uses to store your images, which is designed for VM performance.

    Personally, I would recommend Xen over ESX if you don't have the proper hardware and/or don't want to pay licensing fees. Although it's got a higher learning curve, it's easier to automate (especially since ESXi got rid of CLI support) and there are a plethora of free tools and documentation around. Being that ESX isn't free, it's harder to find support in forums other than VMWare's own site.

    Also, VMware Server's performance in general leaves a lot to be desired. I would *never* use it for production systems. I've had it installed on machines with 6x15K SAS drives (this is before ESXi) and 8 cores, and it would start to choke after about 4 or 5 VMs. Plus VMware Server doesn't handle multi-core VMs very well: incredible performance issues arise, and you're better off designing your application to scale out to multiple single-core VMs rather than making them dual-core. Server also doesn't handle memory nearly as well as Type 1 hypervisors like Xen and ESX do.

  • by asdf7890 ( 1518587 ) on Tuesday June 02, 2009 @08:43AM (#28180635)

    Don't forget, depending on the type of windows licenses you have, if it is per-processor based, this means I can run all 10 of my VMs on only 2 lic's from Microsoft. (Because each VM only uses 1 of the 2 cores). Getting 8 "free" Windows 2003 server lic's is a pretty damn good deal.

    Erm, I'm pretty sure it doesn't work like that - I recommend that you go find and analyze the small-print to make sure you are covered in case someone comes round to audit!

    My understanding is that each virtual CPU that Windows runs on would be considered a CPU for Windows licensing terms, so if you have two 1-to-2-CPU Win2K3 licenses then you are licensed to run Windows 2K3 in two VMs and no more (or use one license on the host and one in a VM). If you run 10 VMs each with Windows as the OS then you need 10 Windows licenses (if you buy each separately) or at least 10 CPU licenses (if you use some sort of bulk purchase arrangement for per-CPU lics).

    Also, the "1 or 2 CPU" term in a lot of MS licenses only covers one or two CPUs in the same machine, not running with the same license on two separate single CPU machines (physical or virtual). They don't count cores (just physical CPU packages) so you would be OK with a "1-2 CPU" license on a machine with two quad-core CPUs, but I don't know how this extends to VMs (they are likely to see 4 vCPUs in a VM as 4 CPUs not 4 cores on one CPU, irrespective of what arrangement of physical CPUs/cores the host machine has).

    It is a while since I reviewed the licensing terms for Retail/OEM Windows Server releases (at work we are a small MS dev shop, but our Windows servers and desktops came with their own lics where needed (or run Linux, in the case of file servers and VMware host machines), and the OS installations, and those we use (on physical boxes or VMs) for testing, are "licensed" via our MSDN subs), so I could be wrong here. But I don't think I am...

  • by gollito ( 980620 ) on Tuesday June 02, 2009 @12:47PM (#28184165) Homepage
    That may have been true with old licensing but if you purchase any new licenses they all come with "virtual machine" licenses of some sort.

    Windows Enterprise allows you to install that copy of Windows four times on the same physical hardware. If you buy Datacenter (which is licensed per socket on the physical machine) you can install as many copies as that physical hardware can handle.

    And yes, this licensing applies to any hypervisor, not just Microsoft's Hyper-V. (link [microsoft.com])
  • by IBitOBear ( 410965 ) on Wednesday June 03, 2009 @04:03AM (#28192917) Homepage Journal

    Okay, I have been through this at work several times recently. There are two major slow-downs in the default (but reasonably bullet-proof) VMWare machines running on a Linux _host_.

    1) If you are doing NAT attachment, _don't_ use the vmware NAT daemon. It pulls each packet into userspace before deciding how to forward/NAT it. So don't use the default NAT device (e.g. vmnet8). Add a new custom "host only" adapter (e.g. vmnet9 or higher), and then use regular firewalling (ip_forward = 1 and iptables rules) so that the packets just pass through the Linux kernel and netfilter once. (You can use vmnet1 in a pinch, but blarg! 8-)

    1a) If you want/need to use the default NAT engine (e.g. vmnet8), then put the NAT daemon into a real-time scheduling group with "chrt --rr --pid 22 $(pgrep vmnet-natd)". Not quite as good as staying in the kernel all the way to your physical media.

    1b) If you do item one, don't use the vmware-dhcpd; configure your regular dhcpd/dhcpd3 etc. daemon, because it will more easily integrate with your system as a whole.

    (in other words, vmware-dhcpd is not magic, and vmware-natd is _super_ expensive)

    2) VMware makes a /path/to/your/machine/machine_name.vmem file, which is a memory-mapped file that represents the RAM in the guest. This is like having the whole VM living forever in your swap space. It's great there if you want to take a lot of snapshots and want to be more restart/crash/sleep safe. It _sucks_ for performance. Set "mainmem.usenamedfile=FALSE" in your .vmx files (you have to edit the files by hand). This will move the .vmem file into your temp directory and unlink it so it's anonymous and self-deleting. It slows down snapshots but...

    2a) Make _SURE_ your /tmp file system is a mounted tmpfs with a size=whatever mount option that will let the tmpfs grow to at least 10% larger than the (sum of the) memory size of (all of the) virtual machine(s) you are going to run at once. This will cause the "backing" of the virtual machine RAM to be actual RAM and you will get rational machine RAM speed.

    2b) If you want/need to, there is a tmpDirectory=/wherever directive to say where those files go. It gangs with usenamedfile=FALSE, and you can set up dedicated tmpfs mounts to back the machines specially/separately.

    2c) If you want/need the backing, or have a "better" drive you want to use with real backing, you can use the above in variations to move this performance limiter onto a different spindle than your .vmdk (virtual disk) files.

    3) No matter what, your virtual memory file counts against your overcommit_ratio (/proc/sys/vm/overcommit_ratio) compared to your RAM. It defaults to 50% for _all_ the accounted facilities system-wide. If you have 4 gigs of RAM and try to run a 3G VM while leaving your overcommit_ratio at 50, you will suffer some unintended consequences in terms of paging/swapping pressure. Adjust your ratio to something like 75 or 80 percent if your total VM memory size is 60 to 65 percent of real RAM. _DON'T_ set this number to more than 85% unless you have experimented with the system stability at higher numbers. It can be _quite_ surprising.

    Anyway, those are the three things (in many parts) you need to know to make VMware work its best on your Linux host OS. It doesn't matter what the guest OS is; always consider the above.
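
    Pulled together, the host-side pieces above look roughly like this (interface names, addresses, and sizes are examples only):

        # 1) let the kernel do the NAT for a host-only vmnet instead of vmnet-natd
        sysctl -w net.ipv4.ip_forward=1
        iptables -t nat -A POSTROUTING -s 192.168.99.0/24 -o eth0 -j MASQUERADE
        # 2) with mainmem.usenamedfile = "FALSE" in the .vmx, back guest RAM with a tmpfs /tmp
        mount -t tmpfs -o size=5g tmpfs /tmp
        # 3) leave headroom for the now-anonymous guest memory
        sysctl -w vm.overcommit_ratio=80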

    Disclaimer: I don't work for VMWare etc, this is all practical knowledge and trial-n-error gained knowledge. I offer no warranty, but it will work...
