When VMware Performance Fails, Try BSD Jails
Siker writes in to tell us about the experience of email transfer service YippieMove, which ditched VMware and switched to FreeBSD jails. "We doubled the amount of memory per server, we quadrupled SQLite's internal buffers, we turned off SQLite auto-vacuuming, we turned off synchronization, we added more database indexes. We were confused. Certainly we had expected a performance difference between running our software in a VM compared to running on the metal, but that it could be as much as 10X was a wake-up call."
excellent sales story (Score:5, Interesting)
Virtualization is an excellent story to sell. It is a process that can be applied to a wide range of problems.
When applied to a problem, it often seems to create more performance issues than it solves. But it can make managing lots of services easier, and I think that's the primary goal of these VMware-like products.
Things like Xen take a different approach and seem to have better performance for I/O intensive applications. But a Xen hypervisor VM is in some ways more similar to a BSD jail than it is to VMware's monitor.
VMware is more like how the mainframe world has been slicing up mainframes into little bits to provide highly isolated applications for various services. VMware has not caught up to the capabilities and scalability of what IBM has been offering for decades, though, even though the raw CPU performance of a PC beats a mid-range mainframe at 1% of the cost (or less). But scalability and performance are two separate things, even though we would like both.
free beats fee most of the time (Score:5, Interesting)
Re:free beats fee most of the time (Score:4, Informative)
Great... but what's LTSP?
Why do sysadmins assume that everyone else is also a sysadmin who bothers to memorize all these stupid acronyms?
Sure, I googled it, and I hope you meant "Linux Terminal Server Project". But why not just say so immediately?! Most people won't bother listening to what you have to say if they need to use a search engine to figure out key pieces of information just to understand the context of your words!
Re:free beats fee most of the time (Score:5, Funny)
Welcome to Slashdot, "News for Nerds". You may find that its readers tend to use lots of initialisms, acronyms and computer slang, especially when discussing computing issues. If you like everything spelled out and linked for you, then you might prefer to read CNET [cnet.com] instead.
:-)
BTW (by the way), CNET doesn't appear to stand for anything but CNET.
Re:free beats fee most of the time (Score:4, Insightful)
And nobody's asking you to memorize what LTSP stands for. Just double-click the text in Firefox, right-click and choose search. So much quicker and more effective than asking everyone to spell out abbreviations. It's a win-win!
Re: (Score:3, Funny)
Re:free beats fee most of the time (Score:4, Insightful)
I hate to sound condescending... but system administration is considered the lower end of the technology community.
You don't sound condescending; you sound ignorant. Routine system maintenance is low end, getting to play with new (to commodity hardware) virtualization techniques and ZFS and SANs and HA systems isn't quite the same as staring blankly at a re-purposed desktop.
Put another way: it's cool that you like writing drivers, but if they suck, I'm the one who gets to blackball your company on purchase orders.
Re: (Score:3, Funny)
Re:excellent sales story (Score:5, Informative)
Most of the performance issues (and I think also the issue faced in TFA) have to do with I/O performance when using virtual hard drives, especially of the sparse-file, auto-growing variety. If they configured their VMs to have direct access to a dedicated volume, they would probably get their 10x performance back in DB applications.
It would be nice to see some sort of virtual SAN integrated into the VMs
Re: (Score:3, Interesting)
Storage is only part of the issue. Having to run 10-160 schedulers *IS* the issue. Vmware doesn't utilize efficiencies in this arena like Xen o
Re: (Score:2)
If you have a real SAN, sure, I/O performance is probably not your problem. If not, then you might just try to use a sparse-file virtual hard disk and experience incredibly bad I/O performance. My experience is really only with VirtualBox, where the virtual disk is the only thing available in the UI and setting up direct disk access is advanced, text-based config (I'm not sure if it's like this with ESX), but I think it would be nice if, instead of that whole virtual hard disk crap, the VM host was also a SAN
Re:excellent sales story (Score:5, Informative)
Totally unnecessary. If you want a 'virtual SAN', you can of course create one using various techniques. The author's biggest problem is he's running VMware Server 1, probably on top of Windows, and then tried VMware Server 1 on top of Ubuntu.
Running one OS on top of another full-blown OS, with several layers of filesystem virtualization, it's no wonder it's slow; a hypervisor like ESX would be more appropriate.
VMware Server is great for small-scale implementation and testing. VMware Server is NOT suitable for mid- to large-scale production-grade consolidation loads.
ESX or ESXi is VMware's solution for such loads. And by the way, a free standalone license for ESXi is available, just like a free license is available for running standalone VMware Server.
And the I/O performance is near-native. With ESX 4, on platforms that support I/O virtualization (VT-d/IOMMU), the virtualization is in fact hardware-assisted.
The VMware environment should be designed and configured by someone who is familiar with the technology. A simple configuration error can totally screw your performance. In VMware Server, you really need to disable memory overcommit and shut off page trimming, or you'll be sorry -- and there are definitely other aspects of VMware server that make it not suitable at all (at least by default) for anything large scale.
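A minimal sketch of what that per-VM tuning looks like in practice; the .vmx path is illustrative and the option names are the ones commonly cited for VMware Server of that era, so verify them against your version's documentation before relying on them:

    # Append to the VM's .vmx while it is powered off: disable page trimming, keep
    # guest RAM out of a file-backed mapping, and skip page-sharing scans.
    vmx=/vm/guest1/guest1.vmx
    echo 'MemTrimRate = "0"'                 >> "$vmx"
    echo 'mainMem.useNamedFile = "FALSE"'    >> "$vmx"
    echo 'sched.mem.pshare.enable = "FALSE"' >> "$vmx"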
It's more than "how much memory and CPU" you have. Other considerations also matter, and many of them are the same considerations for all server workloads... e.g. how many drive spindles do you have, at what access latency, and what's your total IOPS?
In my humble opinion, someone who would want to put a production load on VMware Server (instead of ESX) is not suitably briefed on the technology, doesn't understand how piss-poor VMware Server's I/O performance is compared to ESXi, or just didn't bother to read all the documentation and other materials freely available.
Virtualization isn't a magic pill that lets you avoid properly understanding the technology you're deploying, make bad decisions, and still always get good results.
You can get FreeBSD jails up and running, but you basically need to be skilled at FreeBSD and understand how to properly deploy that OS in order to do it.
Otherwise, your jails might not work correctly, and someone else could conclude that FreeBSD jails suck and stick with OpenVZ VPSes or Solaris logical domains instead.
Re:excellent sales story (Score:5, Informative)
Seconded. Last time I tried, VMware Server couldn't handle a single instance of a lightly loaded DB server. Moving to ESX, we're running 6 VMs on that same hardware and the initial server has near-native performance.
In short: use the right tool for the right job, or you have no right to complain.
Re:excellent sales story (Score:4, Funny)
TFA wasn't running ESXi? Thanks, now I can skip the read entirely. Silly TFA.
Anyway, isn't "virtualization" so last year? "Local cloud" is the groove.
Re: (Score:3, Insightful)
I would actually say that the day ESXi became free, it made VMware Server completely obsolete for ANYTHING other than initial testing or building.
As you stated, this article really on every level describes a ridiculously poorly designed implementation. I don't get into flame wars as to what's the better OS, etc, etc; as far as I'm concerned, whatever is best at doing what I need it to do is the solution I aim for, and with ESX I must admit I have been extremely happy with the time and resource savings, as well as the GREATLY
Re:excellent sales story (Score:5, Insightful)
Wow, how about you make it more obvious that you have no clue what you're talking about.
ESX and ESXi are bare metal hypervisors that run on the hardware directly. They do not require any OS.
Management of the system can be done with the VMware Infrastructure Client GUI, which runs on Windows.
The management interface is a SOAP service and the API is public; you can admin via a Perl script if you want, and indeed VMware has made command-line tools (written in Perl, IIRC) that access the SOAP interface. These tools are all available as pre-packaged virtual machines if you want, based on Linux VMs, and can be downloaded directly from the web server on the ESX or ESXi server.
Now if you want to bitch about the fact that you can't use FreeBSD as a host for a virtual machine then by all means, but your complaints are just those of ignorance.
I've used FreeBSD since 2.2, and I'm guessing from your post that you're one of those people that still tries to use FreeBSD as a desktop machine. While it obviously can be done, and with enough effort it can be rather usable, FreeBSD really isn't intended for the desktop PC role; you may want to consider using an OS more suited to the task, let FreeBSD remain the badass server that it is, and let OS X and Windows be the desktop OSes that they are.
Re: (Score:3, Informative)
Windows *is* required for many ESX/ESXi environments. Specifically, if you want many of the features, you must run VirtualCenter, which requires a Windows server. Live migration is a feature they currently tie to that product and *don't* expose via a public API straight to an ESX(i) hypervisor.
In terms of the 'perils' of a full blown OS 'over' another OS, that may not be as big of a deal. Xen and VMWare ESX have similar strategies of a management OS that runs as a privileged guest, true. They feel they
Re: (Score:3, Interesting)
It would be nice to see some sort of virtual SAN integrated into the VMs
Something like this [hp.com] you mean? Turns the local storage on any VMware host into part of a full-featured, clustered, iSCSI SAN. Not cheap though (about $2500 per TB)
Re:excellent sales story (Score:5, Informative)
Actually, Xen is not at all similar to a BSD jail, no matter how you look at it. Xen does full OS virtualization from the kernel and drivers on down to userland. A FreeBSD jail is basically chroot on steroids. The "virtualized" processes run exactly the same as "native" ones; they just have some restrictions on their system calls, that's all.
I guess the thing that bugged me the most about TFA was the fact that they were using VMware Server and actually expecting to get decent performance out of it. Somebody should have gotten fired for that. VMware Server is great for a number of things, but performance certainly isn't one of them. If they wanted to go with VMware, they should have shelled out for ESX in the beginning instead of continually trying to go the cheap route.
Re: (Score:2)
First of all, you have to admit that the product line names are confusing. You'd expect a product with the word "server" in its title to be useful for, well, servers. Second, even ESX is still less efficient than just using a kernel to isolate different processes. That's what it's there for, after all.
Re:excellent sales story (Score:5, Informative)
In my opinion it always comes down to the fact that shelling out some money for a good product always beats trying to stuff around with a "free" one that's hard to configure and maintain. I run 4 ESX farms, and have NO problem rolling out virtually any type of server from Oracle/RHEL to Win2k3/2k8 and everything in between. I simply make sure I allocate enough resources, and NEVER over-commit. I did a cost analysis ages back trying to convince management we needed to go down the virtualisation path to guarantee business continuity.
In the end it took our most critical CRM server crashing, and me importing an Acronis backup of it into ESX, to convince them beyond a shadow of a doubt.
I would say to anyone, something for $15-20K that gives:
Fault-tolerance
Fail-over
Easy server roll-outs
Simple network re-configuration
Almost instant recoverability of machines
Is more than worth the cost! The true cost of NOT doing it can be the end of a business, or as I have seen, several days of data/productivity lost!
Performance issues? Reliability issues? I have none at all; the only times I've had issues are poorly developed .NET apps, IIS, etc, in which case I dump the stats and give them to the developers to get them to clean up their own code. And more than once I've had to restore an entire server because someone's scripts deleted or screwed entire data structures, and in a case like that, being able to restore a 120GB virtual in around 30 mins from the comfort of my desk or home really beats locating tapes, cataloging them, restoring, etc, etc.
I have Fibre SANs (with a mix of F/C, SAS, and SATA disks) and switches, so the SAN just shrugs off any attempt to I/O-bind it! The only limitation I can think of is the 4 virtual NICs; it would be good for some of our products to be able to provide a much higher number.
No comparison in my opinion.
Re: (Score:3, Informative)
The only limitation I can think of is the 4 virtual NICs; it would be good for some of our products to be able to provide a much higher number.
ESX 4 (very recently released) supports 10 NICs.
Re:excellent sales story (Score:5, Interesting)
I used to use ESX, but the built-in virtualization in RHEL does it better these days. ESX performance is nice enough, but paravirt Xen tech outperforms it by 3x on some things (scripts, exec, syscall-intensive stuff).
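For reference, RHEL's built-in virtualization in that era was Xen driven by virt-install; a hypothetical sketch, where the guest name, sizes, and install URL are invented:

    virt-install --name guest1 --ram 1024 --paravirt \
        --file /var/lib/xen/images/guest1.img --file-size 8 \
        --location http://mirror.example.com/rhel5/os/ --nographics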
It's also much, much cheaper.
Then again, I don't run any virtualized Windows, so your mileage may vary.
Re:excellent sales story (Score:5, Insightful)
ESXi is free, and they could have used that. The overhead for most I/O is a fraction of that of VMware Server's.
If they did this so long ago that ESXi wasn't available for free, then their basis for discussing problems with VMware is way outdated too; a lot changes in 14 months....
VMware Server simply has many issues: layering the VM filesystem on top of a bulky host filesystem, relying on a general-purpose OS to schedule VM execution, memory fragmentation, slow memory ops, contention for memory and disk (VS inappropriate host OS caching/swapping), etc.
And it bears repeating: Virtualization is not a magic pill.
You can't deploy the technology and have it just work. You have to understand the technology, make good design decisions starting at the lowest level (your hardware, your network, storage design, etc), configure, and deploy it properly.
It's not incredibly hard to deploy virtualization properly, but it still takes expertise, and it's not going to work correctly if you don't do it right.
Your FreeBSD jail mail server might not work that well either, if you chose a notoriously inefficient MTA written in Java that only runs on top of X Windows.
Re: (Score:3, Informative)
Actually, Xen is not at all similar to a BSD jail, no matter how you look at it. Xen does full OS virtualization from the kernel and drivers on down to userland. A FreeBSD jail is basically chroot on steroids. The "virtualized" processes run exactly the same as "native" ones; they just have some restrictions on their system calls, that's all.
Precisely.
Similar products in the Linux space are Linux Vserver [linux-vserver.org] (which I use) and OpenVZ [openvz.org].
Re:excellent sales story (Score:5, Insightful)
I disagree. I consider Xen to be a kernel which other kernels are modified to run inside of: a guest kernel makes requests (read: system calls) to a hypervisor (a special sort of kernel), which then translates them into requests to the host kernel. But mostly I feel this way because the way I/O is handled in Xen is very much unlike the way VMware does it (go find my resume, I used to be an ESX developer at VMware).
Because Xen was originally designed to function without special hardware extensions to support virtualization, it is a virtual machine in the same sense that Unix is a virtual machine (processes were literally virtual machines from day 1 in Unix). Xen just jams one more layer above processes.
BSD Jails are just a more Unix way of virtualizing a set of processes than Xen is. Xen requires an entire kernel to encapsulate the virtualization, BSD jails do not. In my opinion that is where they differ the most, but that difference is almost unimportant.
Re:excellent sales story (Score:5, Insightful)
Virtualization is an excellent story to sell. It is a process that can be applied to a wide range of problems.
Screwdrivers are an excellent tool. However, if you're in a position to buy tools for your company, you should know enough to show me the door if I try to sell you a screwdriver to shovel dirt.
Right tool. Right job.
In any industry:
Poor management + slick marketing = Disaster
Re:excellent sales story (Score:5, Interesting)
Virtualization isn't just about performance:
If you have physical machines connected to the same SAN, both VMware's products and Microsoft's Hyper-V support running failover clustering. This way, if one of the machines goes down, the VM and its services keep running with perhaps a small delay (in milliseconds) while the handoff to the other machine takes place.
The advantage of failover clustering at this level, as opposed to the application level, is that not all applications have the ability to use clustering. A lot of companies may have database utilities that support clustering, but the app running on them may have issues, and writing code for a handover at the app level might be impossible, especially if the utility is a niche item.
Another advantage of virtualization is the ability to do a hardware upgrade without affecting anything in the VM. For example, I have had a VM running on my Linux box that was a DHCP server and DNS cache. When it became time to move to a new machine because the 1999-vintage Linux box used too much power, I just copied the VM disk files to the new box, upgraded the client-side OS drivers, and called it done. To stuff running in the VM, all it would notice is that it runs faster, but everything else would be exactly the same. Similar with filesystem manipulation. If I want to move the VM to a new disk, I just turn it off, move the files to the new volume, and turn it back on. The VM doesn't care one bit that its virtual disks are on a new SATA drive rather than an old IDE one.
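For what it's worth, the move really can be that mechanical; a hypothetical sketch assuming a VMware Server style layout where the VM is just a directory of .vmx/.vmdk files (paths and hostname are invented):

    # With the VM powered off, copy its directory (config plus virtual disks) to the
    # new host, then register/open it in the VMware console there.
    rsync -a /vm/dns-dhcp/ newhost:/vm/dns-dhcp/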
OS snapshotting comes to mind too. One of the things I use a VM for is snapshots and rollbacks. For example, I tend to browse the Web in a VM under VirtualPC. This way, if some malicious code makes it past Firefox/Adblock/NoScript, it will only affect the VM (barring a hole that allows code to affect the hypervisor executable), and a simple click on the close-window button dumps all changes. Another use is system upgrades. Say I do a major upgrade or patch of an application and the app won't start. I can roll back to a previous snapshot, turn the VM on, and be able to resume production with that machine without running out of time in the service window.
Filesystem manipulation. If you have the software, backing up VMs becomes easy. You have either a tape drive, a disk to hold stuff on, or both. The VM can be happily running, and at the same time, its filesystem can be snapshotted outside the OS and backed up.
There is a penalty for using a VM, and that is performance. However, this is mitigated significantly by having drivers in the client OS. For example, Hyper-V has a very fast virtual network switch, and it supports a virtual 100baseT adapter. So, just by installing the client drivers, a VM can communicate with others on the same virtual switch a lot faster than without.
Another penalty is unknown security issues. There is always the concern that a hypervisor can be compromised, and malicious code that is running in a VM can get the machine access of the hypervisor (whose drivers for a lot of tasks might be running with root or kernel level authority). This can be mitigated by making sure the guest operating systems are secure.
Re: (Score:3, Interesting)
Re: (Score:2, Insightful)
Just something I thought I'd point out.
On another side note, what happened to the fine art of trolling? People these days just throw a bunch of racial slurs together and think they're all that. In my day it took a certain finesse to troll properly: you had to be well informed on the issue, speak truths about it, and then interpret those truths in a way that would set people off.
oh,
Re: (Score:2, Interesting)
Re:excellent sales story (Score:5, Funny)
+----------+
| PLEASE |
| DO NOT |
| FEED THE |
| TROLLS |
+----------+
| |
| |
Re:excellent sales story (Score:5, Interesting)
More so in the Windows world than Unix...
I have always run a large number of services on a single Unix system, sometimes splitting them up through the use of chroot, always isolating them from each other by running as different users... The only benefit I see from virtualization would be having each machine as a simple container that can be moved around different physical hardware, but then again copying a full Linux install from one disk to another is not that hard, and unless you have a heavily customized kernel it should boot up just fine on a different machine.
Having individual apps isolated, so they could be moved to dedicated machines if the load increased, would be useful, but chroot could buy that; on the other hand, nothing I'm doing right now stresses five-year-old hardware, so I could move the whole install to a newer, more powerful machine.
Interesting (Score:2, Funny)
Back to the Future? (Score:5, Informative)
So we go back to where we started from: chroot and jails. What really is the benefit of extended virtualization? I haven't "embraced" it as I am supposed to do.
I can see where it makes sense if you want to merge several servers that do absolutely nothing all day into a single machine but a decent migration plan will run all those services on a single 'non-virtual' server. Especially when those machines are getting loaded, the benefits of virtualization quickly break down and you'll have to pay for more capacity anyway.
As far as high availability goes: again, low-cost HA doesn't work that well. I guess it's beneficial to management types that count the costs of, but don't see the benefit in, leaving a few idle machines running.
Then you have virtualized your whole rack of servers into a quarter-rack single blade solution and a SAN that costs about the same as just a rack of single servers, but you can't fill the rack because the density is too high. And like something that recently happened at my place: the redundant SAN system stops communicating with the blades because of a driver issue and the whole thing comes crashing down.
Re: (Score:2)
Comparing ESX and Zones? Seems like a horridly thought out comparison
Re:Back to the Future? (Score:5, Insightful)
Re:Back to the Future? (Score:5, Interesting)
Running multiple services on one OS means that when you must reboot a server because of an OS bug or misconfiguration, all of the services are brought down... Same if it crashes or hangs. As compelling as that is, I've never used a hypervisor in 30 years on tens of thousands of servers.
I do routinely use chroot jails on thousands of servers to isolate the application from the host OS. This way I do not need to re-qualify any tools when we implement an OS patch.
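A bare-bones sketch of that pattern; the tree layout, user, and binary names are illustrative, not from the post:

    # Populate a minimal tree for the app, then start it confined to that tree as an
    # unprivileged user (GNU chroot shown; FreeBSD's chroot -u/-g is equivalent).
    mkdir -p /srv/app1/bin /srv/app1/lib /srv/app1/etc /srv/app1/tmp
    cp /opt/app1/bin/app1d /srv/app1/bin/      # plus whatever libraries it links against
    chroot --userspec=app1:app1 /srv/app1 /bin/app1d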
Check it out: http://sourceforge.net/projects/vesta/ [sourceforge.net] :-)
Re:Back to the Future? (Score:5, Interesting)
So we go back to where we started from: chroot and jails. What really is the benefit of extended virtualization? I haven't "embraced" it as I am supposed to do.
I can see where it makes sense if you want to merge several servers that do absolutely nothing all day into a single machine but a decent migration plan will run all those services on a single 'non-virtual' server. Especially when those machines are getting loaded, the benefits of virtualization quickly break down and you'll have to pay for more capacity anyway.
This is exactly what VMware lists as best practice for using virtualization. If a server is maxing out, it should not be virtualized, as it is not a good candidate. However, if you have a number of servers that are under-utilized, then the advantage of turning them into VMs becomes clear. VMware has a neat feature called Transparent Page Sharing, where VMs using the same sections of memory with the same bitmaps across the same images are all condensed down into the same single pages of memory in the ESX server. This means that your 10 (or more) Windows 2003 server images "share" the same section of RAM, which frees up the "duplicate" RAM across those images. I have seen 20% of RAM saved by this; IIRC it can go above 40%.
As far as high availability goes: again, low-cost HA doesn't work that well. I guess it's beneficial to management types that count the costs of, but don't see the benefit in, leaving a few idle machines running.
If you mean VMware HA, I find it works quite well; granted, the new version in vSphere (aka VirtualCenter 4) is much better as it supports full redundancy.
Then you have virtualized your whole rack of servers into a quarter-rack single blade solution and a SAN that costs about the same as just a rack of single servers, but you can't fill the rack because the density is too high. And like something that recently happened at my place: the redundant SAN system stops communicating with the blades because of a driver issue and the whole thing comes crashing down.
You are assuming that people don't have this already. I have been to a number of data centers that have racks and racks of under-utilized machines that also have SAN storage. VMware consolidation is a way of consolidating the hardware you already have to run your ESX hosts. You use a program called VMware Converter to do P2V (Physical to Virtual) conversion of the real hardware machines to VMs, then you reclaim that hardware and install ESX on it, freeing up more resources. You don't always have to run out and buy new hardware!
VMs are great when the hardware is under-utilized, I do not recommend VMs that max out, and neither does VMware.
Re:Back to the Future? (Score:4, Informative)
Don't forget, depending on the type of Windows licenses you have, if it is per-processor based, this means I can run all 10 of my VMs on only 2 licenses from Microsoft (because each VM only uses 1 of the 2 cores). Getting 8 "free" Windows 2003 server licenses is a pretty damn good deal.
Erm, I'm pretty sure it doesn't work like that - I recommend that you go find and analyze the small-print to make sure you are covered in case someone comes round to audit!
My understanding is that each virtual CPU that Windows runs on would be considered a CPU for Windows licensing terms, so if you have two 1-to-2-CPU Win2K3 licenses then you are licensed to run Windows 2K3 in two VMs and no more (or use one license on the host and one in a VM). If you run 10 VMs each with Windows as the OS then you need 10 Windows licenses (if you buy each separately) or at least 10 CPU licenses (if you use some sort of bulk purchase arrangement for per-CPU licensing).
Also, the "1 or 2 CPU" term in a lot of MS licenses only covers one or two CPUs in the same machine, not running with the same license on two separate single CPU machines (physical or virtual). They don't count cores (just physical CPU packages) so you would be OK with a "1-2 CPU" license on a machine with two quad-core CPUs, but I don't know how this extends to VMs (they are likely to see 4 vCPUs in a VM as 4 CPUs not 4 cores on one CPU, irrespective of what arrangement of physical CPUs/cores the host machine has).
It is a while since I reviewed the licensing terms for Retail/OEM Windows Server releases (at work we are a small MS dev shop, but our Windows servers and desktops came with their own licenses where needed (or run Linux, in the case of file servers and VMware host machines), and the OS installations we use (on physical boxes or VMs) for testing are "licensed" via our MSDN subs), so I could be wrong here. But I don't think I am...
Re:Back to the Future? (Score:5, Insightful)
"What really is the benefit of extended virtualization?
1) The ability to deploy a system image without deploying physical hardware. All those platforms you are meant to have, but don't: a build machine, an acceptance test machine, a pre-production test machine. And if you've done all the development and testing on a VM then changing the machine when it moves from production from a VM to being real hardware doesn't seem worth the risk.
2) IT as a territorial dispute. You are the IT Director for a large enterprise. You want everything in good facilities, what after the last time a cleaner unplugged the server that generates customer quotes, bringing revenue to a screaming halt. The owner of the quotes server will barely come at that. They certainly won't hand over sysadmin control. Their sysadmins like whitebox machines (the sysadmin's brother assembles them), but you'll never have parts on the shelf for that if it breaks. So get them to hand over a VM image, which you run on hardware of your choice, and which you can backup and restore for them.
3) Single hardware image. No more getting a "revised" model server and finding that the driver your OS needs isn't available yet (or better still, won't ever be available for that OS, since the manufacturer really only supports new hardware in their forthcoming releases). And yeah, the server manufacturer has none of the previous model in stock.
And of course there's minor stuff. Like being able to pull up a shiny clean enterprise image to replicate faults.
You'll notice the lack of the phrase "silver bullet" above. Because virtualisation isn't one. But it does have a useful role, so the naysayers aren't right either.
I'm waiting for the realisation that merely combining images onto one physical machine does not do much to lower costs. For a directly-administered Windows OS, the sysadmin's time was costing you more than the hardware. Now that the hardware cost is gone, can you really justify maybe $50k pa / 5 = $10k pa per image for sysadmin overhead? This is particularly a problem for point (2) above, as they are exactly the people likely to resist the rigorous automation needed to get sysadmin-per-image overhead to an acceptable point (the best-practice point is about $100 per image, the marginal cost of centrally-administered Linux servers; you'll notice that's some hundreds of times less than worst-practice sysadmin overhead).
I'll also be a bit controversial and note that many sysadmins aren't doing themselves any favours here. How often do you read on Slashdot of time-consuming activities just to get a 5% improvement? If that 5% less runtime costs you 5% more sysadmin time, then you've already increased costs by far more than you saved.
XenServer worked for us (Score:4, Interesting)
We had performance issues with VMware Server as well, especially in the disk I/O area. Converting to XenServer from Citrix solved the issues for us. We have great speed, can virtualize other OSes, and management is significantly better.
XenServer from Citrix -- eewww (Score:5, Interesting)
Re:XenServer worked for us (Score:5, Informative)
Re:XenServer worked for us (Score:5, Interesting)
Re:XenServer worked for us (Score:5, Insightful)
The company I work for has just about every midrange VM solution you can imagine: Citrix, ESX (separate Windows and Linux clusters), Solaris Containers, and AIX VIO/LPARs. That is more or less the order of stability, btw.
Of all the solutions, AIX is the most consistent and stable. Cheap is what they are not, but in our case they are Blue Dollars. It does exactly what it is billed to do, day in, day out.
Solaris 10 Zones are a royal bastard to patch, but otherwise perfectly stable. (Quite frankly, they are really just jails, just a little more configurable I suppose.)
ESX is stable enough, depending on hardware. Certainly easier than anything but perhaps the HMC.
Citrix is the worst of the lot. But with so much invested, they don't want to do anything else.
Re:XenServer worked for us (Score:5, Informative)
TFA is slashdotted at the moment, so I don't know if VMware Server or ESX is being compared. Either way, the advantage of virtualization is not performance, it is flexibility. The raw performance may be less, but it gives you the ability to do things that just aren't possible with a physical machine. The ability to hot migrate from one physical machine to another in the event of hardware failure or replacement and the ability to have entire "machines" dedicated to single purposes without needing an equal number of physical machines are, at best, more difficult if not impossible when not using virtualization.
Don't get me wrong, I'm no VMware fanboy. It certainly has its rough edges and is certainly not perfect. However, virtualization as a technology has undeniable benefits in certain situations. Absolute performance just isn't one of them right now.
Sounds about right (Score:5, Informative)
We use jails a lot at my work. We have a few pretty beefy "jail servers", and use FreeBSD's ezjail [erdgeist.org] port to manage as many instances as we need. Need a new spamfilter, say? sudo ezjail-admin create spam1.example.com 192.168.0.5 and wait for 3 seconds while it creates a brand new empty system. It uses FreeBSD's "nullfs" filesystem to mount a partially populated base system read-only, so your actual jail directory only contains the files that you'd install on top of a new system. This saves drive space, makes it trivially easy to upgrade the OS image on all jails at once (sudo ezjail-admin update -i), and saves RAM because each jail shares the same copy of all the base system's shared libraries.
For extra fun, park each jail on its own ZFS filesystem and take a snapshot of the whole system before doing major upgrades. Want to migrate a jail onto a different server? Use zfs send and zfs receive to move the jail directory onto the other machine and start it.
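Putting those pieces together, a rough sketch of the workflow; the jail name and IP come from the example above, while the ZFS dataset and second hostname are illustrative:

    ezjail-admin create spam1.example.com 192.168.0.5    # new empty jail in seconds
    ezjail-admin update -i                                # refresh the shared read-only base
    zfs snapshot tank/jails/spam1@pre-upgrade             # checkpoint before a major upgrade
    zfs send tank/jails/spam1@pre-upgrade | ssh jailhost2 zfs receive tank/jails/spam1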
The regular FreeBSD 7.2 jails already support multiple IP addresses and any combination of IPv4 and IPv6, and each jail can have its own routing table. FreeBSD 8-CURRENT jails also get their own firewall if I understand correctly. You could conceivably have each jail server host its own firewall server that protects and NATs all of the other images on that host. Imagine one machine running 20 services, all totally isolated and each running on an IP not routable outside of the machine itself - with no performance penalty.
Jails might not be the solution to every problem (you can't virtualize Windows this way, although quite a few Linux distros should run perfectly), but it's astoundingly good at the problems it does address. Now that I'm thoroughly spoiled, I'd never want to virtualize Unix any other way.
Re: (Score:2)
Re: (Score:3, Informative)
I'm not too up on sysjail, but it looks like it's implemented on top of systrace while jails are explicitly coded into the kernel. That probably made sysjail easier to write, but the FreeBSD work has paid off now that they're starting to virtualize the whole network stack so that each jail can have its own firewall and routing.
More to the point: the sysjail project is no longer maintained [sysjail.bsd.lv].
Re: (Score:3, Insightful)
Re:Sounds about right (Score:4, Insightful)
Re: (Score:2)
What's the diff between jail and zone? (Score:3)
FTA, "Jails are a sort of lightweight virtualization technique available on the FreeBSD platform. They are like a chroot environment on steroids where not only the file system is isolated out but individual processes are confined to a virtual environment - like a virtual machine without the machine part."
Not knowing much about FreeBSD and its complementary software, what is the difference between FreeBSD Jail and Solaris Zones?
A Solaris Zone could also be described the same way.
Vip
One runs on Solaris, one runs on BSD (Score:5, Interesting)
FreeBSD Jails are the same thing as Solaris Zones, just on FreeBSD. Since FreeBSD is about evil daemons, they need an evil-sounding marketing name for it. More seriously, they probably just didn't want to bring on the wrath of lawyers for trademark infringement.
Re:One runs on Solaris, one runs on BSD (Score:5, Informative)
> they probably just didn't want to bring on the wrath of lawyers for trademark infringement.
FreeBSD jails predate Solaris zones by five years.
Re: (Score:3, Funny)
> they probably just didn't want to bring on the wrath of lawyers for trademark infringement.
FreeBSD jails predate Solaris zones by five years.
And soon they will be called Soracle Meditation Gardens.
Re: (Score:3, Interesting)
They sound a fair amount like what I understand OpenVZ to be about as well; does the comparison hold there, too?
Government IT is being poisoned by virtualization (Score:5, Interesting)
The new buzzword of virtualization has reached all corners of the US Government IT realm. Blinded by the marketing hype of "consolidation" and "power savings", agencies of the three-letter variety are falling over themselves awarding contracts to "virtualize" the infrastructure. Cross-domain security be damned, VMware and Microsoft SoftGrid Hyper-V Softricity Whatevers will solve all their problems and help us go green at the very same time, for every application, in every environment, for no reason.
This is the recovery from the client-server binge-and-purge of the 1990s.
Here we go again.
Virtualization != Performance (Score:5, Insightful)
Solaris Zones also (Score:4, Informative)
Zones are the same concept, with the same benefit.
An added advantage Solaris zones have is flavoured zones: make a Solaris 9 zone on a Solaris 10 host, a Linux zone on a Solaris 10 host, and soon a Solaris 10 zone on an OpenSolaris host.
This has turned out much more stable, easier, and simply more efficient than our VMware servers, which we now only have for Windows and other random OSes.
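For anyone who hasn't seen it, a bare-bones sketch of standing up a plain (native) zone on Solaris 10; the zone name and path are illustrative, and the flavoured/branded variants add a brand template at the zonecfg step:

    zonecfg -z web1 'create; set zonepath=/zones/web1; set autoboot=true'
    zoneadm -z web1 install      # populate the zone from the global zone
    zoneadm -z web1 boot
    zlogin -C web1               # console login to finish the initial configuration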
Re: (Score:3, Informative)
Zones are just the operating system partitioned, so it doesn't make sense to run Linux in a zone. You can, however, run a Linux-branded zone, which emulates a Linux environment, but it's not the same as running Linux in a zone. It's running Linux apps in Solaris.
LDOMs are hardware virtualization, so you can run Linux in them. Only some servers are supported, though.
Just thought I'd better clarify.
Is this a surprise? (Score:4, Insightful)
Amazing! Not running several additional copies of an operating system with all of the needless overhead involved is faster! Who would have guessed?
Sometimes a virtual machine is far more "solution" than you need. If you really want the same OS with lots of separated services and resource management... then run a single copy of the OS and implement some resource management. Jails are just one example - I find Solaris Containers to be much more elegant. Of course, then you have to be running Solaris...
Different tools for different jobs (Score:5, Interesting)
So I would love to RTFA to make sure about this, but their high-performance web servers running on FreeBSD jails are down, so I can't...
But here's what I do know. FreeBSD wasn't a supported OS on ESX Server until vSphere came out less than two weeks ago. That means that either:
A) They were running on the hosted VMware Server product, whose performance is NOT that impressive (it is a hosted virtualization product, not a true hypervisor),
or B) They were running the unsupported OS on ESX Server, which means there were no VMware Tools available. The drivers included in the Tools package vastly improve things like storage and network performance, so no wonder their performance stunk.
But moreover, Jails (and other OS-virtualization schemes) are different tools entirely - comparing them to VMware is an apples-to-oranges comparison. Parallels Virtuozzo would be a much more apt comparison.
OS-virtualization has some performance advantages, for sure. But do you want to run Windows and Linux on the same physical server? Sorry, no luck there; you're virtualizing the OS, not virtual machines. Do you want features like live migration, high availability, and now Fault Tolerance? Those don't exist there yet. I'm sure they will one day, but today they don't, or at least not with the same level of support that VMware has (or Citrix, Oracle or MS).
If you're a company that's trying to do web hosting, or run lots of very, very similar systems that do the same performance-centric task, then yes! OS-virtualization is for you! If you're like 95% of datacenters out there that have mixed workloads and mixed OS versions, and require deep features that are provided by a real system-level virtualization platform, use those.
Disclosure: I work for a VMware and Microsoft reseller, but I also run Parallels Virtuozzo in our lab, where it does an excellent job of OS-Virtualization on Itanium for multiple SQL servers...
Guys - We found the iTanic customer (Score:2, Funny)
Parent poster admits to using iTanic - someone tie his hands to the tree while I call the Vet.
We will tranq him and put him in a zoo. This will mean big things for us, big things. Tours on broadway, my picture on the cover of Time....
Re: (Score:2)
|There's even this slashdot story from 2004 about freebsd 4.9 being supported as an esx guest.
Yes, but that was before bsd was confirmed dead.
OpenVZ & Virtuozzo are my favorite way to go (Score:2)
I would expect that the BSD product is similar in design - basically chroot on steroids.
I/O on the free "VMWare Server" sucks (Score:3, Informative)
I'm not surprised that FreeBSD Jails had better performance. VMWare Server is great for test environments and such, but I wouldn't ever use it in production.
It's not at all near the same class of product as the VMWare Infrastructure stuff (ESX, ESXi, etc.)
VMWare offers VMWare ESXi as a free download, and I/O performance under it would have been orders of magnitude better.
However, it does have the drawback of requiring a Windows machine (or a Windows VM) to run the VMWare Infrastructure management client.
Re:I/O on the free "VMWare Server" sucks (Score:5, Informative)
http://www.vm-help.com//esx40i/esx40_whitebox_HCL.php [vm-help.com]
Re: (Score:2)
There's overhead, but not 10x worse performance unless you're hitting the disk far more in the VM than you were in the native deployment.
The "gotcha" is that VMWare Server will, by default, use file-backed memory for your VMs so that you can get in a situation where the VM is "thrashing", but neither the host nor guest operating system shows any swap activity. The tell-tale sign is that a vmstat on the host OS will show massive numbers of buffered input and output blocks (i.e. disk activity) when you're doi
Virtualization doesn't make sense (Score:5, Interesting)
Well, in one case it does: when you're trying to run a different operating system simultaneously on the same machine. But in most "enterprise" scenarios, you just want to set up several isolated environments on the same machine, all running the same operating system. In that case, virtualization is absofuckinglutely insane.
Operating systems have been multi-user for a long, long time now. The original use case for Unix involved several users sharing a large box. Embedded in the Unix design is 30 years of experience in allowing multiple users to share a machine --- so why throw that away and virtualize the whole operating system anyway?
Hypervisors have become more and more complex, and a plethora of APIs for virtualization-aware guests has appeared. We're reinventing the kernel-userland split, and for no good reason.
Technically, virtualization is insane for a number of reasons:
In having to set aside memory for each guest, we're returning to the OS 9 memory management model. Not only are we reinventing the wheel, but we're reinventing a square one covered in jelly.
FreeBSD's jails make a whole lot of sense. They allow several users to have their own userland while running under the same kernel --- which vastly improves, well, pretty much everything. Linux's containers will eventually provide even better support.
Re: (Score:2)
Saving money on hardware will just cost you kickbacks from the supplier anyway. There is no advantage in that.
Re: (Score:3, Funny)
How the fuck is someone with (only) an MCSE supposed to manage a Unix system?
Easy [amazon.com].
Re:Virtualization doesn't make sense (Score:5, Insightful)
Virtualization DOES make sense, when you're trying to solve the right problem. Do not blame the tool for the incompetence of those using it. It's no good using a screwdriver to shovel dirt and then blaming the screwdriver.
Virtualization is good for many things:
- Low performance apps. Install once, run many copies
- Excellent for multiple test environments where tests are not hardware-dependent
- Infrequently used environments, like dev environments, especially where the alternate solution is to provide physical access to multiple machines
- Demos and teaching where multiple operating systems are required
- Infrequently running small apps that don't run on your OS of choice
Virtualization is NOT good for:
- High performance applications
- Performance test environments
- Removing all dependence on physical hardware
- Moving your entire business to
Your specific concerns:
# Each guest needs its own kernel, so you need to allocate memory and disk space for all these kernels that are in fact identical
Actually this depends on your virtualization solution
# TLB flushes kill performance. Recent x86 CPUs address the problem to some degree, but it's still a problem.
So is hard disk access from multiple virtual operating systems contending for the same disk (unless you're going to have one disk per guest OS... and even then, are you going through one controller?). Resource contention is a trade-off. If all your systems are going to be running flat out simultaneously, virtualization is a bad solution.
# A guest's filesystem is on a virtual block device, so it's hard to get at it without running some kind of fileserver on the guest
You can often mount the virtual disks in a HOST OS. No different to needing software to access multiple partitions. As long as the software is available, it's not as big an issue.
# Memory management is an absolute clusterfuck. From the point of view of the host, each guest's memory is an opaque blob, and from the point of view of the guest, it has the machine to itself. This mutual myopia renders the usual page-cache algorithms absolutely useless. Each guest blithely performs memory management and caching on its own resulting in severely suboptimal decisions being made
A lot of operating systems are becoming virtualization aware, and can be scheduled cooperatively to some degree. That doesn't mean your concern isn't valid, but there is hope that the problems will be reduced. However once again if all your virtual environments are running flat out, you're using virtualization for the wrong thing.
Re: (Score:2)
Which ends up being as complex as the kernel-userland boundary, so why not just use a kernel-userspace boundary in the first place?
Re: (Score:3, Informative)
No it doesn't. The parent is clearly talking/complaining about VMware, Xen, Kvm type virtualization, and guest OS instances for all those require their own kernel. He isn't talking about jails/container solutions (FreeBSD Jails, OpenVZ, Solaris Containers, etc) or none of his points would make any sense
So the parent is using the wrong kind of virtualisation, then blaming the tool. My screwdriver won't shovel dirt very well. Bad screwdriver. He found a solution that better fitted what he was trying to do a
Re:Virtualization doesn't make sense (Score:5, Insightful)
Thank you for saying that. The purpose of a multi-tasking multi-user OS is to allow running multiple applications with full isolation from one another. If we need some other piece of software - like a VM hypervisor - to do that, then the OS has failed in its duty. But wait, some people say, it's not just about multiplexing hardware, it's about migration and HA and deploying system images easily. These are also facilities the OS should be providing. Again, if we need some other piece of software then the OS has failed.
One could argue that we've evolved to a point where the functions of an OS have been separated into two layers. One layer takes care of multiplexing the hardware; the other takes care of providing an API and an environment for things like filesystems. Better still, you get to mix and match instances of each layer. OK, fine. Given the Linux community's traditional attitude toward layering (and microkernels, which this approach also resembles) it's a bit inconsistent, but fine. That interpretation does raise some interesting questions, though. People are putting a lot of thought into where drivers should live, and since some drivers are part of "multiplexing the hardware" then it would make sense for them to live in the hypervisor with a stub in the guest OS - just as is being done, I know. But what about the virtual-memory system? That's also a form of hardware multiplexing, arguably the most important. If virtualization is your primary means of isolating users and applications from one another, why not put practically all of the virtual-memory functionality into the hypervisor and run a faster, simpler single-address-space OS on top of that?
If we're going to delegate operating-system functionality to hypervisors, let's at least think about the implications and develop a coherent model of how the two interact instead of the disorganized and piecemeal approaches we see now.
Re:Virtualization doesn't make sense (Score:4, Insightful)
Inefficiently as fuck, by the way
Uh, why? Even shit applications don't replace or extend the kernel
FreeBSD runs Linux apps just fine, last time I checked
Re: (Score:3, Insightful)
Inefficient as fuck. Whereas if they'd just been processes running under the same OS, the kernel would already know they were sharing the same page.
I don't think you did your research. (Score:5, Informative)
If you are separating similar workloads like web apps and databases, you are probably better off running them within the same OS and database server and separating them via security, as the poster realized.
However if you have a variety of services that do not do the same thing you can really benefit from separating them in virtual machines and have them share common hardware.
Virtualization also gives you some amazing fault-tolerance options that are consistent across different OSes and services, and that are much easier to manage than individual OS and service clustering options.
Re: (Score:3, Informative)
After looking more closely at the article, it sounds like they were trying to use VMware Server instead of ESX, which explains a lot. If that was the case, they were then carrying the overhead of the host OS, the VM layer, and the multiple guest OSes. Not something you do with high-performance apps.
Coral Cache (Score:2)
The cache doesn't help. (Score:2)
That just gets you a cached version of a page with a link to the actual article. The actual article [playingwithwire.com] is more useful.
Re: (Score:2)
Hmmm... the coral cache is snappy for me; the original link is not even loading yet.
Here's the start of the article, in any case:
Jun 01. Virtual Failure: YippieMove switches from VMware to FreeBSD Jails
Our email transfer service YippieMove is essentially software as a service. The customer pays us to run some custom software on fast machines with a lot of bandwidth. We initially picked VMware virtualization technology for our back-end deployment because we desired to isolate individual runs, to simplify maintenance and to make scaling dead easy. VMware was ultimately proven to be the wrong choice for these requirements.
Ever since the launch over a year ago we used VMware Server 1 for instantiating the YippieMove back-end software. For that year performance was not a huge concern because there were many other things we were prioritizing on for YippieMove '09. Then, towards the end of development we began doing performance work. We switched from a data storage model best described as "a huge pile of files" to a much cleaner sqlite3 design. The reason for this was technical: the email mover process opened so many files at the same time that we'd hit various limits on simultaneously open file descriptors. While running sqlite over NFS posed its own set of challenges, they were not as insurmountable as juggling hundreds of thousands of files in a single folder. ...
Virtualization is good enough (Score:5, Informative)
I work for $LARGE_US_BANK in the performance and capacity management group, and we constantly see the business side of the house buy servers that end up running at 10-15% utilization. Why? Lots of reasons - the vendor said so, they want "redundancy", they want "failover" and they want "to make sure there's enough". Given the load, if you lose 10-20% overhead due to VM, who cares ?
Re: (Score:3, Insightful)
It's CYA in practice. Here's the usual chain of events:
1. Business makes requirements to vendor: We want X capacity/response time/whatever
2. Vendor to business side: Well, what will you do with it?
3. Business makes requirements to vendor: Maybe A, maybe B with maybe N or N^2 users
4. Vendor to business side: That was a lot of maybes. But with $CONFIG you'll be sure
Particularly if the required hardware upgrades aren't part of the negotiations with the vendor, then it's almost a certainty.
Virtualization is a gift for Windows servers! (Score:5, Informative)
I've worked for many of the Fortune 10 (DB, GS, CS, JP, MS, etc.) banks on the Windows server side and they are all going full steam ahead for virtualization with VMWare or Xen exactly because they have been buying way too much hardware for their backend applications for the last decade. The utilization on all of these servers hardly hits 5-10% and the vast majority of time these systems sit idle. The standard has always been rackmount servers with multiple processor/core systems with gigs of memory all sitting around being unused, mostly Compaq/HP systems with IBM xSeries servers and some Dells thrown in for good measure.
The reason that this over-capitization has been the requirement of the business line departments to choose only from four or five server models for their backend application. These standard configs are usually configured in rackmount spaces 1U, 2U, 3U, and 4U sizes and with nearly maxed out specs for each size and the size of the server determines the performance you get. You have a light web server you get a blade or a pizza box, you have a light backend application you get a 2U server with two processors or four cores even though you might have a single threaded app that was ported from MS-DOS a few years ago, you want something beefier you get the 4U server with 4 processors, 8 cores and 16 GB of RAM even though your application only runs two threads and allocates 512MB of ram maximum. I've monitored thousands of these servers through IBM Director, InsightMangager, and NetIQ for performance and 99% of the time these servers are at 2% processor and memory utilization and only once in a while for a short amount of time one or two of the cores get hit with a low-mid work load for processing and then go back to doing nothing. These were the Production servers.
Now consider the Development servers, where a bank has 500 servers dedicated for developer usage with the same specs as the production boxes and at any one time maybe a few of those servers get used for testing while the other few hundred sit around doing nothing while the developers get a new release ready for weeks at a time. The first systems to get virtualized were the development servers because they were so underutilized that it was unthinkable.
(Off topic: Funny and sad story from my days in 2007 at a top bank (CS) helping with VMWare virtualzation onto HP Blades and 3Par SAN storage for ~500 development servers. The 3Par hardware and firmware was in such a shitty state that it crashed the entire SAN frame multiple times crashing hundreds of development servers at the same time during heavy I/O load. The 3Par would play the blame game against other vendors accusing Brocade for faulty SAN fibre switches, Emulex for faulty hardware and drivers, HP Blade and IBM Blade for faulty server, and the Windows admins for incompetence. Only to find that it was their SAN interface firmware causing the crashes.)
VMWare solves the problem of running commercial backend applications on Windows servers: each application is so particular about OS version, service pack, hotfixes, patches and configuration that the standard has always been one server per application, and nobody ever wanted to mix them because any issue would immediately be blamed on the other vendor's application on the box. There were always talks from management about providing businesses with scalable capacity instead of single servers with a single OS. That was five years ago, and people wanted to use Windows capacity-management features, but those were a joke since they were based on per-process usage quotas - and of course nobody wanted to mix two different apps on the same box, so those talks went nowhere.
That is, until VMWare showed up and offered a real way to isolate each OS instance from the others, while also letting us configure capacity requirements per instance and pack all those shitty single-threaded backend applications, each running on a separate server, onto one box.
Well, duh! (Score:2, Flamebait)
You ask that the OS be put into a virtual machine - would you not expect a big performance hit??? It is only common sense to anyone with basic computer knowledge. You are adding another layer between the hardware and the program; what do you think would happen?
Re: (Score:3, Informative)
A real hypervisor, like the one IBM uses on its p-series frames, doesn't impose this penalty. You're thinking of an emulator.
I've seen this before (Score:5, Interesting)
I've seen similar hideous slowdowns on ESX before for database workloads, and it's not VMware's fault.
This kind of slowdown is almost always caused by badly written, chatty applications that use the database one row at a time instead of simply executing a query.
I once benchmarked a Microsoft reporting tool on bare metal against ESX, and it ran 3x slower on ESX. The fault was that it was reading a 10M-row database one row at a time and performing a table join in the client VB code instead of on the server. I tried running the exact same query as a pure T-SQL join, and it was something like 1000x faster - and now the ESX box was only 5% slower instead of 3x slower.
The issue is that ESX has a small overhead for switching between VMs, and also a small overhead for establishing a TCP connection. The throughput is good, but it does add a few hundred microseconds of latency all up. You get similar latency if your physical servers are in a datacenter and separated by a couple of switches or a firewall. If you can't handle sub-millisecond latencies, it's time to revisit your application architecture!
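To make that concrete, here is a minimal sketch of the two access patterns (sqlite3 and the table names are just stand-ins for illustration, not the original VB/T-SQL setup). At a few hundred microseconds of extra latency per round trip, 10M one-row queries cost on the order of 50 minutes in latency alone; the set-based join pays the round trip once.

    # chatty antipattern: one query (and one round trip) per row
    for id in $(sqlite3 report.db "SELECT customer_id FROM orders;"); do
        sqlite3 report.db "SELECT name FROM customers WHERE id = $id;"
    done

    # set-based version: one round trip, the database does the join
    sqlite3 report.db "SELECT c.name, o.total FROM orders o JOIN customers c ON c.id = o.customer_id;"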
Wrong tool for the job (Score:5, Funny)
You might as well have said,
"Our earth moving business took a big jump in productivity when we switched from ice-cream scoops to backhoes".
DBAs and Virtualization (Score:4, Insightful)
howto: Rev-UP VMWare Server/Wkstn in Linux Host OS (Score:3, Informative)
Okay, I have been through this at work several times recently. There are two major slow-downs in the default (but reasonably bullet-proof) VMWare machines running on a Linux _host_, plus one memory-accounting gotcha.
1) If you are doing NAT attachment, _don't_ use the vmware NAT daemon. It pulls each packet into userspace before deciding how to forward/NAT it. So don't use the default NAT device (e.g. vmnet8). Add a new custom "host only" adapter (e.g. vmnet9 or higher) and then use regular firewalling (ip_forward = 1 and iptables rules) so that packets just pass through the Linux kernel and netfilter once; a sketch follows below. (You can use vmnet1 in a pinch, but blarg! 8-)
1a) If you want/need to use the default NAT engine (e.g. vmnet8), then put the NAT daemon into a real-time scheduling class with "chrt --rr --pid 22 $(pgrep vmnet-natd)". Not quite as good as staying in the kernel all the way to your physical media.
1b) If you do item 1, don't use vmware-dhcpd either; configure your regular dhcpd/dhcpd3 daemon instead, because it will integrate more easily with your system as a whole.
(in other words, vmware-dhcpd is not magic, and vmware-natd is _super_ expensive)
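A rough sketch of the firewalling for item 1 (the interface names vmnet9/eth0 and the 192.168.99.0/24 guest subnet are examples - substitute whatever your host-only adapter actually uses):

    # let the kernel route packets off the host-only adapter
    echo 1 > /proc/sys/net/ipv4/ip_forward
    # NAT guest traffic as it leaves through the physical NIC
    iptables -t nat -A POSTROUTING -s 192.168.99.0/24 -o eth0 -j MASQUERADE
    # allow forwarding in both directions
    iptables -A FORWARD -i vmnet9 -o eth0 -j ACCEPT
    iptables -A FORWARD -i eth0 -o vmnet9 -m state --state ESTABLISHED,RELATED -j ACCEPT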
2) VMWare makes a /path/to/your/machine/machine_name.vmem file, which is a memory-mapped file that represents the RAM in the guest. This is like having the whole VM living forever in your swap space. It's great if you want to take a lot of snapshots and want to be more restart/crash/sleep safe. It _sucks_ for performance. Set "mainmem.usenamedfile=FALSE" in your .vmx files (you have to edit the files by hand). This will move the .vmem file into your temp directory and unlink it so it's anonymous and self-deleting. It slows down snapshots but...
2a) Make _SURE_ your /tmp file system is a mounted tmpfs with a size=whatever mount option that lets the tmpfs grow at least 10% larger than the (sum of the) memory size of (all of the) virtual machine(s) you are going to run at once. This way the "backing" of the virtual machine RAM is actual RAM and you get rational machine-RAM speed. (A sketch of 2 and 2a follows after 2c.)
2b) If you want/need to, there is a tmpDirectory=/wherever directive to say where those files go. It works together with usenamedfile=FALSE, so you can set up dedicated tmpfs mounts to back each machine separately.
2c) If you want/need the backing, or have a "better" drive you want to use for real backing, you can combine the above to move this performance limiter onto a different spindle than your .vmdk (virtual disk) files.
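For example, a minimal sketch for a single 4 GB guest (the fstab size and the tmpDirectory path are illustrative; the directive spellings are as given above - check them against your VMWare version):

    # /etc/fstab: back /tmp with RAM, ~10% larger than total guest RAM
    tmpfs  /tmp  tmpfs  size=4500m  0  0

    # machine_name.vmx (edit by hand while the guest is powered off):
    mainmem.usenamedfile = "FALSE"
    # optional: point the anonymous .vmem backing at a dedicated tmpfs
    tmpDirectory = "/var/vmware-tmp"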
3) No matter what, your virtual-memory file counts against your overcommit_ratio (/proc/sys/vm/overcommit_ratio) relative to your RAM. It defaults to 50% for _all_ the accounted facilities system-wide. If you have 4 GB of RAM and try to run a 3 GB VM while leaving your overcommit_ratio at 50, you will suffer some unintended consequences in terms of paging/swapping pressure. Adjust your ratio to something like 75 or 80 percent if your total VM memory size is 60 to 65 percent of real RAM (example below). _DON'T_ set this number to more than 85% unless you have experimented with system stability at higher numbers. It can be _quite_ surprising.
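Something like this, assuming the 4 GB host / 3 GB guest example above:

    # raise the overcommit ceiling from the default 50% of RAM
    sysctl -w vm.overcommit_ratio=80
    # or make it persistent in /etc/sysctl.conf:
    #   vm.overcommit_ratio = 80

(Strictly, this ratio is only enforced when strict overcommit accounting is on, i.e. vm.overcommit_memory=2, but setting it costs nothing otherwise.)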
Anyway, those are the three things (in many parts) you need to know to make VMWare work its best on your Linux host OS. It doesn't matter what the guest OS is; always consider the above.
Disclaimer: I don't work for VMWare etc.; this is all practical, trial-and-error knowledge. I offer no warranty, but it will work...
Re:-1, Flamebait (Score:5, Informative)
Re:This is Ironic, right? (Score:4, Informative)
Re: (Score:3, Insightful)
Can we ask you then, why are you running it on VMware Server? Use ESXi. It's free. VMware Server's I/O performance is nowhere near as good as ESXi's.
Re:This is Ironic, right? (Score:4, Informative)
Use ESXi. It's free.
Since ESXi became free, I've installed it on several servers at work. The problem is that its hardware requirements are pretty specific. It won't install on just any PC. It would be nice if I could install it on some of the older servers we have kicking around (DL-140s) or some decommissioned desktops, but it just doesn't support those pieces of hardware.
The primary reason Server's disk I/O performance is so horrid is that your VM's disks are stored as files on the host OS's filesystem, which adds extra layers of system calls to every file access in the guest OS. Guest I/O goes to a virtual disk that has to be translated to the physical disk, and the guest's filesystem - which looks to the guest like a contiguous physical disk - may itself be fragmented on the host; this double fragmentation can cause SERIOUS performance penalties. ESX has a specialized VMFS that it uses to store your images, designed for VM performance.
Personally, I would recommend Xen over ESX if you don't have the proper hardware and/or don't want to pay licensing fees. Although it has a steeper learning curve, it's easier to automate (especially since ESXi got rid of CLI support), and there is a plethora of free tools and documentation around. Because ESX isn't free, it's harder to find support in forums other than VMWare's own site.
Also, VMWare Server's performance in general leaves a lot to be desired. I would *never* use it for production systems. I've had it installed on machines with 6x 15K SAS drives (this was before ESXi) and 8 cores, and it would start to choke after about 4 or 5 VMs. Plus, VMWare Server doesn't handle multi-core VMs very well: serious performance issues arise, and you're better off designing your application to scale out to multiple single-core VMs rather than making them dual-core. Server also doesn't handle memory nearly as well as Type 1 hypervisors like Xen and ESX do.
Re: (Score:3, Insightful)
We've deployed a number of ESXi boxes with mixed results. In a nutshell, if you have an underutilized, lightly loaded server, it's a great candidate to become a VM.
Anything that needs performance should be considered off the list for a VM, unless you can convince the consumers that the speed penalty is worth the ease of management.
* Web server that serves your lunch menu and maybe your HR vacation-scheduling system: VM
* Build machine that pounds through 7 GB of source and takes 10 hrs of solid CPU and disk time: not a VM
Re: (Score:2)
Re:UML FTW! (Score:4, Informative)