
When VMware Performance Fails, Try BSD Jails 361

Posted by kdawson
from the like-a-virtual-machine-without-the-machine-part dept.
Siker writes in to tell us about the experience of email transfer service YippieMove, which ditched VMware and switched to FreeBSD jails. "We doubled the amount of memory per server, we quadrupled SQLite's internal buffers, we turned off SQLite auto-vacuuming, we turned off synchronization, we added more database indexes. We were confused. Certainly we had expected a performance difference between running our software in a VM compared to running on the metal, but that it could be as much as 10X was a wake-up call."
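
The SQLite changes the summary describes map onto a handful of PRAGMA statements. A minimal sketch of that kind of tuning (Python shown; the in-memory database and the exact cache size are illustrative assumptions, not details from the article):

```python
import sqlite3

# Hedged sketch: "quadrupled internal buffers", "turned off auto-vacuuming",
# and "turned off synchronization" correspond roughly to these PRAGMAs.
# The connection target and cache size are illustrative, not from the article.
conn = sqlite3.connect(":memory:")          # a real setup would use a file
conn.execute("PRAGMA cache_size = -65536")  # ~64 MiB page cache (negative = KiB)
conn.execute("PRAGMA auto_vacuum = NONE")   # must be set before tables exist
conn.execute("PRAGMA synchronous = OFF")    # no fsync per commit: fast, riskier

# The settings can be read back to confirm they took effect:
print(conn.execute("PRAGMA synchronous").fetchone()[0])  # 0 means OFF
```

Turning synchronization off trades crash safety for speed, so it is usually a last resort, and as the story shows, no amount of such tuning closes a 10x gap caused elsewhere.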

  • by OrangeTide (124937) on Monday June 01, 2009 @09:54PM (#28176677) Homepage Journal

    Virtualization is an excellent story to sell. It is a process that can be applied to a wide range of problems.

    When applied to a problem it seems to create more performance issues than it solves. But it can make managing lots of services easier. I think that's the primary goal to these VMware-like products.

    Things like Xen take a different approach and seem to have better performance for I/O intensive applications. But a Xen hypervisor VM is in some ways more similar to a BSD jail than it is to VMware's monitor.

    VMware is more like how the mainframe world has been slicing up mainframes into little bits to provide highly isolated applications for various services. VMware has not caught up to the capabilities and scalability of what IBM has been offering for decades, though, even though the raw CPU performance of a PC is better than a mid-range mainframe's at 1% of the cost (or less). But scalability and performance are two separate things, even though we would like both.

    • by xzvf (924443) on Monday June 01, 2009 @10:11PM (#28176799)
      This is slightly off the server virtualization topic, but I had a similar experience with LTSP and some costly competitors. Using LTSP we were able to put up 5X the number of stable Linux desktops on the same hardware. I'd tell every organization out there to do a pilot bake-off as often as possible. It won't happen all the time, but I suspect that more often than not, the free open solution, properly setup will beat the slickly marketed, closed proprietary solution.
    • by gfody (514448) on Monday June 01, 2009 @10:22PM (#28176905)

      Most of the performance issues and I think also the issue faced in TFA have to do with IO performance when using virtual hard drives especially of the sparse-file, auto-growing variety. If they would configure their VMs to have direct access to a dedicated volume they would probably get their 10x performance back in DB applications.

      It would be nice to see some sort of virtual SAN integrated into the VMs

      • by ckaminski (82854) on Monday June 01, 2009 @10:42PM (#28177063) Homepage
        What are you talking about? ESX has supported REAL SANs since almost day one. I've been able to do GREAT things on a single VMware server; in one instance I managed 25 2GB J2EE app VMs on a quad-core Xeon (2005 era). In another I managed 168 sparsely used testing VMs (2x quad core). But I've ALWAYS had trouble with databases and Citrix, in particular, with VMware.

        Storage is only part of the issue. Having to run 10-160 schedulers *IS* the issue. VMware doesn't exploit efficiencies in this arena the way Xen, jails, OpenVZ, or Solaris Containers can.
        • by gfody (514448) on Monday June 01, 2009 @11:20PM (#28177279)

          If you have a real SAN, sure, IO performance is probably not your problem. If not, you might just use a sparse-file virtual hard disk and get incredibly bad IO performance. My experience is really only with VirtualBox, where the virtual disk is the only thing available in the UI and setting up direct disk access is advanced, text-based config - I'm not sure if it's like this with ESX - but instead of that whole virtual hard disk crap, I think it would be nice if the VM host were also a SAN server for your VMs and you just always used SAN storage.

      • by mysidia (191772) on Monday June 01, 2009 @11:14PM (#28177251)

        Totally unnecessary. If you want a 'virtual SAN', you can of course create one using various techniques. The author's biggest problem is he's running VMware Server 1, probably on top of Windows, and then tried VMware Server 1 on top of Ubuntu.

        Running one OS on top of another full-blown OS, with several layers of filesystem virtualization: no wonder it's slow. A hypervisor like ESX would be more appropriate.

        VMware Server is great for small-scale implementations and testing. VMware Server is NOT suitable for mid- to large-scale production-grade consolidation loads.

        ESX or ESXi is VMware's solution for such loads. And by the way, a free standalone license for ESXi is available, just like a free license is available for running standalone VMware server.

        And the I/O performance is near-native. With ESX 4, on platforms that support I/O virtualization (VT-d/IOMMU), the virtualization is in fact hardware-assisted.

        The VMware environment should be designed and configured by someone who is familiar with the technology. A simple configuration error can totally screw your performance. In VMware Server, you really need to disable memory overcommit and shut off page trimming, or you'll be sorry -- and there are definitely other aspects of VMware server that make it not suitable at all (at least by default) for anything large scale.

        It's more than "how much memory and CPU" you have. Other considerations also matter, many of them the same as for all server workloads... e.g., how many drive spindles do you have, at what access latency, and what's your total IOPS?
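
        A back-of-the-envelope way to think about that last question, with rule-of-thumb latency figures that are assumptions rather than anything from the thread:

```python
# Rough IOPS estimate for a disk array: one spindle sustains about
# 1000 ms / (avg seek + avg rotational latency) random IOs per second,
# and an array scales roughly linearly until the controller saturates.
# The latency figures below are illustrative rules of thumb.

def spindle_iops(avg_seek_ms, avg_rot_ms):
    return 1000.0 / (avg_seek_ms + avg_rot_ms)

def array_iops(spindles, avg_seek_ms=4.0, avg_rot_ms=2.0):
    return spindles * spindle_iops(avg_seek_ms, avg_rot_ms)

# Eight 15k spindles vs. one desktop SATA disk hosting a dozen VM images:
print(round(array_iops(8)))                                   # ~1333
print(round(array_iops(1, avg_seek_ms=9.0, avg_rot_ms=4.2)))  # ~76
```

        The point being that a pile of VMs sharing one or two spindles will bottleneck on IOPS long before they run out of CPU or RAM.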

        In my humble opinion, someone who would want to put a production load on VMware Server (instead of ESX) is not suitably briefed on the technology, doesn't understand how piss-poor VMware Server's I/O performance is compared to ESXi's, or just didn't bother to read the documentation and other materials freely available.

        Virtualization isn't a magic pill that lets you avoid properly understanding the technology you're deploying, make bad decisions, and still always get good results.

        You can get FreeBSD jails up and running, but you basically need to be skilled at FreeBSD, and to understand how to properly deploy that OS, in order to do it.

        Otherwise, your jails might not work correctly, and someone else could conclude that FreeBSD jails suck and stick with OpenVZ VPSes or Solaris logical domains.

        • by Feyr (449684) on Monday June 01, 2009 @11:25PM (#28177317) Journal

          seconded. last time i tried, vmware server couldn't handle a single instance of a lightly loaded db server. moving to esx we're running 6 VMs on that same hardware and the initial server has near-native performance

          in short. use the right tool for the right job, or you have no right to complain

        • by BaldingByMicrosoft (585534) on Tuesday June 02, 2009 @12:31AM (#28177727)

          TFA wasn't running ESXi? Thanks, now I can skip the read entirely. Silly TFA.

          Anyway, isn't "virtualization" so last year? "Local cloud" is the groove.

        • by aarggh (806617) on Tuesday June 02, 2009 @02:10AM (#28178169)

          I would actually say that the day ESXi became free, it made VMware Server completely obsolete for ANYTHING other than initial testing or building.

          As you stated, this article really describes, on every level, a ridiculously poorly designed implementation. I don't get into flame wars over what's the better OS, etc.; as far as I'm concerned, whatever is best at doing what I need is the solution I aim for, and with ESX I must admit I have been extremely happy with the time and resource savings, as well as the GREATLY reduced management overhead. Throw in the HA, DRS, vMotion, and disaster recovery, and I now sleep a lot better at night, and get far fewer calls!

      • by Thundersnatch (671481) on Monday June 01, 2009 @11:21PM (#28177283) Journal

        It would be nice to see some sort of virtual SAN integrated into the VMs

        Something like this [] you mean? It turns the local storage on any VMware host into part of a full-featured, clustered iSCSI SAN. Not cheap, though (about $2500 per TB).

    • by Eil (82413) on Monday June 01, 2009 @10:39PM (#28177047) Homepage Journal

      But a Xen hypervisor VM is in some ways more similar to a BSD jail than it is to VMware's monitor.

      Actually, Xen is not at all similar to a BSD jail, no matter how you look at it. Xen does full OS virtualization from the kernel and drivers on down to userland. A FreeBSD jail is basically chroot on steroids. The "virtualized" processes run exactly the same as "native" ones; they just have some restrictions on their system calls, that's all.

      I guess the thing that bugged me the most about TFA was that they were using VMware Server and actually expecting to get decent performance out of it. Somebody should have gotten fired for that. VMware Server is great for a number of things, but performance certainly isn't one of them. If they wanted to go with VMware, they should have shelled out for ESX from the beginning instead of continually trying to go the cheap route.

      • by QuoteMstr (55051) on Monday June 01, 2009 @10:54PM (#28177123)

        First of all, you have to admit that the product line names are confusing. You'd expect a product with the word "server" in its title to be useful for, well, servers. Second, even ESX is still less efficient than just using a kernel to isolate different processes. That's what it's there for, after all.

      • by aarggh (806617) on Monday June 01, 2009 @11:43PM (#28177459)

        In my opinion it always comes down to the fact that shelling out some money for a good product always beats trying to stuff around with a "free" one that's hard to configure and maintain. I run 4 ESX farms, and have NO problem rolling out virtually any type of server from Oracle/RHEL to Win2k3/2k8, and everything in between. I simply make sure I allocate enough resources, and NEVER overcommit. I did a cost analysis ages back trying to convince management we needed to go down the virtualisation path to guarantee business continuity.

        In the end it took our most critical CRM server crashing, and me importing an Acronis backup of it into ESX, to convince them beyond a shadow of a doubt.

        I would say to anyone, something for $15-20K that gives:

        Easy server roll-outs
        Simple network re-configuration
        Almost instant recoverability of machines

        Is more than worth the cost! The true cost of NOT doing it can be the end of a business, or as I have seen, several days of data/productivity lost!

        Performance issues? Reliability issues? I have none at all. The only times I've had issues are with poorly developed .NET apps, IIS, etc., in which case I dump the stats and give them to the developers to clean up their own code. And more than once I've had to restore an entire server because someone's scripts deleted or screwed entire data structures; in a case like that, being able to restore a 120GB virtual machine in around 30 minutes from the comfort of my desk or home really beats locating tapes, cataloging them, restoring, etc.

        I have Fibre SANs (with a mix of FC, SAS, and SATA disks) and switches, so the SAN just shrugs off any attempt to I/O-bind it! The only limitation I can think of is the 4 virtual NICs; it would be good for some of our products to be able to have a much higher number.

        No comparison in my opinion.

      • by mysidia (191772) on Monday June 01, 2009 @11:47PM (#28177485)

        ESXi is free, and they could have used that. The overhead for most I/O is a fraction of VMware Server's.

        If they did this so long ago that ESXi wasn't available for free, then their basis for discussing problems with VMware is way outdated too, a lot changes in 14 months....

        VMware Server simply has many issues: layering the VM filesystem on top of a bulky host filesystem, relying on a general-purpose OS to schedule VM execution, memory fragmentation, slow memory ops, contention for memory and disk (vs. inappropriate host OS caching/swapping), etc.

        And it bears repeating: Virtualization is not a magic pill.

        You can't deploy the technology and have it just work. You have to understand the technology, make good design decisions starting at the lowest level (your hardware, your network, storage design, etc), configure, and deploy it properly.

        It's not incredibly hard to deploy virtualization properly, but it still takes expertise, and it's not going to work correctly if you don't do it right.

        Your FreeBSD jail mail server might not work that well either, if you chose a notoriously-inefficient MTA written in Java that only runs on top of XWindows.

      • by DaemonDazz (785920) on Monday June 01, 2009 @11:51PM (#28177513)

        Actually, Xen is not at all similar to a BSD jail, no matter how you look at it. Xen does full OS virtualization from the kernel and drivers on down to userland. A FreeBSD is basically chroot on steroids. The "virtualized" processes run exactly the same as "native" ones, they just have some restrictions on their system calls, that's all.


        Similar products in the Linux space are Linux Vserver [] (which I use) and OpenVZ [].

      • by OrangeTide (124937) on Tuesday June 02, 2009 @01:37AM (#28178059) Homepage Journal

        I disagree. I consider Xen to be a kernel that other kernels are modified to run inside of: a guest kernel makes requests (read: system calls) to a hypervisor (a special sort of kernel), which then translates them into requests to the host kernel. But mostly I feel this way because the way I/O is handled in Xen is very much unlike the way VMware does it (go find my resume; I used to be an ESX developer at VMware).

        Because Xen was originally designed to function without special hardware extensions for virtualization, it is a virtual machine in the same sense that Unix is a virtual machine (processes were literally virtual machines from day 1 in Unix). Xen just jams one more layer above processes.

        BSD Jails are just a more Unix way of virtualizing a set of processes than Xen is. Xen requires an entire kernel to encapsulate the virtualization, BSD jails do not. In my opinion that is where they differ the most, but that difference is almost unimportant.

    • by syousef (465911) on Monday June 01, 2009 @11:10PM (#28177225) Journal

      Virtualization is an excellent story to sell. It is a process that can be applied to a wide range of problems.

      Screwdrivers are an excellent tool. However, if you're in a position to buy tools for your company, you should know enough to show me the door if I try to sell you a screwdriver to shovel dirt.

      Right tool. Right job.

      In any industry:

      Poor management + slick marketing = Disaster

    • by mlts (1038732) * on Tuesday June 02, 2009 @12:33AM (#28177737)

      Virtualization isn't just about performance:

      If you have physical machines connected to the same SAN, both VMWare's products and Microsoft's Hyper-V support running failover clustering. This way, if one of the machines goes down, the VM and its services keep running with perhaps a small delay (in milliseconds) while the handoff to the other machine takes place.

      The advantage of failover clustering at this level, as opposed to the application level, is that not all applications have the ability to use clustering; a lot of companies may have database utilities that support clustering while the app running on top has issues, and writing code for a handover at the app level might be impossible, especially if the utility is a niche item.

      Another advantage of virtualization is the ability to do a hardware upgrade without affecting anything in the VM. For example, I have had a VM running on my Linux box that was a DHCP server and DNS cache. When it became time to move to a new machine because the 1999-vintage Linux box used too much power, I just copied the VM disk files to the new box, upgraded the client-side OS drivers, and called it done. All the stuff running in the VM would notice is that it runs faster; everything else would be exactly the same. Similarly with filesystem manipulation: if I want to move the VM to a new disk, I just turn it off, move the files to the new volume, and turn it back on. The VM doesn't care one bit that its virtual disks are on a new SATA drive rather than an old IDE one.

      OS snapshotting comes to mind too. One of the things I use a VM for is snapshots and rollbacks. For example, I tend to browse the Web in a VM under VirtualPC. This way, if some malicious code makes it past Firefox/Adblock/NoScript, it will only affect the VM (barring a hole that allows code to affect the hypervisor executable), and a simple click on the close window button dumps all changes. Another use is system upgrades. Say I do a major upgrade or patch of an application and the app won't start. I can roll back to a previous snapshot, turn the VM on, and resume production with that machine without running out of time in the service window.

      Filesystem manipulation. If you have the software, backing up VMs becomes easy. You have either a tape drive, a disk to hold stuff on, or both. The VM can be happily running, and at the same time, its filesystem can be snapshotted outside the OS and backed up.

      There is a penalty for using a VM, and that is performance. However, this is mitigated significantly by having drivers in the client OS. For example, Hyper-V has a very fast virtual network switch, and it supports a virtual 100baseT adapter. So, just by installing the client drivers, a VM can communicate with others on the same virtual switch a lot faster than without.

      Another penalty is unknown security issues. There is always the concern that a hypervisor can be compromised, and malicious code that is running in a VM can get the machine access of the hypervisor (whose drivers for a lot of tasks might be running with root or kernel level authority). This can be mitigated by making sure the guest operating systems are secure.

    • by moon3 (1530265) on Tuesday June 02, 2009 @01:59AM (#28178121)
      Virtualization came to life not to solve developers' woes, but to let firms squeeze even more servers into a single rack (this is where the money is made on VMs). You can run 8 or more virtual servers on one piece of metal. The performance is terrible, of course, but developers hardly ever use those resources; most of these servers are idling. Hosting companies are very happy with the results.
  • Interesting (Score:2, Funny)

    by kspn78 (1116833) on Monday June 01, 2009 @09:55PM (#28176681)
    I wonder if this would help me, I am running 2 VMWare servers on an older box and it is a little lethargic at the moment. If I could ever get to the story I might be able to find out :|
  • Back to the Future? (Score:5, Informative)

    by guruevi (827432) on Monday June 01, 2009 @09:55PM (#28176689) Homepage

    So we go back to where we started from: chroot and jails. What really is the benefit of extended virtualization? I haven't "embraced" it as I am supposed to do.

    I can see where it makes sense if you want to merge several servers that do absolutely nothing all day into a single machine but a decent migration plan will run all those services on a single 'non-virtual' server. Especially when those machines are getting loaded, the benefits of virtualization quickly break down and you'll have to pay for more capacity anyway.

    As far as high availability goes: again, low-cost HA doesn't work that well. I guess it's beneficial to management types that count the costs but don't see the benefit in leaving a few idle machines running.

    Then you have virtualized your whole rack of servers into a quarter-rack single blade solution and a SAN that costs about the same as just a rack of single servers, but you can't fill the rack because the density is too high. And then something like what recently happened at my place: the redundant SAN system stops communicating with the blades because of a driver issue and the whole thing comes crashing down.

    • by hpavc (129350) on Monday June 01, 2009 @10:20PM (#28176877)

      Comparing ESX and Zones? Seems like a horribly thought-out comparison.

    • by ckaminski (82854) on Monday June 01, 2009 @10:45PM (#28177079) Homepage
      Consolidate several lightly used, different services onto ONE server? Have you ever managed multiple applications in a heterogeneous environment? Consolidating applications causes operational complexity that is inappropriate in a lot of instances. While service isolation is easy on Unix platforms, it's not on Windows.
    • by wrench turner (725017) on Monday June 01, 2009 @11:13PM (#28177243)

      Running multiple services on one OS means that when you must reboot a server because of an OS bug or misconfiguration, all of the services are brought down... same if it crashes or hangs. As compelling as that is, I've never used a hypervisor in 30 years on tens of thousands of servers.

      I do routinely use chroot jails on thousands of servers to isolate the application from the host OS. This way I do not need to re-qualify any tools when we implement an OS patch.

      Check it out: [] :-)

    • by Mr. Flibble (12943) on Monday June 01, 2009 @11:20PM (#28177277) Homepage

      So we go back to where we started from: chroot and jails. What really is the benefit of extended virtualization? I haven't "embraced" it as I am supposed to do.

      I can see where it makes sense if you want to merge several servers that do absolutely nothing all day into a single machine but a decent migration plan will run all those services on a single 'non-virtual' server. Especially when those machines are getting loaded, the benefits of virtualization quickly break down and you'll have to pay for more capacity anyway.

      This is exactly what VMware lists as best practice for virtualization. If a server is maxing out, it is not a good candidate to be virtualized. However, if you have a number of servers that are under-utilized, then the advantage of turning them into VMs becomes clear. VMware has a neat feature called Transparent Page Sharing, where pages of memory with identical contents across VMs running the same images are condensed down into a single page of memory on the ESX server. This means that your 10 (or more) Windows 2003 server images "share" the same sections of RAM, which frees up the "duplicate" RAM across those images. I have seen 20% of RAM saved by this; IIRC it can go above 40%.
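
      The idea behind content-based page sharing can be sketched in a few lines; this is a toy model for illustration, not VMware's implementation (which works on live page frames with copy-on-write):

```python
import hashlib

# Toy sketch of content-based page sharing: pages with identical
# content across VM memory images are stored once and refcounted.
PAGE = 4096  # bytes per page

def dedup_pages(vm_images):
    """Return (total pages mapped, distinct pages actually stored)."""
    store = {}
    total = 0
    for image in vm_images:
        for off in range(0, len(image), PAGE):
            page = image[off:off + PAGE]
            store.setdefault(hashlib.sha256(page).digest(), page)
            total += 1
    return total, len(store)

# Three "VMs" booted from the same image: 8 shared zero pages each,
# plus one page of VM-private data apiece.
base = bytes(8 * PAGE)
vms = [base + bytes([i]) * PAGE for i in range(1, 4)]
total, stored = dedup_pages(vms)
print(total, stored)  # 27 4 - most page frames can be shared
```

      Sharing the zero page and common OS pages across images booted from the same template is roughly where savings like the parent's 20% figure come from.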

      As far as high availability goes: again, low cost HA doesn't work that well. I guess it's beneficial to management types that count the costs of but don't see the benefit in leaving a few idle machines running.

      If you mean VMware HA, I find it works quite well; granted, the new version in vSphere (aka Virtual Center 4) is much better as it supports full redundancy.

      Then you have virtualized your whole rack of servers into a quarter rack single blade solution and a SAN that costs about the same than just a rack of single servers but you can't fill the rack because the density is too high. And like something that recently happened at my place: the redundant SAN system stops communicating with the blades because of a driver issue and the whole thing comes crashing down.

      You are assuming that people don't have this already. I have been to a number of data centers that have racks and racks of under-utilized machines that also have SAN storage. VMware consolidation is a way of consolidating the hardware you already have to run your ESX hosts. You use a program called VMware Converter to do P2V (Physical to Virtual) conversion of the real hardware machines into VMs, then you reclaim that hardware and install ESX on it, freeing up more resources. You don't always have to run out and buy new hardware!

      VMs are great when the hardware is under-utilized, I do not recommend VMs that max out, and neither does VMware.

    • by gdtau (1053676) on Monday June 01, 2009 @11:41PM (#28177431) Homepage

      "What really is the benefit of extended virtualization?"

      1) The ability to deploy a system image without deploying physical hardware. All those platforms you are meant to have, but don't: a build machine, an acceptance test machine, a pre-production test machine. And if you've done all the development and testing on a VM, then changing the machine from a VM to real hardware when it moves to production doesn't seem worth the risk.

      2) IT as a territorial dispute. You are the IT Director for a large enterprise. You want everything in good facilities, especially after the last time a cleaner unplugged the server that generates customer quotes, bringing revenue to a screaming halt. The owner of the quotes server will barely come at that. They certainly won't hand over sysadmin control. Their sysadmins like whitebox machines (the sysadmin's brother assembles them), but you'll never have parts on the shelf for those if one breaks. So get them to hand over a VM image, which you run on hardware of your choice, and which you can back up and restore for them.

      3) Single hardware image. No more getting a "revised" model server and finding that the driver your OS needs isn't available yet (or better still, won't ever be available for that OS, since the manufacturer really only supports new hardware in their forthcoming releases). And yeah, the server manufacturer has none of the previous model in stock.

      And of course there's minor stuff. Like being able to pull up a shiny clean enterprise image to replicate faults.

      You'll notice the lack of the word "silver bullet" above. Because virtualisation isn't. But it does have a useful role, so the naysayers aren't right either.

      I'm waiting for the realisation that merely combining images onto one physical machine does not do much to lower costs. For a directly-administered Windows OS the sysadmin's time was costing you more than the hardware. Now that the hardware is gone, can you really justify maybe $50k pa / 5 = $10k pa per image for sysadmin overhead? This is particularly a problem for point (2) above, as they are exactly the people likely to resist the rigorous automation needed to get sysadmin overhead per image down to an acceptable point (the best-practice point is about $100 per image -- the marginal cost of centrally-administered Linux servers. You'll notice that's some hundreds of times less than worst-practice sysadmin overhead).

      I'll also be a bit controversial and note that many sysadmins aren't doing themselves any favours here. How often do you read on Slashdot about time-consuming activities undertaken just to get a 5% improvement? If that 5% less runtime costs you 5% more sysadmin time, then you've already increased costs by a factor of ten.

  • by gbr (31010) on Monday June 01, 2009 @10:03PM (#28176731) Homepage

    We had performance issues with VMWare Server as well, especially in the disk I/O area. Converting to XenServer from Citrix solved the issues for us. We have great speed, can virtualize other OS's, and management is significantly better.

    • by xzvf (924443) on Monday June 01, 2009 @10:20PM (#28176891)
      XenServer is a great product and has many skilled developers. The "from Citrix" really gives me a queasy feeling. I know the products are solid and innovative, but so many people I hear out in the wild scream and run from Citrix. It might be behind the reason Ubuntu and Red Hat are backing KVM for virtualization, even to the point where RH bought Qumranet (the KVM "owners").
    • by 00dave99 (631424) on Monday June 01, 2009 @10:33PM (#28176997)
      XenServer has some good features, but you really can't compare VMware Server with XenServer. I have many customers that were impressed to be able to run 4 or 5 VMs on VMware Server. Once we got them moved to ESX on the same hardware, they couldn't believe that they were running 20 to 25 VMs. That being said, back-end disk configuration is the most important design consideration in any virtualization product.
      • by ckaminski (82854) on Monday June 01, 2009 @10:50PM (#28177097) Homepage
        I broke VMware ESX's upper CPU limit of 168 vCPUs with 104 running VMs, about 20 of which were under any significant load. 24GHz of CPU and 32 GB of memory. Pretty damn impressive, if you ask me.
        • by funwithBSD (245349) on Monday June 01, 2009 @11:52PM (#28177521)

          The company I work for has just about every midrange VM solution you can imagine: Citrix, ESX (separate Windows and Linux clusters), Solaris Containers, and AIX VIO/LPARs. That is more or less the order of stability, btw.

          Of all the solutions, AIX is the most consistent and stable. Cheap it is not, but in our case those are Blue Dollars. It does exactly what it is billed to do, day in, day out.

          Solaris 10 Zones are a royal bastard to patch, but otherwise perfectly stable. (Quite frankly, they are really just jails, just a little more configurable, I suppose.)

          ESX is stable enough, depending on hardware. Certainly easier than anything but perhaps the HMC.

          Citrix is the worst of the lot. But with so much invested, they don't want to do anything else.

    • by coffee_bouzu (1037062) on Tuesday June 02, 2009 @12:00AM (#28177559)
      Comparing XenServer and VMware Server is like comparing apples and oranges. While VMware Server is impressive, it is very much like an emulator: it runs on top of another operating system and has to work harder to execute privileged commands. VMware ESX is a bare-metal hypervisor that is better optimized for virtualization. While it is still doing "emulation", it is a much better comparison to XenServer than VMware Server is.

      TFA is slashdotted at the moment, so I don't know if VMware Server or ESX is being compared. Either way, the advantage of virtualization is not performance, it is flexibility. The raw performance may be less, but it gives you the ability to do things that just aren't possible with a physical machine. The ability to hot migrate from one physical machine to another in the event of hardware failure or replacement and the ability to have entire "machines" dedicated to single purposes without needing an equal number of physical machines are, at best, more difficult if not impossible when not using virtualization.

      Don't get me wrong, I'm no VMware fanboy. It certainly has its rough edges and is certainly not perfect. However, virtualization as a technology has undeniable benefits in certain situations. Absolute performance just isn't one of them right now.
  • Sounds about right (Score:5, Informative)

    by Just Some Guy (3352) on Monday June 01, 2009 @10:05PM (#28176755) Homepage Journal

    We use jails a lot at my work. We have a few pretty beefy "jail servers", and use FreeBSD's ezjail [] port to manage as many instances as we need. Need a new spamfilter, say? sudo ezjail-admin create and wait 3 seconds while it creates a brand new empty system. It uses FreeBSD's "nullfs" filesystem to mount a partially populated base system read-only, so your actual jail directory only contains the files that you'd install on top of a new system. This saves drive space, makes it trivially easy to upgrade the OS image on all jails at once (sudo ezjail-admin update -i), and saves RAM because each jail shares the same copy of all the base system's shared libraries.

    For extra fun, park each jail on its own ZFS filesystem and take a snapshot of the whole system before doing major upgrades. Want to migrate a jail onto a different server? Use zfs send and zfs receive to move the jail directory onto the other machine and start it.
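
    Sketched out, assuming the jails live on a hypothetical tank/jails dataset and the target host is reachable as newhost (both names are illustrative; the commands need root and real ZFS pools on both ends, so here they are only assembled and printed):

    ```shell
    #!/bin/sh
    # Illustrative names; adjust for your pool layout and hosts.
    JAIL=mail1
    DATASET="tank/jails/${JAIL}"
    DEST=newhost

    # Snapshot the jail's filesystem, stream it to the other machine,
    # then start the jail over there.
    SNAP_CMD="zfs snapshot ${DATASET}@migrate"
    SEND_CMD="zfs send ${DATASET}@migrate | ssh ${DEST} zfs receive ${DATASET}"
    START_CMD="ssh ${DEST} ezjail-admin start ${JAIL}"

    echo "$SNAP_CMD"
    echo "$SEND_CMD"
    echo "$START_CMD"
    ```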

    The regular FreeBSD 7.2 jails already support multiple IP addresses and any combination of IPv4 and IPv6, and each jail can have its own routing table. FreeBSD 8-CURRENT jails also get their own firewall if I understand correctly. You could conceivably have each jail server host its own firewall server that protects and NATs all of the other images on that host. Imagine one machine running 20 services, all totally isolated and each running on an IP not routable outside of the machine itself - with no performance penalty.

    Jails might not be the solution to every problem (you can't virtualize Windows this way, although quite a few Linux distros should run perfectly), but it's astoundingly good at the problems it does address. Now that I'm thoroughly spoiled, I'd never want to virtualize Unix any other way.

  • by Vip (11172) on Monday June 01, 2009 @10:11PM (#28176797)

    FTA, "Jails are a sort of lightweight virtualization technique available on the FreeBSD platform. They are like a chroot environment on steroids where not only the file system is isolated out but individual processes are confined to a virtual environment - like a virtual machine without the machine part."

    Not knowing much about FreeBSD and its complementary software, what is the difference between a FreeBSD jail and Solaris Zones?
    A Solaris Zone could also be described the same way.


  • by kriston (7886) on Monday June 01, 2009 @10:14PM (#28176825) Homepage Journal

    The new buzzword of Virtualization has reached all corners of the US Government IT realm. Blinded by the marketing hype of "consolidation" and "power savings," agencies of the three-letter variety are falling over themselves awarding contracts to "virtualize" the infrastructure. Cross-domain security be damned, VMWare and Microsoft SoftGrid Hyper-V Softricity Whatevers will solve all their problems and help us go green at the very same time, for every application, in every environment, for no reason.

    This is the recovery from the client-server binge-and-purge of the 1990s.

    Here we go again.

  • by gmuslera (3436) on Monday June 01, 2009 @10:15PM (#28176841) Homepage Journal
    If you really need all the performance you can get for a service, don't virtualize it, or at least check that what you can get is enough. Virtualization has a lot of advantages, but it doesn't give you the full resources of the real machine it is running on (and how much you lose depends on the kind of virtualization you use, but it will never be everything). Maybe the 10x number is VMware's fault, or just a reasonable consequence of how it does virtualization (taking disk I/O performance into account could explain a good percent of that number).
  • Solaris Zones also (Score:4, Informative)

    by ltmon (729486) on Monday June 01, 2009 @10:16PM (#28176853)

    Zones are the same concept, with the same benefit.

    An added advantage Solaris zones have is flavoured zones: Make a Solaris 9 zone on a Solaris 10 host, a Linux zone on a Solaris 10 host and soon a Solaris 10 zone on an OpenSolaris host.

    This has turned out to be much more stable, easier, and simply more efficient than our VMware servers, which we now keep only for Windows and other random OSes.

    • by Anonymous Coward on Monday June 01, 2009 @11:24PM (#28177299)

      Zones are just the operating system partitioned, so it doesn't make sense to run linux in a zone. You can however, run a linux branded zone, which emulates a linux environment, but it's not the same as running linux in a zone. It's running linux apps in solaris.

      LDOMS are hardware virtualization, so you can run Linux in them. Only some servers are supported, though.

      Just thought I'd better clarify.

  • by diamondsw (685967) on Monday June 01, 2009 @10:20PM (#28176889)

    Amazing! Not running several additional copies of an operating system with all of the needless overhead involved is faster! Who would have guessed?

    Sometimes a virtual machine is far more "solution" than you need. If you really want the same OS with lots of separated services and resource management... then run a single copy of the OS and implement some resource management. Jails are just one example - I find Solaris Containers to be much more elegant. Of course, then you have to be running Solaris...

  • So I would love to RTFA to make sure about this, but their high-performance web servers running on FreeBSD jails are down, so I can't...

    But here's what I do know. FreeBSD hasn't been a supported OS on ESX Server until vSphere came out less than two weeks ago. That means that either:
    A) They were running on the Hosted VMware Server product, whose performance is NOT that impressive (it is a Hosted Virtualization product, not a true Hypervisor)
    or B) They were running the unsupported OS on ESX Server, which means there was no VMware Tools available. The drivers included in the Tools package vastly improve things like storage and network performance, which means no wonder their performance stunk.

    But moreover, Jails (and other OS-virtualization schemes) are different tools entirely - comparing them to VMware is an apples-to-oranges comparison. Parallels Virtuozzo would be a much more apt comparison.

    OS-Virtualization has some performance advantages, for sure. But do you want to run Windows and Linux on the same physical server? Sorry, no luck there, you're virtualizing the OS, not virtual machines. Do you want some of the features like live migration, high availability, and now features like Fault Tolerance? Those don't exist yet. I'm sure they will one day, but today they don't, or at least not with the same level of support that VMware has (or Citrix, Oracle or MS).

    If you're a company that's trying to do web hosting, or run lots of very very similar systems that do the same, performance-centric task, then yes! OS Virtualization is for you! If you're like 95% of datacenters out there that have mixed workloads, mixed OS versions, and require deep features that are provided from a real system-level virtualization platform, use those.

    Disclosure: I work for a VMware and Microsoft reseller, but I also run Parallels Virtuozzo in our lab, where it does an excellent job of OS-Virtualization on Itanium for multiple SQL servers...

  • by pyite69 (463042) on Monday June 01, 2009 @10:24PM (#28176919)

    I would expect that the BSD product is similar in design - basically chroot on steroids.

  • by mrbill (4993) <> on Monday June 01, 2009 @10:26PM (#28176937) Homepage
    The I/O performance on the free "VMWare Server" product *sucks* - because it's running on top of a host OS, and not on the bare metal.
    I'm not surprised that FreeBSD Jails had better performance. VMWare Server is great for test environments and such, but I wouldn't ever use it in production.
    It's not at all near the same class of product as the VMWare Infrastructure stuff (ESX, ESXi, etc.)

    VMWare offers VMWare ESXi as a free download, and I/O performance under it would have been orders of magnitude better.
    However, it does have the drawback of requiring a Windows machine (or a Windows VM) to run the VMWare Infrastructure management client.
    • by zonky (1153039) on Monday June 01, 2009 @10:37PM (#28177025)
      ESXi does also have many limitations around supported hardware. That said, there are some good resources around running ESXi on 'white box' hardware. []

    • by snookums (48954) on Monday June 01, 2009 @11:10PM (#28177223)

      There's overhead, but not 10x worse performance unless you're hitting the disk far more in the VM than you were in the native deployment.

      The "gotcha" is that VMWare Server will, by default, use file-backed memory for your VMs so that you can get in a situation where the VM is "thrashing", but neither the host nor guest operating system shows any swap activity. The tell-tale sign is that a vmstat on the host OS will show massive numbers of buffered input and output blocks (i.e. disk activity) when you're doing things in the VM which should not require this amount of disk troughput.

      A possible solution is:

      1. Move the backing file to tmpfs*
      2. Increase your mounted tmpfs to cover most of the host machine RAM (I'd say total RAM - 1 GB).
      3. Allocate RAM to your VMs in such a way that you are not over-committed (total of all VMs not more than tmpfs size set at step 2).

      *Take a look at the option mainMem.useNamedFile = "FALSE"
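
      The steps above can be sketched as a script. The RAM sizes are illustrative, the remount would need root (so the command is only printed), and mainMem.useNamedFile is the .vmx option the footnote mentions:

      ```shell
      #!/bin/sh
      # Step 2: size tmpfs at total host RAM minus 1 GiB (illustrative numbers).
      HOST_RAM_MIB=8192
      TMPFS_MIB=$((HOST_RAM_MIB - 1024))

      # Remount /tmp large enough to hold the VM backing files (needs root).
      echo "mount -o remount,size=${TMPFS_MIB}m /tmp"

      # Steps 1 and 3: move the backing file to tmpfs and keep the sum of
      # all VMs' memory sizes below TMPFS_MIB. Per the footnote, the .vmx
      # option is:
      echo 'mainMem.useNamedFile = "FALSE"'
      ```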

  • by QuoteMstr (55051) <> on Monday June 01, 2009 @10:27PM (#28176945)

    Well, in one case it does: when you're trying to run a different operating system simultaneously on the same machine. But in most "enterprise" scenarios, you just want to set up several isolated environments on the same machine, all running the same operating system. In that case, virtualization is absofuckinglutely insane.

    Operating systems have been multi-user for a long, long time now. The original use case for Unix involved several users sharing a large box. Embedded in the unix design is 30 years of experience in allowing multiple users to share a machine --- so why throw that away and virtualize the whole operating system anyway?

    Hypervisors have become more and more complex, and a plethora of APIs for virtualization-aware guests has appeared. We're reinventing the kernel-userland split, and for no good reason.

    Technically, virtualization is insane for a number of reasons:

    • Each guest needs its own kernel, so you need to allocate memory and disk space for all these kernels that are in fact identical
    • TLB flushes kill performance. Recent x86 CPUs address the problem to some degree, but it's still a problem.
    • A guest's filesystem is on a virtual block device, so it's hard to get at it without running some kind of fileserver on the guest
    • Memory management is an absolute clusterfuck. From the point of view of the host, each guest's memory is an opaque blob, and from the point of view of the guest, it has the machine to itself. This mutual myopia renders the usual page-cache algorithms absolutely useless. Each guest blithely performs memory management and caching on its own resulting in severely suboptimal decisions being made.

      In having to set aside memory for each guest, we're returning to the OS9 memory management model. Not only are we reinventing the wheel, but we're reinventing a square one covered in jelly.

    FreeBSD's jails make a whole lot of sense. They allow several users to have their own userland while running under the same kernel --- which vastly improves, well, pretty much everything. Linux's containers will eventually provide even better support.

    • If you are going to hire cheap MCSEs to manage all your systems, including the unix ones then it makes sense to be able to put those unix systems inside a little box on your screen with nice borders around it so you can easily see what connects to what.

      Saving money on hardware will just cost you kickbacks from the supplier anyway. There is no advantage in that.
    • by syousef (465911) on Monday June 01, 2009 @11:07PM (#28177203) Journal

      Virtualization DOES make sense, when you're trying to solve the right problem. Do not blame the tool for the incompetence of those using it. It's no good using a screwdriver to shovel dirt and then blaming the screwdriver.

      Virtualization is good for many things:
      - Low performance apps. Install once, run many copies
      - Excellent for multiple test environments where tests are not hardware dependent
      - Infrequently used environments, like dev environments, especially where the alternate solution is to provide physical access to multiple machines
      - Demos and teaching where multiple operating systems are required
      - Infrequently running small apps that won't run on your OS of choice

      Virtualization is NOT good for:
      - High performance applications
      - Performance test environments
      - Removing all dependence on physical hardware
      - Moving your entire business to

      Your specific concerns:
      # Each guest needs its own kernel, so you need to allocate memory and disk space for all these kernels that are in fact identical

      Actually this depends on your virtualization solution

      # TLB flushes kill performance. Recent x86 CPUs address the problem to some degree, but it's still a problem.

      So is hard disk access from multiple virtual operating systems contending for the same disk (unless you're going to have one disk per guest OS...even then are you going through one controller?) Resource contention is a trade-off. If all your systems are going to be running flat out simultaneously virtualization is a bad solution.

      # A guest's filesystem is on a virtual block device, so it's hard to get at it without running some kind of fileserver on the guest

      You can often mount the virtual disks in a HOST OS. No different to needing software to access multiple partitions. As long as the software is available, it's not as big an issue.

      # Memory management is an absolute clusterfuck. From the point of view of the host, each guest's memory is an opaque blob, and from the point of view of the guest, it has the machine to itself. This mutual myopia renders the usual page-cache algorithms absolutely useless. Each guest blithely performs memory management and caching on its own resulting in severely suboptimal decisions being made

      A lot of operating systems are becoming virtualization aware, and can be scheduled cooperatively to some degree. That doesn't mean your concern isn't valid, but there is hope that the problems will be reduced. However once again if all your virtual environments are running flat out, you're using virtualization for the wrong thing.

      • A lot of operating systems are becoming virtualization aware

        Which ends up being as complex as the kernel-userland boundary, so why not just use a kernel-userspace boundary in the first place?

    • by Salamander (33735) <> on Tuesday June 02, 2009 @08:01AM (#28180215) Homepage Journal

      We're reinventing the kernel-userland split, and for no good reason.

      Thank you for saying that. The purpose of a multi-tasking multi-user OS is to allow running multiple applications with full isolation from one another. If we need some other piece of software - like a VM hypervisor - to do that, then the OS has failed in its duty. But wait, some people say, it's not just about multiplexing hardware, it's about migration and HA and deploying system images easily. These are also facilities the OS should be providing. Again, if we need some other piece of software then the OS has failed.

      One could argue that we've evolved to a point where the functions of an OS have been separated into two layers. One layer takes care of multiplexing the hardware; the other takes care of providing an API and an environment for things like filesystems. Better still, you get to mix and match instances of each layer. OK, fine. Given the Linux community's traditional attitude toward layering (and microkernels, which this approach also resembles) it's a bit inconsistent, but fine. That interpretation does raise some interesting questions, though. People are putting a lot of thought into where drivers should live, and since some drivers are part of "multiplexing the hardware" then it would make sense for them to live in the hypervisor with a stub in the guest OS - just as is being done, I know. But what about the virtual-memory system? That's also a form of hardware multiplexing, arguably the most important. If virtualization is your primary means of isolating users and applications from one another, why not put practically all of the virtual-memory functionality into the hypervisor and run a faster, simpler single-address-space OS on top of that?

      If we're going to delegate operating-system functionality to hypervisors, let's at least think about the implications and develop a coherent model of how the two interact instead of the disorganized and piecemeal approaches we see now.

  • by BagOBones (574735) on Monday June 01, 2009 @10:29PM (#28176971)

    If you are separating similar work loads like web apps and databases you are probably better off running them within the same os and database server and separating them via security as the poster realized.

    However if you have a variety of services that do not do the same thing you can really benefit from separating them in virtual machines and have them share common hardware.

    Virtualization also gives you some amazing fault tolerance options that are consistent across different OS and services that are much easier to manage than individual OS and service clustering options.

    • by Animats (122034) on Monday June 01, 2009 @10:37PM (#28177027) Homepage

      That just gets you a cached version of a page with a link to the actual article. The actual article [] is more useful.

      • by Qubit (100461) on Monday June 01, 2009 @10:55PM (#28177127) Homepage Journal

        Hmmm... the coral cache is snappy for me; the original link is not even loading yet.

        Here's the start of the article, in any case:

        Jun 01. Virtual Failure: YippieMove switches from VMware to FreeBSD Jails

        Our email transfer service YippieMove is essentially software as a service. The customer pays us to run some custom software on fast machines with a lot of bandwidth. We initially picked VMware virtualization technology for our back-end deployment because we desired to isolate individual runs, to simplify maintenance and to make scaling dead easy. VMware was ultimately proven to be the wrong choice for these requirements.

        Ever since the launch over a year ago we used VMware Server 1 for instantiating the YippieMove back-end software. For that year performance was not a huge concern because there were many other things we were prioritizing on for YippieMove '09. Then, towards the end of development we began doing performance work. We switched from a data storage model best described as "a huge pile of files" to a much cleaner sqlite3 design. The reason for this was technical: the email mover process opened so many files at the same time that we'd hit various limits on simultaneously open file descriptors. While running sqlite over NFS posed its own set of challenges, they were not as insurmountable as juggling hundreds of thousands of files in a single folder. ...

  • by Gothmolly (148874) on Monday June 01, 2009 @10:44PM (#28177075)

    I work for $LARGE_US_BANK in the performance and capacity management group, and we constantly see the business side of the house buy servers that end up running at 10-15% utilization. Why? Lots of reasons - the vendor said so, they want "redundancy", they want "failover" and they want "to make sure there's enough". Given the load, if you lose 10-20% overhead due to VM, who cares ?

    • by Kjella (173770) on Monday June 01, 2009 @11:35PM (#28177393) Homepage

      It's CYA in practice. Here's the usual chain of events:

      1. Business makes requirements to vendor: We want X capacity/response time/whatever
      2. Vendor to business side: Well, what will you do with it?
      3. Business makes requirements to vendor: Maybe A, maybe B with maybe N or N^2 users
      4. Vendor to business side: That was a lot of maybes. But with $CONFIG you'll be sure

      Particularly if the required hardware upgrades aren't part of the negotiations with the vendor, then it's almost a certainty.

    • by JakFrost (139885) on Tuesday June 02, 2009 @12:55AM (#28177831)

      I've worked for many of the Fortune 10 (DB, GS, CS, JP, MS, etc.) banks on the Windows server side and they are all going full steam ahead for virtualization with VMWare or Xen exactly because they have been buying way too much hardware for their backend applications for the last decade. The utilization on all of these servers hardly hits 5-10% and the vast majority of time these systems sit idle. The standard has always been rackmount servers with multiple processor/core systems with gigs of memory all sitting around being unused, mostly Compaq/HP systems with IBM xSeries servers and some Dells thrown in for good measure.

      The reason for this over-capitalization is that business-line departments are required to choose from only four or five standard server models for their backend applications. These standard configs usually come in 1U, 2U, 3U, and 4U rackmount sizes with nearly maxed-out specs for each size, and the size of the server determines the performance you get. You have a light web server, you get a blade or a pizza box; you have a light backend application, you get a 2U server with two processors or four cores, even though you might have a single-threaded app that was ported from MS-DOS a few years ago; you want something beefier, you get the 4U server with 4 processors, 8 cores and 16 GB of RAM, even though your application only runs two threads and allocates 512 MB of RAM maximum. I've monitored thousands of these servers for performance through IBM Director, Insight Manager, and NetIQ, and 99% of the time they sit at 2% processor and memory utilization; only once in a while does a core or two get hit with a low-to-mid workload for a short time before going back to doing nothing. These were the Production servers.

      Now consider the Development servers, where a bank has 500 servers dedicated to developer usage with the same specs as the production boxes, and at any one time maybe a few of those servers get used for testing while the other few hundred sit around doing nothing for weeks at a time while the developers get a new release ready. The first systems to get virtualized were the development servers, because they were so underutilized that leaving them on dedicated hardware was unthinkable.

      (Off topic: Funny and sad story from my days in 2007 at a top bank (CS), helping with VMWare virtualization onto HP Blades and 3Par SAN storage for ~500 development servers. The 3Par hardware and firmware was in such a shitty state that it crashed the entire SAN frame multiple times, taking down hundreds of development servers at once under heavy I/O load. 3Par would play the blame game against the other vendors, accusing Brocade of faulty SAN fibre switches, Emulex of faulty hardware and drivers, HP and IBM of faulty blade servers, and the Windows admins of incompetence, only to find that it was their own SAN interface firmware causing the crashes.)

      VMWare solves the problem of running commercial backend applications on Windows servers, since each application is so specific about OS version, service pack, hotfixes, patches, and configuration that the standard is always one server to one application, and nobody ever wanted to mix them because any issue would always be blamed on the other vendor's application on the server. There were always talks from management about providing businesses with scalable capacity instead of single servers with a single OS. That was five years ago, and people wanted to use Windows capacity management features, but they were a joke since they were based on per-process usage quotas, and of course nobody wanted to mix two different apps on the same box, so those talks went nowhere.

      That is until VMWare showed up and showed a real way to isolate each OS instance from another while it also allowed us to configure capacity requirements on each instance while letting us package all those shitty single threaded backend applications each running on a separate server onto on

  • Well, duh! (Score:2, Flamebait)

    by (142825) on Monday June 01, 2009 @11:00PM (#28177157) Homepage

    You ask that the OS be put into a virtual machine; would you not expect a big performance hit? It is only common sense to anyone with any basic computer knowledge. You are adding another layer between the hardware and the program. What do you think would happen?

  • by bertok (226922) on Monday June 01, 2009 @11:33PM (#28177375)

    I've seen similar hideous slowdowns on ESX before for database workloads, and it's not VMware's fault.

    This kind of slowdown is almost always because of badly written chatty applications that use the database one-row-at-a-time, instead of simply executing a query.

    I once benchmarked a Microsoft reporting tool on bare metal compared to ESX, and it ran 3x slower on ESX. The fault was that it was reading a 10M row database one row at a time, and performing a table join in the client VB code instead of the server. I tried running the exact same query as a pure T-SQL join, and it was something like 1000x faster - except now the ESX box was only 5% slower instead of 3x slower.

    The issue is that ESX has a small overhead for switching between VMs, and also a small overhead for establishing a TCP connection. The throughput is good, but it does add a few hundred microseconds of latency, all up. You get similar latency if your physical servers are in a datacenter environment and are separated by a couple of switches or a firewall. If you can't handle sub-millisecond latencies, it's time to revisit your application architecture!

  • by FranTaylor (164577) on Tuesday June 02, 2009 @12:29AM (#28177721)

    You might as well have said,

    "Our earth moving business took a big jump in productivity when we switched from ice-cream scoops to backhoes".

  • by Bigmilt8 (843256) on Tuesday June 02, 2009 @08:38AM (#28180563)
    You wasted your time. I'm a DBA with a programming background. Virtualization is not suitable for mid- to large-scale database environments. Database software is designed to handle all I/O and memory issues internally; the virtualization software just gets in the way.
  • by IBitOBear (410965) on Wednesday June 03, 2009 @04:03AM (#28192917) Homepage Journal

    Okay, I have been through this at work several times recently. There are two major slow-downs in the default (but reasonably bullet-proof) VMWare machines running on a Linux _host_.

    1) If you are doing NAT attachment, _don't_ use the vmware NAT daemon. It pulls each packet into userspace before deciding how to forward/NAT it. So don't use the default NAT device (e.g. vmnet8). Add a new custom-made "host only" adapter (e.g. vmnet9 or higher), and then use regular firewalling (ip_forward = 1 and iptables rules) so that the packets just pass through the Linux kernel and netfilter once. (You can use vmnet1 in a pinch but blarg! 8-)

    1a) If you want/need to use the default NAT engine (e.g. vmnet8), then put the NAT daemon into a real-time scheduling group with "chrt --rr --pid 22 $(pgrep vmnet-natd)". Not quite as good as staying in the kernel all the way to your physical media.

    1b) If you do item one, don't use vmware-dhcpd; configure your regular dhcpd/dhcpd3 daemon instead, because it will more easily integrate with your system as a whole.

    (in other words, vmware-dhcpd is not magic, and vmware-natd is _super_ expensive)
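
    A sketch of that setup, with made-up subnet and interface names (the sysctl/iptables commands need root, so this only assembles and prints them):

    ```shell
    #!/bin/sh
    # Illustrative: a vmnet9 host-only subnet NATed by the kernel's
    # netfilter instead of the userspace vmnet-natd daemon.
    VMNET_SUBNET=192.168.99.0/24
    WAN_IF=eth0

    FORWARD_CMD="sysctl -w net.ipv4.ip_forward=1"
    NAT_RULE="iptables -t nat -A POSTROUTING -s ${VMNET_SUBNET} -o ${WAN_IF} -j MASQUERADE"

    echo "$FORWARD_CMD"
    echo "$NAT_RULE"
    ```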

    2) VMWare makes a /path/to/your/machine/machine_name.vmem file, which is a memory-mapped file that represents the RAM in the guest. This is like having the whole VM living forever in your swap space. It's great if you want to take a lot of snapshots and want to be more restart/crash/sleep safe. It _sucks_ for performance. Set mainMem.useNamedFile = "FALSE" in your .vmx files (you have to edit the files by hand). This will move the .vmem file into your temp directory and unlink it so it's anonymous and self-deleting. It slows down snapshots but...

    2a) Make _SURE_ your /tmp file system is a mounted tmpfs with a size=whatever mount option that will let the tmpfs grow to at least 10% larger than the (sum of the) memory size of (all of the) virtual machine(s) you are going to run at once. This will cause the "backing" of the virtual machine RAM to be actual RAM, and you will get rational machine RAM speed.

    2b) If you want/need to, there is a tmpDirectory=/wherever directive to say where those files go. It works in tandem with useNamedFile=FALSE, and you can set up dedicated tmpfs mounts to back the machines specially/separately.

    2c) If you want/need the backing, or have a "better" drive you want to use for real backing, you can use the above in variations to move this performance limiter onto a different spindle than your .vmdk (virtual disk) files.

    3) No matter what, your virtual memory file counts against your overcommit_ratio (/proc/sys/vm/overcommit_ratio) compared to your RAM. It defaults to 50% for _all_ the accounted facilities system-wide. If you have 4 gigs of RAM and try to run a 3G VM while leaving your overcommit_ratio at 50, you will suffer some unintended consequences in terms of paging/swapping pressure. Adjust your ratio to 75 or 80 percent if your total VM memory size is 60 to 65 percent of real RAM. _DON'T_ set this number to more than 85% unless you have experimented with system stability at higher numbers. It can be _quite_ surprising.
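
    As arithmetic, with illustrative numbers: if the VMs' total memory works out to roughly 60-65% of physical RAM, bump the ratio as described (writing the sysctl needs root, so the command is only printed here):

    ```shell
    #!/bin/sh
    # Illustrative sizes: 8 GiB host, 5 GiB total committed to VMs.
    HOST_RAM_MIB=8192
    VM_TOTAL_MIB=5120

    PCT=$((VM_TOTAL_MIB * 100 / HOST_RAM_MIB))   # VM memory as % of real RAM
    echo "VM memory is ${PCT}% of RAM"

    if [ "$PCT" -ge 60 ] && [ "$PCT" -le 65 ]; then
        # In this range, raise overcommit_ratio to ~75-80 (never past ~85
        # without testing stability first).
        echo "echo 80 > /proc/sys/vm/overcommit_ratio"
    fi
    ```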

    Anyway, that's the three things (in many parts) you need to know to make VMWare work its best on your linux host OS. It doesn't matter what the Guest OS is, always consider the above.

    Disclaimer: I don't work for VMWare etc, this is all practical knowledge and trial-n-error gained knowledge. I offer no warranty, but it will work...
