If you have a ton of VMs (virtual machines) sprawled across the globe, how do you make them all fast? That’s the question floating around the show floor at VMworld in SFO this year. You see, it’s not enough just to have fast disks, connections and CPUs (central processing units); there’s a lot more involved in a real-world deployment.

This comes in the form of intelligent caching, miniaturized deployments in containers with things like Docker, backend de-duplication and metadata-based transactions for backup, security and transport optimization, and harvesting a few GPU (graphics processing unit) cycles to make video happy.

Of the vast bulk of files typically associated with a given VM, comparatively few actually change much over a given timeframe. So the backend management sitting behind the scenes of the VM guests has taken a more active role in deciding what specific actions happen to individual files on those guests.
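
To make that concrete, here’s a minimal sketch in Python of the kind of bookkeeping such a backend does (the names, like changed_files and previous_index, are made up for illustration): fingerprint each guest file and only bother with the ones whose contents actually differ from last time.

```python
import hashlib
import os

def fingerprint(path, chunk_size=1 << 20):
    """Hash a file's contents so we can tell whether it really changed."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def changed_files(root, previous_index):
    """Walk a guest's file tree and return only the files whose contents
    differ from the last recorded index ({path: sha256})."""
    current, changed = {}, []
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            digest = fingerprint(path)
            current[path] = digest
            if previous_index.get(path) != digest:
                changed.append(path)
    return changed, current

# Typically only a small fraction of a guest's files show up in `changed`,
# so the backend only has to act on those.
```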

This is no trivial task, because it essentially pushes the optimization up into a meta-layer of information; if you have integrity issues with the metadata anywhere along that chain, big things can break in spectacular fashion.
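
One common way to guard against that, sketched below purely as an illustration (nothing vendor-specific here), is to chain a checksum through the metadata records so a corrupted entry anywhere along the line gets caught before anything downstream trusts it.

```python
import hashlib
import json

def seal(records):
    """Chain metadata records together with hashes so corruption anywhere
    along the chain is detectable (each entry commits to the one before it)."""
    prev, sealed = "", []
    for record in records:
        payload = json.dumps(record, sort_keys=True)
        digest = hashlib.sha256((prev + payload).encode()).hexdigest()
        sealed.append({"record": record, "digest": digest})
        prev = digest
    return sealed

def verify(sealed):
    """Recompute the chain and flag the first record that doesn't match."""
    prev = ""
    for i, entry in enumerate(sealed):
        payload = json.dumps(entry["record"], sort_keys=True)
        digest = hashlib.sha256((prev + payload).encode()).hexdigest()
        if digest != entry["digest"]:
            return i  # chain is broken here; nothing after it can be trusted
        prev = digest
    return None
```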

Stack on top of that the task of doing near-real-time de-duplicated backups of some of those files, and of making the metadata visible across your virtualized fabric so you can transact seamlessly, and you start to understand the scope of the problem.
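
For a feel of how the de-duplicated part works, here’s a toy content-addressed store in Python (DedupStore is an invented name, not any vendor’s product): every unique block is kept exactly once, and the per-file metadata is just an ordered list of block hashes.

```python
import hashlib

class DedupStore:
    """Toy content-addressed store: identical blocks across every VM are
    kept exactly once, and per-file metadata is just a list of block hashes."""

    def __init__(self, block_size=4096):
        self.block_size = block_size
        self.blocks = {}      # sha256 -> raw bytes (stored once)
        self.metadata = {}    # file name -> ordered list of block hashes

    def backup(self, name, data):
        hashes = []
        for i in range(0, len(data), self.block_size):
            block = data[i:i + self.block_size]
            digest = hashlib.sha256(block).hexdigest()
            self.blocks.setdefault(digest, block)   # duplicate blocks cost nothing
            hashes.append(digest)
        self.metadata[name] = hashes

    def restore(self, name):
        return b"".join(self.blocks[h] for h in self.metadata[name])

store = DedupStore()
store.backup("vm1/pagefile.sys", b"\x00" * 16384)
store.backup("vm2/pagefile.sys", b"\x00" * 16384)  # adds metadata, no new blocks
assert store.restore("vm1/pagefile.sys") == store.restore("vm2/pagefile.sys")
print(len(store.blocks), "unique blocks stored")   # 1
```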

Luckily, there is a sort of oozing together of smaller, more efficient containerized VM instances; more robust core architectures with much faster network interfaces; and a whole ton of SSDs and caching to make it all appear fast and seamless.

Some vendors even had hybrid storage to tap the speed of an SSD (solid-state drive) on the frontend for rapid input/output, with big RAIDs (redundant arrays of independent disks) or JBODs (just a bunch of disks) sitting on the backend, sometimes clustered together with ZFS or other file systems that claim to be more abuse-proof and resilient.
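
The read path on those hybrid boxes roughly boils down to the sketch below, with the SSD tier stood in for by an in-memory LRU cache and the spinning backend by a plain read function; this is a toy model, not any particular vendor’s design.

```python
from collections import OrderedDict

class TieredStore:
    """Illustrative hybrid layout: a small, fast front tier (think SSD) in
    front of a big, slow back tier (think RAID/JBOD), with LRU eviction."""

    def __init__(self, backend_read, cache_blocks=1024):
        self.backend_read = backend_read          # function: block_id -> bytes
        self.cache_blocks = cache_blocks
        self.cache = OrderedDict()                # stands in for the SSD tier

    def read(self, block_id):
        if block_id in self.cache:                # fast path: hit the SSD tier
            self.cache.move_to_end(block_id)
            return self.cache[block_id]
        data = self.backend_read(block_id)        # slow path: go to the spindles
        self.cache[block_id] = data
        if len(self.cache) > self.cache_blocks:   # evict the coldest block
            self.cache.popitem(last=False)
        return data
```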

The nice thing is that if you have a pile of metadata floating around, you can point your security scanning at that pile rather than hitting all the endpoints individually, freeing up CPU, memory and hardware for other tasks and cutting out redundant scanning across the enterprise.
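
Here’s a rough sketch of that idea, assuming you already have the block-hash metadata from the dedup layer lying around (the function and parameter names are invented): scan each unique hash once, then map any hits back to the endpoints and files that reference it.

```python
def scan_via_metadata(endpoints, scan_block):
    """Scan each unique block hash once instead of rescanning every endpoint.
    `endpoints` maps endpoint name -> {path: [block hashes]}, the same
    metadata the dedup layer already keeps; `scan_block` is your scanner."""
    verdicts = {}                                   # hash -> "clean" / "suspicious"
    findings = []
    for endpoint, files in endpoints.items():
        for path, hashes in files.items():
            for digest in hashes:
                if digest not in verdicts:          # only new content is scanned
                    verdicts[digest] = scan_block(digest)
                if verdicts[digest] == "suspicious":
                    findings.append((endpoint, path, digest))
    return findings
```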

Also, with SDN (software-defined networking), you can ooze your network security nodes around to better digital intersections, and if you change your mind (or your workload changes), you can move them back relatively painlessly.
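
Real SDN controllers each have their own APIs, so treat the following as nothing more than a toy model of the idea: the fabric keeps a path per flow, and the controller splices a security node into (or out of) that path without touching the endpoints.

```python
class ToyFabric:
    """Crude model of an SDN fabric: flows are steered through a chain of
    hops, and the controller can splice a scanner in or out of any path
    without touching the endpoints."""

    def __init__(self):
        self.paths = {}   # (src, dst) -> list of hops

    def set_path(self, src, dst, hops):
        self.paths[(src, dst)] = list(hops)

    def insert_service(self, src, dst, service, after_hop):
        hops = self.paths[(src, dst)]
        hops.insert(hops.index(after_hop) + 1, service)

    def remove_service(self, src, dst, service):
        self.paths[(src, dst)].remove(service)

fabric = ToyFabric()
fabric.set_path("web-vm", "db-vm", ["tor-1", "spine-3", "tor-7"])
fabric.insert_service("web-vm", "db-vm", "ids-node", after_hop="spine-3")
print(fabric.paths[("web-vm", "db-vm")])   # the scanner now sits at a busier intersection
fabric.remove_service("web-vm", "db-vm", "ids-node")  # and moves back out just as easily
```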

So then, a fresh batch of vendors at the show has trotted out systems that monitor this real-time performance across the whole fabric, and they seem to be picking up steam year over year (or getting acquired).

And with the dropping prices of 10G (and sometimes 40G) interfaces and fat fiber pipes, it’s all sort of getting better. That’s not to say there aren’t still growing pains, but if you get it right, you should be able to push a few buttons, deploy VMs wherever you want, and have them magically stand a better chance of just working, so your support team’s phones and ticket system don’t blow up, or at least blow up a little less.

If you happen to be pushing out large numbers of desktop clients, you have another problem: 3D graphics. Windows 7 itself often hogs a lot of video performance just to make things look nice. There used to be only pretty rudimentary tools for carving up the host’s GPU power; now, with some NVIDIA tools, you can assign different chunks of a GPU to a given VM, speeding up the client experience without forklifting in new bare metal to do it. Very nice.
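
The actual NVIDIA tooling exposes this through its own profiles and hypervisor hooks, so the sketch below is only a toy model of the slicing idea, with made-up names: carve one card’s framebuffer into fixed-size chunks and hand them out to desktop VMs until the card is full.

```python
class ToyGpu:
    """Hypothetical model of GPU partitioning for VDI: a physical card's
    framebuffer is carved into fixed-size slices and handed to desktop VMs,
    instead of buying a dedicated card (or new bare metal) per desktop."""

    def __init__(self, name, framebuffer_mb):
        self.name = name
        self.free_mb = framebuffer_mb
        self.assignments = {}   # vm name -> slice size in MB

    def assign(self, vm, slice_mb):
        if slice_mb > self.free_mb:
            raise RuntimeError(f"{self.name}: not enough framebuffer left for {vm}")
        self.free_mb -= slice_mb
        self.assignments[vm] = slice_mb

gpu = ToyGpu("gpu0", framebuffer_mb=8192)
for desktop in ("win7-001", "win7-002", "win7-003", "win7-004"):
    gpu.assign(desktop, slice_mb=1024)     # four desktops share one card
print(gpu.assignments, gpu.free_mb, "MB left for more guests")
```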

In short, the VM world is getting better, slowly, but your clients should be happier with the right constellation of tools to make their experience snappy.