Once again, I felt like a chemist or a cook in the kitchen when reading the publication “Algorithm Design for Performance Aware VM Consolidation,” written by Alan Roytman, Aman Kansal, Sriram Govindan, Jie Liu, and Suman Nath over at MSR. The conclusion is that optimally aligning VMs based on predicted workloads and reducing contention among VMs on shared infrastructure allows their “proposed system [to] realiz[e] over 30% savings in energy costs and up to 52% reduction in performance degradation.”
This assumes you are looking to be efficient with compute resources that leave room for slicing a system's processing power; i.e., an active VM on a system may only use a fraction of the processing power available on the node itself. So the plan is to pack enough VMs together to reduce idle time while decreasing contention.
After all this, it goes to show that perhaps splitting the processor physically would make it easier to manage; i.e., use smaller processors that consume less power and are part of a brokered model to handle specific work. Blade chassis built on ARM or Atom chips make an excellent choice for running smaller VMs on dedicated processors, versus having multiple VMs share compute cycles on a larger multi-core processor.
How do you figure it out? Simple: start with the formula on page 9 of the MSR paper cited above. It's all part of the consolidation goal.