Showing posts from April, 2009

Boosting server utilization 100% by accelerated VM migration (MemoryMotion™)

Recently I guest-wrote an article about a technology I've been researching for some time, called MemoryMotion™. By providing a distributed memory disambiguation fabric, it greatly accelerates VM live-migration time. Beyond the obvious benefits, ultra-quick migration unlocks a healthy chunk of additional server utilization that is currently inaccessible due to the slow migration times of current hypervisors.

It's worth stepping back for a moment and asking: why can we only run a physical server at, say, a nominal 40% (non-VDI) or 60% (VDI) maximum load? Intuitively, one might answer that it's due to workload spikes: we provision conservatively enough to absorb spikes at the micro level, and fall back on distributed scheduling across servers to balance load at the macro level. But that describes how we cope with the problem, rather than identifying the problem itself. The truth is that a good chunk o…
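To make the headroom intuition concrete, here is a rough back-of-the-envelope sketch. The numbers and the linear spike model are made up for illustration, not taken from the article: the point is simply that the slower a VM can be evacuated, the more spare capacity a server must permanently hold back.

```python
# Back-of-the-envelope sketch (hypothetical numbers, not from the article):
# if load can grow while a VM is being migrated away, the server must reserve
# enough capacity to absorb that growth for the full migration window.

def required_headroom(spike_rate_per_min: float, migration_minutes: float) -> float:
    """Fraction of capacity reserved to absorb load growth during migration."""
    return spike_rate_per_min * migration_minutes

def max_safe_utilization(spike_rate_per_min: float, migration_minutes: float) -> float:
    """Utilization ceiling left after reserving migration headroom."""
    return max(0.0, 1.0 - required_headroom(spike_rate_per_min, migration_minutes))

# Slow migration (10 minutes) with load growing 5%/min forces ~50% headroom:
print(max_safe_utilization(0.05, 10))   # roughly 0.5
# Near-instant migration (30 seconds) leaves most of the capacity usable:
print(max_safe_utilization(0.05, 0.5))  # roughly 0.975
```

Under this toy model, cutting migration time is worth exactly as much utilization as it frees in headroom, which is the claim the post is making.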

Inflows and outflows of US cities using a U-Haul barometer

Expanding on the U-Haul metric idea in the article "Fleeing Silicon Valley", I gathered costs of moving a household via U-Haul throughout a matrix of cities in the US. The thesis is that moving truck rentals will fairly accurately reflect the supply/demand equation, likely on a near real-time basis, giving us a current indicator of where the inflows and outflows are. This is interesting for a whole host of investment (and personal) themes, such as which direction commercial and residential real estate values will trend in various areas.

I picked 10 cities that represent top startup locations in the US (a personal bias). To net out differences in travel distance, and to normalize values so that one can quickly compare and spot trends, I gathered prices for one-way moves in both directions between every pair of the 10 cities, then divided the price of source-to-destination by the price of destination-to-source. What remains are ratios which show which way people are likely mo…
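The ratio computation described above can be sketched in a few lines. The quotes below are hypothetical placeholders, not the actual U-Haul prices gathered for the article:

```python
# Sketch of the directional-ratio metric, using made-up U-Haul quotes (USD).
one_way_price = {
    ("San Francisco", "Austin"): 1750,  # hypothetical quote
    ("Austin", "San Francisco"): 650,   # hypothetical quote
}

def flow_ratio(src: str, dst: str) -> float:
    """Price of src->dst divided by dst->src.

    A ratio well above 1 suggests rental trucks pile up at the destination,
    i.e. net outflow from src toward dst.
    """
    return one_way_price[(src, dst)] / one_way_price[(dst, src)]

print(round(flow_ratio("San Francisco", "Austin"), 2))  # 2.69
```

Because each ratio divides two quotes for the same route, distance and truck size cancel out, which is what makes the 10×10 matrix comparable across city pairs.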

Portable Linux future using LLVM

Imagine a single Linux distribution that adapts to whatever hardware you run it on. When run on an Atom netbook, all the software shapes and optimizes itself to the feature set and processor characteristics of that specific version of Atom. Want to run it as a VM on a new Nehalem-based server? No problem, it re-shapes to fit the Nehalem (Core i7) architecture. And here, when I say "re-shape", I don't mean at the GUI level. Rather, I mean the software is effectively re-targeted for your processor's architecture, as if it had been re-compiled.

Today, Linux vendors tend to choose at compile time which processor generation (or set of generations), and thus which features, to target for the variety of apps included in the distro. This forces least-common-denominator decisions, based on the processor and compiler features known at the time a given distro version is created. In other words, the decisions are made for you by the Linux distributor, and you're stuck…
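The mechanics of this idea are already visible in the LLVM toolchain: compile once to architecture-neutral bitcode, then lower that bitcode to native code tuned for whatever machine it lands on. A rough command-line sketch (file names are illustrative):

```shell
# Compile C source once to LLVM bitcode, the form a distro could ship.
clang -O2 -emit-llvm -c app.c -o app.bc

# On (or for) each target machine, lower the same bitcode to native code
# tuned for the local CPU, instead of a least-common-denominator target.
llc -mcpu=atom   app.bc -o app-atom.s     # tuned for an Atom netbook
llc -mcpu=corei7 app.bc -o app-nehalem.s  # tuned for a Nehalem (Core i7) server
```

One caveat worth noting: bitcode is not fully target-neutral in practice (ABI details and type sizes get baked in at the `clang` step), so a truly portable distro would need more machinery than this, but the per-machine re-targeting step itself looks exactly like the `llc` lines above.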