And of course, as soon as one contemplates serving Android sessions from a server farm, virtualization springs to mind. While one could put each Android session in its own VM, Android is ripe for an application style of virtualization, having only one kernel and multiple application group boundaries. One can achieve much higher consolidation ratios that way. Whichever virtualization style one chooses, one can then imagine that the Android sessions are not necessarily constrained to any one company's private datacenter/cloud, but could also be served from a public cloud. If a public cloud provider can put sessions close enough to a given user's current location (networking latency-wise), this proposition gets really interesting. For one, because Android could work its way into the consumer and enterprise VDI spaces. And two, because Google owns a lot of datacenters and could potentially go beyond the OS/application stack space, and into owning the execution of user sessions as well as maintaining all their data. This would likely be a recurring-revenue (rental) type of service, and open the door to some premium options such as backups, latency/bandwidth QoS, execution locality zones, etc. Kind of the Android desktop version of Amazon's EC2/S3 web services.
There are a number of interesting ways to enhance the server-side Android model. For example, one could allow an Android session to seamlessly migrate from execution on a server to execution within a local hardware environment (using a VM or otherwise). So for example, if you want to "snap to local device", then execution migrates to your local device and the display interface originates from the local hardware rather than being remoted. There's no reason the user has to see or care about this transition. If you want to "snap to home entertainment device", then your Android session moves seamlessly to your TV. Ditto for the display on your car or netbook. To pull this off, it helps if the environments synchronize in the background automatically. And of course, doing all this at any real scale means one has to have access to a hearty (Google) sized infrastructure.
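To make the "snap to device" idea concrete, here is a minimal sketch of what such a migration flow might look like. This is purely illustrative: all class and method names (`AndroidSession`, `LocalDevice`, `snap_to`, etc.) are hypothetical, and a real implementation would involve checkpointing a container or VM and transferring its state rather than passing a Python dict.

```python
# Hypothetical sketch of "snap to device" session migration.
# Assumes a server-side Android session that can be checkpointed and
# restored; background sync keeps most state already on the device,
# so only a small delta needs to transfer at snap time.

class AndroidSession:
    def __init__(self, user):
        self.user = user
        self.location = "datacenter"   # where execution currently lives
        self.display = "remote"        # how the UI reaches the user

    def checkpoint(self):
        # In a real system this would freeze the container/VM and
        # serialize its memory and filesystem state.
        return {"user": self.user, "state": "frozen"}

    def snap_to(self, device):
        # Migrate execution to a local device; ideally the user
        # never sees or cares about this transition.
        snapshot = self.checkpoint()
        device.restore(snapshot)
        self.location = device.name
        self.display = "local"         # UI now rendered by local hardware


class LocalDevice:
    def __init__(self, name):
        self.name = name
        self.resumed = False

    def restore(self, snapshot):
        # Resume the frozen session inside a local VM or container.
        self.resumed = snapshot["state"] == "frozen"


session = AndroidSession("alice")
tv = LocalDevice("home-entertainment-device")
session.snap_to(tv)
print(session.location, session.display)  # home-entertainment-device local
```

The same `snap_to` call would apply whether the target is a TV, a car display, or a netbook; the interesting engineering is in making the checkpoint delta small enough that the transition is imperceptible.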
Adding in one more piece of the puzzle, a thin-client style tablet (or other form factor), which I wrote about recently, would be an excellent way to access a server-side Android session without ever having any hardware smarts or locally resident data (which can be lost or stolen), and yet would also provide a larger interface for smartphones etc. This kind of device could be manufactured inexpensively at mass scale because it has very little in the way of hardware requirements (it runs only firmware). But it would be a big opportunity for Google branding and penetration into new markets, and would be a gateway to the evolution of the for-pay Google services mentioned above. Perhaps this would be something manufactured by ODMs such as Compal.
But the discussion isn't nearly complete until we talk about gaming! Server-side rendering is a new trend which decouples the amount of compute power from the end-point device, allowing less capable devices to display amazing server-side rendered games (see my previous article). And it has some of the same requirements as above, in terms of placing the sessions close to the end-point (latency-wise), having enough data centers to cover important geographic areas, etc. And a hearty share of the popular smartphone apps tend to be games, making for great synergy with a "cloud based Android". This style of computing could usher in a new era of phenomenal photo-realistic gaming, decoupled altogether from the underlying client-side hardware. Write once, game anywhere...
Disclosure: no positions