Thursday, March 19, 2009

Microsoft + Facebook + netbook = World domination (again)!

I'd imagine Microsoft is still stinging, years after letting the Google opportunity slip through their fingers. That's always a risk for companies that take the wait-and-follow approach to innovation. Microsoft did switch gears significantly and invested in Facebook in October 2007, before that one got away too. Based on the deal terms (a 1.6% stake for $240 million), it would appear there was more value to Microsoft than just the equity position. But I've come up with a strategy for Microsoft to get ahead of the curve this time, one that could return it to its World domination position.

It's no secret that Facebook's growth is stellar -- projected at 5 million new users Worldwide every week! Facebook is in some ways becoming to social networking what Google is to search. And that is what makes this a phenomenal opportunity for Microsoft, if they get ahead of it. Less than 25% of the World's population are Internet users. Microsoft can serve many of those users with its current roadmap. But what about the remaining 75%? Several trends, taken together, paint a pretty clear picture of how to capitalize on this massive wave of Internet newbies.

The first trend is towards ultra-cheap netbook-like devices. The more inexpensive they are, the more appealing it is to subsidize them with service-oriented revenues, thus accelerating their availability. A related, second trend is the move towards pushing applications and data to the cloud. Microsoft has already jumped on this bandwagon to an extent, both with lightweight browser-based versions of Office apps and with its Azure platform. And a third trend is that for many users, social networking is becoming a central 'platform' of its own.

Putting all of this together, my idea for Microsoft is to 1) buy Facebook right now, 2) get extremely aggressive about producing (through ODMs, of course) the most insanely cheap netbook-like device imaginable, 3) market these devices to emerging markets with high Internet adoption potential (they're not displacing existing business anyway), and 4) grow the next billions of users on the platform.

If you look at Google's Android platform and its potential to be put on ultra-cheap devices (say, a netbook-sized, tablet-shaped device with only a soft keyboard), the above may sound very similar. But there's an important difference in what I'm proposing. The Android strategy seems to be riding the commoditization curve downward, penetrating new markets as the technology improves and, more importantly, as the cost to manufacture decreases. In a previous blog post, I proposed a way to get in front of this: produce an extremely inexpensive device to serve the low-expectations market, which includes those who are starting at zero. As those users and their related economies grow, they could then purchase more capable devices, while economies of scale keep even those new purchase prices low. So in essence, I'm proposing the opposite trajectory from Android's: start with the masses of people who are just barely able to get connected, and work backwards over time to reach those who are farther along.

Microsoft has the resources to put a big push behind technologies such as higher-refresh, no-power screens, like those on e-book readers. Those would be a huge help in reducing battery requirements, and thus production costs. Every such reduction accelerates the feasibility of a subsidized model, which ultimately means devices could be given out en masse to grow the next billions of users, who in turn provide service revenue. Why play catch-up in established search, when you can own new search, LBS, social networking, software as a service, e-book sales, media sales, etc.? Ultimately, service revenue is where it's all going. Why not prove it out in non-competitive markets? Make Facebook the first experience users ever learn. It's obviously sticky. That could be a huge advantage to Microsoft.

Forget $100 netbooks. Make $25 devices and subsidize them to $0. If the device doesn't do its own processing, a lot of components can be scrapped. At any rate, school systems would be a great place to seed these devices...
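To put rough numbers on that, here's a back-of-envelope sketch. Every figure in it (device cost, subsidy, per-user service revenue) is an illustrative assumption of mine, not a sourced number:

```python
# Back-of-envelope subsidy math; all dollar figures are illustrative assumptions.
DEVICE_BOM_COST = 25.00           # assumed fully-loaded device cost, USD
SUBSIDY = 25.00                   # subsidize the device down to $0 for the user
MONTHLY_SERVICE_REVENUE = 1.50    # assumed per-user revenue (ads, services, e-books)

months_to_payback = SUBSIDY / MONTHLY_SERVICE_REVENUE
print(f"Subsidy recovered in {months_to_payback:.1f} months of service revenue")

# Scale it up: what does seeding 100 million users cost up front?
USERS = 100_000_000
upfront = USERS * SUBSIDY
annual_revenue = USERS * MONTHLY_SERVICE_REVENUE * 12
print(f"Upfront subsidy: ${upfront/1e9:.1f}B, annual service revenue: ${annual_revenue/1e9:.1f}B")
```

Even at these made-up rates, the subsidy pays back in under a year and a half, which is the whole point of chasing service revenue over hardware margin.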

Disclosure: no positions

Virtualization / cloud M&A opportunities

We've entered the phase where M&A of technology companies gets interesting. Oddly, while the economy is less than stellar and cutbacks and lay-offs run rampant, a number of major companies sit on mountains of cash. Recent M&A activity, and rumors thereof, will knee-jerk companies into the buying frenzy that accompanies this phase. But beyond that, there is a new trend of Unified Computing, written in indelible ink by the recent Cisco move into the server market. This will focus the M&A urgency on a few specific areas, and light up the war rooms at many corporate headquarters. So I'm offering some related ideas.

Hypervisors

In the open source dimension, I see a big future for Linux+KVM. Factoring in the trends towards cloud computing, a bad economy putting a push behind open source, and the huge ecosystem surrounding Linux, Linux makes sense as the hypervisor. And as it happens, the company behind the KVM hypervisor (Qumranet) is now owned by Red Hat, which focuses on the Linux server business -- the same Red Hat that reported a 20% jump in earnings last quarter.

For those hiding under a news rock for the last week, Cisco's entry into the Unified Computing market means that to stay strong, players need an all-in-one solution. This reminds me greatly of the processor world, where players needed to pull in chipset and graphics logic to survive. So, anyone who wants to be a big player in the cloud computing market will need strong virtualization, Linux, and networking components. Buy Red Hat -- it'll cost you more than its roughly USD 3 billion market cap not to own it. You might also read my friend Tarry Singh's posting; he believes Oracle should snap up Red Hat.

Desktop virtualization isn't yet as pervasive as server virtualization. But it's a very interesting technology, and when implemented properly it solves a lot of truly important problems: offering multiple OS personalities (one personal, one corporate), operating off-line or on-line, allowing remote unified image management, backup management, etc. We can break this category down into two major bins: the hypervisor, and everything else. The everything-else bin is the more interesting one, because that's the logic that can potentially operate across a multitude of hypervisors. As far as promising companies go, the most interesting is Virtual Computer. Currently they use Xen as the hypervisor, but their strength is really the everything-else bin. My recommendation for big players who count corporate notebooks and desktops in their future is to buy Virtual Computer now. So far, they are the most innovative private company in the space. Adding a team to extend their work across Hyper-V and Linux+KVM environments would be a good follow-up.

Networking

In short, everything about cloud computing means more networking -- moving forward, computing is a dynamic fabric overlaying a multitude of physical sites. As soon as we start talking about dynamic and multi-site, many continuity problems arise which require more sophisticated techniques built on networking. Storage becomes very much a networking issue, for example. If you want to see where multi-site memory optimizations (which enable cloud-wide long-distance VM migration and memory de-duplication) are going, check out a recent post of mine. These optimizations are new consumers of networking, and that will only increase. Networking continuity itself is a problem -- how do you maintain open TCP connections when migrating workloads to different physical sites?
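To make that last question concrete, here's a minimal sketch of one possible approach (entirely my own illustration, with invented addresses): keep clients attached to a stable front-end relay, and re-point only the relay's backend address when the workload migrates. This preserves the client-visible endpoint; handing off already-open relay-to-VM flows would need more machinery than shown here.

```python
import socket
import threading

# Illustrative only: clients connect to a stable relay address; when the VM
# migrates to another site, only BACKEND is re-pointed. New connections reach
# the new site while client-facing TCP endpoints stay intact.
RELAY_ADDR = ("0.0.0.0", 8000)    # stable, client-facing address
BACKEND = ("10.0.1.5", 9000)      # hypothetical current VM location

def pump(src, dst):
    """Copy bytes one way until either side closes."""
    try:
        while (data := src.recv(4096)):
            dst.sendall(data)
    finally:
        dst.close()

def handle(client):
    upstream = socket.create_connection(BACKEND)
    threading.Thread(target=pump, args=(client, upstream), daemon=True).start()
    pump(upstream, client)

listener = socket.socket()
listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
listener.bind(RELAY_ADDR)
listener.listen()
while True:
    conn, _ = listener.accept()
    threading.Thread(target=handle, args=(conn,), daemon=True).start()
```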

To have a future selling solutions into tomorrow's data center, a strong networking component is essential -- more so than ever before. Of course, some players will be forced to partner to get it. But Cisco's entry into the server market with its Unified Computing System ought to send a smoke signal to the big players that they had better get busy fast.

Rather than make recommendations here, I'll defer to some possibilities outlined in this article and this article (e.g. F5 and Brocade). My aim is more to point out that networking is absolutely essential to the future of cloud computing. And I really don't separate virtualization out from cloud computing -- it's an essential part of it. As well, I'd expect at least some jockeying to partner with, or get busy with, Juniper. A lesson learned from the processor and chipset market is that it doesn't pay to fight convergence. And convergence is exactly what is occurring. You need to offer it all-in-one, packaged neatly for the customer. IMO, M&A and deep partnerships will have to occur soon.

Disclosure: no positions

Friday, March 13, 2009

Virtualization 3.0: Cloud-wide VM migration and memory de-duplication

For those unfamiliar with my background, I authored a full x86 PC emulation project in the 1990's, called bochs. It was used, in one form or another, by a number of virtualization players, including an R&D project at Stanford which became VMware. It's been interesting watching x86 virtualization mature, from the early days of being a developer's tool, on to server consolidation (what I call virtualization 1.0). Consolidation is an interesting proposition of its own, using the VM level of abstraction to pack more work onto fewer physical resources. But it's still a very static technology, in that VMs need to be booted and shut down to be moved to different hardware.

When VM migration came onto the scene, it unlocked a wealth of additional value in virtualization, and a set of higher-level technologies. In VMware terminology, DRS taps VM migration to do load balancing within a cluster of machines. DPM uses migration to pack VMs into a smaller number of machines, powering down the ones not needed -- cluster-wide power management. I call this whole suite of dynamic VM technologies virtualization 2.0. Unfortunately, these dynamic features have generally been confined to a single physical site, or at best a Campus Area Network.
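To make the DPM idea concrete, here's a toy sketch of the underlying packing problem (a first-fit-decreasing simplification of my own, not VMware's actual algorithm): place VMs onto as few hosts as possible, so the rest can be powered down.

```python
# Toy illustration of DPM-style consolidation (not VMware's actual algorithm):
# first-fit-decreasing bin packing of VM memory demands onto hosts.
HOST_CAPACITY_GB = 64

def pack(vm_demands_gb):
    hosts = []                          # each host = list of placed VM demands
    for vm in sorted(vm_demands_gb, reverse=True):
        for host in hosts:
            if sum(host) + vm <= HOST_CAPACITY_GB:
                host.append(vm)         # fits on an already-powered host
                break
        else:
            hosts.append([vm])          # power on another host
    return hosts

vms = [12, 30, 8, 24, 16, 4, 20, 10]    # hypothetical per-VM memory, GB
placement = pack(vms)
print(f"{len(vms)} VMs fit on {len(placement)} hosts; the rest can be powered down")
```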

To get to where virtualization needs to go, we need to be able to look at virtualization as a fabric, stretching across and overlaying numerous physical sites. And Cloud Computing will absolutely intensify this need. Many things that we've contemplated on a small scale (e.g. load balancing, power management, down-time maintenance) need to be brought to the larger context of a virtualization fabric spanning physical sites. Virtualization needs to stretch to the cloud. To be sure, there are a number of issues to solve to make this happen, including networking and storage continuity. But I'd like to present a part of this next evolutionary step -- virtualization 3.0 -- which is critical to its success, yet unanswered elsewhere to my knowledge.

Memory density in servers continues to go up, following its own exponential path. And as virtualization is used for increasingly higher-end workloads, per-VM memory sizes will continue to rise. Just imagine if you piled up all the RAM from all of your data centers in one spot! Yet to enable a fluid and dynamic virtualization 3.0 fabric, we need to allow all kinds of intra- and inter-site VM migrations to occur rapidly, often driven automatically. That requires a whole new approach to how we manage VM memory; huge volumes of it effectively need to be transported rapidly. On the storage front, there are a number of technologies afoot which are enablers of virtualization 3.0. But I've been working for some time on concepts for making VM memory a first-class citizen of the virtualization 3.0 vision.

We know from various benchmarks that there is a significant amount of redundant memory state across VMs. Within a server, VMware will consolidate some of that redundancy, on the order of up to 30%. Research using sub-page granularity brings that up to 65%. Further research, using a differencing engine, ups this to a phenomenal 90% with similar apps and operating systems, and 65% even across VMs running disparate workloads!
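Below is a minimal sketch of the core mechanism behind those numbers, content-based page sharing: hash each page and keep only one copy per unique content. The page data is fabricated for illustration; a real system would byte-compare pages on a hash match and map duplicates copy-on-write to a single physical frame.

```python
import hashlib

PAGE_SIZE = 4096

def dedup_stats(vm_memories):
    """vm_memories: one bytes object per VM. Returns (total, unique) page counts."""
    seen = set()
    total = 0
    for mem in vm_memories:
        for off in range(0, len(mem), PAGE_SIZE):
            page = mem[off:off + PAGE_SIZE]
            # The hash identifies candidate duplicates; a real implementation
            # verifies full contents before sharing the page copy-on-write.
            seen.add(hashlib.sha1(page).digest())
            total += 1
    return total, len(seen)

# Fabricated demo: two "VMs" sharing a zero page and one common code page.
zero, code = bytes(PAGE_SIZE), b"\x90" * PAGE_SIZE
vm_a = zero + code + b"a" * PAGE_SIZE
vm_b = zero + code + b"b" * PAGE_SIZE
total, unique = dedup_stats([vm_a, vm_b])
print(f"{total} pages, {unique} unique: {100 * (1 - unique / total):.0f}% shareable")
```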

Knowing all of this, what could we do if we managed memory with virtualization 3.0 in mind? In short, we could enable faster and longer-distance VM migrations, cross-site load balancing and power management, memory de-duplication throughout multiple sites, and even WAN acceleration of VM memory! Not only can we make the virtualization fabric quicker, more responsive, and able to encompass a much larger geography, but we can actually increase VM density (and/or decrease physical RAM requirements), and thus become more capital efficient and reduce overall power consumption. Fortunately, the buzz around "long distance VMotion" (VMotion is VMware-speak) has finally picked up, starting at the latest VMworld. As storage, networking, and other hurdles are overcome, there is a vast amount of optimization opportunity at the VM memory level.
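And here's a sketch of what "WAN accelerating VM memory" could mean in practice (again my own illustration, not any shipping product's protocol): before pushing a page across the WAN, send its hash; if the destination site already holds that content, send only the reference.

```python
import hashlib

PAGE_SIZE = 4096

def migrate(pages, dest_store):
    """Send VM pages to a destination site; dest_store maps hash -> page content
    already present there (from other VMs or earlier transfers)."""
    bytes_sent = 0
    for page in pages:
        h = hashlib.sha1(page).digest()
        if h in dest_store:
            bytes_sent += len(h)              # send only the 20-byte reference
        else:
            bytes_sent += len(h) + len(page)  # send hash plus the full page
            dest_store[h] = page
    return bytes_sent

# Fabricated demo: the destination already holds half the source VM's pages.
common = [bytes([i]) * PAGE_SIZE for i in range(8)]
unique = [bytes([100 + i]) * PAGE_SIZE for i in range(8)]
dest = {hashlib.sha1(p).digest(): p for p in common}
sent = migrate(common + unique, dest)
naive = 16 * PAGE_SIZE
print(f"sent {sent} bytes vs {naive} naive: {100 * (1 - sent / naive):.0f}% saved")
```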

I put together this PowerPoint presentation to highlight these points. Feel free to pass it along. Note that this is Patent Pending technology. Give me a shout if you'd like to talk more.

Disclosure: no positions, author has related Patent Pending technology

Sunday, March 1, 2009

A netbook concept for the next billion people

Despite the current economy, netbook sales have been growing at double-digit rates. It's one of the few hot spots in the consumer computing space. There are now over 50 vendors with an offering across EMEA! What's really interesting is the shift towards the telco channel as an accelerator of netbook sales. The decreasing price point of netbooks, while not necessarily exciting the hardware manufacturers, is enabling new ecosystems and use-cases, and facilitating a value-system shift from the hardware to those newly enabled ecosystems. In Europe, for example, this shift has been towards wireless data service providers, in much the same way that inexpensive handsets enabled wireless voice services. But there are many other exciting possibilities in after-market sales as devices get less expensive. According to IDC, "mini-notebooks will continue to shake the industry from both a go-to-market and a pricing perspective".

Sales prices (not factoring in subsidies) don't necessarily have to follow a smooth and continuous curve. Sometimes a significant change drives a big step-function price decrease. I'm presenting one such change, which I believe would move the price point enough to drastically accelerate adoption and enable new after-market value systems to emerge, especially in developing nations. In short, I believe this change would help enable the next billion people to get connected. It's worth pointing out that when those people come online, they're not necessarily starting with the same expectations, given that they're starting from nothing or very little. So the form factor for the next billion doesn't have to obey the same rules as the first billion. Sometimes we need to think differently to see where the change will come from.

So how do we make an extremely high-volume, high-competition device like a netbook, even less expensive? Well, how about the following idea.

Why do the smarts of the netbook need to be an attached part of the netbook form factor? Imagine if we split a netbook into two units: a display unit with essentially the functionality of a Celio Redfly (smartphone companion) device, and a smarts unit with a CPU, memory, network port, storage, and a wall power plug (which ideally would fold in when not in use). Let's assume the display (and keyboard) unit is driven by an inexpensive integrated SoC solution; no RAM, disk, or additional intelligence is needed. It would have integrated WiFi to speak with the smarts unit. To have something concrete in mind, say the smarts unit looked something like the Marvell SheevaPlug, or another product of their Plug Computing initiative. The two units could communicate via WiFi, or perhaps a USB port (also good for re-charging batteries). Potentially, the two units could snap together like Legos, to form something on the order of a conventional netbook or tablet. If one got creative, the USB port could even be used as the data and power connection, to keep the Bill of Materials (BOM) cost low. But they don't need to physically attach, which keeps the design simple.

This arrangement would at first blush seem to increase the BOM cost. But that assumes everyone needs the smarts unit, which they don't! A home or work PC can function as the smarts, as can a laptop, an educational server, or a smartphone. Anything programmable, with even humble computing capabilities and wireless communications, can act as the brains. Additionally, there's no reason one smarts unit can't support multiple displays (essentially a terminal server), which means one smarts unit could potentially support a household or small classroom -- when one is needed at all.
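Here's a toy sketch of that terminal-server arrangement, with the protocol, port, and framing all invented for illustration: the smarts unit accepts any number of display units and pushes each one rendered screen updates, while the displays only need to draw bytes and send back input events.

```python
import socket
import threading

# Toy "smarts unit" serving several thin display units over WiFi.
# The wire format (length-prefixed frames) and port are invented for this sketch.
LISTEN_ADDR = ("0.0.0.0", 5900)

def serve_display(conn, display_id):
    """Push screen updates to one display unit; a real system would send
    compressed framebuffer diffs and receive keystrokes/touches in return."""
    try:
        while True:
            frame = f"frame for display {display_id}".encode()
            conn.sendall(len(frame).to_bytes(4, "big") + frame)
            if not conn.recv(4096):     # block until the display acks or sends input
                break                   # display disconnected
    finally:
        conn.close()

server = socket.socket()
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(LISTEN_ADDR)
server.listen()
display_id = 0
while True:
    conn, addr = server.accept()        # each display unit that joins the classroom
    display_id += 1
    threading.Thread(target=serve_display, args=(conn, display_id), daemon=True).start()
```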

Having less hardware in the display also means lower battery costs and weight, which allows it to be manufactured less expensively. Effectively, we've lifted out redundant costs and power consumption, and put them in a separate unit which can be shared -- or isn't needed at all. Perhaps as e-ink solutions come down in price (helped along by the Amazon Kindle), power consumption could drop even lower, further reducing the battery input costs.

I mentioned the enablement of after-market value systems. Thus far, wireless data services have been one such case. But what else could we do with extremely inexpensive netbook-like devices? To be sure, enabling the next billion people opens up significant educational opportunities -- a whole ecosystem of its own. But what about electronic books in general? By definition, there's an entirely untapped market for low-cost electronic versions of books, because those people don't have devices yet. We have the concept of low-cost drugs for developing countries; why not e-books? Or what about social networking? Could an extremely inexpensive device be branded by Facebook, and be the first thing people learn to use when they enter the connected world? Or how about a Google-branded device? Maybe an Android companion would be interesting. There are just so many opportunities here. The question in my mind is: who will do this first?

Disclosure: no positions