Showing posts from 2009

Reducing atmospheric soot: a leveraged global warming play

There's been a lot of discussion of late regarding the contribution of soot and other particulates to global warming. Washington's Blog did a nice job of summarizing why soot reduction is important to climate efforts. NASA has been studying the effects of soot for some time. And when Scientific American, BusinessWeek and U.S. News wrote about it, it got me thinking...

Specifically, I wondered how much warming comes from areas which are "susceptible" to melting, as a proxy for soot's potential influence (light absorption, induced precipitation, and the resulting exposure of darker surfaces under snow & ice). Fortunately, it's reasonably easy to do a proxy study after a trip to NASA's Goddard Institute for Space Studies to download some GISTEMP data, and a bit of programming handiwork.

The findings are telling! Warming is accelerated much more (2x or 3x) in areas which fluctuate on either side of freezing, and not so much in areas which remai…
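The classification and trend comparison can be sketched roughly as follows. The station data below is purely illustrative (a real run would parse the downloaded GISTEMP files), and the "crosses 0°C" test is my own simple proxy for freeze/thaw susceptibility:

```python
# Sketch of the proxy study: classify station records by whether their
# monthly means fluctuate around freezing, then compare warming trends
# between the two groups. Data below is illustrative, not real GISTEMP.

def linear_trend(series):
    """Least-squares slope of (year, anomaly) pairs, degrees C per year."""
    n = len(series)
    xs = [x for x, _ in series]
    ys = [y for _, y in series]
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in series)
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

def fluctuates_around_freezing(monthly_means_c):
    """True if some months average below 0 C and others above."""
    return min(monthly_means_c) < 0.0 < max(monthly_means_c)

# Hypothetical monthly climatology and annual anomaly series (year, C).
stations = {
    "subarctic": {"monthly": [-12, -8, -2, 3, 9, 14, 16, 14, 8, 2, -4, -10],
                  "annual": [(1980, -0.2), (1990, 0.1), (2000, 0.5), (2008, 0.9)]},
    "tropical":  {"monthly": [26, 26, 27, 27, 28, 28, 28, 28, 27, 27, 26, 26],
                  "annual": [(1980, -0.1), (1990, 0.0), (2000, 0.2), (2008, 0.3)]},
}

for name, s in stations.items():
    group = "freeze-zone" if fluctuates_around_freezing(s["monthly"]) else "stable"
    print(name, group, round(linear_trend(s["annual"]), 4))
```

With these made-up numbers, the freeze-zone station trends about 2.6x faster than the stable one, the kind of ratio the study found.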

Cloud Pipeline: future of inter cloud provider sneaker-nets

One of the notable frictions surrounding the use of cloud computing providers has been the difficulty of getting large data sets into and out of the domain of the cloud provider. Once your data set grows beyond a certain level, it's just not feasible to use the public network to transfer it. Amazon, in May 2009, began addressing this friction by offering an import feature, whereby one can ship them data (on an external SATA/USB drive), and they'll load it into their S3 storage service. And just recently, Amazon added a similar export feature. This is extremely useful between the customer and Amazon, but I believe it's only the beginning of a trend in what's to come in inter-cloud "sneaker nets".
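The underlying arithmetic is easy to sketch. Assuming a sustained 50 Mbps link and a two-day courier (both hypothetical figures, not Amazon's actual terms), the break-even point where shipping a drive beats the network looks roughly like this:

```python
# Back-of-the-envelope comparison behind the "sneaker net" argument:
# at what data-set size does shipping a drive beat the public network?

def network_transfer_hours(size_gb, mbps):
    """Hours to push size_gb over a sustained mbps link."""
    bits = size_gb * 8 * 1000 ** 3          # decimal GB to bits
    return bits / (mbps * 1000 ** 2) / 3600

SHIPPING_HOURS = 48  # assumed two-day courier, independent of size

for size_gb in (100, 1000, 10000):
    net = network_transfer_hours(size_gb, mbps=50)
    better = "ship" if SHIPPING_HOURS < net else "network"
    print(f"{size_gb:>6} GB: network {net:8.1f} h vs shipping {SHIPPING_HOURS} h -> {better}")
```

Under these assumptions the crossover sits around a terabyte; at 10 TB the courier wins by an order of magnitude.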

There are a slew of interesting use-cases of transferring data sets between various forms of providers, without the customer ever touching the data, nor ever sending physical devices. This of course, would dictate there being some (set of) standards/formats for inte…

Server side Android, a Google version of Amazon's EC2

While everyone contemplates the place that Android will hold on the mobile device, in home entertainment and on the netbook, there is another interesting use-case for Android that's not yet been talked about. There's no reason that Android, as a complete OS, application stack and ecosystem (including the app market), has to be run on the client side. In environments where multiple users might want to use the same client hardware (monitor, keyboard, mouse, etc), such as at the office, the thin-client model could be a very useful way to access any given user's Android session. This way, the Android session can be displayed at any end-point, be it a desktop, notebook, meeting-room projector, or even smartphone device. Using a VPN or even SSL protected web browser session from home, a user could also bring up their work Android session.

And of course, as soon as one contemplates serving Android sessions from a server farm, virtualization springs to mind. While one could pu…

Fault tolerance: a new key feature for virtualization

VM migration has been a key feature and enabling technology which has differentiated VMware from Microsoft's Hyper-V. Though as you may know, Windows Server 2008 R2 is slated for broad availability on or before October 22, 2009 (also the Windows 7 GA date), and Hyper-V will then support VM migration. So you may be wondering, what key new high-tech features will constitute the next battleground for differentiation amongst the virtualization players?

Five-Nines (99.999%) Meets Commodity Hardware

One such key feature is very likely to be fault tolerance (FT) -- the ability for a running VM to suffer hardware failure on one machine, and to be restarted on another machine without losing any state. This is not just HA (High Availability), it's CA (Continuous Availability)! And I believe it'll be part of the cover-charge that virtualization vendors (VMware, Citrix/XenSource, Microsoft, et al) and providers such as Amazon will have to offer to stay competitive. When I talk abou…
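For context, here's what each additional "nine" of availability means in allowed downtime per year:

```python
# What "five nines" buys you: allowed downtime per year at each
# availability level -- the gap FT aims to close on commodity hardware.

MINUTES_PER_YEAR = 365 * 24 * 60

for nines, availability in [(2, 0.99), (3, 0.999), (4, 0.9999), (5, 0.99999)]:
    downtime_min = MINUTES_PER_YEAR * (1 - availability)
    print(f"{availability:.5f} ({nines} nines): {downtime_min:8.2f} min/year")
```

Five nines allows only about 5.3 minutes of downtime per year, which is why restart-based HA alone can't get there and continuous availability becomes the differentiator.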

Apple Tablet-killer: the Thin-Tablet

While the 'net is abuzz over rumors of the Apple tablet, I'd like to point out a category of device in a form-factor that doesn't yet exist, but would be a killer product. It's also what I believe the CrunchPad tablet should have been designed to be. And that's the "thin^2 tablet". By thin, I mean it's physically thin in dimension, like the iPhone, but it's also thin in the sense that thin-clients are thin when they have nothing but firmware to access a remote server.

The problem I see with a rumored $800-ish device of that size is that it's highly likely the same buyer will also own a smartphone. For the CrunchPad, at a rumored $400-ish price-point, it's hard to buy into the couch-surfing, coffee-shop-sipping usage model. In either case, if you already have a capable smartphone, home & work PC, and/or just have an available WiFi network, why duplicate functionality, applications, and user configuration and data files across multi…

A business model for Twitter, Google-style

While the scale-first style of attacking a market is still being proven out, achieving critical mass is most certainly a key element in the success of a social networking startup. These kinds of startups can pop up seemingly overnight in mass quantities, and one of the key ways to compete is to drive extremely rapid user-base growth and establish a site or service as the de facto standard (and a verb) for the space it's in. This describes the trajectory that Twitter appears to be on.

Scaling to 1 billion users for any one startup is monumentally difficult. Scaling to 10 billion, given the current population, is impossible. Sometime before reaching either size, a startup needs to transition into a real business model. There's always the M&A path, but having a real business model amps up the M&A valuation significantly. In that spirit, here are some thoughts on changes Twitter could make to allow them to "turn on the revenue tap" when they need to.

Two things…

Yahoo's infrastructural disadvantage to Google: Java performance does not scale

Yahoo (YHOO) uses a Java-based MapReduce infrastructure called Hadoop. This article demonstrates why Java performance does not scale well in large-scale compute settings, relative to C++, which is what Google (GOOG) uses for its MapReduce infrastructure.

A couple of months ago, I wrote an article about how Hadoop infrastructure should use C++/LLVM, not Java, to be as scalable and efficient as possible, and to be competitive with Google's infrastructure. Discussions surrounding Java-vs-C++ performance often seem to morph into something bordering on religion, get muddled with arguments about various tweaks that can be done in one language or the other, and then dissipate into the abyss of non-action. Very few discussions focus on the real issue.

I thought I'd take a different tack, and benchmark what I believe the real problem with Java is, vis-a-vis its use in large-scale settings. Rather than focus on relative Java-vs-C++ performance, I instead benchmarked the behaviour of multi…

Venture 3.0, Andreessen Horowitz will change venture capital forever

The announcement of the new VC firm, Andreessen Horowitz, and their first $300 million fund, is not just the entry of yet another high-profile person into professional venture capital. It delineates the emergence of a new style of VC, a recipe of successful venture capital going forward, and the Darwinistic demise of those who do not quickly adapt to it.

I've been dialed into startups on the entrepreneur side for 15 years. It's pretty clear to me that the conventional ways of venture capital are no longer effective. Entrepreneurs have gotten smarter, are much more informed, and are seeking alternative ways to fund startup companies. In the age of TheFunded ("the Yelp of the VC world"), it's become imperative to think about the funding process as a customer service industry. Treat an entrepreneur poorly, get some bad juju added to your rating.

What's even more difficult to contend with, is that the rate of technology change is accelerating. The Stone Age laste…

kaChing: the sound of money flowing away from mutual funds

This year's Finovate Startup 09 Conference in San Francisco hosted some 56 finance-oriented startups. Attending as a blogger for Seeking Alpha (a conference sponsor) and as a serial startup guy, I got to merge the best of two worlds. If you didn't have a chance to attend, the big picture painted by Finovate Startup '09 was best encapsulated by a remark from a fellow Seeking Alpha blogger and hedge fund manager: there are no areas of finance left which will not be heavily disrupted. Some of these startups represent exactly such disruptions to current financial business models. If you're in the biz, I highly recommend attending the flagship Finovate in NYC on September 29, 2009. Your life is about to change.

My runner-up favorite theme at Finovate was peer-to-peer lending. This has a lot of potential in that it creates a new asset class which takes banks out of the equation, allowing (pools of) borrowers to directly borrow from (pools of)…

Airbus looking at descent until it adds pilot override

I think we'll see accelerating cancellations of Airbus orders. It will have little to do with whether Airbus can identify and correct sensor and/or other problems with Air France Flight 447, which went down a week ago while en route from Rio de Janeiro to Paris. Rather, I believe it will be because the Flight 447 crash will make the public aware of something that must be very unsettling to pilots -- Airbus has a design philosophy of using computer fly-by-wire, without the ability of pilots to override! Of course, Boeing jets also operate with fly-by-wire, but at least pilots' inputs can override the computer.

I don't know if this design philosophy truly reflects a difference between American and European cultures. But I would say that, to the public, this is not a story about culture or mechanical sensors. It's a story about whether the pilots in the cockpit have a fighting chance to bring you back alive if something goes very wrong. That is an epic human story, of which I w…

Hadoop should target C++/LLVM, not Java (because of watts)

Over the years, there have been many contentious arguments about the performance of C++ versus Java. Oddly, every one I found addressed only one kind of performance (work/time). I can't find any benchmarking of something at least as important in today's massive-scale-computing environments: work/watt. A dirty little secret about JIT technologies like Java is that they throw a lot more CPU resources at the problem, trying to get up to par with native C++ code. JITs use more memory, and periodically run background optimizer tasks. These overheads are somewhat offset in work/time performance by extra optimizations which can be performed with more dynamic information. But the result is a hungrier appetite for watts. Another dirty little secret about Java-vs-C++ benchmarks is that they compare single workloads. Try running 100 VMs, each with a Java and a C++ benchmark in it, and Java's hungrier appetite for resources (MHz, cache, RAM) will show. But of course, Java folk…
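As a rough sketch of the work/watt argument: if a managed runtime needs some factor k times the resources of native code for the same throughput, then a fixed-size job needs roughly k times the servers, and thus k times the watts. The overhead factor and power figures below are assumptions for illustration, not measured results:

```python
# Rough work/watt model: a runtime with k x resource overhead needs
# k x the machines for the same total job, and roughly k x the watts.

def cluster_watts(jobs, jobs_per_native_server, overhead, watts_per_server):
    """Total watts to run `jobs` given per-server native capacity and a
    resource-overhead multiplier for the runtime in use."""
    servers = jobs / (jobs_per_native_server / overhead)
    return servers * watts_per_server

native = cluster_watts(10000, jobs_per_native_server=100, overhead=1.0, watts_per_server=300)
managed = cluster_watts(10000, jobs_per_native_server=100, overhead=1.5, watts_per_server=300)
print(f"native: {native / 1000:.0f} kW, managed (1.5x overhead): {managed / 1000:.0f} kW")
```

At cluster scale the overhead factor translates directly into an electricity bill, which is the point the single-workload benchmarks miss.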

50% of Cloud Compute Power & Carbon Waste Solved by Software Technique

Ever wonder why virtualized servers are usually run at nominal loads of 30-40%? It's because the speed at which VMs can be live-migrated is much slower than the speed at which load spikes can arise. Thus, a large "head room" is built into the target loads at which IT is generally comfortable running their virtualized servers.

Faster networking would seem to save the day, but per-server VM density is increasing exponentially, along with the size of VMs (see latest VMworld talks). Thus there are many more (increasingly bigger) VMs to migrate, when load spikes occur.

Besides the drastic under-utilization of capital equipment, it's a shame that the load "head room" is actually in the most efficient sweet spot of the curve. The first VMs tasked on a server are the most power-costly, and the last ones are nearly free (except we only use that band for spike management).

If loads could comfortably be advanced from 40 to 80%, half (or more) of the compute-oriented…
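A simple linear power model (the idle and peak wattages below are assumptions) makes the sweet-spot point concrete: an idle server already burns a large fraction of its peak power, so the last units of work are nearly free:

```python
# Server power is roughly idle + (peak - idle) * load, so the first unit
# of work pays the idle cost and the last units are nearly free.

IDLE_W, PEAK_W = 200.0, 400.0   # assumed idle and peak power draw

def watts(load):
    """Power draw at a utilization level in [0, 1]."""
    return IDLE_W + (PEAK_W - IDLE_W) * load

def work_per_watt(load):
    """Normalized work delivered per watt consumed."""
    return load / watts(load)

for load in (0.4, 0.8):
    print(f"load {load:.0%}: {watts(load):.0f} W, "
          f"efficiency {work_per_watt(load) / work_per_watt(0.4):.2f}x vs 40% load")
```

Under this model, going from 40% to 80% load doubles the work while raising power only ~29%, about a 1.6x work/watt improvement.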

Boosting server utilization 100% by accelerated VM migration (MemoryMotion™)

Recently I guest-wrote an article about a technology I've been researching for some time, called MemoryMotion™. By providing a distributed memory disambiguation fabric, it greatly accelerates VM live migration. Besides the obvious benefits, ultra-quick migration unlocks a healthy chunk of additional server utilization, currently inaccessible due to the slow migration times of current hypervisors.

It's worth stepping back a moment and asking the question, "why can we only run a physical server at say a nominal 40% (non-VDI) / 60% (VDI) maximum load?" Based on intuition, one might answer that it's due to workload spikes, which cause us to provision conservatively enough as to absorb spikes to an extent (the micro level), and fall back to distributed scheduling across servers to balance out at the macro level. But that's really how we deal with the problem, rather than an identification of the problem. The truth is that a good chunk o…
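Slow migration is largely a property of iterative pre-copy, the scheme live migration generally uses: each round re-copies the pages dirtied during the previous round, and it converges only while the dirty rate stays below the copy bandwidth. A sketch with illustrative numbers (not measured from any hypervisor):

```python
# Iterative pre-copy model: copy all memory, then repeatedly re-copy the
# pages the guest dirtied during the previous round, until the remaining
# dirty set is small enough for a brief stop-and-copy.

def precopy_rounds(vm_mb, dirty_mb_per_s, copy_mb_per_s, stop_mb=10, max_rounds=30):
    """Return (rounds, seconds) until the remaining dirty set <= stop_mb."""
    remaining, seconds = float(vm_mb), 0.0
    for rounds in range(1, max_rounds + 1):
        t = remaining / copy_mb_per_s      # time to copy this round
        seconds += t
        remaining = dirty_mb_per_s * t     # pages dirtied meanwhile
        if remaining <= stop_mb:
            return rounds, seconds
    return max_rounds, seconds             # did not converge

print(precopy_rounds(4096, dirty_mb_per_s=20, copy_mb_per_s=100))   # slow fabric
print(precopy_rounds(4096, dirty_mb_per_s=20, copy_mb_per_s=1000))  # fast fabric
```

With these numbers, a 10x faster copy path cuts a 4 GB migration from ~51 seconds to ~4, which is exactly the lever that frees up the utilization head room discussed above.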

Inflows and outflows of US cities using a U-Haul barometer

Expanding on the U-Haul metric idea in the article "Fleeing Silicon Valley", I gathered costs of moving a household via U-Haul throughout a matrix of cities in the US. The thesis is that moving truck rentals will fairly accurately reflect the supply/demand equation, likely on a near real-time basis, giving us a current indicator of where the inflows and outflows are. This is interesting for a whole host of investment (and personal) themes, such as which direction commercial and residential real estate values will trend in various areas.

I picked 10 cities, which represent top startup locations in the US (a personal bias). To net out differences in travel distance, and to normalize values so that one can quickly compare and see trends, I gathered data for one-way moves in both directions between every pair of the 10 cities, and then divided the price of source-to-destination by the price of destination-to-source. What remains are ratios which show which way people are likely mo…
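The ratio computation itself is trivial; the prices below are made up for illustration, not the actual gathered U-Haul data:

```python
# One-way rental price from A to B divided by the price from B to A.
# Ratios > 1 suggest net outflow from A toward B (trucks pile up at B
# and are scarce at A, so A-to-B rentals get priced higher).

prices = {  # (origin, destination) -> one-way price in dollars, hypothetical
    ("San Francisco", "Austin"): 1900,
    ("Austin", "San Francisco"): 750,
    ("Boston", "Seattle"): 1400,
    ("Seattle", "Boston"): 1300,
}

def flow_ratio(a, b):
    """Directional price ratio; distance cancels out of the quotient."""
    return prices[(a, b)] / prices[(b, a)]

for a, b in [("San Francisco", "Austin"), ("Boston", "Seattle")]:
    r = flow_ratio(a, b)
    verdict = "net outflow from " + a if r > 1.1 else "roughly balanced"
    print(f"{a} -> {b}: {r:.2f} ({verdict})")
```

Because both prices cover the same route, dividing them cancels the distance component, leaving (mostly) the supply/demand signal.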

Portable Linux future using LLVM

Imagine a single Linux distribution that adapts to whatever hardware you run it on. When run on an Atom netbook, all the software shapes and optimizes to the feature set and processor characteristics of the specific version of Atom. Want to run it as a VM on a new Nehalem-based server? No problem, it re-shapes to fit the Nehalem (Core i7) architecture. And here, when I say "re-shape", I don't mean at the GUI level. Rather, I mean the software is effectively re-targeted for your processor's architecture, like it had been re-compiled.

Today, Linux vendors tend to make choices about which processor generation, or set of generations (and thus features), is targeted at compile time, for the variety of apps included in the distro. This forces some least-common-denominator decisions, based on the processor and compiler features known at the time a given distro version is created. In other words, decisions are made for you by the Linux distributor, and you're stuck…

Microsoft + Facebook + netbook = World domination (again)!

I'd imagine Microsoft is still stinging, years after letting the Google opportunity slip through their fingers. That's always an unfortunate possibility for companies that adopt the wait-and-follow approach to innovation. Microsoft did switch gears significantly and invested in Facebook in October 2007, before it ran away too. Based on the deal terms (a 1.6% stake for $240 million), it would appear there was more value to Microsoft than just the equity position. But I've come up with a strategy for Microsoft to get ahead of the curve this time, one that could return it to its World domination position.

It's no secret that Facebook's growth is stellar, projected Worldwide at 5 million new users every week! Facebook is in some ways becoming to social networking what Google is to search. And that is what makes this a phenomenal opportunity for Microsoft, if they get ahead of this one. Less than 25% of the World's population are Internet users. For many …

Virtualization / cloud M&A opportunities

We've entered the phase where M&A of technology companies gets interesting. Oddly, while the economy is less than stellar, and cutbacks and layoffs run rampant, a number of major companies sit on mountains of cash. Recent M&A activities, and rumors thereof, will knee-jerk companies into the buying frenzy that accompanies this phase. But beyond that, there is a new trend of Unified Computing, written indelibly in ink by the recent Cisco move into the server market. This will focus the M&A urgency on a few specific areas, and light up the war rooms at many corporate headquarters. So I'm offering some related ideas.


In the open source dimension, I see a big future for Linux+KVM. Factoring in trends towards cloud computing, a bad economy putting a push behind open source, and the huge ecosystem surrounding Linux, Linux makes sense as the hypervisor. And as it so happens, the company behind the KVM hypervisor (Qumranet) is now owned by Red Hat, who has …

Virtualization 3.0: Cloud-wide VM migration and memory de-duplication

For those unfamiliar with my background, I authored a full x86 PC emulation project in the 1990s, called bochs. It was used, in one form or another, by a number of virtualization players, including an R&D project at Stanford which became VMware. It's been interesting watching x86 virtualization mature, from the early days of it being used as a developer's tool, on to server consolidation (what I call virtualization 1.0). Consolidation is an interesting proposition in its own right, making use of the VM level of abstraction to pack more work onto fewer physical resources. But it's still a very static technology, in that VMs need to be booted and shut down to be moved to different hardware.

When VM migration came onto the scene, it unlocked a wealth of additional value of virtualization, and a set of higher-level technologies. In VMware terminology, DRS tapped VM migration to do load balancing within a cluster of machines. DPM uses migration to pack VMs into les…

A netbook concept for the next billion people

Despite the current economy, netbook sales have been growing at double-digit rates. It's one of the few hot spots in the consumer computing space. There are now over 50 vendors with an offering across EMEA! What's really interesting is the shift towards the importance of the telco channel in accelerating netbook sales. The decreasing price point of netbooks, while not necessarily exciting the hardware manufacturers, is enabling new ecosystems and use-cases, and facilitating a value-system shift from the hardware to the newly enabled ecosystems. In Europe, for example, this shift has been towards wireless data service providers, in much the same way that inexpensive handsets enabled wireless voice services. But there are many other exciting possibilities in after-market sales, with decreasingly expensive devices. According to IDC, "mini-notebooks will continue to shake the industry from both a go-to-market and a pricing perspective".

Sales prices (not factoring …

Future of Linux desktop: co-Linux on Android

Today we have the native Linux desktop, and we're moving towards the Android desktop (netbooks coming soon). What would bridge those two environments is a second Linux sandbox which runs alongside Android.

Android has a very specific architecture, with its own libraries and non-X-based GUI, which are not conducive to running standard Linux/X applications. Even its libc (Bionic) omits certain POSIX features, making it not fully compatible. Android apps have to be targeted for, and compiled against, Android.

To allow native Linux apps to run, a second sandbox environment is needed, which can co-operate with Android. Android would be the master environment, providing all of the kernel, hardware drivers, and complete software stack that it already does. The co-Linux environment would provide a separate set of non-kernel components: libraries, configuration and administrative files, applications, etc. As Android drives the hardware, the co-Linux environment would need to de…

Linux winners will be deal-makers with Microsoft

We have just entered a new era in interoperability between Linux and Windows.

The Desktop

A week ago, Google announced that they licensed Microsoft's Exchange ActiveSync protocol technology, and released a beta of the Google Sync service. Google Sync is a new push technology that allows over-the-air sync of Google calendar appointments and email contacts, across a number of handset environments including Windows Mobile devices. This was a wise move and one with a desktop component best described in a related quote I found:
"... a solid step in Google's march toward owning the desktop."

If Linux wins ground on the desktop, it won't be because the Bell Curve of users cares that it's Linux -- it'll be because the desktop gives users what they want. Easy synchronization is essential. At the same time, I believe this was smart of Microsoft. Android and Google services are gaining traction, and at some point Microsoft has to capitalize on the areas where it has value, ra…

Canonical half as revenue efficient as Red Hat, per person

Canonical is the company behind Ubuntu Linux.

Recently, Canonical's CEO gave some indications of their annual revenue (about $30 million/year). As Canonical is a private company, there hasn't otherwise been a lot of financial information forthcoming. However, it's estimated that Canonical has 200+ employees.

This makes for an interesting opportunity to compare revenue efficiency on a per-employee basis with Red Hat (RHT), a competing public Linux company. Red Hat has approximately 2200 employees and its estimated 2009 revenue is $653.65 million. I love this as a first-order comparison between direct competitors, to see how efficiently companies generate revenue.

That would put Red Hat at ~$300K per employee and Canonical at ~$150K per employee, or only half. To add some perspective, Microsoft rakes in ~$676K per employee. To be fair, Canonical is a younger company, and has likely made investments that will need time to play out on the revenue …
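The per-employee arithmetic, using the figures quoted above:

```python
# Revenue per employee from the post's figures: Red Hat's estimated 2009
# revenue and headcount versus Canonical's stated revenue and headcount.

companies = {
    "Red Hat":   {"revenue_usd": 653_650_000, "employees": 2200},
    "Canonical": {"revenue_usd":  30_000_000, "employees":  200},
}

for name, c in companies.items():
    per_head = c["revenue_usd"] / c["employees"]
    print(f"{name}: ~${per_head / 1000:.0f}K revenue per employee")
```

Red Hat works out to roughly $297K per head versus Canonical's $150K, hence the "half as revenue efficient" headline.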

Cloud storage in, consumer HDDs out

There have been a lot of rumblings recently about a possible Google Web Drive offering. Of course, Microsoft has its cloud storage counterpart SkyDrive, and there is a whole panoply of companies with existing offerings, ranging from online file sharing to storage to backups. Just to name a few, there are IBackup, Mozy, Iron Mountain’s Connected Backup for PC, and many others.

For consumers, online storage is a lot about convenience -- one of the key values consumers will pay for, and in this case quite possibly on a recurring basis. There's the convenience of accessing files from anywhere. And of sharing files with other groups, or even with the public. And of having someone else deal with backups. Forget buying a NAS box for backups -- back up to a service. What enables this trend is simply network bandwidth availability. It certainly wouldn't have been feasible with a dial-up modem ISP.
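The bandwidth point in numbers: uploading a 100 GB backup was hopeless on dial-up and is merely slow on broadband. The link speeds below are nominal assumptions:

```python
# Days to upload a backup at various uplink speeds -- the feasibility
# threshold that makes cloud storage viable for consumers.

def upload_days(size_gb, uplink_kbps):
    """Days to upload size_gb at a sustained uplink of uplink_kbps."""
    bits = size_gb * 8 * 1000 ** 3          # decimal GB to bits
    return bits / (uplink_kbps * 1000) / 86400

for label, kbps in [("56k dial-up", 56), ("1 Mbps DSL uplink", 1000), ("10 Mbps fiber", 10000)]:
    print(f"{label:>18}: {upload_days(100, kbps):8.1f} days for 100 GB")
```

Half a year on dial-up versus under a day on a 10 Mbps uplink; the service model only works on the right side of that curve.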

It's worth thinking about other trends to which cloud storage …