Tuesday, January 27, 2009

Cloud storage in, consumer HDDs out

There are a lot of rumblings of late about a possible Google Web Drive offering. Of course, Microsoft has its cloud storage counterpart SkyDrive, and there is a whole panoply of companies with existing offerings, ranging in emphasis from online file sharing to storage to backups. Just to name a few, there are Box.net, IBackup, Mozy, Iron Mountain’s Connected Backup for PC, and many others.

For consumers, online storage is largely about convenience -- one of the key values consumers will pay for, and in this case quite possibly on a recurring basis. There's the convenience of accessing files from anywhere. And of sharing files with other groups or even with the public. And of having someone else deal with back-ups. Forget buying a NAS box for backups -- back up to a service. What enables this trend is simply network bandwidth availability. It certainly wouldn't have been feasible over a dial-up modem ISP.

It's worth thinking about other trends to which cloud storage is very complementary. Netbooks, for example, are targeted at mostly online activities, and as such are well suited to using cloud storage for user files such as photos, videos, songs, etc. If much of the storage need for user files can be pushed to the cloud, then netbook local-storage requirements go down, or at least don't need to grow as rapidly (the end of the Moore's law of consumer storage). That shifts the suitability curve for flash memory storage products (like SSDs), accelerating their adoption in netbooks. The same can be said for any PC platform, for that matter. Which would mean that HDD storage growth will shift from consumer products to the server side, where it's utilized by cloud storage providers.

What will be interesting to watch play out is how each cloud storage vendor handles security. For private storage, consumers will want an option to secure their files with no more exposure than they have with a local drive. And at the same time, if smart caching of well-used files is done well (even across boot-ups), we'll have a very functional cloud storage model with relatively few trade-offs.
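
To make that trade-off concrete, here's a minimal Python sketch of how a client might keep the "no worse than a local drive" promise while still smart-caching well-used files across boot-ups. The cloud side is stood in for by a plain dict and the cipher is a placeholder; none of this reflects any particular vendor's API.

    import os

    CACHE_DIR = os.path.expanduser("~/.cloudcache")   # survives reboots

    def encrypt(data: bytes, key: bytes) -> bytes:
        # Placeholder cipher (XOR) so the sketch stays self-contained; a real
        # client would use an authenticated cipher such as AES-GCM so the
        # provider only ever sees ciphertext.
        return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

    decrypt = encrypt  # XOR is symmetric

    def upload(path: str, key: bytes, cloud: dict) -> None:
        """Encrypt locally, then hand only ciphertext to the (stand-in) provider."""
        with open(path, "rb") as f:
            cloud[path] = encrypt(f.read(), key)

    def fetch(path: str, key: bytes, cloud: dict) -> bytes:
        """Serve well-used files from an on-disk cache that persists across
        boot-ups; fall back to the cloud copy on a miss."""
        cached = os.path.join(CACHE_DIR, path.replace(os.sep, "_"))
        if os.path.exists(cached):
            with open(cached, "rb") as f:
                return f.read()
        data = decrypt(cloud[path], key)
        os.makedirs(CACHE_DIR, exist_ok=True)
        with open(cached, "wb") as f:   # an LRU policy would evict old entries
            f.write(data)
        return data

The point of the sketch is simply that encryption happens before anything leaves the machine, while the cache keeps day-to-day access feeling local.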

Disclosure: no positions

Monday, January 26, 2009

Utility computing syndication -- evolutionary future

Owning and operating data centers is an extremely complex and expensive proposition. The result: data center complexes, often in multiple fixed locations, each with a life span of 15 years or so, and thus tied to locality considerations of power generation, labor pools, real estate, taxes and many others. This is not the epitome of a grand dynamic compute and storage fabric vision whereby capacity can be added, subtracted or moved around as needed. The fact that so many organizations throughout the world have to roll out their own data centers ought to be the first clue that a huge amount of inefficiency exists in organizations owning and running their own data centers.

However, wherever significant inefficiencies exist, opportunities are created. Having each organization build its own data center is an unsustainable notion, even for smaller data centers with only a handful of machines. Besides being tied to the locality issues already enumerated, one just cannot come close to tapping the economies of scale of a large virtualization provider, nor benefit from the same degree of statistical multiplexing of usage. Which is why I see the future of data centers moving towards outsourcing to larger data center providers, who house workloads and data for many, many companies at once.

Amazon, with its Elastic Compute Cloud (EC2) and Simple Storage Service (S3), offers a first-order outsourced data center service, for example. Though it's currently set up mostly to handle server workloads. It doesn't support Windows XP and Vista VMs, so it's not yet an on-demand VDI provider. The same goes for the UK-based Flexiscale and for GoGrid. But looking forward, why can't utility computing handle most or all of your entire data center needs? It has to. There's just too much inefficiency in the old model. Kicking and screaming and dragged into the next decade perhaps, but all the attendant security, compliance, networking and other issues have to be solved.

Before the world catches up to utility computing, I'm proposing that we think about the next step in the evolution -- because it requires (re-)factoring so many facets of the utility fabric (virtualization, networking, storage, management, security, ...). Ultimately, we will need to allow utility providers to syndicate, for a myriad of reasons:
  • Capacity management
  • Power management and rate optimizations
  • Catastrophe management
  • Follow the Sun operations
  • Eliminating single vendor lock-in
  • VDI optimizations (see below)
  • ...
VDI is worth explaining further. One of the big problems with streaming a desktop session from a VM running on a server is that, in a mobile and distributed work-force world, a user can be literally anywhere on the planet. And VDI just doesn't work that well at higher latencies, especially if you want a high-quality desktop experience. But how can any organization expect to serve up a low-latency VDI session anywhere? Doing that means placing the running VM somewhat near the user. The answer is syndication. By borrowing capacity from another provider, the number of physical locations where a VM can run is bounded only by the size of the syndication network. Until then, I don't see VDI as a viable solution for true road-warriors.
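
As a rough illustration of the placement problem, here's a small Python sketch that probes a handful of hypothetical syndicated provider endpoints and picks the lowest-latency one for hosting the user's desktop VM. The endpoint list and the TCP-connect-time probe are assumptions, not any real broker protocol.

    import socket
    import time

    # Hypothetical endpoints exposed by syndicating providers; in practice a
    # broker would publish these along with available capacity and pricing.
    CANDIDATES = {
        "provider-a-us-west": ("198.51.100.10", 443),
        "provider-b-eu":      ("203.0.113.20", 443),
        "provider-c-apac":    ("192.0.2.30", 443),
    }

    def connect_time(host: str, port: int, timeout: float = 1.0) -> float:
        """Rough latency proxy: time to complete a TCP handshake (inf on failure)."""
        start = time.monotonic()
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return time.monotonic() - start
        except OSError:
            return float("inf")

    def place_vdi_session(probes: dict = CANDIDATES) -> str:
        """Pick the syndicated site with the lowest measured latency to the user."""
        return min(probes, key=lambda name: connect_time(*probes[name]))

    if __name__ == "__main__":
        print("Place VDI VM at:", place_vdi_session())

The probes would naturally run from the user's endpoint device, so the choice reflects the user's current location rather than the home data center's.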

To use an analogy with the airline business: if unexpected events take capacity out of one airline, it can re-book passengers with other airlines. Similar analogies exist with the power grid, network fabrics, silicon fabrication and many others. That's where we need to get to with utility computing.

And I mention this not only because so many additional opportunities for related products and services appear once one realizes syndication is the future, but also to urge companies of all types to push their products and road-maps to accommodate it.

Disclosure: no positions

The real Windows 7 threat to Linux (desktop)

There's been a recent spate of predictions that Windows 7 will kill off the Linux desktop, especially on netbooks. My favorite title so far is Windows kicks Linux to the curb, which provides a nice visual. Well, I'm not actually going to disagree, at least as far as traditional desktop Linux and netbooks go.

But first, a question to drive a point home. Have you ever used a version of OpenOffice Cloud? Neither have I, because it doesn't exist. The closest you can come is to use Ulteo, a virtualized Linux desktop service started by a founder of Mandriva -- then everything, including OpenOffice, is running on a VM in the cloud. If you want to go cloud, you have to use something like Google Docs. And isn't a netbook about the cloud?

Microsoft hasn't been sleeping while the cloud trend has been developing. A few months ago they announced they're extending Office to the browser, with lightweight versions of Word, Excel, PowerPoint and OneNote. And they also announced Azure, their vision of an OS for the cloud. Now, a Linux vendor could pray that a move from XP to Windows 7 on netbooks would mean a return to the days of high-priced BOM-based sales (and thus an opportunity for Linux to compete more favorably again). But the reality is, Microsoft will charge whatever the market will bear for netbook platforms, and move the rest of the business to chase the after-market revenues that cloud computing offers. That's something very hard to compete with for desktop Linux distros, which often struggle to stay afloat. This is why I wrote the article about how to make the Linux desktop profitable. The reality is that, IMHO, the Linux distros will never compete well with Windows on BOM-based sales.

So who is the one company that really gets cloud computing, has the infrastructure for it, and an interest in Linux? Google. My take is that Windows 7 is a threat to the classic Linux desktop distros, and that Google is the future of the Linux desktop, with Android pushing up the food-chain and many of Google's infrastructure bits pervading the desktop. Note to distros: belly up to Google...

Disclosure: no positions

Sunday, January 25, 2009

Useful Wi-Fi extension: add user-programmable field for access points

I was doing a little research into Wi-Fi positioning systems (like Skyhook), and it occurred to me that we're missing out on huge opportunities in the Wi-Fi space. Skyhook can tell you where you are with excellent resolution, based on a combination of an extensive database mapping access point addresses to physical locations and signal strength information (now even assisted by GPS and cellular radios).

But there's another way to look at this problem, which would give a more democratized and richer solution, albeit with attendant noise and error. Many access points (APs) are programmed to send out periodic "beacon" packets (unless programmed not to announce their networks). These packets are designed to let devices discover Wi-Fi networks before attaching to them, so no transmit capability is needed to receive them. Why not extend the IEEE 802.11 protocols so that a given AP can introduce a programmable payload of data into an extended field in the beacon (or other such) packet? GPS coordinates would be an obvious payload component.

While researching this, I came across interesting suggestions to use XML to encapsulate the data. Of course, packet length matters, so either payloads would have to be short, or perhaps carry only critical fields (GPS coordinates, for example) plus a reference URL to the rest online. There are many possible useful bits of info which could go in the XML payload, such as those in one related suggestion:
Perhaps they could also provide hints such as, "quiet area: devices should switch to vibrate mode here."
In any case, what I like about offering user-configurable data is that the packet format only needs to be extended once, and the XML payload can evolve independently. And perhaps as importantly, devices can glean information without ever having to transmit. Receive-only or lower-power devices could use this data, opening up a whole new world of opportunity! Maybe we'll see a pair of shoes which gathers a whole stream of information as we walk about cities, all self-powered by our walking...
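
To make the idea more concrete, here's a small Python sketch of what such a payload could look like: a tiny XML document with the critical fields inline and a URL pointing at the rest, wrapped in an 802.11 vendor-specific information element (element ID 221), which is the existing hook for carrying custom data in beacons. The XML schema and the URL are invented for illustration.

    import struct
    import xml.etree.ElementTree as ET

    def build_payload(lat: float, lon: float, more_url: str) -> bytes:
        """Build a tiny XML payload: critical fields inline, the rest via a URL."""
        root = ET.Element("ap")
        ET.SubElement(root, "gps", lat=f"{lat:.5f}", lon=f"{lon:.5f}")
        ET.SubElement(root, "more").text = more_url
        return ET.tostring(root)

    def wrap_in_vendor_ie(payload: bytes, oui: bytes = b"\x00\x00\x00") -> bytes:
        """Wrap the payload in an 802.11 vendor-specific information element
        (element ID 221). Beacons are small and the IE length field is one
        byte, so the body must stay under 255 bytes."""
        body = oui + payload
        if len(body) > 255:
            raise ValueError("payload too large for a single information element")
        return struct.pack("BB", 221, len(body)) + body

    if __name__ == "__main__":
        ie = wrap_in_vendor_ie(build_payload(37.42200, -122.08400,
                                             "http://example.org/ap/1234.xml"))
        print(len(ie), "bytes:", ie)

Keeping only the stable, critical fields in the beacon and fetching the rest from the referenced URL is one way to let the format grow without ever touching the over-the-air packet again.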

Disclosure: no positions

Friday, January 23, 2009

Trusted hypervisors to enable commercial HPC@home services

Volunteer distributed computing projects have been around for a while. You've probably heard of SETI@home, for example, a Berkeley project launched in 1999 to listen for radio signals from ET. Hey, maybe you even run the work manager as your screensaver. Another example is Stanford's protein-folding project, Folding@home. I call this whole class HPC@home (High Performance Computing at home).

These are very cool projects which can tap vast volunteer resources. But it's hard to scale this model to commercial projects with sensitive data and computations. Essentially, there is no guaranteed isolation between a user's general-purpose environment and the sensitive computation.

Imagine then, what could be done if trusted hypervisors were installed on a large base of home PCs. Assume all PCs would have a TPM, and a complete chain of trust from power-on to the hypervisor running. TPMs are working their way into popularity (IDC figures a 90% attachment rate by 2010), so that seems like a fair assumption moving forward. Bare-metal desktop hypervisors are just coming into the limelight, likely first on the corporate side, and then on to the consumer desktop, so I'm reading ahead a little there.

Given that an HPC@home VM could run on a desktop PC with guaranteed isolation, accessing only the CPU, GPU and network path to the HPC cloud, a whole new model of commercial HPC cloud services could emerge. For a healthy percentage of any given day, most PCs sit unused. They could even be put in a low-power sleep state with wake-on-LAN enabled, then powered on when needed, booting quickly into the hypervisor and HPC VM to consume an HPC work task.
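
The wake-up side of this is already standard. Here's a minimal Python sketch of the Wake-on-LAN magic packet, plus a hypothetical dispatch step in which the HPC service wakes an idle, subscribed PC and hands its attested hypervisor a work unit (the attestation and queueing parts are placeholders):

    import socket

    def wake_on_lan(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
        """Send a standard Wake-on-LAN 'magic packet': 6 bytes of 0xFF followed
        by the target MAC address repeated 16 times, sent as a UDP broadcast."""
        mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
        if len(mac_bytes) != 6:
            raise ValueError("expected a 48-bit MAC address")
        packet = b"\xff" * 6 + mac_bytes * 16
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
            sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
            sock.sendto(packet, (broadcast, port))

    def dispatch_task(node_mac: str, task_id: str) -> None:
        # Hypothetical flow: wake the idle PC, wait for its trusted hypervisor
        # to attest itself (TPM quote), then hand the isolated HPC VM a work unit.
        wake_on_lan(node_mac)
        print(f"queued task {task_id} for node {node_mac} once it attests")

    if __name__ == "__main__":
        dispatch_task("00:11:22:33:44:55", "render-frame-0042")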

What's interesting is that many of the pieces are coming together. NVIDIA's CUDA architecture enables using their later GPUs for general-purpose parallel processing, and they claim over 100 million CUDA-enabled GPUs have been sold. All the major GPU vendors have now announced support for OpenCL. A key requirement is that all the players make sure these resources are accessible one way or another to an HPC VM.

I could see new business models emerge for existing HPC services, for example render farms. Currently companies either have an in-house render farm or send jobs to an outsourced render farm service. But perhaps a future outsourced render-farm service would have little or no equipment of its own, and would instead farm out jobs to a cloud of subscribed home PCs with GPUs. Perhaps starting on the low end of the business (building comfort and proving out the model) and working upward would grease the skids. Given a flexible infrastructure, this kind of HPC outsourcing company could sell much more than rendering.

This has the added benefit not only of distributing hardware (with little capital outlay), but also of de-centralizing power consumption away from data centers. Perhaps work can be farmed out in such a way that no particular node receives enough data to be sensitive (to alleviate paranoia further). Anyway, I love the idea of getting paid monthly for some other company to use my PC when it's not in use. Can I charge a premium for "green cycles" if I power my PC with solar?

Sweet juicy opportunity for PC vendors: having an HPC outfit effectively rent cycles from a home (or work) PC enables some very interesting subsidy models, which will help you reach the next billion users as well as get through this little global recession we've got going.

Disclosure: no positions

Thursday, January 22, 2009

Server-side rendering to push gaming to the cloud

Generally when something gets pushed to "the cloud", it seems the user experience gets degraded or new limitations arise. Adobe Flash based games have been a step in the direction of the cloud, being at least delivered from the cloud but run locally. The quality of Flash games has gotten better, although I wouldn't equate them with high-intensity DirectX PC games. And in any case, you need a platform which supports Flash.

Then along comes this idea of server-side rendering, using the browser as the canvas. This opens a whole new dimension for cloud gaming, virtual environments and other interactive 3D usage, where the rendering computation cannot be done ahead of time. Check out, for example, this excellent article about OTOY. What makes server-side rendering very interesting is that it can unleash rendering capacity far greater than the CPU and GPU capabilities of the consumer device. So if you're enjoying a game on a smartphone, for example, the rendering might be parallelized across an array of servers and high-end graphics cards, giving resolution and effects quality not even remotely attainable on the smartphone itself (and certainly not within its power envelope).
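
At its core, the wire side of this is just a stream of freshly rendered, compressed frames pushed at the client as fast as bandwidth and latency allow. Here's a deliberately simplified Python sketch of that loop; the "rendering" and compression are stand-ins, and no real service's protocol (OTOY's included) is being described.

    import socket
    import struct
    import zlib

    def render_frame(frame_no: int, width: int = 640, height: int = 360) -> bytes:
        # Stand-in for the real work: on the server side this would be a frame
        # rendered across an array of GPUs, then encoded (e.g. as JPEG or video).
        return bytes((frame_no + x) % 256 for x in range(width * height))

    def stream_frames(conn: socket.socket, frames: int = 2) -> None:
        """Push length-prefixed, compressed frames; a thin client (such as a
        browser canvas) decodes and blits each one as it arrives."""
        for n in range(frames):
            payload = zlib.compress(render_frame(n))
            conn.sendall(struct.pack("!I", len(payload)) + payload)

    if __name__ == "__main__":
        # Demo over a local socket pair; in reality the far end is the device.
        server_end, client_end = socket.socketpair()
        stream_frames(server_end)
        server_end.close()
        size = struct.unpack("!I", client_end.recv(4))[0]
        print("first frame arrived:", size, "compressed bytes")

The interactivity budget all hangs on the round trip: input events go up, rendered frames come back, and the whole loop has to fit inside what a player perceives as instantaneous.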

Making this all work, of course, presupposes respectable network bandwidth and latency. But offering respectable network infrastructure pervasively should be at the top of any nation's list...

I believe this could really transform gaming and virtual world environments, truly pushing them to the cloud while making them accessible to many consumer devices. It could also re-shape game portals, allowing users to try and play any number of high-intensity games without any install or strict hardware requirements.

Disclosure: no positions

Wednesday, January 21, 2009

Future of personal navigation data & software is free & open

A while ago, a cool freebie Android navigation app called AndNav2 became available in alpha. Two things caught the eye immediately: first, it supports turn-by-turn directions (not yet available on the iPhone), and second, it now uses the OpenStreetMap (OSM) database. AndNav and the app store revolution can't be too pleasing to GPS vendors such as Garmin. But what's also interesting is that OSM "aims to do for maps what Wikipedia has done for encyclopaedias". This is crowd-sourcing brought to mapping!

Until recently, this was the commercial province of companies like NAVTEQ and TeleAtlas. If you look at Google Maps or MapQuest, for example, you'll see copyrights from those two companies at the bottom of the map. What really struck me as an endorsement and a signal of the future of OSM was that the French government recently granted OSM access to land registry data! This helps set a precedent that such data should be shared with the world community, and I would think it will nudge other governments to follow.

I can think of so many new kinds of data which could become available in a crowd-sourced database that a whole realm of startups and opportunities awaits...

Disclosure: no positions

Tuesday, January 20, 2009

Linux kernel needs more modularity for bare-metal hypervisor viability

I wrote previously about how Linux is notionally a bare-metal hypervisor.

But let me point out a sticking point w.r.t. the Linux kernel being considered a viable bare-metal hypervisor: it's HUGE! IMHO, a lot of the debate over hypervisor design types, bare-metal or otherwise, is largely academic. When you consider the overall footprint of the management layers, storage, authentication, the domain0 stack in Xen-style hypervisors, special guest drivers, etc., the hypervisor footprint itself is put in better perspective. However, it is true that modularity and smaller, more manageable components are generally better designed and more auditable for errors and security issues.

When the Linux kernel was initially designed, it was a monolithic chunk of code (i.e. drivers could not be modules). Some years later, after much debate, loadable kernel module support was added. That was a big step in the right direction for code quality and modularity -- few people could imagine Linux without it today. Thereafter came one of the more storied debates, on adding a pluggable scheduler -- one size does not fit all workloads. The net result is that it's now in recent Linux kernels, and again the consensus is that this was a very good thing.

So what's become of the size of the Linux kernel after such developments? Well, not that long ago I compiled an extremely stripped-down recent Linux kernel for a VM. Uncompressed, the size of the kernel binary still rhymes with megabyte. That tells me there's a lot of extra baggage which needs to be modularized out. Whether it's perception, marketing, or reality, I think we need to get a slim kernel-proper down to a few hundred kilobytes or less before people will consider it a bare-metal hypervisor.

Beyond consideration as a hypervisor, taking time to modularize and clean house is always a good thing, and like past modularization efforts always produces a better design and encourages more innovation. Smaller modular chunks mean people or groups of people can comprehend, analyze, audit, innovate and improve within a more manageable domain. This would go a long way towards allowing academic projects which need to be completed in one semester (nod to Andrew Tanenbaum). And with Linux development picking up steam, I think this is well needed in any case. With that, I offer the following project ideas. Sometimes people are looking for ways to help.
  1. Modularization Guru: oversee modularization of anything that can possibly be modularized, and drive changes into the mainline kernel. This gig is not for the faint of heart -- you'll need a flame-proof shield.
  2. Bloat Tracker: do minimal compiles over a history of Linux kernels and plot the size of the kernel proper with everything modularized. Keep track of new code introductions which add bloat creep. The job is to shame people into helping the modularization guru. (A rough sketch of such a tracker follows.)
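
Here's roughly what the bloat tracker could look like, in Python: check out successive release tags of a local kernel git tree, build the most stripped-down configuration, and record the size of the resulting uncompressed kernel image. The tree path, tags and job count are assumptions.

    import os
    import subprocess

    KERNEL_TREE = os.path.expanduser("~/src/linux")   # assumed local git checkout
    TAGS = ["v2.6.24", "v2.6.26", "v2.6.28"]          # release tags to compare

    def run(cmd):
        subprocess.check_call(cmd, cwd=KERNEL_TREE)

    def minimal_kernel_size(tag: str) -> int:
        """Build the slimmest config for a given tag and return vmlinux size."""
        run(["git", "checkout", tag])
        run(["make", "allnoconfig"])      # say no to everything that can be left out
        run(["make", "-j4", "vmlinux"])
        return os.path.getsize(os.path.join(KERNEL_TREE, "vmlinux"))

    if __name__ == "__main__":
        for tag in TAGS:
            print(tag, minimal_kernel_size(tag) // 1024, "KiB uncompressed")
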
And I'd note to the Linux crowd, that the hypervisor is becoming the new OS. Either Linux adapts, or becomes subjugated...

Disclosure: no positions

Sunday, January 18, 2009

Will Android be Moblin 2.0/3.0?

Intel created and is a large contributor to Moblin, a "Linux-based software platform for building visually rich, dynamic, and connected applications that run on devices based on Intel® Atom™ processor technology", meant to enhance sales of MIDs and netbooks. Moblin 1.0, based on Ubuntu Linux, was announced in July 2007, but didn't get much traction. Around July 2008, it was reported that Moblin 2.0 was switching to a Fedora base and would be announced at the Fall 2008 IDF. That IDF came and went without a related announcement. In October, it was reported that Moblin 2.0 had been pushed to the first half of 2009.

Aside from the timeline, it's interesting to look at Moblin from the perspective of what it seeks to achieve, which is essentially an Open Source application stack and GUI optimized for mobile devices, plus an associated ecosystem of software and hardware vendors to help drive adoption. Those goals are quite similar in nature to what Google's Android platform aims for, and Android is by all reports rapidly picking up steam. There's already an Android Market to deliver 3rd-party apps, and with nearly 50 OHA members backing Android, 2009 is likely to be replete with many new Android devices. Note that Intel is one of the OHA members, and there's now an x86 Android port which has been shown on a netbook.

By contrast, a Google News search of "moblin 2.0" yields almost nothing. And using a favorite quick check of mine (the counts in the auto-complete drop-down of a Google search), Moblin had 740K counts (only ~2% of Android's 31.4M)! Given how much momentum is building behind Android and the series of Moblin delays, it seems a real possibility that Moblin 2.0 will be eclipsed by Android. One of the key benefits of Moblin was that it would provide a more consistent environment for mobile devices, because there's otherwise so much fragmentation that prevents gaining a critical mass. But I'm now wondering if Android has essentially provided the means to satisfy those goals. Perhaps Intel can graft some interesting bits from Moblin onto Android, and call it a win. Otherwise, Moblin may instead create the very confusion and fragmentation it originally intended to mitigate.

Disclosure: no positions

Friday, January 16, 2009

Self-encrypting drives to speed up PC boot times?

In the last year, drive vendors such as Seagate, Hitachi and Fujitsu announced self-encrypting drives. The general scheme is that you type in a password during the BIOS boot-up phase, and the password is authenticated by the drive. The drive then decrypts disk reads and encrypts disk writes at native speed, all internal to the drive. So to Windows, Linux or other software, the drive appears as a normal unencrypted drive, as all such software is booted after unlocking the drive.

It occurs to me that if self-encryption becomes a common drive feature, perhaps one of the banes of a quick boot-up (anti-virus scanning) could be eliminated during some or all of the boot phase. TPMs are also working their way into popularity (IDC figures a 90% attachment rate by 2010), which would offer a more complete chain of trust to complement self-encrypting drives. If it could be trusted that no modifications have occurred to the drive since the last boot, couldn't a lot of scanning be eliminated, with a focus only on newly added content?
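
As a sketch of what "scan only what's new" could mean in practice, here's a small Python example that keeps a manifest of file digests and returns only the files added or changed since the last run. The premise is that the sealed drive and TPM-measured boot vouch that the manifest itself hasn't been tampered with offline; the manifest location is an assumption.

    import hashlib
    import json
    import os

    MANIFEST = os.path.expanduser("~/.avmanifest.json")   # assumed location

    def file_digest(path: str) -> str:
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    def changed_files(root: str) -> list:
        """Return only files added or modified since the last trusted boot;
        only these would need a full anti-virus scan."""
        try:
            with open(MANIFEST) as f:
                known = json.load(f)
        except (FileNotFoundError, ValueError):
            known = {}
        todo, current = [], {}
        for dirpath, _, names in os.walk(root):
            for name in names:
                path = os.path.join(dirpath, name)
                try:
                    digest = file_digest(path)
                except OSError:
                    continue              # unreadable files are skipped in this sketch
                current[path] = digest
                if known.get(path) != digest:
                    todo.append(path)
        with open(MANIFEST, "w") as f:
            json.dump(current, f)
        return todo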

If well-coordinated with AV software, I wonder if this will open the door to snappier boot times on Windows platforms?

Disclosure: no positions

Tuesday, January 13, 2009

The Linux desktop could be profitable soon

I've been involved in Linux since the early 1990s, worked at one of the mainline Linux distros some years ago, and am the author of two Open Source projects. Over the years, the Linux environment has made great functional progress, yet mainline Linux vendors have struggled continuously to create a profitable desktop business around it. Many have tried; some have simply given up on the desktop.

Given much of the software in the Linux environment is free, it seems a natural corollary that a Linux desktop would not be a profitable proposition. But I contend that not only can Linux be a profitable business, but it will be. Making money selling a shrink-wrapped Linux OS, or ISV apps, I'd agree is a very tough gig. But a number of factors are aligning to make an entirely different kind of Linux desktop business a very viable proposition. The equation:
profitability = build-ecosystem + app-store + recurring-revenue

By "build ecosystem" I mean proliferate a given Linux distro as widely as possible. Give it away if needed. Do whatever it takes to get it on as many platforms as possible. It's seemingly antithetical, but one can not be concerned with making profit here, IMHO. Spend money to make this happen.

Build out a low-friction (as in brain-dead easy to use) app-store, which users can click-and-buy from to get exciting new apps. Put a lot of energy into this app-store -- it's the foundation of why your distro will be profitable. Build gravity and critical mass. Make developers eager to get their apps onto your app-store. Having a percentage of free apps on the store is a good thing -- build critical mass like your life depends on it.

Recurring revenue. Linux vendors need to get out of the mentality of selling Linux. Rather, Linux can be a platform and a vehicle for generating after-market and recurring revenue. Some concrete examples follow.

Does this sound a lot like Google's Android model? Yep. And as far as reports go thus far, they don't even seem concerned with making money on the Android Market itself. Android is such a great vehicle for mobile search, location-based services, content delivery, etc. -- apparently that's plenty. And by the way, Android is founded on a Linux kernel, and there's a lot of buzz about it pushing up into netbooks and devices higher up the food-chain. I would think Android will end up being a highly profitable proposition for Google, just not in the conventional Linux distro business type of way. That's the mentality I believe will make a Linux desktop profitable.

As promised, here are some apps that would support a "back end" loaded Linux desktop business model:
  • Backup to the cloud (user backup is a thing of the past)
  • Pay-to-enable hardware features widget (take cut of enablement)
  • Multi-media search redirects and sales (Boxee, Roku, ...)
  • Personal navigation / mapping software & database (Garmin, ...)
  • Smartphone companion software (e.g. Celio)
  • Pre-installed Windows/Office VM
  • Voice recognition software (plugs into a host of other potential services)
  • Location based services redirects
  • Android VM (revenue share with Google?)
  • 3G service enablement, pay-as-you-go, or multi-vendor trials (kick-back)
  • Other apps on store (take % for commercial apps)
I can think of a few more money makers, but hey, not all my apps are free. :)

Disclosure: no positions

Thursday, January 8, 2009

Time for pay-to-enable-features computer devices

There is much chatter about the hyper-commoditization of computer devices causing a "race to the bottom". Additionally, the confluence of the smartphone and mobile PC markets not only has the potential to disrupt either market, but also adds new hardware requirements to the traditional mobile PC market which are influenced by smartphones. One doesn't have to look far to see new netbook/notebook devices announced which include 3G, GPS and touch screens. Intel's Classmate design even has an accelerometer, which could open up some really interesting usage.

But then the classic question is: if high-volume, low-price devices need to be ever more richly equipped with hardware features, how do device OEMs and chip vendors make money, especially given the disruption to higher-margin devices which used to sit higher on the food chain?

I believe the time has come for manufacturers to usher in a new era of pay-to-enable-features devices. With this comes a necessary shift in business model, from squeezing profits out of the initial sale to collecting incremental and/or recurring revenue after the sale has occurred. For example, a netbook device which has a superset of the functionality required to sell into a low-price, high-volume market could be sold near or even below cost. If consumers would like to enable 3G or the accelerometer, they could then pay an additional fee to enable those features. Want to enable your processor to go faster, or enable the latest WiFi? Click to enable and pay. Merely having more features than are initially offered gives a teaser vehicle to let consumers try-before-they-buy. Once they get hooked, the resistance to purchase goes down significantly.
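
Here's a minimal Python sketch of what the enablement handshake could look like: the store issues a token tied to a specific device and feature, and the device verifies it before flipping the enable bit. The device ID, secret and HMAC scheme are illustrative only; a real design would likely use asymmetric signatures so the vendor's store never holds per-device secrets.

    import hashlib
    import hmac

    # Hypothetical pay-to-enable flow: the device ships with a per-unit secret
    # provisioned at manufacture; after purchase, the store returns a token
    # authorizing a specific feature on that specific device.
    DEVICE_ID = "NB-0001-2345"
    DEVICE_SECRET = b"per-device-secret-provisioned-at-factory"

    def issue_token(device_id: str, feature: str, secret: bytes = DEVICE_SECRET) -> str:
        """Store side: sign (device, feature) so the token can't be replayed elsewhere."""
        msg = f"{device_id}:{feature}".encode()
        return hmac.new(secret, msg, hashlib.sha256).hexdigest()

    def enable_feature(feature: str, token: str) -> bool:
        """Device side: verify the token before flipping the hardware enable bit."""
        expected = issue_token(DEVICE_ID, feature)
        if hmac.compare_digest(expected, token):
            print(f"enabling {feature}")   # real firmware would poke the hardware here
            return True
        return False

    if __name__ == "__main__":
        t = issue_token(DEVICE_ID, "accelerometer")
        assert enable_feature("accelerometer", t)
        assert not enable_feature("3g-modem", t)       # token doesn't transfer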

This strategy dovetails nicely with the proliferation of online app-stores. Take for example a more open one like the Android Market. Games tend to enjoy popularity by viral marketing, as do many other categories of apps. A popular game might request use of the accelerometer. So too could a personal navigation app. What a great driver to convince a consumer to purchase an incremental hardware feature that they didn't previously need!

How such incremental revenues are split between the various component, system manufacturer, software and web vendors is up for grabs. I can't imagine this would be an easy transition. But this is an area which a company like Intel could exploit and take leadership of. They tend to enjoy a healthy silicon process advantage that perhaps makes including extra functionality a real possibility, where other vendors can't necessarily absorb the additional silicon real estate costs.

Could an app store essentially be subsidizing your future netbook purchase? Will there be a race to be among the key apps that drive incremental purchases of certain hardware features, because those app vendors receive a kick-back? If the answer to these questions is yes, look for the kings of the hill in process technology (Intel) and in open mobile platforms and app stores (Google) to benefit.

Disclosure: no positions

Thursday, January 1, 2009

The ultimate iPhone app: try and buy another carrier

Here's a free New Year's idea for Apple. Offer one or more apps on the app store that will let a user try out another carrier for a day or two, either for free or for some nominal price. If they like the service, they can then pay a "switching fee" and sign up for good with the new carrier. I have a feeling carriers like Sprint would offer such an app for free.

The benefit to the user is that they can see if the carrier has good 3G and voice coverage in a particular geographic area. The benefit to Apple, and to AT&T in America for example, is that they may be able to avoid getting sued. In many people's eyes, they over-promised and under-delivered. Prevent an uprising and bank some pro-rated switching fees before it's too late.

Disclosure: no positions

Killer product combo for 2009: smartphone + Redfly + RDP

The 'net is abuzz with 2009 predictions for the best products. Advancing the challenge a little more, I'm proposing a killer product combo instead.
killer-product-combo = (smartphone + Redfly + RDP)
Smartphones

The state of the art includes WiFi, 3G (or better), 3D graphics, multimedia processing, increasingly capable ARM-based processors and, thanks to the attractiveness of their sheer market size, a very rich set of applications.

Redfly

What smartphones don't have, for the moments when you need to get some real work done or enjoy multimedia at a more pleasing size, are bigger screens and keyboards. The Celio Redfly provides just that, in a $200, extremely thin and light "smartphone companion" package. And an added benefit of carrying the extra (yet respectably light) weight is that the Redfly acts as an external battery, powering your smartphone via a USB port.

RDP

If you can connect to the 'net, you can attach to a remote desktop, giving access to all the facilities your remote desktop session offers without needing those resources locally. A remote desktop can be a physical machine or even a virtual machine (VM).

Killer combo

When you're mobile and want to access a remote work or personal desktop, you can use your smartphone with RDP. But a smartphone offers only a small window into the screen of your desktop, so you'll need to pan and/or zoom a lot. That doesn't cut it for getting real work done. That's where the Redfly comes in. Attach it, either through Bluetooth or USB, and you're in business. Now, RDP doesn't work well over some 3G networks. But WiFi is quite capable, making it important that all smartphones incorporate WiFi going forward.

The bonus of the Redfly is that it acts only as an external display and keyboard for your smartphone -- you don't sync your smartphone with it. So, if it's lost or stolen, no data is lost with it. And with RDP, files don't have to ever be leaked out of the desktop to your smartphone either. This is obviously valuable to consumers. But to the enterprise, this is critical. Due to an increasing regulatory buzzword soup (SOX, PCI, HIPAA, GLB, ...), this is even mandatory.

The great thing about this combo is its flexibility. You can take only the smartphone with you, and use it as a smartphone. Or even RDP into a remote desktop and do some lightweight work. Or you can carry both the smartphone and the Redfly, and use them together to view videos or to compose longer emails and documents. If you don't bring your Redfly, you can still cope. But to an enterprise, the very limited nature of the Redfly makes it wonderful for regulatory compliance. The RDP piece isn't necessary for some consumers. But it's a real deal sweetener for many power users.

Sweet juicy product idea

How about a larger, iPhone-like device which serves no purpose (and thus can be a lot "dumber" and cheaper to design and build) other than to act as an input/output companion for smartphones? If designed right, nobody will ever know the brains are really in your smartphone stowed away in your bag. Turn it on, and it automatically connects via Bluetooth. Bonus points if you integrate a web-cam to facilitate video conferencing (at least for an up-sell) or allow it to sit at a tilt angle on a table, hands-free. Make the smartphone bits easily downloadable through an app store, so others can quickly borrow a consumer's device. It would be nice to have a social ice-breaker to replace smoking. Plus, that makes for good viral marketing...

Don't forget to also make a mounting mechanism for the car dashboard. Your smartphone likely already has GPS and the brains to act as a navigation device. Coupled with the sleek form factor companion display and bluetooth wireless connectivity, it'd make a great PND. Garmin et al, you listening?

Disclosure: no positions