Typically when we think of computing, we think of two components that drive the computing infrastructure: hardware and software. Hardware comprises the physical components that make up the computing system, while software comprises the desktop and backend applications that serve our particular needs. However, there is a new component being built into our computing infrastructure: Infoware. Infoware is a computing component that deals with (and analyzes) the large quantities of information and the connections that exist between data, and provides a front end that makes it easy for the user to interact with and understand vast amounts of data. Typically this Infoware works over the infrastructure we call the Internet, and most people these days refer to Infoware as "web applications." Amazon.com is a popular example of Infoware used by a retailer. Infoware will have profound effects on society, as businesses that were in the physical realm (such as retailers) can now use this technology to move into the virtual realm (becoming e-retailers). In addition, Infoware changes itself dynamically as the information it uses changes (think of the changing reviews for a product on Amazon.com, for instance).
Now let us consider Infoware in the context of the open source community. It would appear that open source is leading the way in providing the technology and infrastructure that allows Infoware to appear (i.e. the Internet infrastructure). BIND (Berkeley Internet Name Domain) handles a great deal of the DNS database (the system which translates domain names to IP addresses). In addition, Apache, an open source web server, is the most popular web server used to host web applications. Beyond the software that enables Infoware to run on the web, open source scripting languages (i.e. Perl, Python, and Tcl) provide much of the logic that runs these Infoware-based applications, and open standards such as HTML provide the protocols that drive the web. While it is true that Microsoft was working on its own version of the web, called the Microsoft Network, it failed because the barriers to entry were too high in comparison to the set of open technologies and standards that existed at the time. For instance, one had to pay Microsoft to be part of its network, be locked into Microsoft's proprietary tools, and seek Microsoft's approval before joining the network. To content creators, the set of open standards seemed the obvious choice in comparison with Microsoft's limited and controlled network.
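To make the DNS example concrete, here is a minimal Python sketch of the kind of name-to-address lookup that BIND servers answer countless times a day; the hostname is just a placeholder:

```python
import socket

# Resolve a hostname to an IP address, the same kind of query that
# BIND servers answer across the DNS. The hostname is a placeholder.
hostname = "www.example.com"
ip_address = socket.gethostbyname(hostname)
print(f"{hostname} resolves to {ip_address}")
```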
It's also important to note that developing content for Infoware is much different from developing traditional software. For one thing, Infoware deals with data and the relations between data (via hyperlinks), while traditional software typically deals less with data and more with data structures, logic, and user interfaces that can be readily installed and used on a computer. This means that developing Infoware doesn't require programmers in the traditional sense of the word, but rather content creators who understand the content (which is to say, the data and information) they are weaving into Infoware. This is why technologies like ActiveX (which is designed to be used by developers) have for the most part failed in the Infoware arena. Instead, flexible scripting languages act like duct tape, "gluing" various disparate information sources together in understandable ways. Much as the desktop revolution and the need for desktop productivity software gave rise to Microsoft's market dominance while IBM failed to realize the significance of that new wave, Microsoft stands a good chance of suffering a similar fate if it does not understand this new paradigm in computing.
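As an illustration of this "duct tape" role, here is a small Python sketch that glues two hypothetical JSON data sources into one combined view; the URLs and field names are invented for illustration, not real services:

```python
import json
import urllib.request

# "Glue" two hypothetical information sources together: fetch product
# records from one JSON endpoint and reviews from another, then join
# them by product id. Both URLs and field names are placeholders.
def fetch_json(url):
    with urllib.request.urlopen(url) as response:
        return json.load(response)

products = fetch_json("https://example.com/products.json")
reviews = fetch_json("https://example.com/reviews.json")

# Index reviews by product id, then attach them to each product record.
reviews_by_product = {}
for review in reviews:
    reviews_by_product.setdefault(review["product_id"], []).append(review)

for product in products:
    product["reviews"] = reviews_by_product.get(product["id"], [])
    print(product["name"], "has", len(product["reviews"]), "reviews")
```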
Web 2.0 has been a relatively hot topic over the past couple of years, but what exactly is Web 2.0? This term, like a lot of technical terms, has been associated with so many different meanings that it can often be difficult to come up with a single definition of what Web 2.0 truly is. In any event, we can at least begin to define when Web 2.0 started: shortly after the dot-com bust in the fall of 2001 (when many of the overvalued Web 1.0 sites that originally made up the Internet boom went under), when the dust settled and the websites that contained real value were all that remained.
How most people define Web 2.0 is to contrast it with the principles that drove Web 1.0. Web 1.0 companies sold products that harnessed the power of the web. For instance, Netscape sold a web browser (a desktop application that allowed people to surf the web) and planned on selling server software and other products to dominate the marketplace. On the other hand, a company such as Google, a Web 2.0 company, instead provided a service, from which it made money through advertising revenue. Furthermore, the power of Google's service wasn't the application so much as it was the management of the vast amounts of data it was continually indexing.
Another difference between Web 1.0 and Web 2.0 companies is that Web 1.0 companies based their businesses on a model in which web users were merely consumers of a read-only medium called the Internet. In the Web 2.0 model, sites are designed not only for consumption of content published by others, but also for production of content by web users themselves. In fact, the aggregate of a large number of small sites (such as weblogs) can wield as much power and content as the larger mainstream content providers. Web 2.0 companies designed their services around this fact, and as a result reached out to the millions of web pages and content providers on the World Wide Web.
An important characteristic of Web 2.0, touched on in the previous paragraph, is the power of the crowd. Since there are millions and millions of web users, many sharing their own content (blogs, pictures, movies, etc.), a collective intelligence is beginning to form on the Internet. Since the web is nothing more than a mass of data related to one another through links, the structure of the web affords a collective intelligence. This collective intelligence will continue to get "smarter" as more people join the community, and the companies that realize this can harness the collective intelligence to provide data that has a wide variety of applications and uses. A good example of this is Google's PageRank algorithm, which uses the collective intelligence embedded in the web's link structure to fine-tune the ranking of search results.
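To see how link structure can be turned into a ranking, here is a toy Python sketch of the PageRank idea as a simple power iteration; the damping factor of 0.85 is the commonly cited value, and the four-page web below is purely hypothetical:

```python
# A toy version of the PageRank idea: links act as "votes," and a page's
# rank is spread evenly across the pages it links to.
def pagerank(links, damping=0.85, iterations=50):
    pages = list(links)
    rank = {page: 1.0 / len(pages) for page in pages}
    for _ in range(iterations):
        new_rank = {page: (1.0 - damping) / len(pages) for page in pages}
        for page, outlinks in links.items():
            if outlinks:
                share = rank[page] / len(outlinks)
                for target in outlinks:
                    new_rank[target] += damping * share
            else:
                # A dangling page spreads its rank across every page.
                for target in pages:
                    new_rank[target] += damping * rank[page] / len(pages)
        rank = new_rank
    return rank

# A tiny invented web of four pages: everyone links to C, so C ranks high.
web = {"A": ["B", "C"], "B": ["C"], "C": ["A"], "D": ["C"]}
for page, score in sorted(pagerank(web).items(), key=lambda kv: -kv[1]):
    print(page, round(score, 3))
```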
We now live in an age where companies such as Amazon can track our buying habits in ways that traditional brick-and-mortar stores could only dream of. Furthermore, companies are producing and aggregating large sets of data that may represent a significant investment on their part (for instance, Amazon's database of products). An important question in the Web 2.0 age is: who owns this data? Several companies want to own and control the data they have created in some way (for instance, Navteq's street maps). Meanwhile, a movement similar to the open source movement for software has started for data in the Web 2.0 era, epitomized by Creative Commons.
Since in the Web 2.0 model we are delivering a service rather than a product (or a piece of software, for that matter), we must rethink the process of designing and building things. In the Web 1.0 and desktop metaphor, we care about developing reliable, robust applications that are released at predefined intervals and include the set of features marketing believes will sell. In the Web 2.0 model, since we are delivering a service rather than a product, what counts is ensuring that a consistent (and optimal) level of operational excellence is delivered. We think about things such as uptime and latency, metrics often not thought of in the traditional software development space. Furthermore, the Internet affords us the ability to constantly update the service at minimal to no cost. In addition, thanks to the data that can now be collected and sent back to service providers, providers now have the ability to measure users' needs and their use of services, meaning that web services can be tailored more closely to users' needs.
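As a rough illustration of these operational metrics, the following Python sketch computes uptime and a crude 95th-percentile latency from log values invented purely for illustration:

```python
# Hypothetical request logs, invented purely for illustration.
latencies_ms = sorted([12, 15, 11, 240, 13, 14, 18, 16, 12, 19])
health_checks = [True, True, True, False, True, True, True, True]

# Uptime as the fraction of health checks that passed.
uptime = sum(health_checks) / len(health_checks)

# A crude 95th-percentile latency: the value below which roughly 95%
# of observed request latencies fall.
index = min(int(0.95 * len(latencies_ms)), len(latencies_ms) - 1)
p95_latency = latencies_ms[index]

print(f"uptime: {uptime:.1%}, p95 latency: {p95_latency} ms")
```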
Lightweight, low-cost access to data from a variety of sources, coupled with interfaces and mechanisms that make it easy to snap services into a particular website, is another cornerstone of Web 2.0. Many web users are now mixing and mashing data from a variety of different sources to provide a unique glimpse into an area of interest. Technologies like RSS, AJAX, and others are lightweight interfaces that give content creators the ability to easily compose these glimpses and to change data sources and services on a whim. This new method of producing web content means not only that more web content can be created, but that the content can change rapidly with the creator's needs or with changes in the underlying data sources. In addition, since the content or service is delivered over the web, it can reach an audience of devices running on many different platforms (Windows, Mac, Linux) and form factors (handheld, desktop, laptop, etc.) in ways that traditional applications never could.
All of this occurs in Web 2.0, with minimal co-ordination on anyone's part.
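As a concrete sketch of such a mashup, the following Python snippet pulls headlines out of two RSS feeds and merges them into a single view; the feed URLs are placeholders, and the snippet assumes standard RSS 2.0 structure:

```python
import urllib.request
import xml.etree.ElementTree as ET

# Merge the headlines from two RSS feeds into a single view. The feed
# URLs are placeholders; in RSS 2.0 each entry is an <item> element
# with a <title> child.
def fetch_titles(feed_url):
    with urllib.request.urlopen(feed_url) as response:
        tree = ET.parse(response)
    return [item.findtext("title") for item in tree.iter("item")]

feeds = ["https://example.com/news.rss", "https://example.org/blog.rss"]
headlines = []
for url in feeds:
    headlines.extend(fetch_titles(url))
print("\n".join(headlines))
```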
The user experience of Web 2.0 is much different from that of Web 1.0 (its predecessor). In Web 2.0, rich interactivity is a hallmark, reached through open technologies such as AJAX (which allows content on a page to be updated seamlessly) as well as some closed technologies such as Adobe's Flash platform. In fact, many traditional desktop applications are being remade for the web today and are adding affordances that only the web can offer (such as the ability to have a similar user experience on a variety of different devices).
Now that we have been thinking about applications delivered via the web, let us consider the Internet Operating System: the concept that the applications you use and the services being provided run online, "through the cloud." A fundamental paradigm shift occurs here, in that the main processing and value provided by the Internet Operating System come not from the particular device you are using (handheld, laptop, or otherwise) but rather from the servers that handle your requests. An increasing trend in this area has been the emergence of providers who supply not only the computing hardware (i.e. servers and other physical computing resources) to content and application providers, but also an interface that makes managing applications on this infrastructure as easy as possible for solution providers (the equivalent of an OS's API). Providers of these services include Amazon Web Services, Microsoft's Azure, and Google App Engine. A potential risk with these services is vendor lock-in for the solution providers who rely on such Infrastructure as a Service providers.
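As a small illustration of how thin such an interface can be, here is a sketch that stores and retrieves an object through Amazon Web Services' S3 storage service via the boto3 library; it assumes AWS credentials are already configured, and the bucket name is a hypothetical placeholder:

```python
import boto3

# Store and retrieve an object through Amazon S3. Assumes AWS
# credentials are already configured in the environment; the bucket
# name is a hypothetical placeholder.
s3 = boto3.client("s3")
s3.put_object(Bucket="example-bucket", Key="hello.txt", Body=b"Hello, cloud")
response = s3.get_object(Bucket="example-bucket", Key="hello.txt")
print(response["Body"].read().decode())
```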
Just as traditional operating systems provided a set of core functionality that made application developers' lives easier (such as abstracting away hardware details), the Internet Operating System provided by such companies will offer similar services for the web application developers who build on it, albeit in a manner unique to the Internet. For instance, since the domain many web applications act on is data, these Internet OS providers will offer services that simplify the otherwise complex task of search. In addition, concepts from the PC world, like files, are extended to the Internet world to include web pages, rich media, and the underlying services that must be provided for this type of content hosting (such as access control). Other services familiar to most web users that are part of the Internet platform include payment processing, advertising, location awareness, social networking, image and speech recognition, and data sources (for instance, governmental ones), to name but a few.
I mentioned that the Internet Operating System consists primarily of the backend infrastructure of the Internet, that is, the servers, the proprietary software that provides services for the solution provider, and the associated hardware. However, the front end of the Internet Operating System, that is, the web browsers and applications running on the devices that connect to it, is important as well. In fact, the web browser is only one type of front end being used in the Internet Operating System strategy; another is the Software Plus Services platform, where the front end provides a set of critical facilities and online services complement them. In any event, there is no doubt that those who control the front-end experience will seek tight integration with their backend to optimize the user's experience.
So what's missing in an Internet Operating System? A lot of the things we take for granted in a traditional operating system cannot necessarily be found in an Internet Operating System. Reliability and scalability are a good example: while we take for granted that our desktop operating system will perform well the majority of the time, the same cannot necessarily be said of the Internet Operating System. For one thing, if demand for the services provided by an Internet Operating System exceeds the supply (i.e. the servers are overloaded), the experience may degrade or fail entirely. Another thing we take for granted in a traditional operating system is the ability to access our data at all times, something that is more difficult to achieve in an Internet Operating System model.
Finally, there are a bunch of providers that offer Internet Operating Systems/Platforms. Let us compare and contrast each of their strengths and weaknesses to get a sense of the Internet Operating System landscape.
Amazon, Google, and Microsoft offer strong Infrastructure as a Service offerings (storage, computation, and hosted SaaS apps), while Apple and Facebook have weak offerings in this arena. Amazon, Google, Microsoft, Facebook, and Apple each specialize and have strengths in certain kinds of media access (for instance, Facebook has a strong presence in photo sharing). For the most part, all of the providers have some form of monetization platform, be it advertising or payment processing. Google and Microsoft have a strong presence in location-aware services, while the others are ages behind. Microsoft, Google, and Apple have respectable calendaring and scheduling services. Microsoft, Apple, Google, and Facebook have the potential to build rich and deep social graph services. Apple, Google, and Microsoft have strong positions in the communication services arena (in things such as e-mail and chat). Microsoft and Google are well positioned in sensor management services (such as image and speech recognition). Google and Apple have a strong presence in both mobile devices and the operating systems that support them. Finally, Microsoft, Apple, and Google have a strong presence in the web browsing arena.
So what will happen to all of these infrastructure providers? There are really two options. First, one of these providers could build up the competencies it lacks and become the "be-all, end-all" provider offering every service application developers could possibly need, and hence, much as Microsoft did in the personal computer industry, become the de facto Operating System. A second, and in my view more likely, outcome is that each of these companies will interoperate with the others in the competencies where it is strong, as it is very difficult to be a jack of all trades given the large number of services and competencies needed to control this platform. An interesting note is that the companies not mentioned in my comparison (for instance, Twitter and VMware), when combined, are strongly competent in all the areas listed.