This is “The Hardware Cloud: Utility Computing and Its Cousins”, section 10.9 from the book Getting the Most Out of Information Systems: A Manager’s Guide (v. 1.0).
While SaaS provides the software and hardware to replace an internal information system, sometimes a firm develops its own custom software but wants to pay someone else to run it. That’s where hardware clouds, utility computing, and related technologies come in. In this model, a firm replaces computing hardware that it might otherwise run on-site with a service provided by a third party online. While the term utility computing was fashionable a few years back (and old-timers claim it shares a lineage with terms like hosted computing or even time sharing), most in the industry now refer to this as an aspect of cloud computing, often called hardware clouds: a model in which a service provider makes computing resources such as hardware and storage, along with infrastructure management, available to a customer on an as-needed basis, typically charging for specific resource usage rather than a flat rate. Computing hardware used in this scenario exists “in the cloud,” meaning somewhere on the Internet. The costs of systems operated in this manner look more like a utility bill: you pay only for the amount of processing, storage, and telecommunications used. Tech research firm Gartner has estimated that 80 percent of corporate tech spending goes toward data center maintenance (J. Rayport, “Cloud Computing Is No Pipe Dream,” BusinessWeek, December 9, 2008). Hardware-focused cloud computing provides a way for firms to chip away at these costs.
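The utility-bill idea above can be made concrete with a small sketch. The resource names and rates below are illustrative assumptions, not any vendor’s actual price list:

```python
# Illustrative utility-style billing: charge only for resources consumed.
# All rates here are hypothetical, not any vendor's actual prices.

RATES = {
    "cpu_hours": 0.10,          # dollars per CPU-hour of processing
    "storage_gb_months": 0.25,  # dollars per gigabyte stored per month
    "transfer_gb": 0.15,        # dollars per gigabyte transferred
}

def utility_bill(usage):
    """Total a month's charges from metered usage, utility style."""
    return sum(RATES[resource] * amount for resource, amount in usage.items())

# A month with modest processing, storage, and bandwidth needs:
usage = {"cpu_hours": 500, "storage_gb_months": 200, "transfer_gb": 100}
print(round(utility_bill(usage), 2))  # 115.0
```

A firm that uses nothing in a given month pays nothing, which is exactly what distinguishes this model from owning idle servers.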
Major players are spending billions building out huge data centers to take all kinds of computing out of the corporate data center and place it “in the cloud.” Efforts include Sun’s Network.com grid, IBM’s Cloud Labs, Amazon’s EC2 (Elastic Compute Cloud), Google’s App Engine, Microsoft’s Azure, and Salesforce.com’s Force.com. While cloud vendors typically host your software on their systems, many of these vendors also offer additional tools to help in creating and hosting apps in the cloud. Salesforce.com’s Force.com includes not only a hardware cloud but also several cloud-supporting tools, including a programming environment (IDE) for writing applications specifically tailored for Web-based delivery. Google’s App Engine offers developers a database product called Bigtable, while Amazon offers one called SimpleDB. Traditional software firms like Oracle are also making their products available to developers through various cloud initiatives.
Still other cloud computing efforts focus on providing a virtual replacement for operational hardware like storage and backup solutions. These include cloud-based backup efforts like EMC’s Mozy and corporate storage services like Amazon’s Simple Storage Service (S3). Even efforts like Apple’s MobileMe and Microsoft’s Live Mesh that sync user data across devices (phone, multiple desktops) are considered part of the cloud craze. The common theme in all of this is leveraging computing delivered over the Internet to satisfy the computing needs of both users and organizations.
Large, established organizations as well as small firms and startups are embracing the cloud. The examples below illustrate the wide range of these efforts.
Journalists refer to the New York Times as “The Old Gray Lady,” but it turns out that the venerable paper is a cloud-pioneering whippersnapper. When the Times decided to make roughly one hundred fifty years of newspaper archives (over fifteen million articles) available over the Internet, it realized that the process of converting scans into searchable PDFs would require more computing power than the firm had available (J. Rayport, “Cloud Computing Is No Pipe Dream,” BusinessWeek, December 9, 2008). To solve the challenge, a Times IT staffer simply broke out a credit card and signed up for Amazon’s EC2 cloud computing and S3 cloud storage services. The Times then started uploading terabytes of information to Amazon, along with a chunk of code to execute the conversion. While anyone can sign up for services online without speaking to a rep, someone from Amazon eventually contacted the Times to check in after noticing the massive volume of data coming into its systems. Using one hundred of Amazon’s Linux servers, the Times job took just twenty-four hours to complete. A coding error in the initial batch actually forced the paper to rerun the job, but even the blunder was cheap: just two hundred forty dollars in extra processing costs. Says a member of the Times IT group: “It would have taken a month at our facilities, since we only had a few spare PCs.…It was cheap experimentation, and the learning curve isn’t steep” (G. Gruman, “Early Experiments in Cloud Computing,” InfoWorld, April 7, 2008).
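The numbers in the Times story can be sanity-checked with back-of-envelope arithmetic. The roughly ten-cents-per-instance-hour figure below is an assumption about Amazon’s early small-instance pricing, not a rate quoted in the text:

```python
# Back-of-envelope check on the Times conversion job. The per-hour
# rate is an assumed figure for Amazon's early small EC2 instances,
# not a number reported in the account above.
servers = 100      # Linux servers rented from EC2
hours = 24         # wall-clock time for the conversion run
rate_cents = 10    # assumed ~$0.10 per instance-hour

run_cost = servers * hours * rate_cents / 100  # total dollars
print(run_cost)  # 240.0
```

Under that assumed rate, one full run of the job works out to about the same $240 the paper reportedly paid for its extra rerun, which is what made the mistake so inexpensive.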
NASDAQ also uses Amazon’s cloud as part of its Market Replay system. The exchange uses Amazon to make terabytes of data available on demand, and uploads an additional thirty to eighty gigabytes every day. Market Replay allows access through an Adobe AIR interface to pull together historical market conditions in the ten-minute period surrounding a trade’s execution. This allows NASDAQ to produce a snapshot of information for regulators or customers who question a trade. Says the exchange’s VP of Product Development, “The fact that we’re able to keep so much data online indefinitely means the brokers can quickly answer a question without having to pull data out of old tapes and CD backups” (P. Grossman, “Cloud Computing Begins to Gain Traction on Wall Street,” Wall Street and Technology, January 6, 2009). NASDAQ isn’t the only major financial organization leveraging someone else’s cloud. Others include Merrill Lynch, which uses IBM’s Blue Cloud servers to build and evaluate risk analysis programs; and Morgan Stanley, which relies on Force.com for recruiting applications.
The Network.com offering from Sun Microsystems is essentially a grid computer in the cloud (see Chapter 4 "Moore’s Law and More: Fast, Cheap Computing and What It Means for the Manager"). Since grid computers break a task up and spread it across multiple processors, the Sun service is best for problems that can be easily divided into smaller mini jobs that can be processed simultaneously by the army of processors in Sun’s grid. The firm’s cloud is particularly useful for performing large-scale image and data tasks. Infosolve, a data management firm, uses the Sun cloud to scrub massive data sets, at times harnessing thousands of processors to comb through client records and correct inconsistent entries.
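The grid idea above, splitting a job into independent mini jobs that run simultaneously, can be sketched on a single machine with worker processes standing in for grid nodes. The record-scrubbing rules here are made up for illustration:

```python
# Sketch of the grid pattern: an "embarrassingly parallel" job is split
# into mini jobs that independent processors handle at the same time.
# The cleanup rules below are toy examples, not Infosolve's methods.
from multiprocessing import Pool

def scrub(record):
    """Correct one inconsistent customer record (toy rules)."""
    return {"name": record["name"].strip().title(),
            "state": record["state"].strip().upper()}

records = [{"name": " alice smith ", "state": "ma"},
           {"name": "BOB JONES",     "state": " ny "}]

if __name__ == "__main__":
    with Pool(processes=4) as pool:          # stand-in for a grid of processors
        cleaned = pool.map(scrub, records)   # each record is an independent mini job
    print(cleaned)
```

Because no record depends on any other, the same code scales from four local processes to thousands of grid processors without restructuring.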
IBM Cloud Labs, which counts Elizabeth Arden and the U.S. Golf Association among its customers, offers several services, including so-called cloudbursting: the use of cloud computing to provide excess capacity during periods of spiking demand, a scalability solution usually provided as an overflow service that kicks in as needed. In a cloudbursting scenario, a firm’s data center running at maximum capacity can seamlessly shift part of the workload to IBM’s cloud, with any spikes in system use metered, utility style. Cloudbursting is appealing because forecasting demand is difficult and can’t account for ultrarare, high-impact events, sometimes called black swans (a phrase that entered the managerial lexicon from Nassim Taleb’s 2007 book of the same name). Planning to account for usage spikes explains why the servers at many conventional corporate IS shops run at only 10 to 20 percent capacity (J. Parkinson, “Green Data Centers Tackle LEED Certification,” SearchDataCenter.com, January 18, 2007). While the Cloud Labs cloudbursting service is particularly appealing for firms that already rely heavily on IBM hardware in-house, it is possible to build these systems using the hardware clouds of other vendors, too.
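The overflow logic behind cloudbursting is simple enough to sketch. The capacity figure and job counts below are illustrative assumptions, not details of IBM’s service:

```python
# Minimal cloudbursting sketch: fill in-house capacity first, then
# overflow ("burst") the remainder to metered cloud capacity.
# The capacity number is a made-up planning figure.

LOCAL_CAPACITY = 100  # jobs the in-house data center can run at once

def dispatch(jobs_submitted):
    """Split a burst of jobs between local servers and the cloud."""
    local = min(jobs_submitted, LOCAL_CAPACITY)
    burst = jobs_submitted - local  # overflow, billed utility style
    return {"local": local, "cloud_burst": burst}

print(dispatch(60))   # {'local': 60, 'cloud_burst': 0}    -- a normal day
print(dispatch(250))  # {'local': 100, 'cloud_burst': 150} -- a demand spike
```

The appeal is visible in the two calls: on a normal day the cloud bill is zero, and during a black-swan spike the firm pays only for the 150-job overflow instead of owning 250 jobs’ worth of mostly idle hardware.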
Salesforce.com’s Force.com cloud is especially tuned to help firms create and deploy custom Web applications. The firm makes it possible to piece together projects using premade Web services that provide software building blocks for features like calendaring and scheduling. The integration with the firm’s SaaS CRM effort, and with third-party products like Google Maps, allows enterprise mash-ups that combine services from different vendors into a single application running on Force.com hardware. The platform even includes tools to help deploy Facebook applications. Intuitive Surgical used Force.com to create and host a custom application to gather clinical trial data for the firm’s surgical robots. An IS manager at Intuitive noted, “We could build it using just their tools, so in essence, there was no programming” (G. Gruman, “Early Experiments in Cloud Computing,” InfoWorld, April 7, 2008). Other users include Jobscience, which used Force.com to launch its online recruiting site; and Harrah’s Entertainment, which uses Force.com applications to manage room reservations, air travel programs, and player relations.
These efforts compete with a host of other initiatives, including Google’s App Engine and Microsoft’s Azure Services Platform, hosting firms like Rackspace, and cloud-specific upstarts like GoGrid.
Hardware clouds and SaaS share similar benefits and risks, and as our discussion of SaaS showed, cloud efforts aren’t for everyone. Some additional examples illustrate the challenges in shifting computing hardware to the cloud.
For all the hype about cloud computing, it doesn’t work in all situations. From an architectural standpoint, most large organizations run a hodgepodge of systems that include both packaged applications and custom code written in-house. Installing a complex set of systems on someone else’s hardware can be a brutal challenge and in many cases is just about impossible. For that reason we can expect most cloud computing efforts to focus on new software development projects rather than on migrating older software. Even for efforts that can be custom-built and cloud-deployed, other roadblocks remain. For example, some firms face stringent regulatory compliance issues. To quote one tech industry executive, “How do you demonstrate what you are doing is in compliance when it is done outside?” (G. Gruman, “Early Experiments in Cloud Computing,” InfoWorld, April 7, 2008).
Firms considering cloud computing need to do a thorough financial analysis, comparing the capital and other costs of owning and operating their own systems over time against the variable costs over the same period for moving portions to the cloud. For high-volume, low-maintenance systems, the numbers may show that it makes sense to buy rather than rent. Cloud costs can seem super cheap at first. Sun’s early cloud effort offered a flat fee of one dollar per CPU per hour. Amazon’s cloud storage rates were twenty-five cents per gigabyte per month. But users often also pay for the number of accesses and the number of data transfers (C. Preimesberger, “Sun’s ‘Open’-Door Policy,” eWeek, April 21, 2008). A quarter a gigabyte a month may seem like a small amount, but system maintenance costs often include the need to clean up old files or put them on tape. If a firm lets data accumulate in the cloud indefinitely, these costs can add up.
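A buy-versus-rent analysis of the kind described above can be roughed out in a few lines. The per-unit rates below echo the Sun and Amazon prices quoted in the text, but the capital cost, operating cost, and workload figures are made-up planning assumptions:

```python
# Hedged rent-vs-own comparison. Rates mirror the $1/CPU-hour and
# $0.25/GB-month figures quoted above; every other number is an
# illustrative assumption, not a real quote.

def own_cost(months, capital=120_000, monthly_ops=2_000):
    """Buy servers up front, then pay staff/power/maintenance monthly."""
    return capital + monthly_ops * months

def cloud_cost(months, storage_gb=8_000, gb_rate=0.25,
               cpu_hours=4_000, cpu_rate=1.00):
    """Pay only metered rates, but pay them every month, indefinitely."""
    return months * (storage_gb * gb_rate + cpu_hours * cpu_rate)

# For a steady, high-volume workload the cost lines eventually cross:
for months in (12, 36, 60):
    print(months, own_cost(months), cloud_cost(months))
```

Under these assumed figures the cloud is far cheaper in year one, the two options break even around month thirty, and beyond that owning pulls ahead, which is exactly why steady, high-volume systems may favor buying while spiky or short-lived workloads favor renting.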
Firms should enter the cloud cautiously, particularly where mission-critical systems are concerned. When one of the three centers supporting Amazon’s cloud briefly went dark in 2008, startups relying on the service, including Twitter and SmugMug, reported outages. Apple’s MobileMe, a cloud-based product for synchronizing data across computers and mobile devices, struggled for months after its introduction when the cloud repeatedly went down. Vendors with multiple data centers that are able to operate with fault-tolerant provisioning (systems capable of continuing operation even if a component fails), keeping a firm’s efforts at more than one location to account for any operating interruptions, will appeal to firms with stricter uptime requirements.