Sunday, July 11, 2010

Windows Azure and Cloud Computing Posts for 7/9/2010+

Windows Azure, SQL Azure Database and related cloud computing topics now appear in this daily series.

 
Updated 7/11/2010 and marked ••: Recap of Fujitsu/Microsoft cloud partnership, Worldwide Partner Conference, OData browser for iPhone, et al.

Updated 7/10/2010 and marked •: Fujitsu/Microsoft cloud partnership, et al.

Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections:

To use the above links, first click the post’s title to display the single article you want to navigate.

Cloud Computing with the Windows Azure Platform was published 9/21/2009. Order today from Amazon or Barnes & Noble (in stock).

Read the detailed TOC here (PDF) and download the sample code here.

Discuss the book on its WROX P2P Forum.

See a short-form TOC, get links to live Azure sample projects, and read a detailed TOC of electronic-only chapters 12 and 13 here.

Wrox’s Web site manager posted on 9/29/2009 a lengthy excerpt from Chapter 4, “Scaling Azure Table and Blob Storage” here.

You can now freely download by FTP and save the following two online-only PDF chapters of Cloud Computing with the Windows Azure Platform, which have been updated for SQL Azure’s January 4, 2010 commercial release:

  • Chapter 12: “Managing SQL Azure Accounts and Databases”
  • Chapter 13: “Exploiting SQL Azure Database's Relational Features”

HTTP downloads of the two chapters are available at no charge from the book's Code Download page.

Azure Blob, Drive, Table and Queue Services

Brian Swan’s Accessing Windows Azure Blob Storage from PHP post of 7/8/2010 begins:

A couple of weeks ago I wrote a post that covered the basics for accessing Windows Azure Table storage from PHP. In this post I will do something similar for accessing Windows Azure Blob Storage from PHP. This won’t be a deep-dive into Windows Azure Blob Storage, but it will be a “how to get started with the Microsoft_WindowsAzure_Storage_Blob class (which is part of the Windows Azure SDK for PHP)”-type post.

What is Windows Azure Blob Storage?

Windows Azure Blob Storage is a data store for text and binary data. Access to Blob Storage is done via REST/HTTP. (When you create a Windows Azure storage account (see below), you get three services: blob storage, table storage, and a queue service.) Blob storage is logically divided into containers, blobs and blob metadata. You may create any number of containers which can each contain any number of blobs, and each blob can have metadata associated with it (as key-value pairs). Access control is set at the container level as either public or private. All blobs within a public container are accessible to anyone while blobs within a private container are accessible only to people who have the storage account’s primary access key.  The following diagram shows the basic structure of Windows Azure Blob Storage:

[Diagram: a storage account contains containers, each container contains blobs, and each blob can carry metadata]

For a more detailed look at Windows Azure Blob Storage, see Blob Service Concepts and Blob Service API.
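
If you just want to see the addressing scheme in action before digging into Brian's PHP class, here is a minimal sketch of mine (in Python rather than PHP) that reads a blob from a public container over plain REST/HTTP. The account, container, and blob names are placeholders, not anything from Brian's post:

import urllib.request

# Blob URI format: https://<account>.blob.core.windows.net/<container>/<blob>
# A plain GET works without authentication only when the container is public.
account = "mystorageaccount"   # placeholder storage account name
container = "images"           # placeholder public container
blob = "logo.png"              # placeholder blob name

url = f"https://{account}.blob.core.windows.net/{container}/{blob}"
with urllib.request.urlopen(url) as response:
    data = response.read()     # raw blob content (text or binary)
    print(len(data), "bytes downloaded from", url)

Private containers require a signed Authorization header (or one of the storage SDKs), which is the part the Microsoft_WindowsAzure_Storage_Blob class handles for you.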

How do I create a storage account?

The steps for creating a storage account are the same whether you want to use the blob, table, or queue stores available in Windows Azure. Follow the steps in this post (under How do I create a storage account?) to create a storage account.

Brian continues with a detailed How do I access Windows Azure Blob Storage from PHP? topic with source code.

Brad Calder delivered a highly detailed Understanding Windows Azure Storage Billing – Bandwidth, Transactions, and Capacity post to the Windows Azure Storage Team blog on 7/9/2010:

We get questions about how to estimate how much Windows Azure Storage will cost in order to build a cost-effective application. In this post we will give some insight into this question for the three types of storage costs – Bandwidth, Transactions and Capacity.

When using Windows Azure Blobs, Tables and Queues, storage costs can consist of:

Bandwidth – the amount of data transferred in and out of the location hosting the storage account

Transactions – the number of requests performed against your storage account

Storage Capacity – the amount of data being stored persistently

Note: the content in this posting is subject to change as we add more features to the storage system. This posting is given as a guideline to allow services to be able to estimate their storage bandwidth, transactions and capacity usage before they go to production with their application.

Consumption Model

The following is the consumption offering for Windows Azure Storage that allows you to pay as you go:

  • Storage Capacity = $0.15 per GB stored per month
  • Storage Transactions = $0.01 per 10,000 transactions

Data Transfer (Bandwidth) =

North America and Europe

  • $0.10 per GB in
  • $0.15 per GB out

Asia

  • $0.30 per GB in
  • $0.45 per GB out

There are additional offerings from Windows Azure; please see here for the latest pricing and the full list of pricing offers.
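
As a quick sanity check on the consumption rates above, here is a back-of-the-envelope estimate in Python (mine, not Brad's; the usage numbers are invented, and actual bills depend on which transactions are classified as billable and on current pricing):

# Pay-as-you-go rates quoted above (North America/Europe, 2010)
CAPACITY_PER_GB_MONTH = 0.15
PRICE_PER_10K_TRANSACTIONS = 0.01
BANDWIDTH_IN_PER_GB = 0.10
BANDWIDTH_OUT_PER_GB = 0.15

# Hypothetical monthly usage for a small application
stored_gb = 50
transactions = 5_000_000
gb_in = 10      # data uploaded from outside the storage account's location
gb_out = 100    # data served to clients outside that location

capacity_cost = stored_gb * CAPACITY_PER_GB_MONTH
transaction_cost = (transactions / 10_000) * PRICE_PER_10K_TRANSACTIONS
bandwidth_cost = gb_in * BANDWIDTH_IN_PER_GB + gb_out * BANDWIDTH_OUT_PER_GB
total = capacity_cost + transaction_cost + bandwidth_cost

print(f"Capacity: ${capacity_cost:.2f}, Transactions: ${transaction_cost:.2f}, "
      f"Bandwidth: ${bandwidth_cost:.2f}, Total: ${total:.2f}")
# -> Capacity: $7.50, Transactions: $5.00, Bandwidth: $16.00, Total: $28.50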

The following gives an overview of billing for these three areas:

Bandwidth – Since applications need to compute over their stored data, we allow hosted services to be co-located with their storage. This allows us to provide free bandwidth between computation and storage that are co-located, and only charge bandwidth for storage when accessed from outside the location it is stored in.

Transactions – Each individual Blob, Table and Queue REST request to the storage service is considered as a potential transaction for billing. Applications can then control their transaction costs by controlling how often and how many requests they send to the storage service. We analyze each request received and then classify it as billable or not billable based upon our ability to process the request and the request’s outcome.

Capacity – We accumulate the size of the objects stored (blobs, entities and messages) as well as their application and system metadata in order to measure the storage capacity for billing.

In the rest of this post we will explain how to understand these three areas for your application.

When Bandwidth is Counted

In order to access Blobs, Tables and Queues, you first need to create a storage account by going to the Windows Azure Developer Portal. When creating the storage account you can specify what location to store your storage account in. The six locations we currently offer are:

  1. US North Central
  2. US South Central
  3. Europe North
  4. Europe West
  5. Asia East
  6. Asia Southeast

All of the data for that storage account will be stored and accessed from the location chosen. Some applications choose a location that is geographically closest to their client base, if possible, to potentially reduce latency to those customers. A key aspect here is that you will want to also choose in the Developer Portal the same location for your hosted service as the storage account that the service needs to access. The reason for this is that the bandwidth for the data transferred within the same location is free. In contrast, when transferring data in or out of the assigned location for the storage account, the bandwidth charges listed at the start of this post will accrue.

Now it is important to note that bandwidth is free within the same location for access, but the transactions are not free. Every single access to the storage system is counted as a single transaction towards billing. In addition, bandwidth is only charged for transactions that are considered to be billable transactions as defined later in this posting.

Another concept to touch on in terms of bandwidth is when you use blobs with the Windows Azure Content Delivery Network (CDN). If the blob is not found in the CDN (or the blob’s time-to-live (TTL) has expired) it is read from the storage account (origin) to cache it. When this occurs, the bandwidth consumed to cache the blob (transfer it from the origin to the CDN) is charged against the storage account (as well as a single transaction). This emphasizes that you should use a CDN for blobs that are referenced enough to get cache hits, before they expire in the cache due to the TTL, to offset the additional time and cost of transferring the blob from your storage account to the CDN.

Here are a few examples:

Your storage account and hosted service are both located in “US North Central”. All bandwidth for data accessed by your hosted service to that storage account is free, since they are in the same location.

Your storage account is located in “US North Central” and your hosted service is located in “US South Central”. All bandwidth for data accessed by your hosted service to the storage account will incur the bandwidth charges listed at the start of this post.

Your storage account is located in “US North Central”, and your blobs are cached and served by one of the Windows Azure CDN edge locations in Europe. Since the Windows Azure CDN edge location is not in the same location as your storage account, when the data is read from your storage account to the Windows Azure CDN for caching it will incur the above bandwidth charges.

Your storage account is located in “US North Central” but accessed by websites and services around the world. Since it isn’t being accessed from a Windows Azure hosted service within the same location the standard bandwidth charges are applied.

Brad continues with the details of How Transactions are Counted, examples of transaction charges from Storage Client Library calls, an explanation of what transactions are billable, and how to estimate storage charges.

Ryan Dunn and Steve Marx produced the 00:38:38 Cloud Cover Episode 18 - ASP.NET Providers on 7/9/2010:

Join Ryan and Steve each week as they cover the Microsoft cloud. You can follow and interact with the show at @cloudcovershow.

In this episode:  

  • Learn how to configure and hook up the ASP.NET providers in Windows Azure web applications.
  • Learn about the differences between using the Windows Azure table provider and the SQL Azure provider.
  • Discover a tip on how to locally use the providers (and table storage) in the development storage service.

Show Links:

SQL Azure Database, Codename “Dallas” and OData

•• Jon Udell promotes an OData interface to Microsoft’s new Excel Web App in his Web spreadsheets for humans and machines post of 6/30/2010 (which I missed when posted):

The Excel Web App currently lacks an API like the one Google provides. I really hope that the Excel Web App will grow an OData interface. In this comment at social.answers.microsoft.com, Christopher Webb cogently explains why that matters:

The big advantage of doing this [OData] would be that, when you published data to the Excel Web App, you’d be creating a resource that was simultaneously human-readable and machine-readable. Consider something like the Guardian Data Store (http://www.guardian.co.uk/data-store): their first priority is to publish data in an easily browsable form for the vast majority of people who are casual readers and just want to look at the data on their browsers, but they also need to publish it in a format from which the data can be retrieved and manipulated by data analysts. Publishing data as html tables serves the first community but not the second; publishing data in something like SQL Azure would serve the second community and not the first, and would be too technically difficult for many people who wanted to publish data in the first place.

The Guardian are using Google docs at the moment, but simply exporting the entire spreadsheet to Excel is only a first step to getting the data into a useful format for data analysts and writing code that goes against the Google docs API is a hassle. That’s why I like the idea of exposing tables/ranges through OData so much: it gives you access to the data in a standard, machine-readable form with minimal coding required, even while it remains in the spreadsheet (which is essentially a human-readable format). You’d open your browser, navigate to your spreadsheet, click on your table and you’d very quickly have the data downloaded into PowerPivot or any other OData-friendly tool.

Some newspapers may be capable of managing all of their data in SQL databases, and publishing from there to the web. For them, an OData interface to the database would be all that’s needed to make the same data uniformly machine-readable. But for most newspapers — including even the well funded and technically adept Guardian — the path of least resistance runs through spreadsheets. In those cases, it’ll be crucial to have online spreadsheets that are easy for both humans and machines to read.

Krueger Systems offered an OData Browser for the iPhone as of 7/6/2010:

Overview

OData Browser enables you to query and browse any OData source. Whether you’re a developer or an uber geek who wants access to raw data, this app is for you.

It comes with the following sources already configured:

  • Netflix - A huge database of movies and TV shows
  • Open Government Initiative - Access to tons of data published by various US government branches
  • Vancouver Data Service - Huge database that lists everything from parking lots to drinking fountains
  • Nerd Dinner - A social site to meet other nerds
  • Stack Overflow, Super User, and Server Fault - Expert answers for your IT needs

Anything else! If you use SharePoint 2010, IBM WebSphere, or Microsoft Azure, you can use this app to browse that data.

The app features:

  • Support for data relationship following
  • Built-in map if any of the data specifies a longitude and latitude
  • Built-in browser to navigate URLs and view HTML
  • Query editor that lists all properties for feeds

Use this app to query your own data or to learn about OData.


There is a vast amount of data available today and data is now being collected and stored at a rate never seen before. Much, if not most, of this data however is locked into specific applications or formats and difficult to access or to integrate into new uses. The Open Data Protocol (OData) is a Web protocol for querying and updating data that provides a way to unlock your data and free it from silos that exist in applications today.

Buy it for $1.99 here.
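
If you'd rather poke at an OData feed from code than from the iPhone app, the queries it issues are just parameterized HTTP GETs. Here is a minimal Python sketch of mine against a placeholder service root; the endpoint, entity set, and property names are hypothetical, so substitute any of the feeds listed above:

import json
import urllib.request

# Placeholder OData service root and entity set; swap in a real feed
service_root = "https://example.org/odata.svc"
entity_set = "Products"

# Standard OData query options: filter, sort, limit, and request JSON
query = "$filter=Price%20gt%2020&$orderby=Name%20asc&$top=10&$format=json"
url = f"{service_root}/{entity_set}?{query}"

with urllib.request.urlopen(url) as response:
    payload = json.load(response)
    # OData v2 wraps results in a "d" object; later versions use "value"
    rows = payload.get("d", payload.get("value", payload))
    print(rows)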

Wayne Walter Berry continues his SQL Server 2008 R2 PowerPivot for Excel series with PowerPivot for the DBA: Part 2 on 7/9/2010:

In this article I am going to continue to tie some simple terminology and methodology in business intelligence with Transact-SQL – bring it down to earth for the DBA. This is a continuation in a series of blog posts that I started that equates Transact-SQL to PowerPivot.

Measure

If you’re not an OLAP guru, you might be baffled by the frequent references to measures in PowerPivot. Basically, a measure in PowerPivot is a predefined calculation that understands the scope of the cell that it is being evaluated in. This is much like a function with table-valued parameters in Transact-SQL. Conceptually it is like a function in any language, a predefined calculation that takes input and produces output based on that input. The language in a measure is Data Analysis Expressions (DAX).

The dialog in which the measure is defined has multiple entry points:

  • “New Measure” button in the PowerPivot ribbon
  • Right-click menu item for each table in the field list that says “Add New Measure...”
  • Right-click menu item for each measure in the field list that says “Edit Formula”
  • Right-click menu item for each measure in the Values well below the field list that says “Edit Measure...”

Choosing any of these entry points will bring up the Measure Settings dialog that allows you to specify the name and formula for your measure, and that dialog looks like this:

[Screenshot: the Measure Settings dialog]

This measure is then associated with the table that you created it in (Table name drop down above).

DAX is very similar to an expression in Excel; however, it has additional attributes for dealing with scope. In fact, a measure can only be used in a PivotTable, because only a PivotTable has the concept of scope.

Scope

Understanding scope is the key to understanding measures. When I talk about scope, I am talking about the number of rows sent to the measure by PowerPivot. For example, in my previous blog post I was using the Adventure Works database to create a PowerPivot example that looked like this:

[Screenshot: the PivotTable from the previous post, showing sales by OrderDate and product category]

This is the same data as the following Transact-SQL statement, which contains a GROUP BY clause with both OrderDate and ProductCategory.Name:

SELECT    ProductCategory.Name, SalesOrderHeader.OrderDate, SUM(LineTotal)
FROM    Sales.SalesOrderHeader
    INNER JOIN Sales.SalesOrderDetail ON 
        SalesOrderHeader.SalesOrderID = SalesOrderDetail.SalesOrderID
    INNER JOIN Production.Product ON 
        Product.ProductID = SalesOrderDetail.ProductID
    INNER JOIN Production.ProductSubcategory ON 
        Product.ProductSubcategoryID = ProductSubcategory.ProductSubcategoryID
    INNER JOIN Production.ProductCategory ON 
        ProductSubcategory.ProductCategoryID = ProductCategory.ProductCategoryID
GROUP BY ProductCategory.Name, SalesOrderHeader.OrderDate
ORDER BY SalesOrderHeader.OrderDate

If I were using measures, the scope sent to the measure in this case is all the rows that match the GROUP BY and all the columns of the inner join that a SELECT * would return. The measure would then be evaluated for every cell in the PivotTable. So for July 1, 2001 and the category Accessories in the example above, that would be 37 rows that looked like this:

[Screenshot: the 37 detail rows for July 1, 2001 and the Accessories category]

All 37 rows would be sent to the measure to evaluate for this cell. My example Sales measure just sums the LineTotal column, which is the same result as letting PowerPivot sum the column in the first example.

If I used the Sales measure I created above, the PowerPivot table would look like this:

[Screenshot: the PivotTable built with the Sales measure]

Just like other languages, the measure (or function) can be changed and all the cells will update automatically. The measure can be thought of as a naming abstraction between PowerPivot and the calculation. In fact you can use the Sales measure in multiple PivotTables, or PivotCharts.

Changing Scope

The DAX language has built-in functions to change the scope of the results within the measure. You can filter the results or expand the scope to include more rows. This really is the power of a measure: the ability to take the cell's scope and compare it to an expanded or reduced scope.

Summary

In my next blog post in this series I will show how to take an expanded scope in PowerPivot and create a measure that computes a ratio between the cell's scope and the product category the cell is in. Plus, I will give you the same results in Transact-SQL.

Sudhir Hasbe’s brief What is Microsoft Codename Dallas? post of 7/8/2010 offers a 00:07:31 Microsoft Codename “Dallas” video segment by Zach Owens and Moe Khosravy:

Dallas is Microsoft's Information Marketplace. It allows content providers to publish and sell their data. ISVs and developers can easily embed this data into their applications. This is a good video providing a basic overview of Dallas. You can get more information on Dallas at http://www.microsoft.com/dallas.

DZone interviews Chris Woodruff (a.k.a. Woody) about his OData activities in this 00:04:00 video segment on the Woody's Haven for Geeks blog. Woody is best known for his "Deep Fried Bytes" podcasts.

<Return to section navigation list> 

AppFabric: Access Control and Service Bus

Mike Jones posted Identity “Mash-up” Federation Demo using Multiple Protocols (OpenID and WS-Federation) to the Interoperability @ Microsoft blog on 7/9/2010:

At the last Interoperability Executive Customer (IEC) Council meeting in October 2009, there was broad agreement to involve third party software vendors to work with IEC Council members and Microsoft on specific interop scenarios brought forward by the council members.  We are pleased to report that over the last five months, the council was able to engage in very productive discussions with PayPal on an Identity Management interoperability scenario proposed by Medtronic.

Medtronic, PayPal, and Microsoft worked together to produce a multi-protocol federated identity "mash-up" demo using multiple protocols (OpenID and WS-Federation). This demo was shown at the Internet Identity Workshop and to members of the IEC Council. The demo shows how Medtronic customers could use PayPal identities when signing up for and participating in a medical device trial.


You can view a video of the demo here.

We called it an "identity mash-up" because claims from the PayPal identity are combined with ("mashed-up" with) additional claims added by Medtronic for trial participants to create a composite Medtronic trial identity. Medtronic creates "shadow" accounts for trial participants, but from the user's point of view they're always just using their PayPal account whenever they have to sign for the trial.

It’s multi-protocol because the PayPal claims are delivered to Medtronic using OpenID 2.0, whereas the claims from Medtronic are delivered to its relying parties using WS-Federation.  It’s interop because the demo uses both .NET and the Windows Identity Foundation on Windows and PHP on Linux, with interoperable identity protocols letting them seamlessly work together.

Southworks, the company that built much of the demo, has released the source code and documentation for a proof-of-concept OpenID/WS-Federation Security Token Service (STS) based on the one used in the demo, should you be interested in prototyping something similar.

We want to thank Medtronic and PayPal for their leadership and partnership of this effort and Southworks for their professionalism, agility, and execution. We appreciate the opportunity to work with other industry leaders both to understand and demonstrate the interoperability that's possible with our current product offerings and to inform the planning efforts for our future identity products.

Mike is a Senior Program Manager on the Federated Identity Team.

Jack Greenfield reported FabrikamJets Example Updated (Really) on 7/9/2010:

I just discovered that the 2.0 release of the Fabrikam Jets example on code gallery (described in this blog post) contained the original version 1.0 code that worked with OAuth WRAP v0.8, instead of the new version 2.0 code that works with OAuth WRAP v0.9. Apparently, I uploaded the wrong file back in January. The version 2.0 code was checked into our source tree, and has been sitting there all this time. I have just uploaded it as version 2.1. [Bad links to post editor fixed].

Sounds to me like few folks tested v2.0.

<Return to section navigation list>

Live Windows Azure Apps, APIs, Tools and Test Harnesses

Daniel Druker's SaaS & Cloud Computing and the Channel - Part III is the third in a series of analyses from a financial management and accounting SaaS provider (Intacct) and should interest many WPC10 attendees:

This is the third installment in a series of posts on SaaS / Cloud Computing and the channel – and this is going to be a very long one.

The topic of the day is barriers to adopting SaaS and cloud applications from the channel partner's perspective. By the end of this post I think you'll see that we are at the point where the main issues are down to education and inertia – between 2000 and 2005 there were real issues around technology and economics, but these are solved today – I'll make the case below, and I think the facts clearly show, that in 2010, for both the channel and for the client, both technology and economics are better with SaaS and cloud computing than for old on-premises software.

I’ve had the pleasure to meet with a great variety of channel partners over my last 10 years or so in the SaaS / cloud computing business – we had 1,800 channel partners at Postini and we are now well above 100 here at Intacct, plus I’m lucky enough to participate in great channel oriented groups like the Information Technology Alliance (the ITA) where hundreds of execs from the reseller community regularly meet to share best practices. Plus I spend a lot of time with other "channel champions" from both the vendor and the reseller communities.

In literally hundreds of meetings with resellers of on-premises software, I’ve seen clear patterns of concerns, questions and issues. We are clearly over the awareness hump – just about every channel partner I’ve spoken with is extremely aware of SaaS and cloud computing – the question in their mind is whether cloud and SaaS represent an opportunity or a threat – and the repeated concerns I hear are mostly about economics and technology.

The rationale for this post is that many of the concerns I consistently hear and the conventional wisdom being repeated aren’t well grounded in fact. This is only natural for channel partners coming from the on-premises world. So I thought it would be helpful to get all of this on the table and out in the open – the top 10 channel concerns about SaaS and Cloud Computing. …

Daniel continues with his detailed list of 10 concerns. Following are links to his two earlier episodes in the series.

Daniel is the SVP of Marketing and Business Development at Intacct, a leading provider of cloud-based financial management and accounting applications, which are currently being used by more than 3,300 businesses.

Elizabeth White reports “Microsoft’s David Chou to discuss building highly scalable & available applications on Windows Azure with Java” in her Windows Azure with Java at Cloud Expo Silicon Valley post of 7/9/2010:

Microsoft's Windows Azure platform is a virtualized and abstracted application platform that can be used to build highly scalable and reliable applications, with Java. The environment consists of a set of application services such as "no-SQL" table storage, blob storage, queues, relational database service, Internet service bus, access control, etc. Java applications can be built using these services via Web services APIs, and your own JVM, without having to be concerned with the underlying server OS and infrastructure.

In his session at the 7th International Cloud Expo, David Chou, technical architect at Microsoft, will provide an overview of the Windows Azure platform environment, and cover how to develop and deploy Java applications in Windows Azure and how to architect horizontally scalable applications in Windows Azure.

The Microsoft Case Studies team posted its IT Firm [HCL Technologies] Delivers Carbon-Data Management in the Cloud, Lowers Barriers for Customers case study on 7/7/2010:

Technology firm HCL Technologies offers an on-premises software application, manageCarbon, that helps businesses aggregate, analyze, and manage carbon emissions data. The application, which connects to customers' enterprise systems to extract key emissions data, is growing in popularity due to the increasing regulatory demands placed on businesses to report carbon emissions. HCL wanted to lower the barriers to customer adoption of the technology, reducing the need for customers to make a significant capital investment upfront and shortening the lengthy deployment time. Already familiar with cloud computing, HCL migrated manageCarbon to the Windows Azure platform. As a result, HCL lowered the investment required by customers to use manageCarbon, trimmed deployment to one-quarter of the time, simplified the development and maintenance of the application, and lowered its total cost of ownership.

Organization Profile: Based in India, HCL is a global technology firm with 62,000 employees. Along with its subsidiaries, HCL had consolidated revenues of U.S. $5 billion as of March 2010.

Business Situation: HCL wanted to lower the cost barrier for entry so that more customers could adopt its manageCarbon application, which was traditionally an on-premises application.

Solution: Using an established framework, HCL migrated its on-premises application to the Windows Azure platform, using the Windows Azure Software Development Kit for Java Developers.

InformationWeek::analytics offered an Electronic Health Records: Time to Get Onboard Cloud Computing Brief by Marianne Kolbasuk McGee for download on 7/8/2010:

Most of the nation's largest hospitals have already deployed electronic health record systems, but less than 20% of the 700,000 practicing doctors are using them. There's a lot at stake if these doctors don't deploy these systems.

Digitized records provide a timely, cost-effective way to share patient information. If physicians aren't using them in their private practices, they lose those benefits, as do the hospitals they work with. Continued use of paper records puts patients at risk for medical mistakes, ill-informed treatment decisions and unnecessary tests because hospitals and doctors don't have easy access to information about recent tests, health histories and other important data.

The push is on to get all doctors using e-health records. Here's how four large healthcare organizations got their practitioners up and running without a lot of fuss.

There are looming financial implications as well. Last year's stimulus legislation provides more than $20 billion in incentives to doctor practices, hospitals and other healthcare organizations that show they're making "meaningful use" of EHRs; meaningful use is likely to require that healthcare providers be able to electronically exchange patient data. At risk are incentive payments of as much as $64,000 for a physician practice and millions of dollars for hospitals, depending on their size. Penalties for non-compliance start in 2015, when physicians and hospitals that treat Medicare patients will see a reduction in fee reimbursements if they aren't complying with meaningful use requirements.

With so much at stake, most large hospitals aren't leaving it to chance that doctors will adopt EHRs. Partners HealthCare, which operates several Boston hospitals, has taken the atypical approach of mandating that its physicians use EHRs. Huntington Memorial Hospital is helping its doctors go digital by giving them a free e-prescription system. Beth Israel Deaconess Medical Center and Inova Health System are offering their physicians subsidized EHR systems.

These four healthcare organizations are taking different approaches, but their goals are the same: to help the independent practices with which they work make the complicated and expensive transition to EHRs.

Download

The Microsoft Case Studies team posted its Software Development Firm [Quest Software] Expands Market Reach with Cost-Efficient Cloud Services case study on 7/6/2010:

Software development company Quest Software develops IT management solutions and has helped more than 100,000 enterprise customers worldwide improve IT efficiency. Historically, the company has offered on-premises solutions that require hardware, software licenses, and maintenance resources—costs that most small and midsize businesses often cannot afford. Quest Software wanted to expand its services to the small and midsize business market by employing a software-as-a-service delivery model, but it did not want to rent server space or build out its own data center to host cloud services. After evaluating several options, the company implemented the Windows Azure platform to host its solutions. As a result, the company was able to quickly develop its cloud services, and it gained improved scalability compared to on-premises solutions, enhanced data security, and an expanded market reach.

Organization Profile: Quest Software develops IT management solutions for customers who use Microsoft products and technologies. The Microsoft Gold Certified Partner has 3,500 employees.

Business Situation: The company historically has served enterprise customers with on-premises solutions, but wanted to expand its reach to small and midsize businesses with a more cost-efficient model.

Solution: Quest Software implemented Windows Azure to deliver two software-as-a-service offerings to customers, and will continue to develop additional IT management services in the cloud.

<Return to section navigation list>

Windows Azure Infrastructure

See Nikkei.com's Fujitsu, Microsoft Unite To Take Cloud Computing Global report of 7/9/2010, plus the related article below it, and stay tuned to the Worldwide Partner Conference for more details about the new Microsoft/Fujitsu partnership in the cloud.

David Linthicum asserted “25 years ago, many companies pushed back against the PC, citing security and performance concerns -- sound familiar?” in his Déjà vu for IT: The cloud wave mirrors the PC wave post of 7/9/2010 to InfoWorld’s Cloud Computing blog:

As I'm in cloud computing meetings a lot these days, I'm feeling a bit of déjà vu: There are parallels to how we think about cloud computing today and how we thought about PCs in the early 1980s.

I cut my computing teeth on PCs, which were considered a hobbyist device at the time. However, as I built and programmed them, it was easy to see their potential. But my college did not use PCs, nor did my employer (I worked in a data center on the weekends). Although some people such as myself saw the value of the PC, corporate America clearly did not.

The most surprising thing to me was when I was looking for my first "real" job, I saw "No PC jobs" in the want ads. I was told several times by prospective employers that they considered PCs to be security risks, and they believed PCs did not provide the performance required or have a place in modern computing. One guy would not even allow them in the building.

At the same time, the hype behind the PC was huge, dominating much of the tech press. In some regards, these prospective employers were pushing back on the hype -- or they did not yet understand the potential.

The emergence of cloud computing follows a familiar pattern. Many enterprises believe the cloud lacks the security and performance they require. They are pushing back on the hype, and I've met a few people who won't allow any of their storage and compute functions to run off-premises.

I suspect the adoption pattern of cloud computing will be very much like that of the PC. There will be those that give it a try, considering the hype surrounding this technology, and once proven, they will take a stepwise approach to increasing its use. Also, there will be those forced to use cloud computing despite their personal feelings, pushed into it by the success of others. Still others will simply go with the flow; ultimately, you can't argue with momentum.

Cloud computing, like the PC, is not a revolution around any particular concept. It's an evolution in how we think about computing, with different and more efficient ways to do the same things done now.

I bet all of those guys that pushed back on PCs back then use them now.

“InfoWorld's experts demystify one of the most critical trends in enterprise IT today and help you deploy it right” when you download their Cloud Computing Deep Dive Report:

This 21-page PDF report provides the following benefits:

  • Exclusive book excerpt from Cloud Computing and SOA Convergence in Your Enterprise: A Step-by-Step Guide [by David Linthicum]
  • Critical deployment issues explained, including how to distinguish among different cloud technologies and choose the ones that fit your needs.
  • Hands-on evaluation of the major cloud platforms from the InfoWorld Test Center.

Site registration is required.

Lori MacVittie (@lmacvittie) adds more fuel to the dev/ops fire by claiming “If you thought the integration and collaboration required new networking capabilities, you ain't seen nothing yet” in her F5 Friday: Would You Like Some Transaction Integrity with Your Automation? post of 7/9/2010:

Anyone who has ever configured a network anything or worked with any of a number of cloud provider's API to configure "auto-scaling" via a load balancing service recognizes that it isn't simply point, click, and configure. Certain steps need to be configured in a certain order (based entirely on the solution and completely non-standardized across the industry) and it's always a pain to handle errors and exceptions because if you want to "do over" you have to backtrack through the completed steps or leave the system cluttered or worse – unstable.

Developers and system operators have long understood the importance of a "transaction" in databases and in systems where a series of commands (processed in a "batch") are "all or nothing". Concepts like two-phase commit and transaction integrity are nothing new to developers and sysops and probably not to network folks, either. It's just that the latter has never had that kind of support and have thus had to engineer some, shall we say, innovative solutions to recreating this concept.

Infrastructure 2.0 and cloud computing are pushing to the fore the need for transactional integrity and idempotent configuration management. Automation, which is similar to the early concepts of “batch” processing, requires that a series of commands be executed that individually configure the many moving pieces and parts of an overarching architecture that are required in order to “make the cloud go” and provide the basic support necessary to enable auto-scaling.

Because it is possible that one command in a sequence of ten, twenty, or more commands that make up an "automation" could fail, you need to handle it. You can catch it and try again, of course, but if it's a problem that isn't easily handled you'd wind up wanting to "roll back" the entire transaction until the situation could be dealt with, perhaps even manually. One way to accomplish this is to package up a set of commands as a transaction so that, if any command fails, the transaction management system automagically hits the "undo" button and rolls back each command to the very beginning, making it like it never happened.
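
The idea is easier to see in code than in prose. Here is a tiny, generic Python sketch of mine (not TMSH, and not any particular vendor's API) of the "all or nothing" pattern Lori describes: each step carries its own undo, and a failure rolls back whatever already succeeded, in reverse order:

def run_transaction(steps):
    """Run (apply, undo) pairs as a unit; roll back completed steps on failure."""
    completed = []
    try:
        for apply_step, undo_step in steps:
            apply_step()
            completed.append(undo_step)
    except Exception:
        # Roll back in reverse order so dependencies unwind cleanly
        for undo_step in reversed(completed):
            undo_step()
        raise  # surface the original failure to the orchestrator

# Hypothetical configuration steps for an auto-scaling event
steps = [
    (lambda: print("create pool member"), lambda: print("delete pool member")),
    (lambda: print("update health monitor"), lambda: print("restore health monitor")),
    (lambda: print("add virtual server rule"), lambda: print("remove virtual server rule")),
]
run_transaction(steps)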

IT’S A LONG, LONG WAY TO TIPPERARY…and FULLY AUTOMATED DATACENTERS

We’re not there yet. If you were waiting for me to pronounce “we have arrived” I’m sorry. We haven’t, but we are at least recognizing that this is a problem that needs solutions. In order for the above scenario to be reality every system, every device, every component that is (or could be) part of the transaction must be enabled with such functionality.

Transactions are essential to maintaining the integrity of components across an orchestration in complex systems. That basically means most – if not all – datacenters will need transactions eventually if they go the route of automation to achieve operational efficiency. This is another one of those "dev" meets "ops" topics in which dev has an advantage over ops merely due to the exigencies of being in development and working with integration and databases and other development-focused systems that require transactional support. Basically we're going to (one day) end up with layers of transactions: transactions at the orchestration layer that is comprised of individual transactions at the component layer and it will be imperative that both layers of transactions are able to handle errors, be idempotent (to handle restarts), and to provide integrity of the implied shared systems that make up a cloud computing environment (that's fault isolation, by the way).

The necessity of transactional support for infrastructure 2.0 components is going to become more important as larger and more business critical systems become “cloudified”. The new network has to be more dynamic and that means it must be able to adapt its operating conditions to meet the challenges associated with integration and the dependencies between devices and applications that creates.

For F5, we’ve taken a few of the first steps down the road to Tipperary (and the eventually fully transactional-based automation implementation required to get there) by enabling our BIG-IP shell scripting solution (TMSH) with some basic transactional support. It isn’t the full kitchen sink, it’s just a start, and as you can guess you should “watch this space” for updates and improvements and broader support for this concept across the entire BIG-IP management space.

Lori continues with a TMSH TRANSACTION SUPPORT topic and concludes:

This is only one (small) piece of a much larger puzzle: the automated, dynamic infrastructure (a la infrastructure 2.0). And it’s a piece that’s not even wholly complete yet. But it’s a start on what’s going to be necessary if we’re going to automate the data center and realize the potential gains in efficiency and move even further into the “automated self-healing, self-optimizing” networks of the (further out) future. We can’t get even to a true implementation of “cloud” until we can automate the basic processes needed to implement elastic scalability because otherwise we’d just be shifting man hours from racking servers to tapping out commands on routers and switches and load balancers.

But it's a step forward on the road that everyone is going to have to travel.

Bruce Guptil and Mike West co-authored this Saugatuck Research: Markets Need a Cloud Development Framework Research Alert that Saugatuck Technology published on 7/8/2010:

What is Happening?: Saugatuck Technology's latest survey research indicates that more than half of all user firms worldwide plan on utilizing Cloud solutions for even their most core business operations within the next two to three years. And all our research points toward more powerful Cloud solutions, systems and platforms, standing on their own as well as integrated with a wide range of traditional on-premise IT.

This rampant and accelerating use of Cloud-based IT is driving developers within ISVs, Cloud providers, system integrators and user enterprises to explore and demand new development capabilities for Cloud-based and hybrid solutions. Unfortunately, there have been few guidelines and even fewer examples of the types and scope of development capabilities that are needed. A framework or model to provide a starting point, building blocks, and a benchmark for comparative analysis of offerings is needed.

This Research Alert uses Saugatuck’s latest research with ISVs, Cloud developers, and Cloud platform providers to examine the need for a Cloud development framework, and to present a model for such a framework. …

Bruce and Mike continue with the usual Why is it Happening and Market Impact sections.

<Return to section navigation list> 

Cloud Security and Governance

Daryl Plummer asks Gartner’s Global IT Council on Cloud Computing: Do You Have Rights? in this 7/11/2010 post to the Gartner blogs:

It has long been apparent to me that Cloud Computing represents a significant change in the relationships between providers of solutions based on technology and the consumers who use those solutions. Whether you are talking about how computing solutions are paid for, who delivers them, or what the contracts for those services look like, you have to deal with the trust that must be established between service providers and service consumers. And, one of the key ways of building trust is to agree on who gets what rights, and who takes on what responsibilities.

In the past 8 months, I've worked with a number of industry players to try to put into words some of the issues that can erode that necessary trust between providers and consumers in the cloud. That effort is part of Gartner's Global IT Council where we not only looked at the issues, but actually sought to propose some basic approaches to addressing those issues.

The Gartner Global IT Council for Cloud Computing consists of CIOs and senior IT leaders of large global enterprises who work together to create actionable real-world recommendations and drive fundamental changes in the way the IT industry works.

The Council’s list of Rights and Responsibilities for Cloud Computing identifies some of the more interesting “basic truths” that should be self-evident but often are not. It seeks to establish a checklist of elements that should be addressed in any contractual agreement between cloud service providers and consumers. Once that checklist is in hand, a proper discussion of how to most effectively evaluate, select, and consume cloud services can be started. This is necessary even for simple cloud services, but is essential for the most mission critical of business processes supported by cloud computing.

The Council’s preliminary findings and a detailed overview of their Charters can be found at http://www.gartner.com/technology/research/reports/global-it-council.jsp. Join the discussion and help the list grow and change over time as the industry evolves the dialog.

JP Morgenthal offers Cloud Computing Pragmatics in this 7/10/2010 post about cloud computing's reliance on network performance, the federal government's need for purpose-specific cloud infrastructures, and the potential for cloud security as a service:

In October of 2009 I was interviewed by GovIT Journal and in that article I presented my view that Cloud Computing is highly dependent upon the network.  The actual quote given was, “Which just goes to show, the telco providers still hold all this stuff by the balls!”  More than ever, based on my work over the past four months as Merlin International’s Chief Architect, I still believe this is a critical and pertinent factor regardless of your Cloud Computing architecture.

Indeed, I have relished these past few months because they have presented me with the opportunity to delve deep into the muscle tissue of Cloud Computing.  One of Merlin’s key areas of success has been in providing networking and data center hardware and software.   While many architects can talk a good game about Cloud Computing, few have actually walked the stack top to bottom and actually touched the underbelly of the beast.  Shoot, I even became a Riverbed Certified Solution Professional, a wickedly-cool WAN optimization product and am now focusing on Network Appliance certifications next.  Understanding these “organs” of the Cloud truly provide unmatched insight into what is achievable and what is hype.

Meanwhile, I’ve been deep in muck gaining real insight into what Federal government customers are dealing with in trying to provide agile infrastructures to support the growing and changing needs of their user base.  It’s real easy for pundits to step up and present a vision for Cloud Computing as a configurable resource that’s capable of meeting all needs, but I really believe that is a misnomer.  In fact, more than ever I believe that we need to specialize Clouds to support a specific purpose.  For example, I advocate that users need separate Cloud Computing infrastructures to support their full-motion video needs and their back office applications and that these should not live on the same Cloud infrastructure; especially if utilizing multicast video capabilities.

Petri I. Salonen analyzes Bessemer's Top 10 Cloud Computing Laws and the Business Model Canvas in this ongoing series, which reached Law #4 of Bessemer's Top 10 Cloud Computing Laws and the Business Model Canvas – Forget everything you learned about Software Channels on 7/9/2010:

I am now in the fourth law in Bessemer's Top 10 Computing Laws and this has to do with software channels. I have so far addressed in my blog entry that a SaaS vendor needs to really live the life of a SaaS company, be part of the SaaS DNA, I have looked into the financials of a SaaS company and sales operations/sales curve for a SaaS vendor.

In this blog entry, I will be focusing on software channels which in the traditional world have enabled software companies to get scalability to address new markets, new verticals and new geographies.  As with everything that I write about, I want to put context into it and the reference framework from which I am looking at this. First of all, I have run software companies with both direct and indirect (channel) sales. I have built software channels in Europe, US and Latin Americas. I have sold personally large enterprise solutions to end users around the world.

Let’s get to the statements from Bessemer and their view on Software Channels and their applicability in the SaaS world. …

Here are links to Petri’s three earlier episodes:

Read the original Bessemer’s Top 10 Laws of Cloud Computing and SaaS white paper (Winter 2010 version) by Bessemer Venture Partners, which contends "Running an on-demand company means abandoning many of the long-held tenets of software best practices and adhering to these new principles.”

Audrey Watters posted Geo-Awareness & the Cloud on 7/9/2010 to the ReadWriteCloud blog:

map_july10.jpgJust because your data is housed "in the cloud" doesn't mean that earthbound geography can simply be ignored. And while the World Wide Web promises a global and ubiquitous technology, location still matters.

Obviously, data is still housed in a particular place, even if that place is "in the cloud." And while major cloud providers have data centers worldwide - often with locations across several sites in Asia, Europe, and North America - the specific location of these cloud centers as well as the location of the end-users remain important - and complicated.

Already location is often used to help address performance: where someone resides can be used to determine which data center is utilized. Location can also be a factor to restrict or enable access in order to comply with certain export laws, blocking access to applications for residents of certain countries, for example.

But in addition to questions of performance and of access, there can be substantial legal ramifications based on location as well. Different countries tend to have varying requirements and restrictions for the privacy and security of information stored there. Argentina and Germany have very restrictive privacy laws, for example, while Hong Kong and South Africa have minimal restrictions.
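
As a toy illustration of how location can drive both routing and compliance decisions, here is a short Python sketch; the regions, data-center names, and policy table are invented for the example and are not drawn from any provider's actual rules:

# Hypothetical data-center regions
DATA_CENTERS = {"us": "US North Central", "eu": "Europe West", "asia": "Asia East"}

# Made-up policy: countries whose data must be stored in a given region,
# and countries where the service cannot be offered at all
MUST_STORE_IN = {"DE": "eu"}     # e.g. keep German users' data in the EU
BLOCKED_COUNTRIES = {"XX"}       # export-restricted destinations (placeholder)

def pick_data_center(user_country, nearest_region):
    """Choose a region: compliance rules first, then proximity for performance."""
    if user_country in BLOCKED_COUNTRIES:
        raise PermissionError("service not available in this country")
    region = MUST_STORE_IN.get(user_country, nearest_region)
    return DATA_CENTERS[region]

print(pick_data_center("DE", nearest_region="us"))   # -> Europe West (compliance wins)
print(pick_data_center("CA", nearest_region="us"))   # -> US North Central (proximity)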


One of the promises of cloud technology is the ability to scale infinitely and on demand, but clearly location needs to be part of the equation - not just to ensure performance, but to ensure compliance with local laws.

The relationship between location and the cloud has been the subject of several Forrester reports. And CloudSleuth offers a service so you can do your homework about cloud providers, geography, and performance.

J. D. Meier posted Cloud Security Threats and Countermeasures at a Glance on 7/8/2010:

Cloud security has been a hot topic with the introduction of the Microsoft offering of the Windows Azure platform. One of the quickest ways to get your head around security is to cut to the chase and look at the threats, attacks, vulnerabilities and countermeasures. This post is a look at threats and countermeasures from a technical perspective.

The thing to keep in mind with security is that it’s a matter of people, process, and technology.  However, focusing on a specific slice, in this case the technical slice, can help you get results.  The thing to keep in mind about security from a technical side is that you also need to think holistically in terms of the application, network, and host, as well as how you plug it into your product or development life cycle.  For information on plugging it into your life cycle, see the Security Development Lifecycle.

While many of the same security issues that apply to running applications on-premise also apply to the cloud, the context of running in the cloud does change some key things.  For example, it might mean taking a deeper look at claims for identity management and access control.  It might mean rethinking how you think about your storage.  It can mean thinking more about how you access and manage virtualized computing resources.  It can mean thinking about how you make calls to services or how you protect calls to your own services.

Here is a fast path through looking at security threats, attacks, vulnerabilities, and countermeasures for the cloud …

Objectives

  • Learn a security frame that applies to the cloud
  • Learn top threats/attacks, vulnerabilities and countermeasures for each area within the security frame
  • Understand differences between threats, attacks, vulnerabilities and countermeasures

Overview
It is important to think like an attacker when designing and implementing an application. Putting yourself in the attacker’s mindset will make you more effective at designing mitigations for vulnerabilities and coding defensively.  Below is the cloud security frame. We use the cloud security frame to present threats, attacks, vulnerabilities and countermeasures to make them more actionable and meaningful.

You can also use the cloud security frame to effectively organize principles, practices, patterns, and anti-patterns in a more useful way.

Threats, Attacks, Vulnerabilities, and Countermeasures
These terms are defined as follows:

  • Asset. A resource of value such as the data in a database, data on the file system, or a system resource.
  • Threat. A potential occurrence – malicious or otherwise – that can harm an asset.
  • Vulnerability. A weakness that makes a threat possible.
  • Attack. An action taken to exploit vulnerability and realize a threat.
  • Countermeasure. A safeguard that addresses a threat and mitigates risk.

Cloud Security Frame
The following key security concepts provide a frame for thinking about security when designing applications to run on the cloud, such as Windows Azure. Understanding these concepts helps you put key security considerations such as authentication, authorization, auditing, confidentiality, integrity, and availability into action.

J. D. continues with a few feet of tables and lists covering Cloud Security Frame, Threats and Attacks, Vulnerabilities, Countermeasures, 28 Threats and Attacks Explained, 71 Countermeasures Explained, and a link to Security Development Lifecycle (SDL) Considerations. I'm sure J. D. added "at a Glance" to his exceedingly long post's title in jest.

<Return to section navigation list> 

Cloud Computing Events

•• R. “Ray” Wang posted Research Report: Microsoft Partners – Before Adopting Azure, Understand the 12 Benefits And Risks to the Software Insiders blog on 7/11/2010:

It’s All About The Cloud At WPC10

Attendees at this year's Microsoft Worldwide Partner Conference 2010 in Washington, D.C. already expect Windows Azure development to be a key theme throughout this annual pilgrimage. Microsoft has made significant investments into the cloud. Many executives from the Redmond, WA, software giant have publicly stated that 90% of its development will be focused on the Cloud by 2012. Delivery of the Cloud begins with the Azure platform, which includes three main offerings:

  1. Microsoft Windows Azure
  2. Microsoft SQL Azure (formerly SQL Services)
  3. Microsoft Windows Azure Platform: AppFabric (formerly .NET Services).

Therefore, Microsoft partners must determine their strategy based on what part of the cloud they plan to compete in and which Azure services to leverage.  As with any cloud platform, the four layers include infrastructure, orchestration, creation, and consumption (see Figure 1):

  • Infrastructure. At a minimum, Windows Azure provides the infrastructure as a service. Data center investments and the related capital expenses (capex) are replaced with operational expenses (opex). Most partners will take advantage of Azure at the infrastructure level or consider alternatives such as Amazon EC2, or even self-provision hosting on partner servers and hardware.
  • Orchestration. Microsoft Windows Azure Platform: AppFabric delivers the key "middleware" layers. AppFabric includes an enterprise service bus to connect across network and organizational boundaries. AppFabric also delivers access control security for federated authorization. Most partners will leverage these PaaS tools. However, non-Microsoft tools could include advanced SaaS integration, complex event processing, business process management, and richer BI tools. The Windows AppFabric July release now supports Adobe Flash and Microsoft Silverlight.
  • Creation. Most partners will build solutions via VisualStudio and Microsoft SQL Azure (formerly SQL Services).  Other creation tools could include Windows Phone7 and even Java.  Most partners expect to use the majority of tools from Microsoft and augment with third party solutions as needed.
  • Consumption. Here’s where partners will create value added solutions for sale to customers.  Partners must build applications that create market driven differentiators.  For most partners, the value added solutions in the consumption layer will provide the highest margin and return on investment (ROI).

.NET:.NET (tongue in cheek here) – Microsoft partners and developers can transfer existing skill sets and move to the cloud with ease, once Microsoft irons out the business model for partners on Azure.

Figure 1. Partners Must Determine Which Layer To Place Strategic Bets


Ray continues his analysis with Azure And Cloud Deployment Brings Many Benefits…, …Yet Cloud Models Create New Channel Partner Risks, Figure 2 The Advantages And Disadvantages Of Azure For Partners, The Bottom Line For Microsoft Partners – Success Requires A Focus On Differentiated IP Creation, Figure 3 Cloud Models Force Partners Into Value Added Solutions In The Race For the Largest Chunk Of The Technology Budget, and Your POV topics.

See the Nikkei.com report of 7/9/2010, Fujitsu, Microsoft Unite To Take Cloud Computing Global, plus the related article below it, and stay tuned to the Worldwide Partners Conference for more details about the new Microsoft/Fujitsu partnership in the cloud.

I expect many Microsoft partners at WPC 2010 will be very unhappy about the new competition from this 800-lb, ¥5 trillion (USD$50 billion) Asian gorilla. Fujitsu (along with HP) is a WPC 2010 sponsor.

The Windows Azure Platform Partner Hub reported in its Windows Azure at WPC, Award Winners, and Announcements post of 7/10/2010:

Windows Azure at WPC, Award Winners, and Announcements

There is going to be a lot of big news coming out of Washington DC next week during the Microsoft Worldwide Partner Conference (all the prep for the event is why the Windows Azure Partner Hub has seen a bit of neglect the past couple weeks).

I want to give a shout out to our Windows Azure Partner of the Year, Lokad, out of France. We also recognized two finalists, Active Web Solutions Limited from the UK and Cumulux from the United States. Congrats to you and all the nominees for the innovation brought to the platform.

All the cloud geeks should be sure to tune in to DigitalWPC.com to watch Bob Muglia’s keynote on Monday, July 12th at around 9am (EST).

I’ll be posting presentations, comments on the announcements, and probably a few pictures from some of the WPC parties.

Hope to see you there.

Alex Williams reported Microsoft Readies For War with New Small Business Division For Cloud Push in this 7/9/2010 post to the ReadWriteCloud:

Microsoft is making two big bets for the new fiscal year: the cloud and the small business market.

So it's fitting that the company is creating a new division that will be chartered with offering cloud computing services for the small business sector.

According to ChannelWeb, the division will include all of the SMB sales, technical, marketing and distribution resources under a single multibillion-dollar division. It's reputed to have as many resources as it needs for its efforts.

Microsoft faces a heated battle in the SMB market, particularly against Google, which has been making noticeable inroads with Google Apps.

According to CRN, the announcement came through an internal memo that cited the growing competition in the SMB market. It will be officially announced to partners at the Microsoft Worldwide Partner Conference next week in Washington, DC.

CEO Steve Ballmer is making moves reminiscent of the mid-1990s when Bill Gates declared Microsoft an Internet company. Back then, Netscape served as Microsoft's challenger in the browser market. The software company's push led to Internet Explorer's dominance. Today, Google is the challenger.

But it's a different battle today than in 1996. Google is as much a powerhouse as Microsoft. And Google has a formidable arsenal behind it, including Android, a mobile operating system that is quickly gaining market share.

As for Microsoft? They love this kind of competition. Get ready, this is going to be a war.

From page 2 of ChannelWeb’s Microsoft Creates New SMB Division To Take Its Cloud Effort To New Heights article of 7/8/2010 by Steven Burke:

… "There is a realization that we weren't first to market, but now it is time to take all of our solutions and our rich experience in software that everyone is familiar with utilizing and focus it on the cloud," said Vince Menzione, Microsoft general manager, partner strategy for U.S. Public Sector, in a recent interview with CRN. "There is an opportunity to get out and be a market leader."

Microsoft insiders said the structural changes will be formally announced to partners at the Microsoft Worldwide Partner Conference on July 11-15 in Washington, DC.

Menzione, for his part, has pledged that the partner conference will include new partner and pricing models around cloud services. "We are breaking glass within Microsoft," he said. "It (The Cloud) is changing our business models, processes, and product portfolio."

Be sure to read Daniel Duffy’s comment at the end of the article.

Beth Massi will present Creating and Consuming OData Services to the East Bay .NET User group on 7/14/2010 at 6:45 PM:

July meeting – Creating and Consuming OData Services

When:  Wednesday, 7/14/2010 at 6:45 PM

Where: University of Phoenix Learning Center in Livermore, 2481 Constitution Drive, Room 105

Event Description: The Open Data Protocol (OData) is a REST-ful protocol for exposing and consuming data on the web and is becoming the new standard for data-based services. In this session you will learn how to easily create these services using WCF Data Services in Visual Studio 2010 and will gain a firm understanding of how they work. You'll also see how to consume these services and connect them to other data sources in the Azure cloud to create powerful BI data mash-ups in Excel 2010 using the PowerPivot add-in. Finally, we will build our own Excel add-in that consumes OData services exposed by SharePoint 2010.
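Since OData is just REST plus a handful of query conventions, a service like the ones Beth will demonstrate can be consumed without Visual Studio at all. Below is a minimal Python sketch that queries a hypothetical WCF Data Services endpoint using the standard $filter, $top, and $format options; the service URL and entity set are made up for illustration, and the JSON envelope (the "d" wrapper) varies between OData versions:

# Minimal OData consumer sketch. The service root and entity set are hypothetical;
# only the query options ($filter, $top, $format) are standard OData conventions,
# and the JSON envelope differs between OData versions, so parsing is illustrative.
import json
import urllib.parse
import urllib.request

SERVICE_ROOT = "https://example.com/NorthwindService.svc"   # hypothetical endpoint
ENTITY_SET = "Customers"                                    # hypothetical entity set

def query_odata(filter_expr: str, top: int = 10) -> list:
    """Fetch at most `top` entities matching an OData $filter expression."""
    options = urllib.parse.urlencode({
        "$filter": filter_expr,
        "$top": str(top),
        "$format": "json",
    })
    url = f"{SERVICE_ROOT}/{ENTITY_SET}?{options}"
    with urllib.request.urlopen(url) as resp:
        payload = json.loads(resp.read().decode("utf-8"))
    data = payload.get("d", payload)   # OData v2-style services wrap results in "d"
    return data.get("results", data) if isinstance(data, dict) else data

for entity in query_odata("Country eq 'France'", top=5):
    print(entity)

The same URL conventions are what tools like PowerPivot rely on under the covers when they pull an OData feed into a workbook.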

FUNdamentals Series: Introduction to the new Data Project in Visual Studio 2010 – Deborah Kurata

Agenda

6:00 - 6:30 .NET FUNdamentals
6:30 - 6:45 check-in and registration
6:45 - 7:00 tech talk; announcements and open discussion
7:00 - 9:00 main presentation and giveaways

Presenter's Bio: Beth Massi www.BethMassi.com/

Beth Massi is a Senior Program Manager on the Visual Studio team at Microsoft and a community champion for business applications and Visual Basic developers. She has over 15 years of industry experience building business applications and is a frequent speaker at various software development events. You can find her on a variety of developer sites including MSDN Developer Centers, Channel 9, and her blog www.BethMassi.com. Follow her on Twitter @BethMassi.

FUNdamentals speaker: Deborah Kurata

Deborah Kurata is cofounder of InStep Technologies Inc., a professional consulting firm that focuses on turning your business vision into reality using Microsoft .NET technologies. She has over 15 years of experience in architecting, designing, and developing successful applications. Deborah has authored several books, including the "Doing Objects in Visual Basic" series (Addison-Wesley), "Best Kept Secrets in .NET" (Apress) and "Doing Web Development: Client-Side Techniques" (Apress).

Cory Fowler wants to Help Me, Help You… Windows Azure at Tech Days starting 9/14/2010 in Vancouver and ending 12/15/2010 in Calgary, according to his 7/9/2010 post:

Today at 5pm I received an important email that I had been waiting for; this email confirmed me as a Session Leader at Tech Days 2010 in Canada.  This means I will be creating the slides and demos that will appear in the “Using Microsoft Visual Studio 2010 to Build Applications that run on Windows Azure” session.  I have been speaking on Windows Azure across Canada, and up until now I’ve been showing off what I wanted to show you; now it’s your turn.

It’s a hard knock life, for us

Being a developer is challenging at times; we have to figure out the problems of the world. Although we all live in the same world, we all face different challenges; currently my challenge is to encompass a little bit of each of your lives into one hour of your time. This leaves me with one question: what are the battles you’re going to be taking into the cloud? For this I need your help: please leave a comment, or contact me, and let me know one thing that you think you can benefit from knowing about Windows Azure.

Ted Sampson claims “Company reportedly intends to replace the hundreds of eliminated jobs with new ones focused on the cloud” in his Microsoft cuts jobs as part of core shift to cloud computing post of 7/8/2010 to InfoWorld’s Cloud Computing blog:

As part of a companywide shift toward cloud computing, Microsoft is cutting hundreds of jobs worldwide with plans to create new cloud-focused positions down the road, according to various reports.

TechFlash has reported that Microsoft is eliminating jobs in the low hundreds in the Seattle region -- and hundreds more globally. Though the figures are imprecise, they don't appear to represent a significant percentage of Microsoft's 88,000-plus workforce, nor do they suggest that Microsoft is in immediate financial peril.

Rather, the layoffs are part of a companywide rebalancing effort as Microsoft shifts its core focus to cloud computing, an unnamed spokesman reportedly told ARN. "Microsoft believes its future business is firmly centered on the cloud and we are rebalancing the organization globally in order to create a number of new cloud-specific roles across the business," the spokesperson is quoted as saying. "We have identified roles that we will not be continuing with as part of our organizational structure as we create capacity for roles more aligned to this core cloud focus."

Mary Jo Foley reported Microsoft wants its partners 'All in' with the cloud in this 7/9/2010 post to her All About Microsoft blog for ZDNet:

Starting July 12, Microsoft’s annual Worldwide Partner Conference [WPC] kicks off in Washington, DC. The company’s loudest messaging at the four-day event will be that Microsoft partners need to be “All In” with the cloud, just like Redmond itself.

Microsoft will be highlighting many of its partners that have managed to transition their businesses so as to be more cloud-centric. But company officials also will attempt to convince the rest of the nearly 10,000 expected attendees that it’s time for them to be leading with cloud services like Microsoft’s Business Productivity Online Suite (BPOS), the forthcoming Windows InTune systems management software/service and the Azure cloud platform.

(I’m especially interested in how Microsoft plans to get partners involved in selling Azure. So far, the Softies have published a number of case studies highlighting developers who’ve built new applications on Azure, but I’ve heard/seen very little about how Microsoft’s reseller community is supposed to get involved/paid for pushing Azure to the masses.)

Getting partners on board with Microsoft’s cloud push is critical for the Redmondians, as Microsoft relies heavily on integrators, resellers, independent software vendors and OEMs to act as its primary salesforce. While the Microsoft brass warned the company’s partners a few years ago that Microsoft was planning to get into selling hosted services (and they needed to “move up the stack” and get out of the way or risk being run down), Microsoft partners still have a lot of questions about the cloud and their place in it. …

Mary Jo continues with conjecture about other related topics to be disclosed at WPC 2010.

The Windows Partner Conference 2010 Team added a Microsoft Cloud Services page to the DigitalWPC site on about 7/8/2010:

Attend the cloud keynotes and sessions at WPC 2010 to learn the following:

  • Learn how customers and partners move to the cloud with Microsoft cloud services.
  • Get an update on Microsoft’s cloud strategy, cloud offerings, and programs for partners.
  • Understand business opportunities by partner types with the cloud - including best practices and action items.

[Links to WPC Content]

Cloud Services Resources:

Visit the Microsoft Partner Network Portal for more information beyond WPC.

<Return to section navigation list> 

Other Cloud Computing Platforms and Services

Sam Mamudi reported Microsoft, Fujitsu team up on cloud computing: WSJ for MarketWatch on 7/10/2010:

NEW YORK (MarketWatch) -- Microsoft Corp. and Japan's Fujitsu Ltd. will likely announce in the coming week an agreement to join forces in cloud computing, according to a media report Saturday.

Microsoft … and Fujitsu … already have decided to team up, The Wall Street Journal reported in its online edition, citing an unnamed person familiar with the matter.

Cloud computing is seen as the next big step in software services, providing online software, resources and information to computers on demand.

The Journal said the joint effort will give Microsoft access to Fujitsu's data centers and customer base. It will see Fujitsu offer Microsoft's Windows Azure, which gives Internet-based access to Windows software stored at data centers.

The Journal added that Fujitsu will set up platforms for cloud computing at data centers in the U.K., Australia, Singapore, Germany and the U.S. by the end of the fiscal year. Fujitsu plans to spend $1.1 billion on cloud computing-related businesses during this fiscal year, an amount that includes research and development as well as capital expenditure.

The paper added that on Friday Fujitsu President Masami Yamamoto said the company wants to broaden its global alliances in an effort to boost its cloud computing-related services.

Nikkei.com reported on 7/9/2010 Fujitsu, Microsoft Unite To Take Cloud Computing Global via the Wall Street Journal and CloudTweaks (see related article below):

Fujitsu Ltd. (FJTSY, 6702.TO) and Microsoft Corp. (MSFT) will share data centers worldwide in a bid to catch up to Google Inc. (GOOG) and other pioneers in the business of providing software and computing services online, the Nikkei reported Friday.

The effort will combine Microsoft software with Fujitsu customer service to speed both firms’ expansion into cloud computing.

Fujitsu operates 90 or so data centers in 16 countries. As early as this year, it will begin hosting Microsoft cloud services at its Tatebayashi center in Gunma Prefecture. It plans to do the same at locations in the U.S., the U.K., Singapore and elsewhere, equipping them with the necessary technology. In deciding to work with Microsoft, Fujitsu acknowledges that its own cloud services have limited prospects for growth abroad.

Microsoft is racing to expand its cloud services worldwide, having opened massive data centers in Chicago and Dublin last year. But the U.S. firm has been stretched thin in customer support and other areas and will seek to reinforce them in cooperation with Fujitsu. Microsoft also believes that teaming with Fujitsu will help it make customers of globalizing Japanese companies.

The partners are considering joint investment in new data centers, which cost tens of billions of yen to build.

Microsoft this January introduced Windows Azure, which gives businesses Internet-based access to Windows software stored at Microsoft data centers instead of on their own computers. Through its partnership with Microsoft, Fujitsu will try to tap this base of Windows users.

Salesforce.com, a leader in cloud services, has about 77,000 customers worldwide, including the Ministry of Economy, Trade and Industry and Sompo Japan Insurance Inc. Google invested around 700 billion yen in its cloud computing business from 2006 to 2009. Among its customers in Japan is toilet manufacturer Toto Ltd. (5332.TO).

Both firms are pushing more aggressively into Japan, threatening domestic information technology giants. Fujitsu will seek to counter this challenge by working with Microsoft to build a global presence in cloud computing.

The world market for cloud computing will grow to $55.5 billion in 2014 from $16 billion in 2009, reckons U.S. research firm IDC. Japan’s IT industry is hampered by its inability to offer the same level of cloud services worldwide even as more domestic firms globalize.

I expect many Microsoft partners at WPC 2010 will be very unhappy about the new competition from this 800-lb, ¥5 trillion (USD$50 billion) Asian gorilla. Fujitsu (along with HP) is a WPC 2010 sponsor.

Note: Reading the Nikkei or WSJ articles requires a paid subscription.

Pat Romanski reported Fujitsu Pins Future On Cloud Computing Services in this 7/9/2010 post:

After drastic restructuring in its device businesses, Fujitsu Ltd. is counting on cloud computing-related services to lead its long-term earnings growth and overseas expansion.

The president of the Japanese technology firm said Friday that he expects cloud computing-related businesses to generate about ¥1.3 trillion to ¥1.5 trillion in revenue in the fiscal year ending March 2016. Such businesses currently generate revenue of only about ¥100 billion.

"Our medium- and long-term growth depends on cloud computing," said President Masami Yamamoto at a press briefing.

Yamamoto added that he expects about 30% of Fujitsu's technology service operations will be cloud computing-related in the fiscal year through March 2016.

His comments come as Fujitsu tries to concentrate more on its mainstay technology services operations, after going through restructuring to reduce its exposure to volatile and capital-intensive businesses such as semiconductor production and hard disk drives. The company is trying to pitch itself to corporate customers as an all-in-one provider of hardware, software and services in the style of International Business Machines Corp., and is pushing to get more revenue from abroad.

"We no longer have any loss-making segments...it's time for us to be more aggressive," Yamamoto said.

Pinning its hope on the expected worldwide diffusion of next-generation computing services that are accessed online, Fujitsu plans to spend ¥100 billion on cloud computing-related businesses in the current fiscal year, including research and development as well as capital expenditure.

The company also plans to have 5,000 cloud computing specialists on its staff by the end of March 2012. Yamamoto said that Fujitsu's clients will benefit from those specialists when they adopt the so-called software-as-a-service business model, which is expected to gradually replace the traditional model of selling software in packages to be installed on individual computers.

Fujitsu plans to set up platforms for cloud computing at its existing data centers in the U.K., Australia, Singapore, Germany and the U.S. by the end of the current fiscal year. It is also spending several billion yen on a new data center currently under construction in southern China, which will begin to operate next year, Yamamoto said.

The company also plans to strengthen its global alliances to make its cloud computing-related services more comprehensive and competitive. "We are looking for partners who can provide what our services are missing," Yamamoto said. In a recent interview with Dow Jones Newswires and The Wall Street Journal, Yamamoto said the company was looking to buy technology service providers with a strong customer base or software companies who provide so-called "middleware," a type of software that makes applications compatible with different computer platforms.

In its medium-term earnings goals set last year, Fujitsu continues to aim for an operating profit of ¥250 billion and revenue of ¥5 trillion in the fiscal year through March 2012. …

Stuart Wilson reported Flore leaves Fujitsu Technology Solutions on 7/9/2010:

Fujitsu Technology Solutions recently announced that it had parted company with president and CEO Kai Flore (pictured). In a short statement, Fujitsu Technology Solutions confirmed that Flore would leave the company and his position as president and CEO has ceased with immediate effect. The company’s chairman, Richard Christou, will take on the role of executive chairman until Flore’s successor is identified. The search for a new president and CEO is already underway. The management council will now report into Christou.

Christou is corporate senior executive VP of Fujitsu Limited, responsible for the group’s business in all markets outside Japan. Christou has steered business outside Japan since June 2008 as president of the global business group. From 2000 to 2007, Christou was executive chairman of European subsidiary Fujitsu Services, expanding it into the second-largest services provider to the UK government and Fujitsu’s largest single business outside Japan.

Fujitsu has not divulged reasons for Flore’s sudden departure. Flore had played a major role in defining Fujitsu Technology Solutions’ remit and services offering. Fujitsu Technology Solutions employs more than 13,000 people and is part of the global Fujitsu Group.

YouReader adds background to Flore’s departure in its Fujitsu Technology Solutions loses CEO post of 7/8/2010:

A bald statement from Fujitsu Technology Solutions (FTS) says president and CEO Kai Flore is no longer in post. A search for a new President & CEO is already underway and Richard Christou is keeping the seat warm until the new CEO is found.

Flore became FTS CEO on November 3rd, 2008, when Fujitsu bought out the Siemens interest in Fujitsu Siemens Computers (FSC), after being appointed chief strategy officer in 2007. Prior to that he was the FSC CFO. One source says he has now been fired.

Now Christou is no bagman from below. He is a, wait for it, Corporate Senior Executive Vice President of Fujitsu Limited, and responsible for the group’s business in all markets outside Japan; a direct report to God in other words. He was a Corporate SVP and head of the EMEA operations of Fujitsu from March 2007 to June 2008 when he ascended the ladder to become a Corporate Senior Exec VP.

Flore resigned or was ejected on or just before June 24. Christou then took on the role of Executive Chairman at FTS, having been the chairman since March 2007. It looks like he fired Flore.

Earlier in June there was another big exec deckchair move at Fujitsu, with Tony Doye joining from Unisys to become president and CEO of Fujitsu America. Christou provided a laudatory quote in the announcement release: "Fujitsu is pleased to welcome Tony Doye to the helm at Fujitsu America. Tony comes to us with proven expertise in creating and delivering solutions that improve client service levels and increase innovation. His deep experience in information and communications technology services will usher in a new phase of growth as Fujitsu expands in North America."

In March 2009 Farhat Ali was the CEO for Fujitsu America and it appears he was not the right man to "improve client service levels and increase innovation [and] usher in a new phase of growth." Out he went.

Is one person responsible for these CEO-level changes and is that Richard Christou, or does the responsibility lie higher still? What prompted them?

We note that on April 1st, 2010 Masami Yamamoto, who had been a corporate senior vice president and president of Fujitsu's Systems Products Business group, became president of the entire Fujitsu conglomerate, with a focus on getting the group back to profitability.

So we have a new overall Fujitsu group boss taking charge in April this year with the profitability focus. In June Fujitsu America gets a new CEO and there is talk of a "new phase of growth," while also in June the FTS CEO seems to have been dismissed.

The thread that's emerging here is one of Fujitsu's new boss insisting that subsidiaries in America and Europe deliver much more growth, and two exec heads have been put on the block and chopped off to show he means business. FTS was not immediately able to answer any questions on the matter.

Flore and Doye aren’t the only recent problem in Fujitsu’s executive ranks. The tail of the preceding post provides another example:

… Fujitsu is also trying to overcome its recent scandal surrounding the departure of a former president. The company initially said in September that then-president Kuniaki Nozoe was stepping down because of illness. After Nozoe went public with a claim of wrongful dismissal, Fujitsu changed its explanation, saying he had associated with an investment fund with suspected ties to organized crime--claims that he denies.

"As you can see in the improvements in our earnings, the dispute with Mr. Nozoe has had very little impact on our businesses," Yamamoto said. "Our clients continue to trust us."

James Urquhart warns Amazon APIs as cloud standards? Not so fast in his 7/9/2010 post to C|Net’s Wisdom of Clouds blog:

Commentary: Last month, as part of the Structure conference in San Francisco, I had the privilege of moderating a panel on the subject of hybrid cloud computing. The panelists were some of the early pioneers of cloud computing, and included the likes of Marten Mickos, CEO of Eucalyptus; Michael Crandell, CEO of RightScale; and Joe Weinman, vice president of Strategy and Business Development at AT&T.

At one point during the conversation, Mickos made a statement to the effect that Eucalyptus sees the Amazon Web Service APIs as a "candidate for a standard much like the PC standard of the '80s," the latter having enabled innovation of operating systems and applications thanks to a single hardware standard shared by a wide variety of computer manufacturers.

Ellen Rubin, vice president of products and founder of cloud gateway appliance vendor CloudSwitch, wrote a post in response to Mickos' claim, critically examining whether it is possible for the Amazon APIs to be a standard today:

If there were an industry standard, Amazon certainly has a strong claim for it. They're the clear leader, with technology second to none. They've made huge contributions to advance cloud computing. Their API is highly proven and widely used, their cloud is highly scalable, and they have by far the biggest traction of any cloud. So full credit to Amazon for leading the way in bringing cloud computing into the mainstream. But it's a big leap from there to saying that Amazon should be the basis for an industry standard.

I've heard the decree that the Amazon APIs are a "de facto standard" for about a year now, first from Simon Wardley (who was at Linux vendor Canonical at the time) and later from others including cloud blogger and Enomaly CTO Reuven Cohen.

While I agree that the AWS EC2 and S3 APIs are today the market-leading infrastructure-as-a-service APIs if measured by number of users, I agree with Rubin that that position doesn't necessarily equate to a long-term standard. There are three reasons for this:

The Amazon APIs define only one set of cloud features: Amazon's. Yes, Amazon is by far the leader in cloud server and storage services, and yes, they have by far the strongest ecosystem supporting their capabilities. However, EC2 is actually a strictly defined server and network architecture that leaves little room for innovation in distributed application architectures and infrastructure configuration.

Now if the development world figures out how to overcome all performance, availability, and security issues using this architecture, that's not such a big deal. However, there is a reason that enterprise infrastructure architectures evolved the way they did, and there are many who believe that the shortcomings of Amazon's architecture will limit its effectiveness for a variety of applications.

In fact, the one area where Amazon EC2 has caused headaches for some people is in the realm of I/O performance. Both network and storage access have proven to have unpredictable latencies. One lead developer of a well-known open-source database told me he has looked to take some clients to other cloud environments because of this issue.

Similarly, with storage, you get a basic set of features that Amazon has enabled for you. You can obviously deploy any database you'd like on EC2, but that means you won't be using the "standard" Amazon API. I'm not sure that's the whole idea here.

Do four vendors a standard make? In reality, while many vendors and developers consume the Amazon EC2 and S3 APIs, only four vendors actually deliver them: Amazon, Eucalyptus, Cloud.com, and newcomer Nimbula. While it is impressive that these products (two of which are open source) would see the EC2 APIs as the mark to shoot for, it is important to keep my previous point in mind. What these projects have done is define themselves as feature-compatible with AWS, which again makes them right for some workloads, but not so much for others.

Any standard has to start somewhere, though, so one could look at this as the beginning of what will become a "must have" API set, at least for flat topology, high scale cloud environments. There is a problem, though. Amazon hasn't done anything to ensure the API is in the public domain and legally available for commercial resale. All three of these companies are in some sense playing with fire.

The good news is that most (and maybe all?) can replace the AWS API with some other API option with relative ease. However, from a marketing perspective, they'd have to start over again with promoting the new API as the new standard.
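One way to see how thin the line is between "Amazon's API" and "an EC2-compatible cloud" is that an EC2 client library can be pointed at Eucalyptus or a similar implementation simply by overriding the endpoint it talks to. The sketch below uses the boto 2-era Python calls from memory; the credentials, host, port, and path are placeholders for what private installations of the time commonly used, so treat the details as assumptions:

# Hedged sketch: the same EC2-style client pointed first at Amazon and then at an
# assumed EC2-compatible private cloud. Credentials, host, port, and path are
# placeholders; the boto 2 calls (connect_ec2, get_all_instances) are from memory.
import boto
from boto.ec2.regioninfo import RegionInfo

ACCESS_KEY = "placeholder-access-key"
SECRET_KEY = "placeholder-secret-key"

# Talking to Amazon EC2 itself (boto's default endpoints).
aws = boto.connect_ec2(ACCESS_KEY, SECRET_KEY)

# Talking to an EC2-compatible private cloud by overriding the endpoint.
private_region = RegionInfo(name="eucalyptus", endpoint="cloud.example.internal")
private = boto.connect_ec2(
    ACCESS_KEY,
    SECRET_KEY,
    region=private_region,
    is_secure=False,              # many private installs of that era used plain HTTP
    port=8773,                    # conventional Eucalyptus port (assumption)
    path="/services/Eucalyptus",
)

# The calling code is identical either way, which is the whole appeal.
for conn in (aws, private):
    print([reservation.instances for reservation in conn.get_all_instances()])

That interchangeability is also why the licensing question matters: the clients work today, but nothing in the code above guarantees Amazon will keep the wire format open to other implementers.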

It's just too early for almost anything to be a cloud standard. The truth is, nobody in this industry can accurately predict today what the next 10 years of computing will look like. What architectures will "win"? What edge cases will be critical to the function of commerce? What new technologies will change the core requirements of the cloud computing operations model?

In a recent blog post, Enomaly's [Reuven] Cohen revised his position on the AWS API, and points out another key factor hindering its declaration as a standard: today, does anybody other than product vendors really care about portable cloud APIs? Most users of the cloud are, in fact, focusing their efforts on one cloud vendor or another. They have yet to feel the pain of converting key operations code due to a change in cloud vendor. It just isn't a high-demand aspect of cloud computing right now [see article below].

This may be further supported by the lack of buzz around the apparent breakdown of the Open Cloud Computing Interface standard effort in the Open Grid Forum. After a disagreement regarding licensing, it appears that a key member is taking critical intellectual property and parting ways. There really hasn't been much outrage expressed by the general cloud user community, which tells me the standard wasn't that important to them at this point.

There are plenty of reasons to desire a consistent, standard cloud computing API. However, there are also many reasons why it is just premature to declare a single winner--or any winner, for that matter. As good as the AWS EC2 and S3 APIs are for their respective contexts, they are just popular APIs, and not yet anything that can be declared a de facto standard for the entire cloud community.

Reuven Cohen asks Do Customers Really Care About Cloud API's? in this 7/9/2010 post:

Interesting post by Ellen Rubin of CloudSwitch asking if Amazon is the Official Cloud Standard? Her post was inspired by a claim that Amazon’s API should be the basis for an industry standard, something I've long been against for the simple reason that choice and innovation are good for business. I agree with Ellen that AWS has made huge contributions to advance cloud computing, and also agree that "their API is highly proven and widely used, their cloud is highly scalable, and they have by far the biggest traction of any cloud". But the question I ask is: do cloud customers really care about the API so much as the applications and service levels applied higher up the stack?

At Enomaly we currently have customers launching clouds around the globe, each of which has its own feature requests ranging from various storage approaches to any number of unique technical requirements. Out of all the requests we hear on a daily basis, the Amazon API is almost never requested. Those who do request it are typically in the government or academic spaces. When it is, it's typically part of a broader RFP where it's mostly a check box and part of a laundry list of requirements. When pushed, the answer is typically that it's not important. So I ask why the fascination with the AWS APIs as a sales pitch when it appears neither service providers nor their end customers really care? More to the point, why aren't there any other major cloud providers who support the format other than Amazon? The VMware API or even the Enomaly API is more broadly deployed if you count the number of unique public cloud service providers as your metric.

An API from a sales point of view isn't important because you're not selling an API. You're selling the applications that sit above the API, and mostly those applications don't really care what's underneath. As a cloud service provider you're selling a value proposition, and unfortunately an API provides little inherent value other than potentially some reduction in development time if you decide to leave. Actually, the really hard stuff is in moving Amazon machine images away from EC2 in a consistent way, which Amazon, through their AMI format, have made a practically impossible mission. [Paravirt, really?]

I'm not saying APIs aren't important for cloud computing, just that with the emergence of meta-cloud APIs such as libcloud, jclouds and others, programming against any one single cloud service provider's API is no longer even a requirement. So my question to those who would have you believe the AWS API is important is again -- why? Is it because your API support is the only value you offer, with little else behind it? Or is there something I'm missing?
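For context on those meta-cloud libraries, here is a small Apache libcloud sketch: the same few calls list servers on EC2 and on Rackspace without touching either provider's native API. The credentials are placeholders and the provider constants are the commonly documented ones, so check the libcloud documentation for the exact names in your version:

# Sketch of a meta-cloud API: Apache libcloud exposes one compute interface over
# many clouds. Credentials are placeholders; provider constants can vary by version.
from libcloud.compute.types import Provider
from libcloud.compute.providers import get_driver

def list_all_nodes():
    drivers = [
        get_driver(Provider.EC2)("aws-access-key", "aws-secret-key"),
        get_driver(Provider.RACKSPACE)("rackspace-user", "rackspace-api-key"),
    ]
    for driver in drivers:
        # list_nodes() is part of libcloud's common compute interface.
        for node in driver.list_nodes():
            print(driver.name, node.name, node.state)

list_all_nodes()

From a provider's perspective this cuts both ways: it lowers the switching cost Reuven describes, but it also means the native API matters less as a differentiator than the service levels behind it.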

Frédéric Faure’s Cloud AWS Infrastructure vs. Physical Infrastructure analysis (translated to English for the High Scalability blog and posted on 7/8/2010) begins:

This is a guest post by Frédéric Faure (architect at Ysance) on the differences between using a cloud infrastructure and building your own. Frédéric was kind enough to translate the original French version of this article into English.

I’ve been noticing many questions about the differences inherent in choosing between a Cloud infrastructure such as AWS (Amazon Web Services) and a traditional physical infrastructure. Firstly, there are a certain number of preconceived notions on this subject that I will attempt to decode for you. Then, it must be understood that each infrastructure has its advantages and disadvantages: a Cloud-type infrastructure does not necessarily fulfill your requirements in every case; however, it can satisfy some of them by optimizing or facilitating the features offered by a traditional physical infrastructure. I will therefore demonstrate the differences between the two that I have noticed, in order to help you make up your own mind.

The Framework

Cloud
There are several types of Cloud possibilities and I will stick with the AWS types, which are infrastructure-oriented services, rather than the Google-type services (GAE – Google App Engine), to mention just one, which offer a running environment for your web applications developed with the APIs provided (similar to a framework). In fact, regarding the latter, we can’t really speak of infrastructure management from the client’s point of view (they are the ones holding the credit card), because we upload our application using the APIs provided and leave the entire infrastructure management to the service provider. It doesn’t mean it’s less about Cloud computing, but simply that it’s a Cloud service that’s more PaaS-oriented than infrastructure-oriented.

XaaS

Several abstraction layers: each vendor targets its service at one or more layers. …

Frédéric continues with his physical infrastructure analysis by comparing infrastructures based directly on hardware to those based on virtualized environments.

Chuck Hollis distinguishes Commodity Clouds vs. Differentiated Clouds in this 7/9/2010 post:

Randy Bias (@randybias, CEO of Cloudscaling) tweeted earlier today that -- compared to things like a Vblock -- the clouds that he was capable of building were about 1/10th the cost [tweet link added.]
Now, that's an interesting discussion, to be sure. 

And I'm not here to berate Randy -- or anyone else -- who agrees with this line of thinking. Randy appears to be a smart guy, and very experienced as well.

It's not a new thought -- the broader industry debate around commodity vs. differentiated has been around for a very long time indeed, and isn't going away anytime soon. It's moved from hardware, to software and now to the cloud.

But how does this historic debate play here? And especially in the context of service providers who are looking at this more as a business proposition, as opposed to an intellectual debate?

A Few Disclaimers
We really don't have any head-to-head standardized comparisons to go look at, but I will theoretically grant a key point: raw computing ingredients (compute, memory, physical storage, etc.) are damn cheap when you go shopping in the bargain section of the IT supermarket.

Heck, the last terabyte of storage I bought for my home was less than $90. I bet it's even cheaper now. As a matter of fact, it was an Iomega product (an EMC division), so we're no strangers to the allure of cheap and cheerful tech.
But there's a bit more to the picture that is worth considering.

Someone Has To Assemble The Pieces
Randy's firm is what I call a "cloud builder". They understand the requirements, select the components, assemble the pieces, build some operational tools, and eventually hand it over to the service provider.

I am presuming that Randy's services (or anyone else like him) are not free.  Given his level of expertise, I would expect that he deserves to command a premium in the market.  So I think a more useful comparison is probably 'cloud delivered ready for use' rather than 'prices for raw ingredients'

Make no mistake -- selecting components, integrating them, qualifying their boundaries, characterizing their behavior -- that's a lot of heavy lifting.  When you buy something pre-integrated like a Vblock, all that effort is already baked in.

Who Ya Gonna Call?
Let's say something breaks. I'm not talking about a simple component failure, I'm talking about several pieces that aren't interacting the way they're supposed to.  None of the componentry vendors have invested in support models that can handle how things interact, rather than the individual pieces themselves.
Add in the inevitable upgrades, patches, enhancements, etc. -- more heavy lifting.

If the end result is to be used for commercial purposes, those are costs that have to be identified and figured in somewhere. With the Vblock model, those costs are certain and visible -- less so with other commodity-based models.

So we've gone from "cost of ingredients" to "cost of delivered cloud" to "cost to support and operate". 

There's more, but I think you get the picture ...

What Can It Do?
Take something like a Vblock.  You'll see a lot of differentiated technology.  Differentiated virtualization from VMware. Differentiated converged server and storage from Cisco.  Differentiated storage, backup, replication, management and security from EMC.

All of that differentiation was done with a purpose: more performance, better availability, tighter integration, stringent security, etc. As a result, it can support mainstream IT workloads using a cloud operational model -- just about all of them. That's useful, as we'll see in a moment.

If we take even one or two aspects -- like "performance" or "availability" -- unfortunately there are no good metrics for independent comparisons. BTW, these terms have subtly different meanings when discussing workloads specific to enterprise IT vs. generic compute images.

Speaking strictly on behalf of EMC, we are quite familiar with what commodity clouds can do, and have built a few of our own. We have a good idea as to what they do well, and what they don't do well.

In all fairness, we do encounter use cases (and associated business models) where -- yes -- the commodity-based cloud is an attractive option.  But -- at present -- there's not a lot of these use cases to go around, and it doesn't appear that there's a lot of money to be made, since -- well -- they're commodity clouds, and hence undifferentiated.

Which brings me to ...

Who's Going To Want It?
By far, the largest target market for cloud services are traditional enterprise IT organizations. 

And I am continually amazed at how so many people think that "it's cheaper!" will somehow overcome all the other valid concerns these people have when considering any form of an external IT service.

Not to make an unpleasant analogy, but consider health care -- especially in more developed countries. Here in the USA, we're very concerned with controlling rising health care costs -- but not at the price of substandard care!

Are there enterprise-class workloads that could possibly run in a commodity cloud?  Perhaps, but examples are hard to come by. Certainly not the meat-and-potatoes stuff like large databases and critical business processes. At least, not in the near term :-)

The Service Provider Business Angle

Given my exposure to SP business models, I can make a strong case for many of them to have a rock-bottom, low-cost IaaS offering. If someone is shopping price -- and price alone -- you need to have something to get into the discussion.

As a standalone business model though, I find it singularly unattractive. There are a limited number of paying customers, more and more people are entering the market, and prices continue to fall.

However, as an "on ramp" to more differentiated services -- and hence more profitable -- it is a far more attractive proposition. Create an environment where it's easy to move people up to more -- more performance, more availability, more security, more control, more functionality, etc. -- and do so seamlessly and non-disruptively.

The real question in my mind?  Does it make sense to do this with one infrastructure and model that can flex upwards, or invest in two distinct architectural and management models -- one optimized for sheer cost, and one optimized for the delivery of richer services?

My argument has been the former, and -- so far -- it's getting a fair amount of acceptance.

That being said, it's still early days :-)

With All Due Respect
I think the commodity vs. differentiated discussion is a good one.  It was good 20 years ago, and it'll probably be good 20 years from now.

The "good enough" school keeps getting better. What used to require differentiated technology can sometimes be done using a commodity approach.  However, most experienced practitioners have learned to look beyond the simple prices of the ingredients.

That being said, differentiation hasn't stopped either -- it keeps getting better as well. 

The real challenge for service providers? Keeping ahead of both waves :-)

Chuck is VP -- Global Marketing CTO for EMC Corporation, which just exited the cloud computing business and purchased “big data” vendor Greenplum. See the Other Cloud Computing Platforms and Services section of my Windows Azure and Cloud Computing Posts for 7/5/2010+ post.

The preceding post is the third in a series of conversations between Chuck and Randy Bias which began with Chuck’s My Discussions With A Cloud Builder of 6/28/2010 and Randy’s Building A Commodity Cloud with EMC? reply of 6/30/2010.

<Return to section navigation list> 
