Monday, May 17, 2010

Windows Azure and Cloud Computing Posts for 5/17/2010+

Windows Azure, SQL Azure Database and related cloud computing topics now appear in this weekly series.

 
Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections:

To use the above links, first click the post’s title to display the single article you want to navigate.

Cloud Computing with the Windows Azure Platform published 9/21/2009. Order today from Amazon or Barnes & Noble (in stock.)

Read the detailed TOC here (PDF) and download the sample code here.

Discuss the book on its WROX P2P Forum.

See a short-form TOC, get links to live Azure sample projects, and read a detailed TOC of electronic-only chapters 12 and 13 here.

Wrox’s Web site manager posted on 9/29/2009 a lengthy excerpt from Chapter 4, “Scaling Azure Table and Blob Storage” here.

You can now download and save the following two online-only chapters in Microsoft Office Word 2003 *.doc format by FTP:

  • Chapter 12: “Managing SQL Azure Accounts and Databases”
  • Chapter 13: “Exploiting SQL Azure Database's Relational Features”

HTTP downloads of the two chapters are available from the book's Code Download page; these chapters will be updated in May 2010 for the January 4, 2010 commercial release. 

Azure Blob, Drive, Table and Queue Services

Thermous posted an Azure Library for Lucene.Net (Full Text Indexing for Azure) to the MSDN Code Gallery on 5/12/2010:

Project description
This project allows you to create Lucene Indexes via a Lucene Directory object which uses Windows Azure BlobStorage for persistent storage.
Background: Lucene.NET

Lucene is a mature, Java-based, open source full-text indexing and search engine and property store. Lucene.NET is a mature port of it to C#.

Lucene provides:

  • Super simple API for storing documents with arbitrary properties
  • Complete control over what is indexed and what is stored for retrieval
  • Robust control over where and how things are indexed, how much memory is used, etc.
  • Superfast and super rich query capabilities
    • Sorted results
    • Rich constraint semantics AND/OR/NOT etc.
    • Rich text semantics (phrase match, wildcard match, near, fuzzy match etc)
    • Text query syntax (example: Title:(dog AND cat) OR Body:Lucen* )
    • Programmatic expressions
    • Ranked results with custom ranking algorithms
AzureDirectory

AzureDirectory smartly uses local file storage to cache files as they are created and automatically pushes them to blob storage as appropriate. Likewise, it smartly caches blob files back to the client when they change. This provides a nice blend of just-in-time syncing of data local to indexers or searchers across multiple machines.

With the flexibility that Lucene provides over data in memory versus storage, and the just-in-time blob transfer that AzureDirectory provides, you have great control over the composability of where data is indexed and how it is consumed.
To be more concrete: you can have 1..N Worker roles adding documents to an index, and 1..N searcher Web roles searching over the catalog in near real time.

(Remember that each Worker and Web role incurs individual compute charges of $0.12/hour.)

Thermous continues with sample code and a reference to “a LINQ to Lucene provider on CodePlex, which allows you to define your schema as a strongly typed object and execute LINQ expressions against the index.”
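
The post’s sample code isn’t reproduced here, but a minimal sketch of the usage pattern described above might look like the following. This is an assumption based on the project description, not code from the library: the AzureDirectory constructor, the namespaces, and the "testcatalog" container name are illustrative and should be checked against the actual download.

using Lucene.Net.Analysis.Standard;
using Lucene.Net.Documents;
using Lucene.Net.Index;
using Lucene.Net.Store.Azure;      // assumed namespace for AzureDirectory
using Microsoft.WindowsAzure;      // CloudStorageAccount

// Point a Lucene Directory at a blob container; AzureDirectory handles the
// local caching and the push of index segments to blob storage.
var storageAccount = CloudStorageAccount.DevelopmentStorageAccount;
var azureDirectory = new AzureDirectory(storageAccount, "testcatalog");

// Create (or open) the index and add a document with arbitrary properties.
using (var indexWriter = new IndexWriter(azureDirectory, new StandardAnalyzer(), true))
{
    var doc = new Document();
    doc.Add(new Field("Title", "dog and cat", Field.Store.YES, Field.Index.ANALYZED));
    indexWriter.AddDocument(doc);
    indexWriter.Commit();
}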

<Return to section navigation list> 

SQL Azure Database, Codename “Dallas” and OData

David Robinson introduces the SQLAzureHelper class in his Vertical Partitioning in SQL Azure: Part 1 post of 5/17/2010 to the SQL Azure Team blog:

SQL Azure currently supports 1 GB and 10 GB databases. If you want to store larger amounts of data in SQL Azure you can divide your tables across multiple SQL Azure databases. This article will discuss how to use a middle layer to join two tables on different SQL Azure databases using LINQ. This technique vertically partitions your data in SQL Azure.

In this version of vertical partitioning for SQL Azure we are dividing all the tables in the schema across two or more SQL Azure databases. In choosing which tables to group together on a single database, you need to understand how large each of your tables is and what its potential future growth will be – the goal is to distribute the tables evenly so that each database is the same size.

There is also a performance gain to be obtained from partitioning your database. Since SQL Azure spreads your databases across different physical machines, you can get more CPU and RAM resources by partitioning your workload. For example, if you partition your database across ten 1 GB SQL Azure databases, you get 10X the CPU and memory resources. There is a case study (found here) by TicketDirect, who partitioned their workload across hundreds of SQL Azure databases during peak load.

When partitioning your workload across SQL Azure databases, you lose some of the features of having all the tables in a single database. Some of the considerations when using this technique include:

  • Foreign keys across databases are not supported. In other words, a primary key in a lookup table in one database cannot be referenced by a foreign key in a table on another database. This is similar to SQL Server’s restriction on cross-database foreign keys.
  • You cannot have transactions that span databases, even if you are using the Microsoft Distributed Transaction Coordinator on the client side. This means that you cannot roll back an insert on one database if an insert on another database fails. This restriction can be worked around through client-side coding – you need to catch exceptions and execute “undo” scripts against the successfully completed statements, as sketched below.
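
A minimal sketch of that client-side compensation pattern follows. It is illustrative only – not code from the post – and the table names, values, and “undo” statement are hypothetical:

using System.Configuration;
using System.Data.SqlClient;

// Insert into the first database, then the second; if the second insert fails,
// run a compensating "undo" statement against the first.
using (var colorConnection = new SqlConnection(
    ConfigurationManager.ConnectionStrings["ColorDatabase"].ConnectionString))
using (var companyConnection = new SqlConnection(
    ConfigurationManager.ConnectionStrings["CompanyDatabase"].ConnectionString))
{
    colorConnection.Open();
    companyConnection.Open();

    new SqlCommand("INSERT INTO Colors (ColorName, CompanyId) VALUES ('Red', 1)",
        colorConnection).ExecuteNonQuery();
    try
    {
        new SqlCommand("INSERT INTO Companies (CompanyId, CompanyName) VALUES (1, 'Contoso')",
            companyConnection).ExecuteNonQuery();
    }
    catch (SqlException)
    {
        // Compensate for the statement that already succeeded on the other database.
        new SqlCommand("DELETE FROM Colors WHERE ColorName = 'Red' AND CompanyId = 1",
            colorConnection).ExecuteNonQuery();
        throw;
    }
}
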
SQLAzureHelper Class

In order to accomplish vertical partitioning we are introduc[ing] the SQLAzureHelper class, which:

  • Implements forward read only cursors for performance.
  • Support[s] IEnumerable and LINQ
  • Disposes of the connection and the data reader when the result set is no longer needed.

This code has the performance advantage of using forward-only, read-only cursors, which means that data is not fetched from SQL Azure until it is needed for the join.

[Download the SQLAzureHelper class: SqlAzureHelper.cs]
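
The actual class comes from the download above. Purely as an assumption about its shape – not the downloaded source – a helper with the following signature would support the behavior described and the usage shown next:

using System;
using System.Collections.Generic;
using System.Data;
using System.Data.SqlClient;

public static class SQLAzureHelper
{
    // Opens the connection, lets the caller build a SqlDataReader, and streams rows
    // forward-only; the connection and reader are disposed when enumeration completes.
    public static IEnumerable<IDataRecord> ExecuteReader(
        string connectionString,
        Func<SqlConnection, SqlDataReader> command)
    {
        using (var connection = new SqlConnection(connectionString))
        {
            connection.Open();
            using (var reader = command(connection))
            {
                while (reader.Read())
                    yield return reader;
            }
        }
    }
}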

The code to get the result sets from the SQLAzureHelper class looks like this:

var colorDataReader = SQLAzureHelper.ExecuteReader(
    ConfigurationManager.ConnectionStrings["ColorDatabase"].ConnectionString,
    sqlConnection =>
    {
        SqlCommand sqlCommand =
            new SqlCommand("SELECT ColorName, CompanyId FROM Colors",
                sqlConnection);
        return (sqlCommand.ExecuteReader());
    });

var companyDataReader = SQLAzureHelper.ExecuteReader(
    ConfigurationManager.ConnectionStrings["CompanyDatabase"].ConnectionString,
    sqlConnection =>
    {
        SqlCommand sqlCommand =
            new SqlCommand("SELECT CompanyId, CompanyName FROM Companies",
                sqlConnection);
        return (sqlCommand.ExecuteReader());
    });

The result sets returned from the two SQL Azure databases are joined by LINQ [below].

LINQ

LINQ is a set of extensions to the .NET Framework that encompass language-integrated query, set, and transform operations. It extends C# and Visual Basic with native language syntax for queries and provides class libraries to take advantage of these capabilities. You can learn more about LINQ here. This code uses LINQ as a client-side query processor to perform the joining and querying of the two result sets.

var query =
    from color in colorDataReader
    join company in companyDataReader on
        (Int32)color["CompanyId"] equals (Int32)company["CompanyId"]
    select new
    {
        ColorName = (string)color["ColorName"],
        CompanyName = (string)company["CompanyName"]
    };

foreach (var combo in query)
{
    Console.WriteLine(String.Format("{0} - {1}", combo.CompanyName, combo.ColorName));
}

This code takes the result sets and joins them based on CompanyId, then selects a new class comprised of CompanyName and ColorName.

Connections and SQL Azure

One thing to note is that the code above doesn’t take into account the retry scenario mentioned in our previous blog post. This has been done to simplify the example. The retry code needs to go outside of the SQLAzureHelper class to completely re-execute the LINQ query.
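
A hedged sketch of what that outer retry might look like is shown below; the attempt count, the back-off delay, and treating every SqlException as transient are simplifying assumptions, not guidance from the post:

using System;
using System.Configuration;
using System.Data.SqlClient;
using System.Linq;
using System.Threading;

// Because the forward-only readers cannot be rewound, the retry has to rebuild
// both readers and re-execute the entire LINQ query.
const int maxAttempts = 3;
for (int attempt = 1; attempt <= maxAttempts; attempt++)
{
    try
    {
        var colorDataReader = SQLAzureHelper.ExecuteReader(
            ConfigurationManager.ConnectionStrings["ColorDatabase"].ConnectionString,
            conn => new SqlCommand("SELECT ColorName, CompanyId FROM Colors", conn).ExecuteReader());

        var companyDataReader = SQLAzureHelper.ExecuteReader(
            ConfigurationManager.ConnectionStrings["CompanyDatabase"].ConnectionString,
            conn => new SqlCommand("SELECT CompanyId, CompanyName FROM Companies", conn).ExecuteReader());

        var query =
            from color in colorDataReader
            join company in companyDataReader on
                (Int32)color["CompanyId"] equals (Int32)company["CompanyId"]
            select new
            {
                ColorName = (string)color["ColorName"],
                CompanyName = (string)company["CompanyName"]
            };

        foreach (var combo in query)
            Console.WriteLine("{0} - {1}", combo.CompanyName, combo.ColorName);

        break; // success, no retry needed
    }
    catch (SqlException)
    {
        if (attempt == maxAttempts) throw;
        Thread.Sleep(TimeSpan.FromSeconds(5)); // back off, then rebuild the readers
    }
}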

In our next blog post we will demonstrate horizontal partitioning using the SQLAzureHelper class.

I’m glad to see the beginning of some concrete advice for SQL Azure database partitioning. However, the forthcoming availability of 50-GB Azure databases will considerably reduce the need for partitioning in departmental-level projects.

The Codename “Dallas” Team published an 18-page Microsoft Codename "Dallas" Whitepaper on 4/22/2010:

Microsoft Codename "Dallas" is a new cloud service that provides a global marketplace for information including data, web services, and analytics. Dallas makes it easy for potential subscribers to locate a dataset that addresses their needs through rich discovery. When they have selected the dataset, Dallas enables information workers to begin analyzing the data and integrating it into their documents, spreadsheets, and databases.

Similarly, developers can write code to consume the datasets on any platform or simply include the automatically created proxy classes. Applications from simple mash-ups to complex data-driven analysis tools are possible with the rich data and services provided. Applications can run on any platform including mobile phones and Web pages. When users begin regularly using data, managers can view usage at any time to predict costs.

Dallas also provides a complete billing infrastructure that scales smoothly from occasional queries to heavy traffic. For subscribers, Dallas becomes even more valuable when there are multiple subscriptions to different datasets: although there may be multiple content providers involved, data access methods, reporting and billing remains consistent.

For content providers, Dallas represents an ideal way to market valuable data and a ready-made solution to e-commerce, billing, and scaling challenges in a multi-tenant environment – providing a global marketplace and integration points into Microsoft’s information worker assets.

Just a reminder.

<Return to section navigation list> 

AppFabric: Access Control and Service Bus

Dave Kearns asserts “Data breaches can occur when not enough attention is paid to account and access governance” in a preface to his Revealing the 'cracks' in provisioning post of 5/17/2010:

At the recent European Identity Conference, Cyber-Ark's Shlomi Dinoor (he's vice president of Emerging Technologies) emphasized to me that nothing is ever 100% in IdM. While our topic was "Security and Data Portability in the Cloud" he wanted to remind me that provisioning -- the oldest of IdM services -- was still somewhat problematic. He did this by pointing me to a recent article in Dark Reading: "Database Account-Provisioning Errors A Major Cause Of Breaches."

In the article, author Ericka Chickowski points to a recent data breach:

"Take the case of Scott Burgess, 45, and Walter Puckett, 39, a pair of database raiders who were indicted this winter for stealing information from their former employer, Stens Corp. Burgess and Puckett carried out their thievery for up to two years after they left Stens simply by using their old account credentials, which were left unchanged following their departures. Even after accounts were changed, the duo were subsequently able to use different log-in credentials to continue pilfering information."

The problem is that too often we concentrate on the mechanisms of provisioning (and even de-provisioning) without paying enough attention to account and access governance.

But even more problematic can be those accounts that aren't particularly identified with a user.

Phil Lieberman, of Lieberman Software (who was also with me in Munich), says that organizations: "have to ask themselves the question, 'Where do we have accounts? Tell me all of the places where we have accounts, and tell me all the things they use these accounts for.'" He goes on to say: "And the second question is, 'So we're using these accounts -- when were those passwords changed? And if we're using those accounts, what is the ACL [access control list] system we're using, and when was the last time we checked the ACL system?' And finally, 'We have audit logs being generated by these databases -- are we analyzing these audit logs looking for patterns that indicate abuse?'"

Lieberman and Dinoor both represent companies in the "emerging" (in quotes, because the discipline goes back dozens of years, yet it's a hot topic today) Privileged User Management (PUM) space, also called PAM (Privileged Access Management) or PIM (Privileged Identity Management). PUM is the discipline of creating, maintaining and removing critical accounts (administrator on Windows, root on Unix, the DBA on a database and so on). These accounts represent the "cracks" in provisioning through which data gets breached. If reading the article noted above gives you pause, you should check out the offerings from Cyber-Ark and Lieberman Software. It might help you sleep better at night.

<Return to section navigation list>

Live Windows Azure Apps, APIs, Tools and Test Harnesses

Eugenio Pace posted WAG - Part 1 - Release Candidate of “patterns & practices - Windows Azure Guidance” to CodePlex on 5/17/2010:

Recommended Download

Source Code WAG-Sample Code-RC source code, 3310K, uploaded Today - 7 downloads

Other Available Downloads

Documentation WAG-Docs-RC documentation, 1571K, uploaded Today - 7 downloads

Release Notes
"Release Candidate" for Part 1 of the Windows Azure Guide
Highlights of this release are:
  • Code samples complete.
  • Fixed a few bugs in the "Dependency Checker".
  • Many bug fixes in the samples.
  • All chapters of the guide.

As usual, make sure you read the "readme" file included in the release.

More background and links to resources are on the WAG Home Page.

MonitorGrid announced the availability of their MonitorGrid server health monitoring service on 5/17/2010:

The MonitorGrid cloud app runs on Azure and is wired with Linxter. Linxter allows for secure, reliable, two-way communication, regardless of the number of intermediary networks involved and regardless of whether or not they are secure.

MonitorGrid is powered by Linxter and Windows Azure.

  • No credit card required to sign up.
  • Your first two servers are on us!
  • MonitorGrid currently supports Microsoft Servers
    Maximum Features
    • Securely monitor servers across multiple networks and domains
    • View current and historical performance reports
    • Issue remote restarts, both on demand and scheduled
    • Transfer files to one or many servers
    Minimal Setup
    • Easy-to-install service, easy-to-use web app
    • Customizable performance thresholds and notification settings
    • Firewall friendly – no changes to your security environment needed
    • Cloud-based – no additional infrastructure required
    I’m signing up to compare MonitorGrid with mon.itor.us and Pingdom. You’ll need to follow the instructions from this 00:19:07 Linxter Azure Integration Tutorial video to add the Linxter server features to your Azure project. You can download the Azure demo solution file from the Linxter Developer site.

    Stand by for results.

    Brian Johnson interviewed Linxter CEO Jason Milgram (@jmilgram) in this 00:11:59 BizSpark Startup Linxter Launches Azure Based MonitorGrid Channel9 Webcast of 5/5/2010:

    BizSpark Startup Linxter is launching a new product called MonitorGrid. It's a server health service that was conceived and developed as part of one of our BizSpark Incubation Weeks last year.

    I was able to talk to Linxter CEO Jason Milgram today about MonitorGrid and about the future of the project.

    Jason is the presenter of the Linxter Azure Integration Tutorial mentioned in the preceding article.

    Panagiotis Kefalidis (@pkefal) asks Windows Azure – Is Thread spawning from Worker Roles the paspartu? and concludes “No” in this 5/18/2010 (Athens time) post:

    Paspartu is French for “one size fits all”. Recently I’ve been coming across posts explaining and “promoting” the idea of spawning threads inside a worker role, each of them with a unique piece of work to be done. All of them share the same idea and describe the same thing.

    The idea

    You have some work to do, but you want to do it with the most efficient way, without having underutilized resources, which is one of the benefits of cloud computing anyway.

    The implementation

    You have a worker process (Worker Role on Windows Azure) which processes some data. Certainly that’s a good implementation, but it’s not a best practice. Most of the time, your instance will be underutilized, unless you’re doing some CPU- and memory-intensive work and you have a continuous flow of data to be processed.

    In another implementation, we created a Master-Slave pattern. A master distributes work to other slave worker roles; the roles pick up their work, do their stuff, return the result and start over again. Still, in some cases that’s not the best idea either. Same cons as before: underutilized resources, high risk of failure. If the master dies, unless the system is properly designed, your system dies. You can’t process any data.

    So, another one appeared. Inside a worker role, spawn multiple threads, each running its own process or method, doing its work and returning a result. Underutilization is minimized, the Thread Pool is doing all the hard work for us and, as soon as .NET 4.0 is supported on Windows Azure, parallelization is easy and, allow me to say, mandatory. But what happens if the worker instance dies? Or restarts? Yes, your guess is correct. You lose all the threads, and all the processing done up to that moment is lost, unless you persist it somehow. If you had multiple instances of your worker role to imitate that behavior, that wouldn’t happen. You’ll only lose data from the instance that died.
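
    To make the pattern just described concrete, here is a minimal, hypothetical worker-role sketch (not code from the post): it spawns a few threads and persists each result as soon as it is produced, so a restarted instance loses only the items that were in flight. The queue/table helpers are placeholders.

    using System;
    using System.Threading;
    using Microsoft.WindowsAzure.ServiceRuntime;

    public class WorkerRole : RoleEntryPoint
    {
        public override void Run()
        {
            // Spawn several processing threads inside the single role instance.
            for (int i = 0; i < 4; i++)
            {
                new Thread(ProcessLoop) { IsBackground = true }.Start();
            }

            while (true)
                Thread.Sleep(TimeSpan.FromSeconds(30)); // keep the role alive
        }

        private void ProcessLoop()
        {
            while (true)
            {
                var workItem = GetNextWorkItem();   // placeholder: e.g., read an Azure queue message
                if (workItem == null)
                {
                    Thread.Sleep(1000);
                    continue;
                }

                var result = Process(workItem);     // placeholder: the actual work

                // Persist immediately; if the instance dies or restarts, only the
                // in-flight item is lost and can be reprocessed later.
                SaveResult(result);                 // placeholder: write to table or blob storage
            }
        }

        private object GetNextWorkItem() { return null; }     // stub for the sketch
        private object Process(object item) { return item; }  // stub for the sketch
        private void SaveResult(object result) { }            // stub for the sketch
    }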

    As Eugenio Pace says “You have to be prepared to fail” and he’s right. Every single moment, your instance can die, without a single notice and you have to be prepared to deal with it.

    Oh, boy.

    So really, there is no single solution or best practice. For me, it’s best guidance. Depending on your scenario, one of the solutions above, or even a new one, can fit better for you than for others. Every project is unique and has to be treated as such. Try to think out of the box and remember that this is deep water for everyone. It’s just that some of us swim better.

    Maarten Balliauw wrote in his Taking Care of a Cloud Environment (slides) post of 5/17/2010:

    It looks like I’m only doing sessions lately :-) Here’s another slide deck for a presentation I did on the Architect Forum last week in Belgium.

    Abstract: “No, this session is not about greener IT. Learn about using the RoleEnvironment and diagnostics provided by Windows Azure. Communication between roles, logging and automatic upscaling of your application are just some of the possibilities of what you can do if you know about how the Windows Azure environment works.”

    Slides: Taking care of a cloud environment

    View more presentations from Maarten Balliauw.

    InformationWeek India reports “Bangalore based SportingMindz has migrated 22yardz, a cricket match analysis product, to the Windows Azure Platform” in its Cloud computing comes to IPL3 story of 5/17/2010:

    With IPL Season 3 occupying the mindshare of cricket fans today, sportsmen are gearing up to put their best foot forward in the cricket arena. In this competitive scenario, technology is expected to play a key role.

    Vendors too are looking to cater to this attractive market through a variety of delivery models. The Cloud is a natural fit in this overall strategy. For example, SportingMindz, a Bangalore based organization providing analytical solutions and services to sports organizations, has partnered with Microsoft India for the IPL3 series. The firm has migrated its cricket match analysis product, 22yardz, to the Windows Azure Platform. 22yardz is currently being used by Royal Challengers Bangalore and Kings XI Punjab. [Emphasis added.]

    22yardz is cricket match analysis software designed to analyze the different aspects of a live match scenario, giving detailed statistics along with the strategy of oppositions and player analysis in all departments of the match, with seamless integration of videos. The cloud model has helped SportingMindz address pain points such as performance, scalability and availability.

    Microsoft Research’s eScience Group posted a Video: Microsoft Windows Azure Cloud Platform Aiding Water Researchers in California on 5/14/2010:

    Microsoft Research’s eScience Group is focused on researching ways that information technology (IT) can help solve scientific problems. Dr. Catharine van Ingen, a Partner Architect in Microsoft Research’s eScience Group, talks in this video about how she and others in Microsoft Research have worked with scientists at the University of California, Berkeley and Lawrence Berkeley National Laboratory to address the computing needs in managing Northern California’s Russian River Valley watershed. In this project, Microsoft's Windows Azure cloud-computing platform was used to help these researchers manage massive amounts of data in a scalable way.

    To watch the video and learn more about how Windows Azure was used, click here.

    <Return to section navigation list> 

    Windows Azure Infrastructure

    Bill Hilf announced on 5/17/2010 that Microsoft will start Modeling the World:

    Technology is transforming our ability to measure, monitor and model how the world behaves. This has profound implications for scientific research and can transform the way we tackle global challenges such as health care and climate change. This transformation also will have a huge impact on engineering and business, delivering breakthroughs and discoveries that could lead to new products, new businesses – even new industries.

    Today, we’re proud to introduce Microsoft’s Technical Computing initiative, a new effort focused on empowering millions of the world’s smartest problem-solvers.  We’ve designed this initiative to bring supercomputing power and resources to a much wider group of the scientists, engineers and analysts who are using modeling and prediction to solve some of the world’s most difficult challenges.

    Our goal is to create technical computing solutions that speed discovery, invention and innovation.  Soon, complicated tasks such as building a sophisticated computer model – which would typically take a team of advanced software programmers months to build and days to run – will be accomplished in an afternoon by a single scientist, engineer or analyst.  Rather than grappling with complicated technology, they’ll be able to spend more time on important work.

    As part of this initiative we’re also bringing together some of the brightest minds in the technical computing community at www.modelingtheworld.com to discuss the trends, challenges and opportunities we share. Personally, I think this site provides a great interactive experience with fresh, relevant content—I’m incredibly proud of it. Please tune in and join us—we welcome your ideas and feedback.

    In terms of technology, the initiative will focus on three key areas:

    1. Technical computing to the cloud: Microsoft will help lead the way in giving scientists, engineers and analysts the computing power of the cloud.  We’re also working to give existing high-performance computing users the ability to augment their on-premises systems with cloud resources that enable ‘just-in-time’ processing. This platform will help ensure processing resources are available whenever they are needed—reliably, consistently and quickly.  
    2. Simplify parallel development: Today, computers are shipping with more processing power than ever, including multiple cores. But most modern software only uses a small amount of the available processing power. Parallel programs are extremely difficult to write, test, and troubleshoot.  We know that a consistent model for parallel programming can help more developers unlock the tremendous power in today’s computers and enable a new generation of technical computing. We’re focused on delivering new tools to automate and simplify writing software through parallel processing from the desktop… to the cluster… to the cloud.    
    3. Develop powerful new technical computing tools and applications: Scientists, engineers and analysts are pushing common tools (i.e., spreadsheets and databases) to the limits with complex, data-intensive models. They need easy access to more computing power using simpler tools to increase the speed of their work, and we’re building a platform with this objective in mind. We expect that these efforts will yield new, easy-to-use tools and applications that automate data acquisition, modeling, simulation, visualization, workflow and collaboration.

    The path we’ve taken to arrive at this initiative is built on a foundation of great technology and underpinned by a strong vision for bringing the power of technical computing to those who need it most. Microsoft is committed to this business, and I am looking forward to working with our industry partners and customers to help bring about the next wave of discovery. 

    Graphic credit: Microsoft Corp.

    Mary Jo Foley rings in with a Microsoft launches Technical Computing Initiative 2.0 post of 5/17/2010 post to her All About Microsoft blog for ZDNet:

    Microsoft is creating a new Technical Computing Group, which company officials unveiled on May 17, and is launching (again) a corporate Technical Computing Initiative.

    (I say “again” because Microsoft launched a Technical Computing Initiative in 2004, as documented in this 2007 Microsoft Research Technical Computing white paper.)

    The new group falls under Bob Muglia, who is President of Microsoft’s Server and Tools business, but will work closely with various groups in Microsoft Research, company officials said. The three areas of focus of the group and the broader initiative will be cloud, parallel-programming and new technical computing tools. There is a new technical computing community Web site, www.modelingtheworld.com, launching as part of the effort.

    If you’re interested in particulars regarding the technical tools, here’s what the Softies are saying (via a spokesperson):

    “Windows HPC (High Performance Computing) Server 2008 and all of the capabilities in Visual Studio 2010 that allow developers to take advantage of parallelism (e.g., the parallel profiler and debugger and the ConcRT (concurrent) runtime) are examples of technology the Technical Computing group has already delivered. In the future we’ll be delivering Technical Computing services on top of Azure that will integrate with desktop applications from Microsoft and partners.” [Emphasis added.]

    Microsoft already has been working on all of these areas. A couple of months ago, the Microsoft Research team announced it was working with the National Science Foundation to make cloud resources available to engineers and scientists, for example. Last year, Microsoft created a new eXtreme Computing Group, which was focused on applying exascale computing technologies. And the Softies have been working on new parallel-processing and multi-core tools and techniques for the past couple of years.

    Today’s announcement builds on these existing initiatives, Microsoft officials said.

    Todd Bishop expands on the Modeling the World project in his Microsoft takes wraps off stealth plan to boost scientific modeling article of 5/17/2010 for the Puget Sound Business Journal:

    Microsoft has been quietly building a team of hundreds of people with the mission of giving the world's scientists and engineers the ability to develop and work with complex models of natural and manmade systems much more quickly and easily than they can today.

    "It's one of the largest-growth teams in the company right now, and overall one of the biggest bets that we're making strategically," said Bill Hilf, a Microsoft general manager working on the Technical Computing initiative.

    The company released the first details of the initiative in an article on its website this afternoon. It also launched an associated site, at www.modelingtheworld.com. The company says real-time scientific modeling could help society understand and address some of the world's biggest environmental and global health problems.

    As part of the Technical Computing initiative, Microsoft says it's developing a technology platform that will help developers build desktop applications that can tap into large volumes of data and easily harness powerful computers in server clusters and data centers. In addition, the company is developing a new set of technical computing services for its Azure cloud-computing system, to help scientists make better use of the company's worldwide data centers. [Emphasis added.]

    The team is also working on ways of developing software better tuned for machines with multiple processors, or computing cores. …

    Dustin Amrhein asserts “Effective cloud management solutions are a must” in his Management Solutions for the Cloud essay of 5/17/2010:

    The role of cloud management solutions in the enterprise world is becoming increasingly important. With the interest and adoption of cloud in the enterprise steadily rising, solutions that help an organization to effectively harness, orchestrate, and govern their use of the cloud are floating to the top of the needs list. Developing and delivering solutions in this arena is no small task, and one made even tougher by enterprise user expectations and requirements. Just what are some of the enterprise requirements and expectations for cloud management solutions?

    First things first, users expect cloud management solutions to be broadly applicable. What do I mean by that? Take for instance a recent discussion I had with an enterprise user about a management solution for cloud-based middleware platforms. The solution that was the topic of our discussion enables users to create middleware environments, virtualize them, deploy them into a cloud environment, and manage them once they are up and running.

    During the course of that discussion, the user told me: "I want one tool to do it all." In this case, all referred to the ability to support multiple virtualization formats, varying hardware platforms, different operating system environments, all cloud domains, and a plethora of middleware software. Of course, the user acknowledged it was a bit of an overreach because when a tool "does it all" it often means that it does nothing, and when I pressed a bit more the real desire was for a single, unified management interface. This of course points back to the notion of open cloud solutions that I wrote about a while back. You will never get a tool that does it all, but if you get open tools, chances are you can build a centralized interface that exposes the capability of many tools, and thus logically presents a "single tool that does it all" to your end users.

    In many cases, enterprises adopt cloud computing as a more efficient and agile approach to something they already do today. For example, if I put it into the context of the part of the cloud I deal with, users may leverage the cloud as a means to standup and tear down application environments in a much faster and simpler manner than their traditional approach. No one will argue that faster and simpler is good, but that does not mean you can or should sacrifice the insight and control into these processes that the organization requires. If the enterprise requires a request/approval workflow process for commissioning and decommissioning application environments, the cloud management solution must provide the necessary hooks. In a more generalized sense, cloud management solutions must enable integration into an enterprise's governance framework. Without this integration, the truth is it is likely inapplicable for enterprise use.

    If I have learned one thing from users over the past year with respect to enterprise-ready cloud management solutions it is this: Auditability is huge! Organizations want to know who is doing what, when they are doing it, how long they are doing it for, and much more. Users pretty well assume that a cloud management solution provides insight into these kinds of metrics. The obvious use case here is the ability to track cloud usage statistics among various users and groups to facilitate cost allocation and/or chargeback throughout the enterprise. Another, perhaps less obvious, use case concerns configuration change management. The ability to very quickly determine what was changed, when it was changed, and who changed it is crucial when a cloud management solution and the underlying cloud is distributed among a wide set of enterprise users.

    The fact is that we are in the beginning phase of the emergence of need for cloud management solutions, and basic requirements and expectations are still in the formative stage. The few listed here are just a start, and some of what I hear most commonly. It will be interesting to watch the shift and increase in these expectations, especially as enterprises adopt federated, highly heterogeneous cloud environments. I certainly welcome any feedback or insight you may have into the need for cloud management solutions.

    Dustin makes a persuasive case. Microsoft must devote more resources to management solutions for Windows Azure, Azure AppFabric, and SQL Azure.

    Lori MacVittie claims “In cloud computing environments the clock literally starts ticking the moment an application instance is launched. How long should that take?” in her When (Micro)Seconds Matter post to the F5 DevCentral blog:

    The term “on-demand” implies right now. In the past, we used the term “real-time” even though what we really meant in most cases was “near time”, or “almost real-time”.  The term “elastic” associated with scalability in cloud computing definitions implies on-demand. One would think, then, that this means that spinning up a new instance of an application with the intent to scale a cloud-deployed application to increase capacity would be a fairly quick-executing task.

    That doesn’t seem to be the case, however.

    Dealing with unexpected load is now nothing more than a 10 minute exercise in easy, seamlessly integrating both cloud and data center services. 

                -- Cloud computing, load balancing, and extending the data center into a cloud, The Server Room

    A Twitter straw poll on this subject (completely unscientific) indicated an expectation that this process should (and for many does) take approximately two minutes in many cloud environments. Minutes, not seconds. Granted, even that is still a huge improvement over the time it’s taken in the past. Even if the underlying hardware resources are available there’s still all of the organizational IT processes that need to be walked through – requests, approvals, allocation, deployment, testing, and finally the actual act of integrating the application with its supporting network and application delivery network infrastructure. It’s a time-consuming process and is one of the reasons for all the predictions of business users avoiding IT to deploy applications in “the cloud.”

    IT capacity planning strategy has been to anticipate the need for additional capacity early enough that the resources are available when the need arises. This has typically resulted in over-provisioning, because it’s based on the anticipation of need, not actual demand. It’s based on historical trends that, while likely accurate, may over or under-estimate the amount of capacity required to meet historical spikes in demand.

    IS “FASTER” GOOD ENOUGH?

    Cloud computing purports to provide capacity on-demand and allow organizations to better manage resources to mitigate the financial burden associated with over-provisioning and the risks to the business by under-provisioning. The problem is that provisioning resources isn’t an instantaneous process. At a minimum the time associated with spinning up a new instance is going to delay increasing capacity by minutes.

    Virtual images don’t (yet) boot up as quickly as would be required to meet an “instant on” demand. The processes by which the application is inserted into the network and application delivery network, too, aren’t instantly executed as there are a series of steps that must occur in the right order to ensure accessibility.

    An instance that’s up but not integrated into the ecosystem is of little use, after all, and the dangers associated with missing a critical security step increase risk unnecessarily.

    The end result is that capacity planning in the cloud remains very much an anticipatory game with operators attempting to prognosticate from historical trends when more capacity will be required. Operations staff needs to be just as vigilant as they are today in their own data centers to ensure that when the demand does hit a cloud-based application the capacity to meet the demand is already available. If the cloud computing environment requires a “mere ten minutes” to provision more capacity, then the operations staff needs to be ten minutes ahead of demand. It needs to project out those ten minutes and anticipate whether more capacity will be required or not. …

    Lori continues her argument and concludes:

    If this test isn’t part of your standard “cloud” acquisition process, it should be, because “fast enough” is highly dependent on whether you need capacity available in the next hour, the next minute, or the next second.

    Geva Perry (@gevaperry) reported the availability of the CloudChasers Podcast: Battle of the Public Clouds on 5/15/2010:

    A couple of weeks ago the good folks at Novell invited me to participate in a podcast they are sponsoring called CloudChasers.

    You can listen to it here or on iTunes. Here is the description:

    Battle of the Public Clouds: Who is Winning? – April 22, 2010

    Many prognostications about the public cloud focus on three key vendors: Amazon, Google and Microsoft. This week on cloudchasers, we’ll check out the numbers, platforms and competing visions as we look at each vendor’s place in the market today and in the future. Also, with so many companies jumping on the cloud bandwagon, are there others who are more appropriate for this list? Who do YOU think will be the winner? Will there be just one winner?

    Host: Matthew T. Grant

    Guests:

    <Return to section navigation list> 

    Cloud Security and Governance

    Mary Branscombe quotes Ray Ozzie: “Facebook is doing us all a favour” in her Microsoft's Ray Ozzie on the privacy issues of cloud computing story for TechRadar.com:

    When he joined Microsoft, Microsoft's chief software architect Ray Ozzie got a chance to take a step back and look at the technology industry.

    What he saw was that the PC wasn't the centre of the computing universe any more – but like Nvidia's Jen-Hsun Huang, he told the Future in Review conference this week that he doesn't think it's going away any time soon either - he also had words to say about the cloud, online privacy, HTML 5 and Apple.

    "The world that I see panning out is one where individuals don't shift from 'I'm using exclusively this one thing called a PC as a Swiss army knife for everything I do' to using a different Swiss army knife. The beauty of what's going on in devices is you can imagine a device.

    "Previously you could imagine software and build it but hardware was very hard and took a long time to build. Now you can imagine end-to-end device services.

    "So there's probably a screen in the car that federates with the phone when you bring it into the car. Will we have a device with us that's always on? Yes. We call it a phone but it's a multi-purpose device.

    "Will we also carry something of a larger form factor that we can quickly type on? For many of us, the answer is yes." And what will it look like? "The clamshell style of device is a very useful thing and I think it will be with us for ever. I think there is a role for the desktop too…"

    Office, Docs and better productivity

    Ozzie's pet projects at Microsoft include the Azure cloud service and the social computing tools like the Spindex social aggregator, the Outlook Social Connector and Facebook Docs (which all come out of the new Fuse Labs Microsoft site near his home town of Boston). …

    Mary continues the story and concludes

    With only a hint of irony, he says "Facebook is doing us all a favour by pushing the edge and causing the conversations to be very broad."

    Should Microsoft be moving faster in mobile, in browsers, in the cloud, into this future world? "We're very impatient in the technology industry," Ozzie points out. "We get very enamoured with the next shiny object. Let's get real here. How many years have any of these things actually been out? How many years have we all been using these pocket internet companions?

    "It's actually been a relatively small number of years. We haven't even seen the TV get lit up yet as a communication device; we haven't seen all the screens on the wallbeing lit up as devices. Every single one of these is going to get lit up as a similar kind of device."

    Whitfield Diffie claims “[The Cloud] Will Destroy Current Thinking, and Maybe That's a Good Thing” in a Whit Diffie Examines Cloud Security Aspects post of 5/17/2010:

    Speaking in Australia, noted cryptographer and IT security pioneer Whit Diffie commented on Cloud Computing's potential to destroy current security approaches, but improve security overall for the masses.

    "At worst [cloud computing] will fundamentally destroy the current security paradigm," he said. "But on the other hand it's going to substantially improve the average level of security of ordinary shleps who didn't pay any attention to the matter."

    Diffie's presentation was another example of the global nature of Cloud Computing, and was made as the world turns its eyes toward Europe next month for Cloud Expo Europe in Prague June 21-22. He became famous in the 1970s with his breakthrough public-key cryptography research, and served as a Distinguished Engineer at Sun for almost 20 years (and was also a Sun Fellow). He was one of those who didn't make the transition from Sun to Oracle, and now serves as a VP with ICANN.

    Diffie said he believes that "cloud computing will become very widespread," and that "there's going to be a tremendous security gain by pushing things into standard security practices" if companies start to adapt government security contract models. "Contracts will have to occur very fast" to cater to demand for services needed for only a few minutes or fractions of a second...You've got to know whether those people are capable of fulfilling the contract. They've gone through a set of bureaucratic hurdles so that all of a sudden if a secret contract comes up it can be awarded overnight - there's very little example of that in the civilian world."

    He also warned against a rise of that seemingly invariable tendency for any business that's run by humans: proprietary methods. "Above the (open-source) GPL, everything Google does is a trade secret," he noted.

    <Return to section navigation list> 

    Cloud Computing Events

    Michael Coté’s Moving beyond befuddlement into cloud comforting – CA World post of 5/17/2010 analyzes CA Technologies’ position as a cloud computing vendor:

    CA & Cloud

    [IT is going to become] a manager of a dynamic supply chain of internal and external resources to deliver business services to internal and external clients.
    –Ajei Gopal, CA World day two keynote

    CA’s a favorite whipping boy for IT insiders. Their giant, long-lived portfolio and name-change-inducing events get most people to snicker when you mention “CA.” They’re a classic big-spend enterprise vendor: comprehensive, enterprise-priced, and rarely innovation-leading. So their string of acquisitions of relatively young and hip companies in the recent past has left folks befuddled. What exactly is CA doing with 3Tera, Nimsoft, NetQoS, Oblicore, and others?

    Stated reasons have been access to new markets (SMB and MSP with Nimsoft) and jamming in cloud (3Tera and, to an extent, NetQoS). The first few days of CA World in Las Vegas have reinforced that messaging: CA is all over the cloud, but with the experienced hand of an elder company. They’re not going to shatter the peace of your glass house…unless you want them to. The cloud is here, but we’ll trickle it in, or gush it in – you pick the speed. Hybrid clouds are the thing, getting around security concerns like “I don’t know where my data is.”

    Cloud Comfort

    The tone and agenda so far indicates that CA believes their customers are afraid of using cloud technologies, unsure about it. Both keynotes have revolved less around technology, and more around soothing IT about becoming cloud friendly.

    It’s like the old folk-lore about instant messaging in enterprises. “No one in my shop is using IM!” CIOs would decry as employees installed and used public IM clients by the thousands. More important, the ease of installing that technology and the heightened communication it brought made IM invaluable, no matter how non-enterprise it was (that is, not under the control of the corporation for purposes of security, compliance, and SLAs).

    So far, CA’s doing a good job talking the cloud talk – even with rational insertions of technologies that do the visionary stuff. At least in presenting, they seem to understand the mapping of cloud practices – mass-automating, charge back-cum-metered billing, etc. – to existing IT management practices and their own technologies. Most folks CA’s size would skim over the actual products you used to put cloud theory into practice. …

    Michael concludes:

    You can smell big consulting deals looming around there: 6 months to discover all the IT services a company has and then sort out road-maps for cloudizing – or not – each. Then a cycle for acquiring technologies to manage and run the cloud, and so on.

    How do you make an end-run around that buzz-kill cycle? No one really knows at this point. The middleware-plus-infrastructure portfolio that VMware is building up starts to look interesting: slapping up a bunch of lightweight interfaces on lumbering legacy and, hopefully, allowing new development to keep legacy IT from tripping into schedule friction. Or you could isolate the old from the new. Who knows? Being caught in this quagmire is the whole point. The guiding question is how any cloud-tooler, like CA, is going to help prevent IT from getting stuck with more legacy IT, cloud-based or not.

    Paulo Del Nibletto reports “Company's new CEO outlines a win at all costs cloud computing strategy” as a preface to his CA adds to name and its cloud strategy story of 5/17/2010 for ITBusiness.ca:

    This year's CA World is the second time in the last three events that CA has augmented its corporate name. Company CEO Bill McCracken announced to the more than 7,000 attendees that CA will now be called CA Technologies.

    The former IBM Corp. (NYSE: IBM) PC boss also revealed CA's cloud strategy, which will be to bring security to the cloud – a concern that many analysts have said has stunted the cloud's growth.

    McCracken told the story about how CA talked about a switch from SAP to Salesforce.com on the cloud to solve its dilemma of providing company employees with sales data on a unified system across all of its geographies.

    McCracken was short on specifics leaving that for the rest of the conference, but he was clear on CA's go-to-market cloud strategy. CA will leave it up to the customer.

    McCracken said that, whether on-premise or in a Software-as-a-Service model, customers will be asking for different kinds of cloud services, and that CA would offer solutions either through channel partners or managed service providers that add value, or through internal direct sales.

    “For us its going to be on premise or SaaS or both. It could affect our base. We know that but it will happen so we need to make it happen and that is one of the reasons why we bought Nimsoft. Customers decide. We will not decide for them,“ he said.

    McCracken also said that virtualization will be an integral part of CA's strategy. He said that to support cloud services customers need to virtualize first. CA announced three new programs for this line of business: Virtual Automation, Virtual Assurance and Virtual Configuration that will manage physical and virtual machines and help customers move from the glass house to the virtual world and eventually the cloud.

    Microsoft BizSpark announced on 5/17/2010 its BizSpark Camp Chicago to be held 5/21/2010 from 8:00 AM to 9:00 PM CDT at Clarity Consulting, 1 N Franklin St., Suite 3400, Chicago, IL 60606:

    Learn about Windows Azure and Windows Phone development together in this day packed with training and coding. Register now for your chance to learn the latest development techniques with Windows Azure and Windows Phone and your chance to win a Zune HD.

    Light breakfast and lunch included, followed by a networking reception.

    REQUIREMENTS (make sure you have this ready to go before coming):

    InfoQ and Trifork announce QCon is coming back to San Francisco in 2010, November 1 – 5:

    Geva Perry

    This 4th annual San Francisco enterprise software development conference designed for team leads, architects and project management is back! Bloggers wrote about 32 of the 60 sessions at last year’s event, read this article to see what the attendees said. There is no other event in the US with similar opportunities for learning, networking, and tracking innovation occurring in the Java, .NET, Ruby, SOA, Agile, and architecture communities.

    According to this tweet @gevaperry (above) will host the Cloud track.

    <Return to section navigation list> 

    Other Cloud Computing Platforms and Services

    John Brodkin notes “CA also adopts new name: CA Technologies” in his CA teams with Cisco and delivers new cloud, virtualization technologies post to NetworkWorld’s Data Center blog:

    CA is expanding its partnership with Cisco and unveiling several new management products to improve cloud computing and virtualization deployments, the company said in a series of announcements Monday at CA World in Las Vegas. The new products are based partly on technology acquired in CA's recent buying spree, in which the company purchased vendors 3Tera, Oblicore and Cassatt.

    CA also said it is changing its name slightly from CA, Inc. to CA Technologies, to reflect a broad strategy of managing "IT resources from the mainframe to the cloud, and everything in between."

    CA's partnership with Cisco includes integration of CA system management software with Cisco's Unified Computing System, letting IT pros control the Cisco technology from within the CA management interface.

    CA's Spectrum Automation Manager, a server provisioning and automation tool; CA's eHealth Performance Manager, which tracks device performance; and CA's Spectrum Infrastructure Manager, which performs network configuration management, fault isolation and root cause analysis, will all work with Cisco's UCS.

    Separately from the Cisco partnership, CA is announcing a variety of software tools to manage virtual computing resources and cloud-based systems.

    While 60% of CA's $4.2 billion business is related to the mainframe, the company is making cloud computing one of its main focuses, along with security, software-as-a-service-based IT management and virtualization management, says Tom Kendra, vice president of enterprise products and solutions.

    Managing the new virtualization layer and cloud-based services is no easy task, in part because the technologies have been installed in addition to -- rather than as replacements of -- existing IT infrastructure, and require a heterogeneous management approach, Kendra says.

    CA's cloud strategy centers around a new "Cloud-Connected Management Suite" that includes four products: Cloud Insight, for assessing how internal and external IT services relate to business priorities; Cloud Compose, for creating, deploying and managing composite services in a cloud; Cloud Optimize, which optimizes use of both internal and external IT resources for cost and performance; and Cloud Orchestrate, which "will provide workflow control and policy-based automation of changes to service infrastructures."

    CA also announced three virtualization products, which are Virtual Assurance, for monitoring, event correlation and fault and performance management in virtual environments; Virtual Automation, which provides automated self-service virtual machine life-cycle management; and Virtual Configuration, which manages sprawl, and tracks configuration changes to meet regulatory compliance and audit needs.

    The products will start hitting the market in June, but CA did not offer more specific availability or pricing information.

    The Cloud News Service reports “Company CEO Delivers Keynote at Customer Event in Las Vegas,” as a preface to its CA's McCracken: Cloud Computing is Happening Now post of 5/17/2010:

    In his keynote address at CA World 2010, Bill McCracken, chief executive officer of CA Technologies, told 7,000 attendees that the technology industry is at an inflection point, and that business will embrace virtualization and cloud computing in order to remain competitive.

    "When economic conditions, technology advances, and customer needs align, transformation happens," said McCracken. "As we emerge from the global economic downturn, we have a tremendous opportunity to leap forward and embrace change, or risk being left behind."

    McCracken also described a vision for how all businesses will evolve. "People still ask if I think the cloud is really going to happen.  I say no; I don't think it's going to happen.  I know it is going to happen because it is happening now. Virtualization and cloud computing will enable businesses to adapt to rapidly changing market and customer needs. We will be right there to help our customers gain a competitive advantage as this critical inflection point in our industry takes hold."

    "Running IT in a cloud-connected enterprise will be more like running a supply chain, where organizations can tap into the IT services as needed - specifying when, where and precisely how they are delivered," he said. "This has never been more important, because business models no longer change every few years or even once a year.  Cycles are increasingly shorter, which puts a whole new set of demands on the CIO and on the organization."

    McCracken also spoke about the evolution of the company name from CA to CA Technologies.

    "The name CA Technologies acknowledges our past and points to our future as a leader in delivering the technologies that will revolutionize the way IT powers business agility," said McCracken. "We are executing on a bold strategy, where IT resources -- from the cloud to the mainframe and everything in between -- are delivered with unprecedented levels of flexibility."

    IT professionals and customers from around the world are attending CA World to get insights into what is happening in the IT management space and to learn how to best leverage CA Technologies to maximize their organization's IT capabilities.  The user conference kicks off today and ends May 20.

    Amazon Web Services’ latest Newsletter of 5/17/2010 includes several announcements:

    In this newsletter, we are excited to announce Amazon CloudFront's access log feature is now enabled for streaming distributions - read more below. This month's newsletter also highlights AWS's global expansion with our new Singapore Region, Amazon VPC availability in Europe, and Amazon RDS availability in Northern California. In May and June, we have a full calendar of events taking place around the world and many virtual events, we hope you can join us.

    Just Announced: Amazon CloudFront Access Logs For Streaming
    We're excited to announce Amazon CloudFront's access log feature is now enabled for streaming distributions. Now, every time you stream a video using AWS’s easy to use content delivery service, you can capture a detailed record of your viewer’s activity. In addition, Amazon CloudFront will record the edge location serving the stream, the viewer's IP address, the number of bytes sent, and several other data elements. There are no additional charges for access logs, beyond normal Amazon S3 rates to write, store and access the logs. You can read more about this new feature. …

    Bob Warfield’s Amazon Stealing the Cloud post of 5/17/2010 analyzes Amazon’s dominant market position in IaaS:

    Not so risky anymore. Business Week cover from 2006.

    I saw a spate of recent articles that had some pretty amazing statistics and news bits on Amazon Web Services and competitors.   In no particular order:

    • A survey of 600 developers by Mashery reported that 69% of respondents said Amazon, Google, and Twitter were the most popular APIs they were using.
    • Even the Federal Government is turning to the Amazon Cloud to save money.  Sam Diaz reports the move of Recovery.gov will amount to hundreds of thousands of dollars.  We found tremendous savings at Helpstream from our move to the Amazon Cloud.
    • Derrick Harris at GigaOm suggests it’s time for Amazon to roll out a PaaS to remain competitive.  As an aside, are you as tired of all the “*aaS” acronyms as me?  Are they helping us to understand anything better?  BTW, I think the move Harris suggests would be the wrong move for Amazon because it would lead to them competing with customers who are adding PaaS layers to Amazon.  They should stay low-level and as language/OS agnostic as possible in order to remain as Switzerland.  Let Heroku-like offerings be built on top of the Amazon infrastructure by others.  Amazon doesn’t need to add a PaaS and they don’t need to add more value because they’re afraid of being commoditized.  As we shall see below, they are the commoditizers everyone else needs to be afraid of.
    • Amazon and Netflix jointly published a great case study and announced Netflix would move more infrastructure into Amazon’s Cloud.  I had a chance to talk to the Netflix folks early on about their Amazon activities.  Smart people.  Amazon needs more big organization and big brand case studies to accelerate their Cloud dominance.  Big loves to follow what Big does. …

    Bob continues by citing more paeans to Amazon’s prowess in the IaaS market and concludes:

    What does it all mean?

    If nothing else, Amazon has a pretty amazing lead over other would-be Cloud competitors.  And they’re building barriers to entry of several kinds:

    • Nobody but Amazon has the experience of running a Cloud service on this scale.  They can’t help but be learning important things about how to do it well that potential competitors have yet to discover.
    • There is a growing community of developers whose Cloud education is all about Amazon.  Software Developers as a group like to talk a good game about learning new things, but they also like being experts.  When you ask them to drop their familiar tools and start from scratch with something new you take away their expert status.  There will be a growing propensity among them to choose Amazon for new projects at new jobs simply because that is what they know.
    • Economies of Scale.  Consider what kind of budget Amazon’s competitors have to pony up to build a competing Cloud infrastructure.  A couple of small or medium-sized data centers won’t do it.  Google already has tons of data centers, but many other companies that haven’t had much Cloud presence are faced with huge up front investments that grow larger day by day to catch up to Amazon.
    • Network effects.  There is latency moving data in and out of the Cloud.  It is not significant for individual users, but it is huge for entire applications.  The challenge is to move all the data for an application during a maintenance window despite the latency issue.  However, once the data is already in the Cloud, latency to other applications in the same Cloud is low.  Hence data is accretive.  The more data that goes into a particular Cloud, the more data wants to accrete to the same Cloud if the applications are at all interconnected.

    It’s going to be interesting to watch it all unfold.  It’s still relatively early days, but Amazon’s competitors need to rev up pretty soon.  Amazon is stealing the Cloud at an ever-increasing rate.

    <Return to section navigation list> 
