Monday, April 26, 2010

Windows Azure and Cloud Computing Posts for 4/26/2010+

Windows Azure, SQL Azure Database and related cloud computing topics now appear in this daily series.

 
Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections:

  • Azure Blob, Table and Queue Services
  • SQL Azure Database, Codename “Dallas” and OData
  • AppFabric: Access Control and Service Bus
  • Live Windows Azure Apps, APIs, Tools and Test Harnesses
  • Windows Azure Infrastructure
  • Cloud Security and Governance
  • Cloud Computing Events
  • Other Cloud Computing Platforms and Services

To use the above links, first click the post’s title to display the single article you want to navigate.

Cloud Computing with the Windows Azure Platform was published 9/21/2009. Order today from Amazon or Barnes & Noble (in stock).

Read the detailed TOC here (PDF) and download the sample code here.

Discuss the book on its WROX P2P Forum.

See a short-form TOC, get links to live Azure sample projects, and read a detailed TOC of electronic-only chapters 12 and 13 here.

Wrox’s Web site manager posted a lengthy excerpt from Chapter 4, “Scaling Azure Table and Blob Storage,” here on 9/29/2009.

You can now download and save the following two online-only chapters in Microsoft Office Word 2003 *.doc format by FTP:

  • Chapter 12: “Managing SQL Azure Accounts and Databases”
  • Chapter 13: “Exploiting SQL Azure Database's Relational Features”

HTTP downloads of the two chapters are available from the book's Code Download page; these chapters will be updated in April 2010 for the January 4, 2010 commercial release.

Azure Blob, Table and Queue Services

Rob Gillen recommends External File Upload Optimizations for Windows Azure in this detailed post of 4/26/2010:

I’m wrapping up a bit of the work we’ve been doing on data movement optimizations for cloud computing and the latest set of data yielded some interesting points I thought I’d share. The work done here is not really rocket science but may, in some ways, be slightly counter-intuitive and therefore seemed worthy of posting.

Summary: for those who don’t like to read detailed posts or don’t have time, the synopsis is that if you are uploading data to Azure, block your data (even down to 1MB) and upload in parallel. Set your block size based on your source file size, but if you must choose a fixed value, use 1MB. Following the above will result in significant performance gains… upwards of 10x-24x, and a reduction in overall file transfer time of up to 90% (e.g., uploading a 1GB file averaged 46.37 minutes prior to optimizations and averaged 1.86 minutes afterwards).

Detail: For those of you who want more detail, or think that the claims at the end of the preceding paragraph are over-reaching, what follows is information and code supporting those claims. As the title would indicate, these tests were run from our research facility pointing to the Azure cloud (specifically US North Central, as it is physically closest to us) and do not represent intra-cloud results… we have performed intra-cloud tests and the overall results are similar in notion, but the data rates are significantly different, as are the tipping points for the various block sizes… this will be detailed separately.

We started by building a very simple console application that would loop through a directory and upload each file to Azure storage. This application used the shipping storage client library from the 1.1 version of the Azure tools. The only real variation from the client library is that we added code to collect and record the duration (in ms) and size (in bytes) for each file transferred. The code is available here.

We then created a directory that had a collection of files for the following sizes: 2KB, 32KB, 64KB, 128KB, 512KB, 1MB, 5MB, 10MB, 25MB, 50MB, 100MB, 250MB, 500MB, 750MB, and 1GB (50 files for each size listed). These files contained randomly-generated binary data and do not benefit from compression (a separate discussion topic). Our file generation tool is available here.
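For readers who want a feel for the blocked, parallel upload Rob describes, here’s a minimal sketch against the 1.x StorageClient library’s PutBlock/PutBlockList methods. It is not Rob’s code (his is linked above); the container handling, the 1 MB block size constant and the use of .NET 4’s Parallel.For are simplifying assumptions:

```csharp
using System;
using System.IO;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.WindowsAzure.StorageClient;

class ParallelBlockUpload
{
    // Sketch only: split a local file into 1 MB blocks, upload the blocks in
    // parallel, then commit the block list as the final blob.
    static void UploadInBlocks(string path, CloudBlobContainer container)
    {
        const int blockSize = 1024 * 1024; // 1 MB, per the post's recommendation
        CloudBlockBlob blob = container.GetBlockBlobReference(Path.GetFileName(path));

        long fileLength = new FileInfo(path).Length;
        int blockCount = (int)((fileLength + blockSize - 1) / blockSize);

        // Block IDs must be Base64 strings of equal (pre-encoding) length.
        var blockIds = Enumerable.Range(0, blockCount)
            .Select(i => Convert.ToBase64String(BitConverter.GetBytes(i)))
            .ToList();

        Parallel.For(0, blockCount, i =>
        {
            using (var fs = File.OpenRead(path)) // each worker gets its own stream
            {
                var buffer = new byte[blockSize];
                fs.Seek((long)i * blockSize, SeekOrigin.Begin);

                int read = 0, n;
                while (read < buffer.Length &&
                       (n = fs.Read(buffer, read, buffer.Length - read)) > 0)
                    read += n;

                using (var ms = new MemoryStream(buffer, 0, read))
                {
                    blob.PutBlock(blockIds[i], ms, null); // upload one block
                }
            }
        });

        blob.PutBlockList(blockIds); // commit the uploaded blocks into the blob
    }
}
```

The degree of parallelism and the block size are exactly the knobs Rob’s tests explore; the sketch only shows the shape of the block/commit API his optimized uploader relies on.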

Rob continues with more details of his test regimen.

<Return to section navigation list> 

SQL Azure Database, Codename “Dallas” and OData

Marcello Lopez Ruiz’s MicrosoftDataServicesRootUri, or How to customize WCF Data Service URIs post of 4/26/2010 points to Peter Qian’s Overwriting the service root URI in Wcf Data Service post of 3/24/2010:

Peter has a great write-up about this, so I won't go into much detail here. Go read his post, then come back for a bit more context.

You have always been able to get a data service to process requests using a specific URI as the root. The data service doesn't have some magic knowledge about where it's being hosted; instead, it relies on the IDataServiceHost2 interface for this.

This is a "big hammer" kind of customization, though, because it requires you to completely reimplement the host, including things like header processing, some error handling, and managing request and response payloads.

In the current version, you can keep relying on the built-in support for WCF hosting, and simply use the technique that Peter described to tweak the service for the very common case of wanting to provide the illusion that the service is hosted at a different URI, one that for example includes a session token.

Enjoy!

PS: remember the companion MicrosoftDataServicesRequestUri property; again, Peter does a good job explaining how that works.
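The gist of Peter’s technique is setting two well-known keys in the incoming WCF message properties before WCF Data Services processes the request. Here’s a rough, hedged sketch; the public host name is invented, and registering the inspector as an endpoint behavior is omitted:

```csharp
using System;
using System.ServiceModel;
using System.ServiceModel.Channels;
using System.ServiceModel.Dispatcher;

// Sketch of a server-side message inspector that overrides the root and
// request URIs WCF Data Services uses when generating links in responses.
// The public root below is a placeholder (e.g., a URI carrying a session
// token); wiring this up as an IEndpointBehavior is not shown.
public class RewriteDataServiceUris : IDispatchMessageInspector
{
    public object AfterReceiveRequest(ref Message request,
        IClientChannel channel, InstanceContext instanceContext)
    {
        var publicRoot =
            new Uri("http://public.example.com/session/abc123/MyData.svc/");

        request.Properties["MicrosoftDataServicesRootUri"] = publicRoot;

        // The companion property should express the *current* request under
        // the same public root. This sketch re-roots only the last path
        // segment; real code would rebuild the full relative path and query.
        Uri actual = request.Headers.To;
        string last = actual.Segments[actual.Segments.Length - 1];
        request.Properties["MicrosoftDataServicesRequestUri"] =
            new Uri(publicRoot, last);

        return null;
    }

    public void BeforeSendReply(ref Message reply, object correlationState) { }
}
```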

David Lemphers explains WCF, REST and URL Rewriting with Windows Azure! in this 4/26/2010 post:

So there is nothing I find nicer than a well formatted, REST based URL. Take for example: http://myhost/customer/2/orders [a]s a way to retrieve all orders for customer number 2.

There is also nothing I like more than WCF! The flexibility to declaratively control the behavior of my web service is perfect for situations where I need to respond quickly to heterogeneous client calling requirements (REST, SOAP, etc).

And finally, I love Windows Azure, nothing more really to say there. So how do you build a RESTful WCF Service in Windows Azure?

Start with a new Cloud project:

[screenshot]

Then add a new WCF Service Web Role:

[screenshot]

Now, this is enough to get a WCF service going in Windows Azure, so just hit F5 and you’ll get this:

[screenshot]

David continues with his tutorial.
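For readers who haven’t built one before, the heart of a REST-style WCF service like the one David describes is a contract decorated with WebGet and a UriTemplate. The following is only a schematic sketch with invented names; the webHttpBinding endpoint, the behavior configuration and the Azure-specific pieces David covers are omitted:

```csharp
using System.Collections.Generic;
using System.ServiceModel;
using System.ServiceModel.Web;

// Minimal sketch of a RESTful WCF contract for URLs such as
// http://myhost/customer/2/orders. Type and member names are placeholders.
[ServiceContract]
public interface ICustomerService
{
    [OperationContract]
    [WebGet(UriTemplate = "customer/{customerId}/orders",
            ResponseFormat = WebMessageFormat.Xml)]
    List<Order> GetOrders(string customerId);
}

public class Order
{
    public int OrderId { get; set; }
    public decimal Total { get; set; }
}

public class CustomerService : ICustomerService
{
    public List<Order> GetOrders(string customerId)
    {
        // Placeholder implementation; a real service would query a data store.
        return new List<Order> { new Order { OrderId = 1, Total = 42.50m } };
    }
}
```

With an endpoint using webHttpBinding and the webHttp behavior in place, a GET of http://myhost/customer/2/orders routes to GetOrders("2").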

The OData Team released the Open Data Protocol - Client Libraries under the Apache License, Version 2.0 (http://www.apache.org/licenses/) on 4/15/2010.

<Return to section navigation list> 

AppFabric: Access Control and Service Bus

The .NET Connected Framework Team announced the availability of Windows Server AppFabric Beta 2 Refresh for Visual Studio 2010/.NET 4 RTM on 4/16/2010:

Today we are pleased to announce a Beta 2 Refresh for Windows Server AppFabric.  This build supports the recently released .NET Framework 4 and Visual Studio 2010 RTM versions—a request we’ve had from a number of you.  Organizations wanting to use Windows Server AppFabric with the final RTM versions of .NET 4 and Visual Studio 2010 are encouraged to download the Beta 2 Refresh today.  Please click here for a guide to installing the Beta 2 Refresh.  We encourage developers and IT professionals building ASP.NET applications or applications that use WCF or WF and run on IIS to download the Beta 2 Refresh and provide feedback at http://connect.microsoft.com/dublin/feedback or via our forum at http://social.msdn.microsoft.com/Forums/en-US/dublin/threads/.

Windows Server AppFabric is a set of application services focused on improving the performance and management of Web and Composite applications.  To deliver these benefits, Windows Server AppFabric provides distributed caching technology and pre-built management and monitoring infrastructure that utilize familiar .NET skills. 

Currently in Beta 2, Windows Server AppFabric enhances the Application Server role in Windows Server and is available as a free download.

<Return to section navigation list>

Live Windows Azure Apps, APIs, Tools and Test Harnesses

Bill Zack points out the availability of A Windows Azure Sample Application: Bid Now in this 4/26/2010 post to the InnovateShowcase blog:

Remember the DinnerNow sample application? Well, that was a connected web application and was introduced and updated last year, before Windows Azure was available. The Bid Now sample application, along with source code, has just been released and is available for download. This online auction application is designed to show how developers can create highly scalable applications using Worker Roles and Windows Azure Storage -- Queues and Tables -- plus Live ID-based authentication. Before deploying it to Windows Azure, you can debug and test the application on your local development box.

[screenshot]

For more information, see here.
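If you haven’t looked at a worker role before, the queue-driven pattern Bid Now demonstrates boils down to a loop like the one sketched below. The queue name, configuration setting and processing step are placeholders, not Bid Now’s actual code (which is in the download above):

```csharp
using System;
using System.Threading;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.ServiceRuntime;
using Microsoft.WindowsAzure.StorageClient;

// Stripped-down sketch of a queue-driven worker role: poll a queue for work
// items (e.g., new bids), process each one, then delete the message.
public class WorkerRole : RoleEntryPoint
{
    public override void Run()
    {
        var account = CloudStorageAccount.Parse(
            RoleEnvironment.GetConfigurationSettingValue("DataConnectionString"));
        CloudQueue queue = account.CreateCloudQueueClient().GetQueueReference("bids");
        queue.CreateIfNotExist();

        while (true)
        {
            CloudQueueMessage message = queue.GetMessage();
            if (message != null)
            {
                // Process the work item here: update tables, notify bidders, etc.
                queue.DeleteMessage(message);
            }
            else
            {
                Thread.Sleep(1000); // back off while the queue is empty
            }
        }
    }
}
```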

Brian Sommer’s Convergence – Microsoft debuts Dynamics GP 2010 (and a bit of cloud) post of 4/26/2010 to the Enterprise Irregulars blog from Convergence 2010 reports:

Cloud CRM & Azure Connect with On-Premise Dynamics ERP

At Convergence 2010, the annual Microsoft confab for Dynamics GP (Great Plains), SL (Solomon), AX (Axapta) and NAV (Navision), Microsoft showcased a number of integration points between its new Dynamics GP, Dynamics CRM and other Microsoft tools such as SharePoint and the Azure portal. Approximately 8,500 attendees made it to this relatively upbeat event.

In the demonstrations I saw, Microsoft showed how Unified Communications (UC) is part of most Microsoft applications and tools. For example, they showed how their cloud-based CRM product would integrate with their on-premise or hosted Dynamics GP product. If a user needed to contact a creditor, accounts receivable clerk or other individual, all that was required was a click on a small screen icon to launch a telephone call, email or video conversation.

The Dynamics GP product was enhanced with additional BI (business intelligence) reports, improved Excel pivot functionality, 15 new tailored user roles and more embedded Microsoft Office forms. In total, the product ships with 380 web services and 400 SQL or Excel reports. The other Dynamics products were upgraded late last year.

  • One of the demos gave attendees a peek at the new Windows Phone 7 user interface. Unfortunately, it was a very brief peek.
  • Dynamics CRM Online has multilingual support and will be sold in 32 markets by year-end.
  • Overall, the products showed well and offer a mix of on-premise and SaaS options. Microsoft has stated that they are “All-in” when it comes to the cloud.

In one-on-one interviews I had with Microsoft executives, I learned that:

  • Multi-tenancy isn’t really there yet for the Dynamics GP products. The products can be hosted by third parties, and enhancements made to prior versions are getting easier and easier to roll forward to newer releases.

<Return to section navigation list> 

Windows Azure Infrastructure

Chris Czarnecki asserts Microsoft Azure Platform as a Service (PaaS) Stands Out From the Crowd in this 4/26/2010 post to the Learning Tree International blog:

In a recent interview with InformationWeek editors, Microsoft’s CEO Steve Ballmer sent the message that Microsoft is ‘all in’ when it comes to cloud computing. The message Ballmer sent regarding Azure in comparison to other cloud providers was interesting, stating that ‘There is nobody with an offer like ours (Azure) in the market today, not even close. We’re actually trying to help people do what they really need to do for the modern time’.

He compared three other major PaaS vendors, Amazon, Salesforce.com and Google, to Microsoft Azure, claiming ‘Amazon basically say give us your VMs and you can still muck around at the low level, they are taking today’s complexity and putting it into the cloud’. Salesforce.com was dismissed as ‘not being a general purpose programming platform for large scale deployment’ and Google as ‘kind of its own weird, funny proprietary environment’.

To me, the most striking aspect of the interview is Steve Ballmer’s view that the way all applications are developed for the cloud has to change to take advantage of the cloud. There is some validity in this view, but not for all applications that will run in the cloud. Some applications will be required to scale to millions of users and have the associated large-scale storage requirements – but many other applications will not require this level of scalability.

It is my firm belief that the majority of applications that are or will be deployed to the cloud will do so to make use of cost-effective server provisioning and reduced administration costs, as well as transparent scalability when required. Amazon, Salesforce.com and Google all provide solutions that meet these requirements.

Microsoft Azure certainly differs from other vendors’ offerings. It requires a new skill set for developers and a new architecture for applications making use of the Azure system and libraries. There is an elegance in much of this, but nothing compelling that other vendors do not offer. Of the four major PaaS providers, Microsoft Azure most certainly has the feeling of still being in an embryonic stage, and the lack of provision for migrating existing Web applications to the platform seems to be a major omission in the Microsoft cloud strategy. It will be interesting to see how this evolves over the coming months.

Dave Durkee asks “The competition among cloud providers may drive prices downward, but at what cost?” as a preface to his Why Cloud Computing Will Never Be Free post in the current issue of Communications of the ACM. Here’s the opening:

The last time the IT industry delivered outsourced shared-resource computing to the enterprise was with timesharing in the 1980s when it evolved to a high art, delivering the reliability, performance, and service the enterprise demanded. Today, cloud computing is poised to address the needs of the same market, based on a revolution of new technologies, significant unused computing capacity in corporate data centers, and the development of a highly capable Internet data communications infrastructure. The economies of scale of delivering computing from a centralized, shared infrastructure have set the expectation among customers that cloud computing costs will be significantly lower than those incurred from providing their own computing. Together with the reduced deployment costs of open source software and the perfect competition characteristics of remote computing, these expectations set the stage for fierce pressure on cloud providers to continuously lower prices.

This pricing pressure results in a commoditization of cloud services that deemphasizes enterprise requirements such as guaranteed levels of performance, uptime, and vendor responsiveness, much as has been the case with the Web hosting industry. Notwithstanding, it is the expectation of enterprise management that operating expenses be reduced through the use of cloud computing to replace new and existing IT infrastructure. This difference between expectation and what the industry can deliver at today's near-zero price points represents a challenge, both technical and organizational, which will have to be overcome to ensure large-scale adoption of cloud computing by the enterprise. …

Graphics Credit: Gary Neill

David Makogon offers links to Azure team blogs and voting sites in this 4/26/2010 post to the RDA Blogs.

Matt Prigge asks “Like it or not, cloud-based ‘IT as a service’ is coming -- and you're in its crosshairs. Are you up to the fight?” as a preface to his You vs. cloud computing post of 4/26/2010 to InfoWorld’s Information Overload blog:

Unless you've been living under a rock, you've heard the blare of that hypemonster known as cloud computing. The cloud is revolutionizing the way we do IT, cutting costs, increasing efficiency, and making perfect toast every time! Such cheerleading about a field that barely exists is enough to make anyone tune out.

But don't change the channel quite yet. This is important: The cloud is growing up and it wants you, your infrastructure, and quite possibly your job. Your organization's management is reading some of the same trade magazines you're starting to ignore -- and the cloud's promise of better, faster, cheaper IT sounds good to them.

Fortunately, you have an edge: You know it's coming. Unlike the legions of well-paid software developers who promptly found themselves out of a job after they were outsourced early in the last decade, we in IT at least have had an ample chance to see what's coming and adapt before it arrives in full force. In reality, the growing prospect of the wholesale outsourcing of much of our infrastructure is an excellent opportunity to up our game and do what we do better. …

Matt continues with his reasons for “Why cloud computing is inevitable.”

BBC News reports “The UK will spend over £1 billion on cloud computing by 2012 - twice as much as today - researchers predict” in its Cloud computing to double by 2012 story of 4/26/2010:

This would mean more consumers and businesses subscribing to web-based services, such as Google Apps.

Cloud-based services currently account for around 7.5% of the £8 billion UK software market, according to research company TechMarketView. But others say cloud computing is hyped and will complement traditional desktop software rather than replace it.

"In the old days, big companies used to generate their own electricity. But they do not do that any more", said Philip Carnelley, senior analyst at TechMarketView. "Software is going the same way - let others do the processing."

TechMarketView predicts cloud services will be worth around £1.2 billion per year in the UK by 2012. "This is not just analysts hyping things up", says Philip Carnelley, a senior analyst at the company. "It is a genuine shift."

Cloud computing means that people do not have to invest in powerful computers and software to store their data. Instead, they can outsource their needs to cloud companies, which charge subscription fees.

Hybrid evolution

But not everyone agrees that cloud computing will replace traditional software which processes data locally. "The amount of cloud computing is quite small at the moment, so even if it does double that is not such a big deal", says analyst Laurent Lachal at rival research firm Ovum. "The IT industry loves to concentrate on a topic for a few months and then turn against it. There will be a backlash by the end of the year."

Lachal does not dismiss cloud computing, but he thinks its limitations make it more of an add-on to software. "It's becoming a hybrid system - for example you create your work on software on your PC, and then you save it and share it through the cloud."

Not as hyped as prognostications from the colonies.

Lori MacVittie’s The IT Optical Illusion essay of 4/26/2010 comes from Interop 2010:


Everyone has likely seen the optical illusion of the vase in which, depending on your focus, you either see a vase or two faces. This particular optical illusion is probably the best allegorical image for IT and in particular cloud computing I can imagine.

Depending on your focus within IT you’re either focused on – to borrow some terminology from SOA – design-time or run-time management of the virtualized systems and infrastructure that make up your data center. That focus determines what particular aspect of management you view as most critical, and unfortunately makes it difficult to see the “big picture”: both are critical components of a successful cloud computing initiative.

I realized how endemic to the industry this “split” is while prepping for today’s “Connecting On-Premise and On-Demand with Hybrid Clouds” panel at the Enterprise Cloud Summit @ Interop on which I have the pleasure to sit with some very interesting – but differently focused – panelists.

See, as soon as someone starts talking about “connectivity” the focus almost immediately drops to … the network. Right. That actually makes a great deal of sense and it is, absolutely, a critical component to building out a successful hybrid cloud computing architecture. But that’s only half of the picture, the design-time picture. What about run-time? What about the dynamism of cloud computing and virtualization? The fluid, adaptable infrastructure? You know, the connectivity that’s required at the application layers, like access control and request distribution and application performance.

Part of the reason you’re designing a hybrid architecture is to retain control. Control over when those cloud resources are used and how and by whom. In most cloud computing environments today, at least public ones, there’s no way for you to maintain that control because the infrastructure services are simply not in place to do so. Yet. At least I hope yet; one wishes to believe that some day they will be there. But today, they are not. Thus, in order to maintain control over those resources there needs to be a way to manage the run-time connectivity between the corporate data center (over which you have control) and the public cloud computing environment (which you do not).

That’s going to take some serious architecture work and it’s going to require infrastructure services from infrastructure capable of intercepting requests, inspecting the request in context of the user and the resource requested, and applying the policies and processes to ensure that only those clients you want to access those resources can access them, and those you prefer not access them are denied.

It will become increasingly important that IT be able to view its network in terms of both design and run-time connectivity if it is going to successfully incorporate public cloud computing resources into its corporate cloud computing – or traditional – network and application delivery network strategy. …

Lori continues with links to related articles.

Vivek Bhatnagar’s Windows Azure Buying Process and Account Management post of 4/25/2010 begins: 

Last week I had an opportunity to meet many CIOs at a “CIO conference” on the Microsoft campus. Some of them have already deployed workloads on Windows Azure. In all my discussions, there was a common question about Windows Azure account management. Therefore I decided to write a new blog post on Windows Azure account management.

Windows Azure account management is an integrated process involving two portals: the Microsoft Online Commerce Portal (MOCP) and the Windows Azure Developer Portal (DevPortal).

The Microsoft Online Commerce Portal (MOCP) is the Web portal you use to try or buy subscriptions to Microsoft Online Services. The Windows Azure Developer Portal (DevPortal) is the Web portal you use to create services and deploy your code. A subscription in MOCP is mapped to a project in DevPortal.

The Windows Azure buying workflow is as follows:

Vivek continues with detailed instructions for the Windows Azure buying workflow.

<Return to section navigation list> 

Cloud Security and Governance

Paul Venezia posits “McAfee's update fiasco shows even trusted providers can cause catastrophic harm” in his McAfee's blunder, cloud computing's fatal flaw post of 4/26/2010 to InfoWorld’s The Deep End blog:

Thanks for proving my fears well founded, McAfee.

A while ago, I wrote a piece about not trusting the cloud for a variety of reasons, predominately security and the potential for a third party to ruin my company whether it meant to or not. McAfee's massive blunder last week provided a case in point for that argument.

Granted, you can't really call McAfee a cloud vendor. McAfee's play is sort of the cloud model in reverse; instead of customers placing important assets on McAfee's cloud, customers download and install McAfee's software on their important assets -- desktops and servers -- and trust McAfee to issue updates without manual supervision. McAfee betrayed that trust in the worst way possible: It took down thousands and thousands of customer systems.

The same thing can and will happen to cloud vendors and their customers, but the damage could be far worse. While the McAfee debacle caused primarily Windows XP SP3 desktops and workstations to crash, servers and the corporate data stored on them were unaffected. If a similar situation were to happen to a real cloud vendor, the situation would be reversed. The time and aggravation required to reimage, repair, or reinstall hundreds or thousands of corporate desktops pales in comparison to the specter of massive data loss or long-term application and resource unavailability due to third-party problems. This should worry anyone who places trust in any cloud they don't control. …

Paul’s arguments fall far short of proving organizations that use off-premises PaaS are more vulnerable to amateurish quality control failures than those who run all IT operations in on-premises data centers. This is especially true of an upgrade bug that obliterated clients’ network connectivity.

<Return to section navigation list> 

Cloud Computing Events

The Triangle .NET User Group announced an OData Workshop featuring Chris (Woody) Woodruff from 6/2/2010 at 8:30 PM to 6/5/2010 at 12:00 PM (EDT), to be held at the Jane S. McKimmon Conference & Training Center, North Carolina State University, 1101 Gorman Street, Raleigh, NC 27606:

Abstract

The Open Data Protocol (OData) is an open protocol for sharing data. It provides a way to break down data silos and increase the shared value of data by creating an ecosystem in which data consumers can interoperate with data producers in a way that is far more powerful than currently possible, enabling more applications to make sense of a broader set of data. Every producer and consumer of data that participates in this ecosystem increases its overall value.

OData is consistent with the way the Web works - it makes a deep commitment to URIs for resource identification and commits to an HTTP-based, uniform interface for interacting with those resources (just like the Web).   This commitment to core Web principles allows OData to enable a new level of data integration and interoperability across a broad range of clients, servers, services, and tools.

OData is released under the Open Specification Promise to allow anyone to freely interoperate with OData implementations.

In this talk Chris will provide in-depth knowledge of the protocol, show how to consume an OData service, and finally show how to implement an OData service on Windows using the WCF Data Services product.

Agenda

  • Introductions (5 minutes)
  • Overview of OData (10 minutes)
  • The OData Protocol (1 hour)
  • 15 minute break
  • Producing OData Feeds (1 hour)
  • Consuming OData Feeds (1 hour)
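To give a flavor of the “Consuming OData Feeds” segment: with the WCF Data Services client library, LINQ queries are translated into the protocol’s URI conventions ($filter, $top and friends). The service address and entity type below are hypothetical, standing in for whatever “Add Service Reference” would generate against a real feed:

```csharp
using System;
using System.Data.Services.Client;
using System.Data.Services.Common;
using System.Linq;

// Hypothetical entity type; a generated proxy would normally supply this.
[DataServiceKey("ProductId")]
public class Product
{
    public int ProductId { get; set; }
    public string Name { get; set; }
    public decimal Price { get; set; }
}

class ConsumeODataFeed
{
    static void Main()
    {
        // Placeholder service root; any OData feed exposing a Products set works.
        var context = new DataServiceContext(new Uri("http://example.com/Catalog.svc/"));

        // LINQ is translated into OData URI conventions, roughly:
        //   /Products()?$filter=Price gt 10M&$top=5
        var expensive = context.CreateQuery<Product>("Products")
                               .Where(p => p.Price > 10m)
                               .Take(5);

        foreach (Product p in expensive)
            Console.WriteLine("{0}: {1}", p.Name, p.Price);
    }
}
```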

David Makogon reports that he’s presenting two sessions at the Central Maryland Association of .NET Professionals (CMAP) Code Camp Spring 2010 on 5/8/2010 from 8:30 AM to 5:30 PM EDT at the Loyola University Maryland Graduate Center in Columbia, MD. The session abstracts follow:

Microsoft's Azure cloud-computing platform is live! And the tools are free! However, it's a bit tricky to get everything set up properly, and it's even trickier to push code into the cloud and not be charged for it! Join me in this session and learn the proper way to get started and to launch your first Azure app!

Microsoft Azure consists of several "moving parts" - web roles, worker roles, tables, blobs, queues, service bus, diagnostics... In this session, we'll go over the purpose of each, and then build a demo that shows how this stuff works.

Brian Loesgen recommends a Microsoft Application Infrastructure Virtual Launch Event subtitled “Cloud benefits delivered” on 5/20/2010:

If you read my blog then you have an interest in BizTalk Server, Windows Server AppFabric, Azure, Windows Azure AppFabric, WCF, etc… That means that you would also be really interested in the virtual launch event coming up on May 20th 2010 (8:30 AM Pacific Time). Details are below; the event site is http://www.appinfrastructure.com. …

Want to bring the benefits of the cloud to your current IT environment?
Cloud computing offers a range of benefits, including elastic scale and never-seen-before applications. While you ponder your long-term investment in the cloud, you can harness cloud benefits in your current IT environment now. Join us on May 20 at 8:30 A.M. Pacific Time to learn how your current IT assets can harness the benefits of the cloud on-premises—and can readily connect to new applications and data running in the cloud. Plus, you’ll hear some exciting product announcements, and a keynote on Microsoft’s latest investments in the application infrastructure space aimed at delivering on-demand scalability, highly available applications, a new level of connectivity, and more! Save the date!

David Hoerster advertises his OData session at the CMAP Code Camp Spring 2010:

Exposing your data to client applications is a common requirement across most applications; however, there are many ways to accomplish this. Each application seems to implement it a different way, which leads to inconsistency across your application spectrum. With the release of the .NET Framework 3.5, Microsoft introduced WCF Data Services (formerly ADO.NET Data Services), a collection of classes and standards that allows you to expose your data consistently and securely to your client applications. We'll focus on WCF Data Services in .NET 4, in which Microsoft has beefed up the offering, and also on the data protocol known as OData.
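As a taste of the producing side David’s session covers, here is a minimal, self-contained WCF Data Services (.NET 4) service using the reflection provider over an in-memory list. Names are invented, a real service would more likely sit on an Entity Framework model, and the .svc host file is omitted:

```csharp
using System.Collections.Generic;
using System.Data.Services;
using System.Data.Services.Common;
using System.Linq;

// Entity exposed over OData; the key property is declared explicitly for the
// reflection provider.
[DataServiceKey("ProductId")]
public class Product
{
    public int ProductId { get; set; }
    public string Name { get; set; }
}

// Hypothetical "context" class: each IQueryable property becomes an entity set.
public class CatalogContext
{
    private static readonly List<Product> products = new List<Product>
    {
        new Product { ProductId = 1, Name = "Widget" },
        new Product { ProductId = 2, Name = "Sprocket" }
    };

    public IQueryable<Product> Products
    {
        get { return products.AsQueryable(); }
    }
}

public class CatalogDataService : DataService<CatalogContext>
{
    public static void InitializeService(DataServiceConfiguration config)
    {
        // Read-only access to the Products entity set; opt in to OData v2.
        config.SetEntitySetAccessRule("Products", EntitySetRights.AllRead);
        config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;
    }
}
```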

tbtechnet announced Windows Azure Virtual Boot Camp III May 3rd - May 10th 2010 on 4/26/2010:

After the huge success of Windows Azure Virtual Boot Camps I and II, here comes…

Virtual Boot Camp III

Learn Windows Azure at your own pace, in your own time and without travel headaches.

A Windows Azure one-week pass is provided so you can put Windows Azure and SQL Azure through their paces.

NO credit card required.

You can start the Boot Camp any time between May 3rd and May 10th and work at your own pace.

The Windows Azure virtual boot camp pass is valid from 5am USA PST May 3rd through 6pm USA PST May 10th. …

tb continues with instructions for getting your virtual Boot Camp pass.

<Return to section navigation list> 

Other Cloud Computing Platforms and Services

No significant articles today.

<Return to section navigation list> 
