Monday, January 17, 2011

Windows Azure and Cloud Computing Posts for 1/15/2011+

A compendium of Windows Azure, Windows Azure Platform Appliance, SQL Azure Database, AppFabric and other cloud-computing articles.

• Updated 1/17/2011: Sharding links in the SQL Azure Database and Reporting section.
• Updated 1/16/2011 with later articles marked

Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections:

To use the above links, first click the post’s title to display the single article you want to navigate.


Azure Blob, Drive, Table and Queue Services

See Graham Calladine explained Windows Azure Platform Security Essentials: Module 3 – [Blob, Queue and Table] Storage Access in a 00:22:35 Channel9 video segment posted on 1/15/2011 in the Cloud Security and Governance section below.

No significant articles today.


<Return to section navigation list> 

SQL Azure Database and Reporting

Cihan Biyikoglu published a list of blog posts and white papers about SQL Azure Federations on 12/16/2010 in a SQL Azure Federation answer to a question posted in the Windows Azure Platform - SQL Azure, Windows Azure Storage & Data forum:

We have talked about SQL Azure Federations technology at PDC and PASS this year, and since then we have pushed out a number of posts and papers. To help bring all these together, here is the list:

We will announce any news about partner programs to get a preview of the technology. Please stay tuned. In the meantime, if you have any questions about the technology, I am happy to help. You can reach me through the forums or through the blog http://blogs.msdn.com/b/cbiyikoglu/.

Here are a few additional sharding-related articles in reverse date order:

See also Understanding Federated Database Servers: SQL Server 2008 R2 in the MSDN Library. SQL Server federation uses the Enterprise edition’s Distributed Partitioned Views. Related topics include:

Sharding is a complex topic and SQL Azure Federation technology will automate much of the administrative work and programming required. This article will be updated periodically as new posts and other publications become available.

Andrew Glover wrote Java development 2.0: Sharding with Hibernate Shards, Horizontal scalability for relational databases, on 8/31/2010 for IBM’s DeveloperWorks site. From the abstract:

Sharding isn't for everyone, but it's one way that relational systems can meet the demands of big data. For some shops, sharding means being able to keep a trusted RDBMS in place without sacrificing data scalability or system performance. In this installment of the Java development 2.0 series, find out when sharding works, and when it doesn't, and then get your hands busy sharding a simple application capable of handling terabytes of data.

Following are some of the Hibernate Shards resources for Java, which Andy listed in his article:

Ayende Rahien issued SQL Azure, Sharding and NHibernate: A call for volunteers, which has many interesting comments about SQL Azure and sharding, on 9/6/2009. His NHibernate Shards: Progress Report appeared on 10/18/2009. Stay tuned for more resources related to [N]Hibernate Shards.


Matt Masson [pictured below] answered Why does my package run slower in BIDS than DTEXEC? on 1/15/2011:

Jamie Thomson recently suggested that you should run your packages using DTEXEC when doing performance testing, as it (typically) performs much better than when the same package is run in BIDS. If you haven’t already read it, go do that now. It has a pretty picture in it and everything. I’ll wait.

The purpose of this post is to 1) acknowledge the truthiness of Jamie’s post, 2) call attention to some of the great comments on the post, and 3) shed some light on why this is.

The Comments

  • John Welch reminds people that running DTEXECUI is not the same as running DTEXEC. As we’ll see, it has some of the same overhead as BIDS does, and does not perform as well as running DTEXEC directly.
  • Chris Randall called out that pressing Ctrl-F5 within BIDS invokes DTEXEC. There is a bit more startup cost, but actual package performance should be the same.

Why is BIDS slower than DTEXEC?

There are a number of factors, most of which are purely theoretical for me – I haven’t done deep analysis to determine the actual cost of each of these items, but they should give you the general idea….

  1. Startup – When starting the debug process, Visual Studio will save all open files, synchronize the project files, and switch to the debugging mode/view.
  2. Events – BIDS listens for a number of events thrown by the package so it can update the row counts, change the box colors, and populate the Execution Results tab.
  3. Debugging – SSIS needs to interact with the Visual Studio debugging interfaces to allow for breakpoints, script debugging, and data viewers.
  4. Child Packages – When running packages with Execute Package Tasks, the child packages will be opened and displayed in BIDS. When this happens, you’re paying the cost of de-serializing the package, determining the layout, drawing the shapes, hooking up additional listeners, etc.
  5. COM Interop – The SSIS runtime is COM based (native), while the BIDS designer code is .NET (managed). There’s overhead cost anytime the process needs to cross native/managed boundaries. There’s also an inter-process communication overhead for Visual Studio (devenv.exe) and the SSIS debug host (DtsDebugHost.exe).
  6. Memory – In my experience, this has been the biggest factor for slow BIDS performance, especially when dealing with large packages. More on this in the next section.

Is BIDS always slower than DTEXEC?

No. For smaller packages (single data flow, source –> destination), there might not be a difference. In some cases, BIDS might even be a little faster – which I assume is because the package object is reused, and doesn’t need to be loaded fresh from disk. Generally, larger packages will perform better with DTEXEC, because they have more work for BIDS to do (more objects to draw, more events to filter, etc.). Memory (RAM) can become a factor with large packages as well. Since Visual Studio is a 32-bit process, it has a 2 GB memory limit. You can easily hit the point where BIDS will start swapping to disk if you have multiple large packages open, are using a number of IDE extensions, or have multiple project types loaded.

Conclusion

If something performs slowly in BIDS, chances are it will perform the same or faster with DTEXEC. Perf testing different designs in BIDS, like seeing if using an OLE DB Command is slower than doing the same work in a batch, is just fine. If you’re trying to get an idea of whether your package execution time fits into your ETL batch window, make sure you measure your results using DTEXEC.

Matt’s advice is likely to apply if you use SSIS to migrate data to SQL Azure.
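If you want to script that kind of measurement, here is a minimal sketch in C# that shells out to DTEXEC and times the run with a Stopwatch. It assumes dtexec.exe is on the PATH (the SQL Server installer normally puts it there), and the package path is a placeholder:

using System;
using System.Diagnostics;

// Hedged sketch: time a package run under DTEXEC, per Matt's advice that
// DTEXEC timings are the realistic ones. The package path is hypothetical;
// /f (load the package from a file) is a standard dtexec switch.
class DtexecTimer
{
    static void Main()
    {
        var stopwatch = Stopwatch.StartNew();
        var process = Process.Start("dtexec", "/f \"C:\\Packages\\MyPackage.dtsx\"");
        process.WaitForExit();
        stopwatch.Stop();
        Console.WriteLine("Exit code {0}, elapsed {1}", process.ExitCode, stopwatch.Elapsed);
    }
}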


See Graham Calladine explained Windows Azure Platform Security Essentials: Module 3 – [SQL Azure] Storage Access in a 00:22:35 Channel9 video segment posted on 1/15/2011 in the Cloud Security and Governance section below.


<Return to section navigation list> 

MarketPlace DataMarket and OData

• The Programming4Us site published Programming with SQL Azure : Record Navigation in WCF Data Services on 1/16/2011:

Let's talk a few more minutes about the WCF Data Service you built and how you can use that to navigate through records. Record navigation is one of the things that really gets me excited about WCF Data Services. Let's dive right in.

If your project is still running, stop the project, open the Solution Explorer, and navigate to the data service. For simplicity, you'll do this right from the solution. Right-click the data service and select View in Browser. The service will fire up, and what you see is a REST (Representational State Transfer) based service on top of your relational database, a mere XML representation of the service and the entities exposed via the service, shown in Figure 1. You see the entities listed because you set the entity set rights to ALL. If you were to go back to the code a few pages back where you set the entity set rights, and comment those lines out, you would not see the entities listed (Docs, TechGeoInfoes, and Users).

Figure 1. Viewing the WCF Data Service via REST
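For reference, the entity set rights the author mentions are granted in the data service’s InitializeService method. Here is a minimal sketch of what that looks like, assuming the service is named TechBioDataService; the TechBioEntities container name is an assumption, not taken from the excerpt:

using System.Data.Services;
using System.Data.Services.Common;

// Hypothetical data service definition; TechBioEntities stands in for the
// Entity Framework container that exposes the Docs, TechGeoInfoes and Users sets.
public class TechBioDataService : DataService<TechBioEntities>
{
    public static void InitializeService(DataServiceConfiguration config)
    {
        // Granting All rights on every entity set is what makes the sets
        // appear in the service document shown in Figure 1. Comment this
        // line out and the sets disappear from the listing.
        config.SetEntitySetAccessRule("*", EntitySetRights.All);
        config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;
    }
}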

The question though, is how do you view the data that you really want to see. The answer is simple. Specify an entity at the end of your URI, and you'll see data for that entity. For example, specify the following URI to see data on the Users entity:

http://localhost:51176/TechBioDataService.svc/Users

Take care to get the case correct. Entity names are case sensitive. Specify users instead of Users, and you'll get a "The webpage cannot be found" error message.

Take time out now to mistype an entity name and see the resulting message. That way you'll more readily recognize the problem when you make the same mistake inadvertently. Sooner or later, we all make that mistake.

1. Disabling Internet Explorer's Feed Reading View

At this point you either get an XML representation of the data in the Users table, or the web page shown in Figure 2. If it's the latter, then you need to go turn off the feed reading view in Internet Explorer (IE). That is because IE thinks that the data coming back is the type you would get in an RSS feed. You can see in the message that the browser thinks you're trying to view an RSS feed.

Figure 2. RSS Feed Page

To fix the RSS feed issue you need to turn off this feature in Internet Explorer. With IE open, from the Tools menu select Options, which opens the Internet Options dialog. This dialog has a number of tabs along the top which you might be familiar with. Select the Content tab and on that tab click the Settings button under Feeds and Web Slices.

Clicking the Settings button will display the Settings dialog, and on this dialog you need to uncheck the Turn on feed reading view checkbox, shown in Figure 3. Click OK on this dialog and the Internet Options dialog.

Figure 3. Disabling Feed Viewing

2. Viewing the Final Results

Back on your web page, press F5 to refresh the page. What you should get back now is a collection of Users, shown in Figure 4, obtained by querying the underlying database for the Users.

However, you aren't done yet because there is still so much more you can do here. For example, the page you're currently looking at displays all the Users, but what if you want to return a specific user?

Looking at the data you can see that each record contains the id of the specific row, and you can use that to your advantage by including it in your URI. For this example let's use ID 113. Modify the URI by appending the number 113 to the end of the URI, enclosed in parentheses, as shown in Figure 5.

By loading the URI which includes the id of a specific record, I can now drill down further and return just the record I am looking for. This is just like applying a WHERE clause to a T-SQL query, in this case WHERE ID = 113. In this case I have queried the underlying store for a specific user by passing the appropriate ID in the URI.

Additionally I can return a specific field by adding the field I want to the URI, such as:

http://localhost:51176/TechBioDataService.svc/Users(113)/Name

Specifying the specific field along with the id will return just the field you request. In the URI above, the value in the Name column for User ID 113 is returned, as shown in Figure 6.

You can also use this same methodology to navigate between tables. For example, you could do the following to return documents for a specific User ID:

http://localhost:51176/TechBioDataService.svc/Users(113)/Docs

Figure 4. Viewing All Users

Figure 5. Viewing a Specific User

Figure 6. Viewing the Name of a Specific User
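The same addressing conventions work from code as well as from the browser. The sketch below pulls the raw Atom/XML payloads for the URIs discussed above using WebClient; the port and entity names come from the example, and no generated service reference is assumed:

using System;
using System.Net;

class ODataUriDemo
{
    static void Main()
    {
        // Base address of the sample WCF Data Service from the walkthrough.
        const string svc = "http://localhost:51176/TechBioDataService.svc";

        using (var client = new WebClient())
        {
            // All users (the whole entity set).
            string allUsers = client.DownloadString(svc + "/Users");

            // A single user, equivalent to WHERE ID = 113 in T-SQL.
            string user113 = client.DownloadString(svc + "/Users(113)");

            // One property of that user, and the navigation to related Docs.
            string name = client.DownloadString(svc + "/Users(113)/Name");
            string docs = client.DownloadString(svc + "/Users(113)/Docs");

            Console.WriteLine(name);
        }
    }
}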

While this information isn't critical to connecting to SQL Azure, it's good information to have so you know how REST services work and can benefit from their functionality in your application. While this article did not go deep into the Entity Framework or REST technology, there are plenty of good books from Apress and information on MSDN about these technologies. I highly recommend that you explore these technologies further to enhance your SQL Azure applications.


• Ron Jacobs explained WCF FaultContract and FaultException best practices in his WCF Spike FaultContract, FaultException<TDetail> and Validation post of 1/14/2011:

Ready to have some fun? Today I spent the day investigating WCF FaultContracts and FaultException and some best practices for argument validation.  I’m going to do the same in a future post on Workflow Services but I felt it best to really understand the topic from a WCF point of view first.

Investigation Questions
  1. What happens when a service throws an exception?
  2. What happens if a service throws a FaultException?
  3. What happens if the service operation includes a FaultContract and it throws a FaultException<TDetail>?
  4. How can I centralize validation of DataContracts?
Scenario 1: WCF Service throws an exception
Given
  • A service that throws an ArgumentOutOfRangeException
  • There is no <serviceDebug> or <serviceDebug includeExceptionDetailInFaults="false" /> in web.config
When
  • The service is invoked with data that will cause the exception
Then
  • The client proxy will catch a System.ServiceModel.FaultException
  • The FaultCode name will be “InternalServiceFault”
  • The Fault.Reason will be

"The server was unable to process the request due to an internal error.  For more information about the error, either turn on IncludeExceptionDetailInFaults (either from ServiceBehaviorAttribute or from the <serviceDebug> configuration behavior) on the server in order to send the exception information back to the client, or turn on tracing as per the Microsoft .NET Framework 3.0 SDK documentation and inspect the server trace logs."

Conclusions

No surprises here; probably anyone who has done WCF for more than 5 minutes has run into this.  For more information see the WCF documentation Sending and Receiving Faults.  You might be tempted to just turn on IncludeExceptionDetailInFaults but don’t do it because it can lead to security vulnerabilities. Instead you need a better strategy for dealing with exceptions, and that means you need to understand FaultException.
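On the client side, the Scenario 1 behavior surfaces as a plain FaultException. The following is a minimal sketch; ServiceClient and GetData are placeholder names for whatever your generated proxy exposes, not types from Ron’s sample:

using System;
using System.ServiceModel;

class Scenario1Client
{
    static void Main()
    {
        var client = new ServiceClient();   // hypothetical generated proxy
        try
        {
            client.GetData(-1);
        }
        catch (FaultException ex)
        {
            // With includeExceptionDetailInFaults off, all you get is the
            // generic "InternalServiceFault" code and the canned reason text.
            Console.WriteLine("Code: {0}", ex.Code.Name);
            Console.WriteLine("Reason: {0}", ex.Reason);
        }
    }
}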

Scenario 2: WCF Service throws a FaultException

As we saw in the previous example, WCF already throws a FaultException for you when it encounters an unhandled exception.  The problem in this case is that we want to let the caller know they sent an invalid argument even when they are not debugging.

Given
  • A service that throws a FaultException
public string GetDataFaultException(int data)
{
    if (data < 0)
    {
        // Let the sender know it is their problem
        var faultCode = FaultCodeFactory.CreateVersionAwareSenderFaultCode(
            ContractFaultCodes.InvalidArgument.ToString(), ContractConstants.Namespace);
        var faultReason = string.Format(Resources.ValueOutOfRange, "Data", Resources.ValueGreaterThanZero);
        throw new FaultException(faultReason, faultCode);
    }

    return "Data: " + data;
}
When
  • The service is invoked with a data value that will cause an exception
Then
  • The client proxy will catch a System.ServiceModel.FaultException
  • The FaultCode.Name will be “Client” (for SOAP 1.1) or “Sender” (for SOAP 1.2)
  • The Fault.Reason will be Argument Data is out of range value must be greater than zero
Conclusions

The main thing is that the client now gets a message saying that things didn’t work and it’s their fault.  They can tell by looking at the FaultException.Code.IsSenderFault property.  In my code you’ll notice a helper I created, FaultCodeFactory.CreateVersionAwareSenderFaultCode, to help deal with the differences between SOAP 1.1 and SOAP 1.2.  You will find it in the sample.
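The helper itself isn’t shown in the excerpt; the following is a hedged reconstruction of what a version-aware factory might look like, built on WCF’s FaultCode.CreateSenderFaultCode, and is not Ron’s actual sample code:

using System.ServiceModel;

// Hypothetical reconstruction of a version-aware sender fault code factory.
public static class FaultCodeFactory
{
    public static FaultCode CreateVersionAwareSenderFaultCode(string subCode, string ns)
    {
        // Creates a fault code with IsSenderFault == true and our own error
        // code as the subcode. WCF serializes it as "Client" for SOAP 1.1
        // (BasicHttpBinding) or "Sender" for SOAP 1.2 bindings.
        return FaultCode.CreateSenderFaultCode(subCode, ns);
    }
}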

Some interesting things I learned while testing this

  • You should specify a Sender fault if you determine that there is a problem with the message and it should not be resubmitted
  • BasicHttpBinding does SOAP 1.1 while the other bindings do SOAP 1.2 so if you are working with both you need to do the version aware sender fault code.
  • FaultException.Action never shows up on the client side so don’t bother with it.
  • FaultException.HelpLink and FaultException.Data (and other properties inherited from Exception) do not get serialized and will show up empty on the client side so you can’t use them for anything.

While this is better than throwing unhandled exceptions, there is an even better way, and that is FaultContracts.

Scenario 3: WCF Service with a FaultContract
Given
  • A service operation with a FaultContract
  • and a service that throws a FaultException<TDetail>
var argValidationFault = new ArgumentValidationFault
{
    ArgumentName = "Data",
    Message = string.Format(Resources.ValueOutOfRange, "Data", Resources.ValueGreaterThanZero),
    HelpLink = GetAbsoluteUriHelpLink("DataHelp.aspx"),
};
throw new FaultException<ArgumentValidationFault>(argValidationFault, argValidationFault.Message, faultCode);
When
  • The service is invoked with a data value that will cause an exception
Then
  • The client proxy will catch a System.ServiceModel.FaultException
  • The FaultCode.Name will be “Client” (for SOAP 1.1) or “Sender” (for SOAP 1.2)
  • The Fault.Reason will be Argument Data is out of range value must be greater than zero
  • The FaultDetail will be the type TDetail, which will be included in the generated service reference
Conclusions

This is the best choice.  It allows you to pass all kinds of information to clients and it makes your error handling capability truly first class.  In the sample code one thing I wanted to do was to use the FaultException.HelpLink property to pass a Url to a help page.  Unfortunately I learned that none of System.Exception’s properties are propagated to the sender.  No problem, I just added a HelpLink property to my ArgumentValidationFault type and used it instead via FaultException<TDetail>.Detail.HelpLink.

Some interesting things I learned while testing this

  • You can specify an interface in the [FaultContract] attribute but it didn’t seem to work on the caller side for catching a FaultException<T>
Recommendation

Use [FaultContract] on your service operations.  You should probably create a base type for your details and perhaps have a few subclasses for special categories of things.  Remember that whatever you expose in the FaultContract is part of your public API and that versioning considerations that apply to other DataContracts apply to your faults as well.
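A minimal sketch of that recommendation follows; the ArgumentValidationFault shape is inferred from the snippets above, and the service contract name is a placeholder:

using System.Runtime.Serialization;
using System.ServiceModel;

// Fault detail type: it is part of your public API, so version it like any DataContract.
[DataContract]
public class ArgumentValidationFault
{
    [DataMember] public string ArgumentName { get; set; }
    [DataMember] public string Message { get; set; }
    [DataMember] public string HelpLink { get; set; }   // Exception.HelpLink isn't propagated, so carry it here.
}

[ServiceContract]
public interface ISampleService
{
    // Declaring the FaultContract is what lets callers catch the typed fault.
    [OperationContract]
    [FaultContract(typeof(ArgumentValidationFault))]
    string GetData(int data);
}

On the caller side, the typed detail then arrives in FaultException<ArgumentValidationFault>.Detail, including the HelpLink carried in the detail object.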


The Microsoft Web Camps Training Kit site made available an OData Introduction demo overview in DOCX format in January 2011:

This demo introduces you to the OData standard, how to query OData, and also shows you the Dallas initiative.

The demo is a script for Web Camp presenters and is a bit dated, based on its use of the term “Dallas initiative” for the Windows Azure Marketplace DataMarket. If you’re making an OData presentation, you might find it useful. Here’s a screen capture of section 2:

[Screen capture: demo script, section 2]

The Training Kit includes similar materials for ASP.NET MVC, IE9 & HTML5, JQuery, WebApps and WebMatrix.


<Return to section navigation list> 

Windows Azure AppFabric: Access Control and Service Bus

Alik Levin explained Windows Azure AppFabric Access Control Service (ACS) v2– Returning Friendly Error Messages Using Error URL Feature in a 1/15/2011 post:

Windows Azure AppFabric Access Control Service (ACS) v2 has a feature called Error URL that allows a web site to display a friendly message in case of an error during the authentication/federation process. For example, during authentication with Facebook or Google a user is asked for consent after successful authentication. If the user denies consent, ACS generates an error that could be presented to the user in a friendly manner. Another case is a misconfiguration at the ACS level, for example, no rules generated for a specific identity provider, which results in an error generated by ACS.

How do you show a friendly error message for these cases, branded with the same look and feel as the rest of my website?

The solution is to use the Error URL feature available through the ACS management portal. ACS generates a JSON-encoded error message and passes it to your error page. You need to specify the URL of the error page on the ACS management portal so that ACS knows where to pass the information. Your error page should parse the JSON-encoded error message and render appropriate HTML for the end user. Here is a sample JSON-encoded error message:

{"context":null,"httpReturnCode":401,"identityProvider":"Google","timeStamp":"2010-12-17 21:01:36Z","traceId":"16bba464-03b9-48c6-a248-9d16747b1515","errors":[{"errorCode":"ACS30000","errorMessage":"There was an error processing an OpenID sign-in response."},{"errorCode":"ACS50019","errorMessage":"Sign-in was cancelled by the user."}]}

Step 1 – Enable Error URL Feature

To enable Error URL feature for your relying party:

  1. Login to http://portal.appfabriclabs.com.
  2. On the My Projects page click on your project.
  3. On the Project:<<YourProject>> page click on  Access Control link for desired namespace.
  4. On the Access Control Settings: <<YourNamespace>> page click on Manage Access Control link.
  5. On the Access Control Service page click on Relying Party Applications link.
  6. On the Relying Party Applications page click on your relying party application.
  7. On the Edit Relying Party application page notice Error URL field in Relying Party Application Details section.
  8. Enter your error page URL. This is the page that will receive the URL-encoded JSON parameters that include the error details.
Step 2 – Process JSON Encoded Error Message

To process JSON encoded error message generated by ACS:

  1. Add an .aspx web page to your ASP.NET application and give it a name, for example, ErrorPage.aspx. It will serve as the error page that processes the JSON-encoded error message sent from ACS.
  2. Add the following labels controls to the ASP.NET markup:
    <asp:Label ID="lblIdntityProvider" runat="server"></asp:Label>
    <asp:Label ID="lblErrorMessage" runat="server"></asp:Label>
  3. Switch to the page’s code-behind file, ErrorPage.aspx.cs.
  4. Add the following declaration to the top:
    using System.Web.Script.Serialization;
  5. Add the following code to Page_Load method:
    JavaScriptSerializer serializer = new JavaScriptSerializer();
    ErrorDetails error = serializer.Deserialize<ErrorDetails>(Request["ErrorDetails"]);
    lblIdntityProvider.Text = error.identityProvider;
    lblErrorMessage.Text = string.Format("Error Code {0}: {1}",
        error.errors[0].errorCode,
        error.errors[0].errorMessage);
  • The above code is responsible for processing the JSON-encoded error message received from ACS. You might want to loop through the errors array, as it can contain additional, more fine-grained error information. A sketch of the ErrorDetails type the code deserializes into appears below.
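The ErrorDetails type isn’t included in the post; a minimal sketch that matches the property names in the sample JSON shown earlier might look like this:

// Hypothetical DTOs shaped to match the sample JSON error message above;
// property names are lower-cased to line up with the JSON keys.
public class ErrorDetails
{
    public string context { get; set; }
    public int httpReturnCode { get; set; }
    public string identityProvider { get; set; }
    public string timeStamp { get; set; }
    public string traceId { get; set; }
    public ErrorItem[] errors { get; set; }
}

public class ErrorItem
{
    public string errorCode { get; set; }
    public string errorMessage { get; set; }
}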
Step 3 – Configure Anonymous Access To The Error Page

To configure anonymous access to the error page:

  1. Open web.config of your application and add the following entry: 
      <location path="ErrorPage.aspx">
        <system.web>
          <authorization>
            <allow users="*" />
          </authorization>
        </system.web>
      </location>
  2. This will make sure you do not get into an infinite redirection loop.
Step 4 – Test Your Work

To test Error URL feature:

  1. Login to http://portal.appfabriclabs.com.
  2. On the My Projects page click on your project.
  3. On the Project:<<YourProject>> page click on  Access Control link for desired namespace.
  4. On the Access Control Settings: <<YourNamespace>> page click on Manage Access Control link.
  5. On the Access Control Service page click on Rule Groups link.
  6. On the Rule Groups page click on the rule group related to your relying party.
  7. WARNING: This step cannot be undone; however, if you delete generated rules they can easily be generated once more. On the Edit Rule Group page delete all rules. To delete all rules, check all rules in the Rules section and click the Delete Selected Rules link.
  8. Click Save button.
  9. Return to your web site and navigate to one of the pages using browser.
  10. You should be redirected to your identity provider for authentication – Windows LiveID, Google, Facebook, Yahoo!, or ADFS 2.0 – whatever is configured for your relying party as identity provider.
  11. After successful authentication you should be redirected back to ACS, which should generate an error since no rules are defined.
  12. This error should be displayed on your error page that you created in step 2, similar to the following:

uri:WindowsLiveID
Error Code ACS50000: There was an error issuing a token.

Another way to test it is to deny the user consent prompt that is presented when you log in using Facebook or Google.

Related Books
Related Info


Alik Levin updated the How To: Create My First Claims Aware ASP.NET application Integrated with ACS v2 wiki page of the Access Control Service Samples and Documentation (Labs) project on CodePlex on 1/14/2011 (missed when published):

Applies To
  • Microsoft® Windows Azure® AppFabric Access Control Service version 2.0 (ACS v2.0)
Overview

This topic describes the simple scenario of integrating ACS v2.0 with an ASP.NET Relying Party application. By integrating your web application with ACS v2.0, you factor the features of authentication and authorization out of your code. In other words, ACS v2.0 provides the mechanism for authenticating and authorizing users to your web application. In this practice scenario, ACS v2.0 authenticates users with a Google identity into a test ASP.NET Relying Party application.

Summary of Steps

Important

Before performing the following steps, make sure that your system meets all of the .Net framework and platform requirements that are summarized in Prerequisites for Windows Azure AppFabric Access Control Service.

To integrate ACS v2.0 with an ASP.NET Relying Party application, complete the following steps:

  • Step 1 - Create a Windows Azure AppFabric Project
  • Step 2 - Add a Service Namespace to a Windows Azure AppFabric Project
  • Step 3 – Launch the ACS v2.0 Management Portal
  • Step 4 – Add Identity Providers
  • Step 5 - Setup the Relying Party Application
  • Step 6 - Create Rules
  • Step 7 - View Application Integration Section
  • Step 8 - Create an ASP.Net Relying Party Application
  • Step 9 - Configure trust between the ASP.NET Relying Party Application and ACS v2.0
  • Step 10 – Test the integration between the ASP.NET Relying Party application and ACS v2.0
Step 1 - Create a Windows Azure AppFabric Project

For detailed instructions on how to create a Windows Azure AppFabric Project, see How to: Create a New Windows Azure AppFabric Project.

Step 2 - Add a Service Namespace to a Windows Azure AppFabric Project

For detailed instructions on how to Add a Service Namespace to a Windows Azure AppFabric Project, see How to: Add a Service Namespace to a Windows Azure AppFabric Project.

Step 3 – Launch the ACS v2.0 Management Portal

This section describes how to launch the ACS v2.0 Management Portal. The ACS v2.0 Management Portal allows you to configure your ACS Service Namespace by adding identity providers, configuring relying party applications, defining rules and groups of rules, and establishing the credentials that your relying party trusts.

To launch the ACS v2.0 Management Portal

  1. On the Project page, once the service namespace you created in Step 2 is active, click Access Control.

    You are redirected to the page that displays your project ID, allows you to delete the Service Namespace, or launch the ACS v2.0 Management Portal.

  2. To launch the ACS v2.0 Management Portal, click Manage Access Control.

Step 4 – Add Identity Providers

This section describes how to add identity providers to use with your relying party application for authentication. For more information about identity providers, see Relying party, Client, Identity Provider, ACS.

How to add identity providers

  1. On the ACS v2.0 Management Portal, click Identity Providers.

  2. On Identity Providers page, click Add Identity Provider, and then click Add button next to Google.

  3. The Add Google Identity Provider page prompts you to enter a login link text (the default is Google) and an image URL. This URL points to a file of an image that can be used as the login link for this identity provider (in this case, Google). Editing these fields is optional. For this demo, do not edit them, and click Save.

  4. On Identity Providers page, click Return to Access Control Service to go back to the ACS v2.0 management portal main page.

Step 5 - Setup the Relying Party Application

This section describes how to setup a Relying Party Application. In ACS, a Relying Party Application is a projection of your web application into the system. It defines the URLs for your application, token format preference, token timeout, token signing options, and token encryption options. For more information about relying party applications, see Relying party, Client, Identity Provider, ACS

How to set up a relying party application

  1. On the ACS v2.0 Management Portal, click Relying Party Applications.

  2. On Relying Party Applications page, click Add Relying Party Application.

  3. On Add Relying Party Application page, do the following:

    1. In Name, type the name of the relying party application. For this demo, type TestApp.
    2. In Mode, select either Enter settings manually (if you want to configure your relying party application settings manually) or Import WS-Federation metadata (if you want to upload a WS-Federation metadata document with the settings for your relying party application).
    3. In Realm, type the URI that the security token issued by ACS v2.0 applies to. For this demo, type http://localhost:7777/.
    4. In Return URL, type the URL that ACS v2.0 returns the security token to. This field is optional. For this demo, you can type http://localhost:7777/
    5. In Token format, select a token format for ACS v2.0 to use when issuing security tokens to this relying party application. For this demo, select SAML 2.0. For more information about SAML tokens, see Tokens SAML, SWT and all about tokens..
    6. In Token encryption policy, select an encryption policy for tokens issued by ACS v2.0 for this relying party application. For this demo, accept the default value of None. For more information about token encryption policy, see Tokens SAML, SWT and all about tokens.
    7. In Token lifetime (secs):, specify the amount of time for a security token issued by ACS v2.0 to remain valid. For this demo, accept the default value of 600. For more information about tokens, see Tokens SAML, SWT and all about tokens.
    8. In Identity providers, Select the identity providers to use with this relying party application. For this demo, accept the checked defaults (Google and Windows Live ID). For more information about relying party applications, see Relying party, Client, Identity Provider, ACS
    9. In Rule groups, select rule groups for this relying party application to use when processing claims. For this demo, accept Create New Rule Group that is checked by default. For more information about rule groups, see Relying party, Client, Identity Provider, ACS
    10. In Token signing, select whether to sign SAML tokens using the default service namespace certificate, or using a custom certificate specific to this application. For this demo, accept the default value of Use service namespace certificate (standard). For more information about token signing, see Tokens SAML, SWT and all about tokens.
  4. Click Save.

  5. On Relying Party Applications page, click Return to Access Control Service to go back to the ACS v2.0 management portal main page.

Step 6 - Create Rules

This section describes how to define rules that drive how claims are passed from identity providers to your relying party application. For more information about rules and rule groups, see Rules and rule groups.

How to create rules

  1. On the ACS v2.0 Management Portal main page, click Rule Groups.

  2. On Rule Groups page, click Default Rule Group for TestApp (since you named your relying party application TestApp).

  3. On Edit Rule Group page, click Generate Rules.

  4. On Generate Rules: Default Rule Group for TestApp page, accept the identity providers selected by default (in this demo, Google and Windows Live ID), and then click Generate.

  5. On Edit Rule Group page, click Save.

  6. On Rule Groups page, click Return to Access Control Service to return to the main page of the ACS v2.0 management portal.

Step 7 - View Application Integration Section

You can find all the information and code necessary to modify your relying party application to work with ACS v2.0 on the Application Integration page.

How to view the Application Integration page

  1. On the ACS v2.0 Management portal main page, click Application Integration.

    The ACS URIs that are displayed on the Application Integration page are unique to your service namespace.

    For this demo, it is recommended to keep this page open in order to perform future steps quickly.

Step 8 - Create an ASP.Net Relying Party Application

This section describes how to create an ASP.Net Relying Party application that you want to eventually integrate with ACS v2.0.

How to create an ASP.NET Relying Party application

  1. To run Visual Studio 2010, click Start, click Run, type devenv.exe and press Enter.

  2. In Visual Studio, click File, and then click New Project.

  3. In New Project window, select either Visual Basic or Visual C# template, and then select ASP.NET MVC 2 Web Application.

  4. In Name, type TestApp, and then click OK.

  5. In Create Unit Test Project, select No, do not create a unit test project and then click OK.

  6. In Solution Explorer, right-click TestApp and then select Properties.

  7. In the TestApp properties window, select the Web tab, and under Use Visual Studio Development Server, click Specific port, and then change the value to 7777.

  8. To run and debug the application you just created, press F5. If no errors were found, your browser renders an empty MVC project.

    Keep Visual Studio 2010 open in order to complete the next step described in the section below.

Step 9 - Configure trust between the ASP.NET Relying Party Application and ACS v2.0

This section describes how to integrate ACS v2.0 with the ASP.NET Relying Party application that you created in the previous step.

How to configure trust between the ASP.NET Relying Party Application and ACS v2.0

  1. In Visual Studio 2010, in Solution Explorer for TestApp, right-click TestApp and select Add STS Reference.

  2. In Federation Utility wizard, do the following:

    1. On Welcome to the Federation Utility Wizard page, in Application URI, enter the application URI and then click Next. In this demo, the application URI is http://localhost:7777/.

      Note

      The trailing slash is important because it matches the value you entered in ACS v2.0 Management Portal for your relying party application. For more information, see Step 5 - Setup the Relying Party Application.

    2. A warning pops up: ID 1007: The Application is not hosted on a secure https connection. Do you wish to continue? For this demo, click Yes.

      Note

      In a production environment, this warning about using SSL is valid and should not be dismissed.

    3. On Security Token Service page, select Use Existing STS, enter the WS-Federation Metadata URL published by ACS v2, and then click Next.

      Note

      You can find the value of the WS-Federation Metadata URL on the Application Integration page of the ACS v2.0 Management Portal. For more information, see Step 7 - View Application Integration Section.

    4. On STS signing certificate chain validation error page, click Next.
    5. On Security token encryption page, click Next.
    6. On Offered claims page, click Next.
    7. On Summary page, click Finish.

    Once you successfully finish running the Federation Utility wizard, it adds a reference to the Microsoft.IdentityModel.dll assembly and writes values to your Web.config file that configures the Windows Identity Foundation in your ASP.NET MVC 2 Web Application (TestApp).

  3. Open Web.config and locate the main system.web element. It might look like the following:

    <system.web>
        <authorization>
          <deny users="?" />
        </authorization>

    Modify Web.config to enable request validation by adding the following code under the main system.web element:

        <!--set this value-->
        <httpRuntime requestValidationMode="2.0"/>
     

    After you perform the update, this is what the above code fragment must look like:

     
    <system.web>
      <!--set this value-->
      <httpRuntime requestValidationMode="2.0"/>
      <authorization>
        <deny users="?" />
      </authorization>
Step 10 – Test the integration between the ASP.NET Relying Party application and ACS v2.0

This section describes how you can test the integration between your relying party application and ACS v2.0

How to test the integration between the ASP.NET Relying Party application and ACS v2.0

  1. Keeping Visual Studio 2010 open, press F5 to start debugging your ASP.NET relying party application.

    If no errors were found, instead of opening the default MVC application, your browser is redirected to a Home Realm Discovery page hosted by ACS v2.0 that asks you to choose an identity provider.

  2. Select Google.

    The browser will load the Google login page.

  3. Enter test Google credentials, and accept the consent UI shown at the Google website.

    The browser will post back to ACS, ACS will issue a token, and post that token to your MVC site. Once the MVC site loads you should see something like the following. Notice the name is automatically populated with data from Google and ACS.
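If you want to see exactly which claims ACS issued (the name that gets populated, the identity provider claim, and so on), a minimal sketch using the WIF types referenced in Step 9 might look like the following; where you call it from (a controller, a view, a debug page) is up to you:

using System.Text;
using System.Threading;
using Microsoft.IdentityModel.Claims;

// Helper that lists the claims ACS issued for the signed-in user.
public static class ClaimsDump
{
    public static string ListClaims()
    {
        var builder = new StringBuilder();
        var identity = Thread.CurrentPrincipal.Identity as IClaimsIdentity;
        if (identity != null)
        {
            foreach (Claim claim in identity.Claims)
            {
                // Typical claims include the name claim populated from Google via
                // ACS and the identity provider claim that ACS adds.
                builder.AppendLine(claim.ClaimType + " = " + claim.Value);
            }
        }
        return builder.ToString();
    }
}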


See Graham Calladine explained Windows Azure Platform Security Essentials: Module 3 Storage Access in a 00:22:35 Channel9 video segment posted on 1/15/2011, which includes a link to Windows Azure Platform Security Essentials: Module 2 – Identity Access Management of 12/26/2010.


<Return to section navigation list> 

Windows Azure Virtual Network, Connect, RDP and CDN

Neudesic offered on 1/7/2011 a What's New From PDC: Windows Azure AppFabric and Windows Azure Connect Webcast (missed when published):

image[By] David Pallmann, General Manager of Custom Application Development, Neudesic

At PDC 2010, Microsoft announced new capabilities for Windows Azure. In this webcast you'll learn about new AppFabric features, including the distributed cache service and the future AppFabric Composition Service. You'll also learn about the new Windows Azure Connect virtual networking capability which allows you to link your IT assets in the cloud to your on-premise IT assets.

Requires site registration to view the Webcast.


<Return to section navigation list> 

Live Windows Azure Apps, APIs, Tools and Test Harnesses

• Torsten Langner explained how to Run Batch Workload on a Mixed Infrastructure (Windows Azure Worker Nodes & On-Premise HPC Server 2008 R2 Compute Nodes) in a post to the Windows HPC Team Blog of 1/15/2011:

With the introduction of SP1 of HPC Server 2008 R2 it is possible to run workloads on Windows Azure. If you want to be able to off-burst your batch application (= extending the infrastructure from classic on-premise servers to cloud-based Azure worker nodes), you need to set specific environment variables within the on-premise server infrastructure.

Here is a sample of how to upload a batch-based application (azuretestbatch.exe) to Azure worker nodes and run a parametric sweep workload across the full cluster (on-premise compute nodes + Azure worker nodes):

Create the Windows Azure Package & Upload It

Use the hpcpack create command to create a *.zip file. Once the package is created, upload it to Windows Azure storage by using the hpcpack upload command. Specify the node template of your Windows Azure worker nodes. Additionally, specify a relative path – in our sample it’s named test. This forces the hpcsync command, which we call later, to unzip the package content to the folder named test:

C:\Users\tlangner>hpcpack create AzureTestPackage.zip
C:\temp\folder1

C:\Users\tlangner>hpcpack upload AzureTestPackage.zip
/nodetemplate:AzureNodeTemplate /relativepath:test

You can test your upload by calling the hpcpack view command. As you can see in the lines below the targeted physical directory on the Windows Azure worker node disks is %CCP_PACKAGE_ROOT%\test. The environment variable %CCP_PACKAGE_ROOT% only exists on Windows Azure worker nodes.

C:\Users\tlangner>hpcpack view azuretestpackage.zip
/nodetemplate:AzureNodeTemplate
Connecting to head node: head
Querying for Windows Azure Worker Role node template "AzureNodeTemplate"
Windows Azure Worker Role node template found.
Retrieving Azure account name and key.
Found account: hpc*** and key: ********
Package Name:           azuretestpackage.zip
Uploaded:               06.01.2011 10:43:44
Description:            Node
Template:      AzureNodeTemplate
Target Azure Dir:       %CCP_PACKAGE_ROOT%\test

After the hpcpack upload command has finished, it is time to deploy the package content to the Windows Azure worker nodes (i.e., the local disks of the cloud machines). This is done by calling the hpcsync command:

C:\Users\tlangner>hpcsync /nodetemplate:AzureNodeTemplate
Deploy the Package Content Locally

In order to run the batch workload on the “classic” compute nodes, too, it is helpful to deploy the package using the same environment variables as are available in Windows Azure. In this sample we want the CCP_PACKAGE_ROOT variable to point to the local C:\TEMP folder of every machine in the cluster:

C:\Users\tlangner>cluscfg setenvs "CCP_PACKAGE_ROOT=C:\TEMP"

Then, let’s copy the zip package content to the target directory of every “classic” compute node in the cluster. The content will be deployed to the C:\TEMP\test directory:

C:\Users\tlangner>clusrun /nodegroup:computenodes xcopy
\\head\temp\AzureTestBatch\AzureTestBatch\bin\Debug\*.*
^%CCP_PACKAGE_ROOT%^\test
Run the Mixed Workload

After deploying the package content to the whole cluster it’s time to run the batch job:

C:\Users\tlangner>job submit /nodegroup:X
/parametric:1-1000 /workdir:^%CCP_PACKAGE_ROOT%^\test
azuretestbatch.exe 5000


• Erik Oppendijk explained Adding assemblies to the GAC in Windows Azure with Startup Tasks in a 1/14/2011 post to the InfoSupport Blog Community:

Startup tasks are the new way to run a script or file in Windows Azure, and we can run them under an Elevated (Administrator) context. For this to work, add the assembly you want to add to the GAC to your project and set Copy Local to true.

Edit the .CSDEF file and add a Startup task with executionContext elevated:

<ServiceDefinition name="MyProject" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
   <WebRole name="MvcWebRole1">
      <Startup>
         <Task commandLine="RegisterGAC.cmd" executionContext="elevated" taskType="simple" />
      </Startup>

Now add the RegisterGAC.cmd and gacutil.exe to your project, set the Build Action to None, and set Copy to Output Directory to Copy Always for both files!

In the RegisterGAC.cmd enter the following 2 lines:

gacutil /nologo /i .\Microsoft.IdentityModel.dll  
exit /b 0

Remember that the startup task runs from the /BIN folder. There is also the possibility to set the taskType:

  • Simple - needs to complete before the role continues
  • Background - runs parallel with the role (startup)
  • Foreground - runs parallel with the role, but needs to finish before the role can shutdown

Also remember that the right .NET 4.0 gacutil.exe is in this directory: C:\Program Files\Microsoft SDKs\Windows\v7.0A\bin\NETFX 4.0 Tools
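To verify that the startup task actually worked once the role is running, one option is to load the assembly by its full strong name and check whether it resolved from the GAC. This is a hedged sketch; the strong name shown is what Microsoft.IdentityModel typically uses, but treat it as an assumption and substitute the exact name of the assembly your RegisterGAC.cmd installs:

using System.Reflection;

// Diagnostic sketch: returns true if the assembly was loaded from the GAC.
public static class GacCheck
{
    public static bool IsInGac()
    {
        Assembly assembly = Assembly.Load(
            "Microsoft.IdentityModel, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35");
        return assembly.GlobalAssemblyCache;
    }
}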


• Bill Zack explained How to Survive with Windows PowerShell in a 1/14/2011 post to the Ignition Showcase blog:

Whether you are a developer or an IT pro working at an ISV, you need to know the basics of PowerShell, Microsoft’s premier command shell and scripting language. If you need to automate Windows Azure deployments, make maximum effective use of the Azure PowerShell cmdlets, or customize a VM Role at startup time, then this tool is for you.


This PowerShell Survival Guide will give you those basics.  It includes links to Essential Windows PowerShell Resources, Sample PowerShell Scripts and modules, Forums, User Groups, videos and more.


Alex Williams asked (and partially answered) Is Windows Azure Suitable for Startups? in a 1/14/2011 post to the ReadWriteCloud blog:

Last week we wrote about the services that Y Combinator startups use. For hosting, Amazon Web Services and Rackspace were the most popular services.

Windows Azure was noticeably absent. But why? A topic turned up on Quora, citing our post and wondering why the lack of adoption.

The topic has sparked a bit of discussion. The consensus: Azure is better suited for larger enterprise operations.

That does make sense to some degree. We'll just have to see by how much. The people who make up Azure are a smart group. Under Bob Muglia, a real push started to attract more developers. That became apparent at the Professional Developers Conference, which was not a huge event but did have a core group there developing on numerous platforms such as Windows 7, Windows Phone 7 and the Azure marketplaces. Muglia is leaving Microsoft so questions are out there but the structure is in place for his successor.

Further, the tablet market will continue to drive a startup community to Azure. And BizSpark is no wilting effort. The official Microsoft blog states some of the results for the program since the effort began in 2008:

Since then, about 35,000 startups from all over the world have enrolled in the program, which provides early-stage businesses with fast and easy access to software development tools and platforms as well as a network of 2,500 network partners - including venture capital firms, university incubators, consultants and angel investors - interested in helping startups grow.

Microsoft will also focus on synchronizing connected devices this year, which will be a driving feature as devices are optimized for users. That alone will help drive development. Our bet is that synchronization will be a driving factor in the sale of Windows Phone 7 and tablet devices.

Commenters on Quora have a different perspective. They state that Azure is just not the right choice for startups, citing better suitability for large enterprises, the costs and productivity differences between it and platforms such as Heroku, which actually runs on Amazon Web Services.

What Commenters Say About Why Azure is Better for the Fortune 500

Startups are disruptive. Microsoft caters to Fortune 500 companies. Using Azure means startups adopting one vendor as opposed to an open source community with large pools of available software engineers.

The costs are higher compared to platforms such as Heroku. It can cost at least $40 per month for each site a user sets up. To set up an app on Heroku is free.

It can take longer. One developer commented that for typical Web apps they are ten times more productive with Ruby on Rails than with the Microsoft stack. Rails was built for the Web. Microsoft originally built its stack for the desktop[*].

Azure is a platform, which is the same reason that Google App Engine is not represented either. Amazon and Rackspace are infrastructure providers, which give you the flexibility to build your own stack, offering far more freedom than a platform, which at some point will pose limits to the developer.

On a final note, there are some praises for Windows Azure. Martin Wawrusch critiqued Azure but ended by saying that it is a great platform and he wholeheartedly recommends it. He even said that the SQL Azure engine is years ahead of the competition, serving as a good fit for backend processing for a startup. He does say, however, that he would not use it for any front-end work in a startup environment.

There are still open questions about Azure. It's not ruled out as a platform. It has potential for considerable success. The question is its comparison to other offerings on Amazon or Rackspace. For now, those offerings are more popular and should continue to be so for at least the year ahead.

* Here’s what I had to say in answer to the aggregated comments as of 1/15/2011:

The Microsoft Partner Network offers 750 free hours/month of an Extra Small Compute instance, free blob, queue and table storage with a moderate amount of free bandwidth, a free SQL Azure Web database instance (1 GB max size), and other freebies in its Cloud Essentials Pack. Details are at http://oakleafblog.blogspot.com/.... Pricing for Azure commercial usage levels is very similar to that of Amazon Web Services.

Windows Azure can emulate continuous deployment with its Azure Fabric Emulator, which runs on the developer's PC.

Productivity for ASP.NET or ASP.NET MVC web development is very close to that for on-premises or hosted IIS. Windows Azure is a platform as a service (PaaS) offering, which handles availability issues with automatic replication, supports automating horizontal scaling up and down, and eliminates the need for Windows developers to upgrade or patch the operating system.

Microsoft's Case Studies group has many examples of startups using the Windows Azure platform at http://www.microsoft.com/Windows....

For more comparisons of Windows Azure with its competitors, see David Pallmann’s answer to the What Platform to Choose? question in Picking a Lane in Cloud Computing of 1/14/2011 in the Windows Azure Infrastructure section below.


<Return to section navigation list> 

Visual Studio LightSwitch

No significant articles today.


<Return to section navigation list> 

Windows Azure Infrastructure

• Brian Loesgen posted Debunking a couple of Azure Myths on 1/15/2011:

After about three months in the field talking with ISV partners, enterprise developers and developers in general, I’ve noticed a couple of Azure misconceptions that seem to be recurring, so I thought I’d post something here for people to find to try to correct that.

Myth #1: “Azure is from Microsoft so it’s something that only .NET developers can use”

Actually, anything that runs in Windows will run on Azure. Microsoft’s goal is to provide the best cloud platform, PERIOD, regardless of your choice of languages or development tools. It was announced at PDC that the JRE would be present in Azure worker/web roles and that Java would be a first-class citizen. Sure, the Visual Studio integration is great, but we also just released Eclipse integration for the latest V1.3 SDK.

In fact, my very first project when I started my Azure Architect Evangelist role was running Tomcat and Google Web Toolkit, migrating PostgreSQL to SQL Azure. That’s pretty far from the typical .NET stack! As you can see, we support Java, PHP, Python, Ruby, Tomcat, Zend and more! So, if you use any of those, you can take advantage of our nearly $3 billion (so far) investment in 6 Azure datacenters worldwide, as well as other capabilities like our on-prem/off-prem bridging and federated security model (supporting OAuth 2, Google ID, Facebook, LiveID and more).


Microsoft is making significant investments in interoperability. If you want to see more about Azure interoperability (as well as other interoperability initiatives), I suggest you check out the Interoperability Bridges site.


Myth #2: “Azure is not done because they keep releasing new stuff”

Frankly, I was surprised the first time I heard this. Then I heard it a few more times. At PDC 2010 we released/announced probably more than a dozen new Azure capabilities. However, the reality is that we (Microsoft) are investing BILLIONS into our cloud initiatives, which includes Azure. The strong stream of announcements and releases is the realization of our vision, and the fruits of our investments.

If you look back, the only REALLY fundamental shift that happened was SQL Azure (where the team responded incredibly quickly to customer feedback). All other announcements and releases have been aligned with the vision and path that we have been on for many years now. Azure is absolutely done and in use by thousands. Will there be more pieces to come in the future? Absolutely! We embarked on the Platform-as-a-Service path 4 years ago, we have not and will not deviate. On a related theme, we are also being very open about our plans, and actively soliciting customer feedback. Got an idea for something you’d like added, or would you like to provide feedback on proposed features? Go to http://www.mygreatwindowsazureidea.com.


• Frank Gartland asked Your Weekend Look Cloudy? Free Windows Azure Training Just Released! and reminded developers about Windows Azure training in a 1/14/2011 post to Microsoft Learning’s Born to Learn Blogs:

Just before the Holidays, Microsoft hosted another new and exclusive Jump Start virtual training event, this time covering the Windows Azure Platform. What a success! “Building Cloud Applications using the Windows Azure Platform” was tailored for application architects and developers interested in leveraging the cloud.

While many of the attendees were already building a pilot project or planning to migrate an application, around 35% were searching for real-world answers as they considered whether or not the Windows Azure Platform fits their needs. They were in for a treat, since all 12 hours of this training were led by two of the most respected authorities on Microsoft development technologies, David S. Platt and Manu Cohen-Yashar. Learn strategies for your cloud application lifecycle, know your options for storage in the cloud, understand the realities of security using Azure, and find out all your team should consider to truly design for scale and elasticity.

The feedback has been great, but due to the Holidays many of you weren’t able to join us back in December. So whether you were with us and want to refresh and reinforce what you learned, or you want to check it out for the first time, all 12 hours of the HD-quality videos were just released and are ready to go! Just browse this list and get started!

Session 01: Windows Azure Overview
Session 02: Introduction to Compute
Session 03: Windows Azure Lifecycle, Part 1
Session 04: Windows Azure Lifecycle, Part 2
Session 05: Windows Azure Storage, Part 1
Session 06: Windows Azure Storage, Part 2
Session 07: Introduction to SQL Azure
Session 08: Windows Azure Diagnostics
Session 09: Windows Azure Security, Part 1
Session 10: Windows Azure Security, Part 2
Session 11: Scalability, Caching & Elasticity, Part 1
Session 12: Scalability, Caching & Elasticity, Part 2, and Q&A

By the way, feel free to check out the course materials and code samples while you’re watching.

Get Access to the Windows Azure Platform for the Labs and learn about training & certification options by visiting the Windows Azure Online Portal.


• Pat Romanski described a “Forecast for Cloud Computing Across Key U.S. Cities Calls for New Lines Of Business, More IT Services, and Job Growth” in her Microsoft Survey Highlights the US Top "Cloud-Friendly" Cities article of 1/14/2011:

image Microsoft on Wednesday named some of the country’s top “cloud-friendly” U.S. cities. The rankings are based on the results of an extensive survey in which 2,000 IT decision makers nationwide discussed how they are adopting and using cloud computing.

The forecast for cloud computing across key U.S. cities calls for new lines of business, more need for IT services, and potential job growth, according to a new survey released on Wednesday by Microsoft.

Microsoft released the results of the study this week after interviewing more than 2,000 IT decision-makers in 10 U.S. cities.

The cities are ranked based on how local businesses are adopting and using cloud computing solutions – including hiring vendors to migrate to the cloud, seeking IT professionals with cloud computing experience, and creating new lines of business based on cloud platforms. The survey indicates that cloud computing is not only a growing sector of the IT services community, but helping to create new businesses and jobs locally.

Atlanta, Ga. Atlanta ranks in the middle of the pack of “cloud friendly” cities. The majority (62 percent) of IT decision makers at large companies in Atlanta currently employ, or plan to implement, cloud-based e-mail and communications tools, like IM and voice, compared with 36 percent of those at small businesses.

Boston, Mass. Ranked as the most “cloud-friendly” city for large companies, Boston boasts a high percentage of companies that view cloud services as an opportunity to be more innovative and strategic. Nearly half (46 percent) of large businesses have one or more cloud projects planned and underway, and more than half are already using the cloud for e-mail, communication and collaboration.

Chicago, Ill. The Windy City ranks 9 out of 10 in the survey of “cloud-friendly” cities. The tide may be turning because half of IT decision makers at large companies there say cloud computing is an opportunity to be more strategic. Small businesses are also beginning to see the benefits, with 39 percent stating they are encouraged to deploy cloud services because they are cost-effective.

Dallas, Texas Dallas ranks third among the most cloud-ready cities for large companies. Of IT decision makers, 46 percent believe the cloud is an engine of innovation, where only 37 percent of those surveyed nationally believe so. Of the local small companies, 46 percent say they are encouraged to buy cloud services for reliable security, almost double the response of enterprise companies (29 percent).

Detroit, Mich. The Motor City ranks near the bottom of our rankings for “cloud-friendly” cities, but nearly half (47 percent) of IT decision makers in Detroit see the cloud as an avenue for creating business advancements, saying the cloud is an engine of innovation. The majority (51 percent) of respondents agree that investing in IT during the next five years will increase profitability.

Los Angeles/Orange County, Calif. Los Angeles and Orange County rank fourth among the most cloud-ready cities for small companies. Of the IT decision makers surveyed, nearly half (46 percent) are investing in cloud services. Small businesses agree that the use of cloud services helps to ensure they always have the latest upgrades available to them, and that focusing on more strategic initiatives will reduce IT workload.

New York, N.Y. A city of great contrast in cloud adoption between small and large businesses, New York businesses boast the highest number of enterprises nationwide using cloud-based applications. While nearly half (46 percent) of large companies have cloud projects actively underway, only a small percentage (14 percent) of New York’s small businesses say the same.

Philadelphia, Pa. Philadelphia ranks among the top three “cloud-friendly” cities for small businesses. A majority (87 percent) of IT decision makers at large companies have at least some knowledge of the cloud compared with only half (50 percent) of small businesses. Regardless of company size, a high percentage cites low total cost of ownership as a reason to transition to the cloud.

San Francisco, Calif. San Francisco ranks among the top “cloud-friendly” cities. More than half (51 percent) of IT decision makers at large companies in San Francisco know a fair amount about cloud computing, with 49 percent having at least one cloud project planned or underway. Also, 40 percent of IT decision makers at local small companies believe cloud computing is an engine of innovation.

Washington, D.C. Washington, D.C., ranks as the most “cloud-friendly” city for small businesses. Of those businesses that have adopted cloud services, enabling a remote workforce and lower total cost of ownership are cited as the top reasons for the move. In fact, almost half (46 percent) of IT decision makers at local businesses report cost savings of at least $1,000 through their use of cloud services.

“I think the study is incredibly interesting, and it shows business and IT growth is a key output of the cloud,” said Scott Woodgate, a director in Microsoft’s corporate account segment, which serves mid-market businesses. “For IT professionals, it’s clear that becoming skilled in the cloud is an important call to action. For businesses, the cloud really empowers growth. Because of the nature of the cloud, you can take more risks and innovate at a much lower cost.”

IT Decision Makers: Cloud Is Creating New Business Opportunities

IT decision makers in financial services, manufacturing, professional services, and retail and hospitality see cloud computing as an opportunity to grow their business, drive innovation and strategy, and efficiently collaborate across geographies, according to Microsoft’s new cloud computing survey.

Among IT decision makers surveyed:

  • 24 percent used the cloud to help start a new line of business.
  • 68 percent in the financial services said they have been asked to find ways for their companies to save money on the IT side.
  • 34 percent in professional services, and 33 percent in retail and hospitality, believe cloud computing is an opportunity for the IT department to be more strategic.
  • 71 percent in manufacturing said their IT departments must address the business requirement to work anywhere at any time in the next year.
  • 33 percent in professional services said their IT departments must find new ways to enable and support their company’s growing workforce.

The survey, funded by Microsoft, was conducted online and targeted IT decision makers from various industries in 10 U.S. cities. Microsoft ranked the cities according to their “cloud-friendliness” based on a number of results, including opinions and attitudes about cloud computing.

The study shows that one of two things is happening in business – either companies are turning to outside experts to understand and implement the cloud, or they’re looking within their existing IT departments for help, Woodgate said.

From Microsoft’s perspective, Woodgate said, cloud computing has two other advantages: it lets small businesses act like big businesses, and it lets big businesses move as quickly and cheaply as a small business by scaling up and down as their IT needs shift.

“It works for small business adopters because they have limited IT staff but have similar desires to large-businesses in terms of productivity, running the business and satisfying customers,” he said. “For big businesses, there is an opportunity to innovate at a lower cost with multiple options rather than having to sink all of their chips into a single, big capital cost option.”

Some businesses still believe that cloud computing will mean job losses, based on new efficiencies gained by moving some IT services to the cloud. This belief, coupled with a shaky economy that wasn’t allowing for new IT projects, led to some reluctance among businesses to adopt cloud computing. However, the Microsoft survey shows that tide is turning.

The survey also showed that businesses still believe some misconceptions about cloud computing – such as that it’s just a trend or a fad.

“People often compare cloud computing to outsourcing. I don’t think it compares well,” Woodgate said. “The skill set of IT workers is changing, and there is plenty of opportunity for IT directly in the context of the cloud. Also, the value proposition of IT is changing. They currently spend a lot of time keeping the lights on. I think with the cloud, they’ll be able to spend less time on that and more time on moving the overall business forward.”

“For example, almost two-thirds of enterprise IT decision makers have hired or are planning on hiring vendors to help understand and deploy the cloud,” Woodgate said. “And 21 percent of IT decision makers are looking to hire new staff with cloud experience.”

RDA Corporation in Baltimore is one of the many cloud consultants investing heavily in the technology – and it’s paying off.

CEO Tom Cole said his company, which does IT consulting, planning, strategy and integration, spent all of last year introducing the concept of cloud computing to its customers. In just one year there has been a surge in interest in the cloud, he said. Last year his company decided to invest in the cloud and started talking to customers about it in a major way, and now companies are approaching RDA on their own asking for help moving to the cloud.

“Adoption is a two-pronged effort,” Cole said. “No. 1, understand what the technology is and its viability in the marketplace and, once you determine it is in fact viable, ensure that you’ve invested in the people and tools to learn the technology and be able to apply it. Secondly, you’ve got to invest in a field sales team and customers to be able to understand who the early adopters are, how they can take advantage of the cloud, and what the market will bear.”

Cole said his company is heavily invested in cloud computing, adding that it’s not hard to pitch Microsoft Azure – Microsoft’s cloud computing platform – as a solution. He said it’s quick and affordable to deploy; it’s easy to build, maintain, manage, and add devices; and it’s easy to build customized solutions that scale up and down when the need shifts.

Cole said any talk of cloud computing contributing to job loss is “totally fictitious.” “It does not drive people out of work. If anything, it creates business opportunities to add different value and lowers the cost of optimizing your infrastructure,” Cole said. “The planning, deployment, migration and support opportunities in the area of new venture startup are a tremendous way – at a low risk – to start something new, which means new jobs.”

Whenever there is a seismic shift in an industry, like cloud computing is for the IT industry, the changes may mean job shifting, Woodgate said. But in the long term, it will mean more jobs and higher-value IT jobs – such as creating new services for end users.

“With these changes, it takes time to get people’s skills up, it takes time to understand the cloud and evaluate how it can help you, execute your first project and then build on that success,” Woodgate said. “Certainly the infrastructure for cloud computing exists today, so it’s great to see this level of interest. Microsoft began our journey to the cloud more than 10 years ago and we have some very strong offerings across productivity, management and software as a service.”


David Pallman gave a detailed answer to the What Platform to Choose? question in Picking a Lane in Cloud Computing on 1/14/2011:

image Because cloud computing is big and varied, there are lots of ways to apply it—which requires you to ultimately make some important decisions. In this post we’ll explore what some of those decisions are, first with cloud computing generally, then specifically with the Windows Azure platform. This is an excerpt from my upcoming book, The Azure Handbook.

imageChoice is good, right? Yes and no: it’s good to have options but it also raises the specter that a wrong choice will take you to some place you don’t want to go. You might even be unaware that you have a choice in some area or that a decision needs to be made. While there’s some value in experimenting, you eventually need to make some rather binding decisions. Failure to get those decisions right early on could cost you wasted time, effort, and expense.


 SaaS, IaaS or PaaS?

The first choice to make is the one that’s most talked about (talked about to death, perhaps): whether you’re going to run Software-as-a-Service, Platform-as-a-Service, or Infrastructure-as-a-Service. What’s at issue here is the level at which you use cloud computing.

SaaS: Someone else’s software in the cloud. If you’re simply going to use someone else’s cloud-hosted application (such as Salesforce.com or Microsoft Exchange in the cloud), decision made: you’ll be using SaaS. If that’s you, read no further. The rest of this article is for those who want to run their own software applications in the cloud (to be sure, your SaaS software provider is using IaaS or PaaS themselves but that’s their worry, not yours.)

PaaS: Your Cloud Applications. This means running applications in the cloud that conform to your cloud provider’s platform model. In other words, they do things the cloud’s way (which is often different from in the enterprise). There are many benefits to running at this level, among them superb scale, availability, elasticity, and management. There’s a spectrum here that ranges from minimal conformance all the way to applications designed from the ground up to strongly leverage the cloud.

IaaS: Traditional Applications in the Cloud. This means running traditional applications in the cloud. Not all applications can run in the cloud, and you’re not leveraging the cloud very strongly by running at this level. If your application and data aren’t protected with redundancy, there is a real danger that you could lose availability or data (in PaaS, the platform has these protections built into its services). IaaS appeals to some people because it’s more similar to traditional hosting and thus somewhat more familiar, or because they prefer to take control themselves.

Not sure which way to go? For running your own applications in the cloud, PaaS is the best choice for nearly everybody.


 Public, Private, or Hybrid Cloud?

Public Cloud: Full Cloud Computing. Cloud computing in its fullest sense is provided by large technology providers such as Amazon, Google, and Microsoft who have both the infrastructure and the experience to support large communities well with dynamic scale and high reliability. We call this “public cloud”. When you use public cloud, you get the most benefits: no up-front costs, consumption-based pricing, capacity on tap, high availability, elasticity, and no requirement to make commitments.

Private Cloud: Under Your Control. And then there’s private cloud, not quite as firmly defined yet but very much on everyone's mind. Ever since cloud computing became a category there’s been ongoing demand in the market for “private clouds”. There’s more than one interpretation of just what this means or how it can be delivered; the general idea is to benefit from the cloud computing way of doing things but with a strong degree of privacy and direct control as compared to public clouds, where you are in a shared environment. Here are some of the ways private cloud is interpreted:

  1. Hardware private cloud: a local cloud computing hardware appliance for your data center.
  2. Software private cloud: a software emulation you run locally.
  3. Dedicated private cloud: leasing a dedicated area of a cloud computing data center not shared with other tenants.
  4. Network private cloud: ability to exercise network control over assets in the cloud such as joining them to your domain and making them subject to your policies. This last idea more properly belongs in our next category, Hybrid Cloud.

Hybrid Cloud. If you’re making use of public cloud, it often makes sense to connect your cloud and on-premise assets. Using VPN technology some cloud platforms allow you to link your virtual machines running in the cloud with local on-premise machines. You might do this for example if you had a cloud-hosted web site that needed to talk to an on-premise database server. If you want to be on more intimate terms, your cloud assets can become members of your domain.

In the Windows Azure platform, all three forms of cloud are available: public cloud, hardware private cloud, and hybrid cloud.

Not sure what you need? Public cloud is the best starting point for most organizations; it doesn’t commit you to anything. Whether and when you look into private cloud or hybrid cloud is something best decided once you’ve tested the public cloud waters.


 Which Cloud Computing Platform to use?

If you’ve decided to go with PaaS or IaaS, you need to choose a vendor and platform. The big players are Amazon, Microsoft, and Google.

Amazon Web Services offers an extensive set of cloud services. I think of them as mostly focused on IaaS, but they also provide a growing set of PaaS services.

Microsoft’s Windows Azure Platform also offers an extensive set of cloud services. Windows Azure is very focused on PaaS but also offers some IaaS capability. One distinguishing feature of Windows Azure is the symmetry Microsoft offers between its enterprise technology stack and its cloud services.

Google provides some interesting cloud services such as AppEngine that are very automatic in how they scale, but they limit you to a smaller set of languages and application scenarios.

Here’s a comparison I recently put together on the services offered by these vendors. Keep in mind, the platforms advance rapidly and I’m only an authority on Windows Azure; so you should definitely research this decision carefully and make sure you’re using up-to-date information.

Not sure where to go? Figure out what's important to you and compare. I have my favorite, and it's Windows Azure.


 Services: Hosted Compute vs. Consuming Services

Even after selecting a cloud computing platform and provider you have plenty of decisions left to make! Cloud computing providers provide oodles of services such as those listed in the previous section. Which ones will you use and for what purpose? Not everyone uses the cloud in the same way. Some organizations run public web sites, customer applications, or internal departmental applications in the cloud. Some use the cloud for data archiving, backup, or disaster recovery. Some use the cloud for overflow to back up their primary data center. Some use the cloud to federate security or communication across multiple organizations. Some start-ups and newer companies put all their IT in the cloud.

We can divide the services you might use from a cloud computing provider into two big categories: Hosted Compute and Everything Else.

 Consuming Services: Using the Cloud from a Distance. Most cloud services are consumed: that is, your programs (wherever they reside) access them by making Internet calls. Cloud services for storage, database, security, and communication work this way. Since just about any platform can issue web calls, you’re free to make use of cloud services from any operating system and any category of software application (including desktop and mobile applications).

Hosted Compute: Running your Application in the Cloud. Then there’s Hosted Compute, where your software actually runs within the cloud computing data center. That’s different, because you live there and have to conform to the requirements of the environment.

It’s not that you have to pick one category over the other: most likely you’ll be using a combination of services. However, be aware that with Hosted Compute you’re using cloud computing at a much more intimate level and it puts more constraints on the design of your application.


 Design to Minimize or Maximize Use of the Cloud?

You have some choice about how strongly your applications are designed for the cloud platform. There are various reasons why you might favor doing as little as possible or as much as possible in this area.

Driver: Expense. If you are migrating an existing application to the cloud and are sensitive to development costs you might choose to change as little as possible. You can change your application just enough to achieve minimal conformance to the cloud platform.

Driver: Portability. If you have concerns about being locked into a platform, you might choose to stress portability and write your software in such a way that it can run in the enterprise or in the cloud. This means limiting yourself to the “common ground” features that are the same between the enterprise and the cloud.

Driver: Feature Need. There may be a specific feature in the cloud that you need and can’t find elsewhere, such as a federated security service. In this scenario you might change your application design to accommodate this one feature need.

Driver: Commitment. You may have committed to cloud computing as a style of computing you want to embrace for strategic or cultural reasons. Here you will want to do everything the cloud way, including designing your applications to strongly leverage cloud services.


 Identity: Internet, Domain, or Custom?

When your applications run in the enterprise, the default identity model may be obvious such as securing employee applications to your domain. When you put an application in the cloud, you have to decide which security model you want to use for identity. You have many choices in identity these days.

Internet Identities. Many people today have one or more Internet identities such as Facebook, Google, Yahoo, Windows Live, or Open ID.

Domain Identities. Even if your application is in the cloud you can still secure it to your domain. There are multiple approaches for this. One is to establish a hybrid cloud virtual network connection to your domain controller. Another is to put a federated identity server, such as ADFS, in your enterprise’s DMZ.

Custom Identities. You could maintain a custom membership database. However, consider that by supporting an existing identity scheme you eliminate the need for someone to create and remember yet another identity for your application.

Federated Identity. Federated identity allows you to support multiple identities simultaneously and to add new ones over time. Windows Azure provides the AppFabric Access Control Service for federated identity. Your program only needs to talk security one way and the service takes care of communicating with multiple identity providers.

The best way to handle security today is to use claims-based security and to decouple security implementation from your code. In the Windows Azure platform, technologies used in this area include Windows Identity Foundation, ADFS, and the Access Control Service.
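
As a rough illustration of what a decoupled, claims-based check looks like in code, here is a minimal sketch. It assumes WIF’s Microsoft.IdentityModel assembly is referenced and that a federation provider (such as ACS or ADFS) has already authenticated the caller; the class and method names are mine, not from the article.

// Minimal claims-check sketch. Assumes the WIF pipeline has run, so the thread
// principal carries the claims issued by the identity provider(s).
using System.Linq;
using System.Threading;
using Microsoft.IdentityModel.Claims;

public static class ClaimsHelper
{
    // Standard role claim URI used by many identity providers.
    private const string RoleClaim = "http://schemas.microsoft.com/ws/2008/06/identity/claims/role";

    public static bool CallerHasRole(string role)
    {
        // With WIF configured, Thread.CurrentPrincipal is an IClaimsPrincipal.
        var principal = Thread.CurrentPrincipal as IClaimsPrincipal;
        if (principal == null) return false;

        // Inspect the claims handed to the app by the federation layer; the app
        // never talks to the individual identity providers directly.
        return principal.Identities
            .SelectMany(identity => identity.Claims)
            .Any(c => c.ClaimType == RoleClaim && c.Value == role);
    }
}

The application code only ever asks questions about claims; which identity provider issued them (Facebook, Windows Live, ADFS, and so on) is a configuration matter handled by the Access Control Service.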


 Data Storage: Relational or Cheap?

In the enterprise, the king of storage is the relational database, augmented by other types of storage such as queues and file servers. In the cloud you also have these facilities, but the dynamics and costs are such that you may want to rethink the equation. For example, in the Windows Azure Platform relational database capability is 66 times more expensive than basic table storage. If your data needs are not sophisticated, table storage may make sense.

Relational Database. A cloud-based relational database is going to give you the rich features you are used to having, which will make development or migration easier. But you may have size or scalability limits or it may be more expensive than other options.

Table Storage. Some cloud platforms offer cheap, big table storage at a fraction of the cost of a relational database and without its limits. In exchange for the lessened cost developers must do a lot more work themselves and live without advanced features like stored procedures, SQL, joins and user security. Not all developers are cut out for this.

Not sure where to go? Come up with a simple data task and have it implemented both ways, then compare notes.
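
To make that comparison concrete, here is a minimal sketch of the same task – saving a customer record – done both ways with 2011-era APIs (plain ADO.NET against SQL Azure, and the Windows Azure SDK’s StorageClient table API). The table names, schema, and connection strings are placeholders, not anything from the article.

// Both routes store the same record; the trade-off is features vs. cost and scale.
using System.Data.SqlClient;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

public class Customer : TableServiceEntity
{
    public Customer() { }                      // parameterless constructor for serialization
    public Customer(string region, string id) { PartitionKey = region; RowKey = id; }
    public string Name { get; set; }
}

public static class DataSamples
{
    // Relational route: rich SQL features, familiar tooling, higher per-GB cost.
    public static void SaveToSqlAzure(string connectionString, string id, string name)
    {
        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand(
            "INSERT INTO Customers (Id, Name) VALUES (@id, @name)", conn))
        {
            cmd.Parameters.AddWithValue("@id", id);
            cmd.Parameters.AddWithValue("@name", name);
            conn.Open();
            cmd.ExecuteNonQuery();
        }
    }

    // Table storage route: cheap and scalable, but no joins, SQL, or stored procedures.
    public static void SaveToTableStorage(CloudStorageAccount account, Customer customer)
    {
        CloudTableClient tableClient = account.CreateCloudTableClient();
        tableClient.CreateTableIfNotExist("Customers");

        TableServiceContext ctx = tableClient.GetDataServiceContext();
        ctx.AddObject("Customers", customer);
        ctx.SaveChangesWithRetries();          // retry support is built into the library
    }
}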


 Master Data Management: Here or There?

Great, you’ve got your application and data running in the cloud—but where is the master system of record for your data, in the cloud or back in the enterprise?

Master Data in the Cloud. If your data’s master copy will be in the cloud, you need to ensure you are using a trustworthy means of storage that will protect your data. For example, in Windows Azure there is the Windows Azure Storage Service and the SQL Azure Database service, both of which protect your data through redundancy.

Master Data on Premise. If the master copy of data is on-premise, you need to think about how your cloud applications get to it: do they access it directly (through a web service or VPN connection) or do they have their own copy of the data in the cloud? If the latter, then some sort of synchronization is going to be necessary, either ongoing or periodic. Your cloud platform may provide synchronization services or you may need to adapt or create tools, scripts, or programs for this purpose.
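
If you do end up rolling your own periodic synchronization rather than using a platform service, the core loop is conceptually simple. Here is a hypothetical one-way sketch that pushes rows changed since a high-water mark from the on-premise master to a cloud copy; the table, column, and connection names are illustrative only.

// Hypothetical one-way sync using a LastModified high-water mark.
using System;
using System.Data.SqlClient;

public static class SimpleSync
{
    public static DateTime PushChanges(string onPremConn, string cloudConn, DateTime lastSync)
    {
        DateTime newHighWater = lastSync;

        using (var source = new SqlConnection(onPremConn))
        using (var target = new SqlConnection(cloudConn))
        {
            source.Open();
            target.Open();

            var select = new SqlCommand(
                "SELECT Id, Name, LastModified FROM Customers WHERE LastModified > @since", source);
            select.Parameters.AddWithValue("@since", lastSync);

            using (var reader = select.ExecuteReader())
            {
                while (reader.Read())
                {
                    // Upsert each changed row into the cloud copy.
                    var upsert = new SqlCommand(
                        "UPDATE Customers SET Name = @name WHERE Id = @id; " +
                        "IF @@ROWCOUNT = 0 INSERT INTO Customers (Id, Name) VALUES (@id, @name)", target);
                    upsert.Parameters.AddWithValue("@id", reader["Id"]);
                    upsert.Parameters.AddWithValue("@name", reader["Name"]);
                    upsert.ExecuteNonQuery();

                    DateTime modified = (DateTime)reader["LastModified"];
                    if (modified > newHighWater) newHighWater = modified;
                }
            }
        }
        return newHighWater;   // persist this value and pass it in on the next run
    }
}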


 Service Access: REST or Platform Libraries?

Now you’re consuming cloud services, which most often expose REST interfaces; this means your applications issue web requests to use the service. In addition to the usual development platform choices you have about language and tools, your platform may let you choose between making raw REST web calls and using a provided library. For example, in Windows Azure Storage you can access the service with REST or a .NET storage library.

REST Interface. Using REST is very popular today, and it has the benefit that just about any operating system and development platform can be used, since the only requirement is the ability to make web calls. However, REST also requires you to work at the web I/O level, where you must handle creating web requests, encoding and encrypting data, interpreting web responses, handling errors and performing retries yourself. It can be quite a bit of work.

Platform Library. A platform library, in contrast, is easy to work with, and if one is available for your favorite development environment and language (such as C#/.NET and Visual Studio, or Java and Eclipse) you may find a radical improvement in productivity using this approach. It may provide built-in error handling and retry logic. However, this approach does limit you to a particular platform, and you are trusting a library (usually a wrapper around a native REST interface) for which you may not have source code.

Here’s an example of the difference. A call to Windows Azure Storage service to store data looks like this, and you can use REST or a .NET library to generate it:

PUT http://myproject.blob.core.windows.net/mycontainer/myblob HTTP/1.1
Request Headers:
x-ms-version: 2010-09-19
x-ms-date: Sun, 2 Jan 2011 22:00:35 GMT
Content-Type: text/plain; charset=UTF-8
x-ms-blob-type: BlockBlob
x-ms-meta-m1: v1
x-ms-meta-m2: v2
Authorization: SharedKey myaccount: 4rvSHg2S6LhRuGn713bqFXRM3E08QDGbPWOhOdWO2V+DoLhbmvc2rSwIO/wwMqzxlZUh0C+Wwy0LoDj1da4wQB==
Content-Length: 13
Request Body:
Hello, Cloud!

If you used your own code to generate this REST request in C#, it would look something like this (not shown: additional code to sign and send the request):

// Create or update a blob with a hand-built REST request.
// Return true on success, false if not found, throw exception on error.
// Requires: using System.Collections; using System.Net;
// CreateRESTRequest is the author's helper (not shown) that builds and signs the HTTP request.
public bool PutBlob(string container, string blob, string content)
{
    HttpWebResponse response;
    try
    {
        SortedList headers = new SortedList();
        headers.Add("x-ms-blob-type", "BlockBlob");
        response = CreateRESTRequest("PUT", container + "/" + blob, content, headers)
            .GetResponse() as HttpWebResponse;
        response.Close();
        return true;
    }
    catch (WebException ex)
    {
        if (ex.Status == WebExceptionStatus.ProtocolError &&
            ex.Response != null &&
            (int)(ex.Response as HttpWebResponse).StatusCode == 409)
            return false;
        throw;
    }
}

For comparison, here’s how this is done using the .NET StorageClient library, also using C# code:

// Put (create or update) a blob using the .NET StorageClient library.
// Return true on success, false if unable to create, throw exception on error.
// Requires: using Microsoft.WindowsAzure.StorageClient;
// BlobClient is assumed to be a CloudBlobClient field initialized elsewhere from the storage account.
public bool PutBlob(string containerName, string blobName, string content)
{
    try
    {
        CloudBlobContainer container = BlobClient.GetContainerReference(containerName);
        CloudBlob blob = container.GetBlobReference(blobName);
        blob.UploadText(content);
        return true;
    }
    catch (StorageClientException ex)
    {
        if ((int)ex.StatusCode == 404)
        {
            return false;
        }
        throw;
    }
}

Not sure which way to go? Your developers likely have strong opinions--or will after a little bit of experimentation.


 Closing Thoughts

Well, there you have it. These are some of the decisions you’ll need to make on your journey into cloud computing. For some of these decisions the right way to go for your organization may be obvious. When it isn’t, do some experimentation and read up on the experiences of others.

A good way to be sure you’re making the right decisions is to get help from a knowledgeable consulting company who knows what to look for and the right questions to ask. At Neudesic we’ve teamed up with Microsoft to provide free cloud computing assessments. And of course this is yet another decision.


<Return to section navigation list> 

Windows Azure Platform Appliance (WAPA), Hyper-V and Private Clouds

• Srinivasan Sundara Rajan described “Enabling Multi Tenancy with Business Rules” in a Private Clouds and Business Rules post of 1/16/2011:

image Moving from dedicated data centers and applications to the cloud is not an overnight switch; rather, enterprises need to plan the movement toward private clouds.

While the benefits are obvious, it takes a lot of thought from an enterprise’s IT department to move applications to a private cloud. The following are the typical strategies for migrating to private clouds adopted by enterprises that have traditionally been driven by dedicated applications and infrastructure.

1. Consolidate data center resources using virtualization of the hardware and storage (IaaS).

2. Make the common IT applications served to the business available as reusable services (SaaS).

3. Enable multi-tenancy and application sharing across the lines of business so that the applications can be made dynamically scalable (SaaS with multi-tenancy and dynamic scalability).

1. Virtualization And Data Center Consolidation (IaaS)

Data center consolidation using appliances and a hypervisor-based approach is the easiest of the three steps toward migration to a private cloud. Several vendors provide supporting approaches for data center consolidation; some of them are listed below.

  • HP supports private cloud infrastructure with its Converged Architecture.
  • VMware products like vSphere provide virtualization support for enabling private clouds.
  • Microsoft supports it with Hyper-V virtualization technology augmented with the Windows Azure Platform Appliance. Hyper-V, the Windows Server 2008 R2 hypervisor-based server virtualization technology, is a core component of the Microsoft private cloud.
  • Oracle supports private cloud infrastructure with "Exalogic Elastic Compute Cloud", an appliance combining server and storage hardware with a pre-tuned web server, hypervisor and other middleware.
  • IBM System x with Hyper-V provides a private cloud infrastructure solution.

2. Migrate Applications As A Service (SaaS)

Most enterprises have a lot of legacy applications that are tightly coupled with their consumers, which prevents them from being offered as services. However, enabling existing applications as services forms the basis of SOA (Service Oriented Architecture), and an elaborate discussion of service enablement using SOA is beyond the scope of this article.

With several vendors offering SOA enablement services and tools that support SOA deployments, converting existing applications into reusable services is very much an achievable task for enterprises.

Notable tools and platforms that support SOA enablement are:

  • Oracle SOA Suite 11g makes it easier than ever to build, deploy, and manage SOA with complete, open, integrated, and best-in-class technology.
  • Microsoft has delivered a set of new services based on its BizTalk Server technology to help developers build new SOA-oriented applications.
  • IBM’s WebSphere suite of products helps with the various phases of SOA (Model, Assemble, Deploy and Manage).
  • HP SOA Center helps your IT organization effectively adopt SOA and scale from project to enterprise. It provides complete service lifecycle governance capabilities.

3. Enable Multi-Tenancy & Sharing Of Services

Virtualizing the hardware and enabling service-oriented architecture on existing applications will not, by themselves, completely transform an enterprise into a private cloud. The biggest challenge is how these applications can be enabled for multi-tenancy and their services shared across the enterprise’s different lines of business (LOB).

Modern enterprises have been built through several acquisitions and mergers, and each has expanded its business into diversified portfolios, so the business operations of a typical enterprise are served by multiple disparate applications. These individual applications have predominantly common processes; however, there are specific needs that prevent them from using a common application.

For example:

  • Many sales regions (continents) of a large global enterprise may use purchase order and accounts payable applications that follow a common process, but specific tax laws and statutory needs across countries may prevent a common service across regions.
  • A large telecom may provide services for enterprises, retailers and consumers; the processes of ordering, provisioning and billing may follow common steps, yet they differ in individual aspects.

Making SOA-enabled applications transparently adjust to the needs of different lines of business (LOB) by using business rules is one very useful option for enabling multi-tenancy and sharing of services.

Business rules make processes and applications more flexible by enabling business analysts and non-developers to easily define and modify business logic without programming. By defining and maintaining business rules outside of the related process or application and using a separate, more intuitive web-based interface, business rules allow faster, easier rule modifications and reduce subsequent redeployment costs.

A business rule management system (BRMS) enables organizational policies - and the operational decisions associated with those policies, such as claim approvals, cross-sell offer selection, pricing calculations and eligibility determinations - to be defined, deployed, monitored and maintained separately from application code.

From the explanation of business rules above, a business rules engine and its associated tools provide a viable option for enabling multi-tenancy of existing applications and making their services shareable on the path to private clouds; a minimal illustrative sketch follows the list below. Some of the business rules engines that can support private cloud enablement are:

  • WebSphere ILOG Business Rule Management Systems
  • Oracle Business Rules
  • Business Rules Framework as part of Microsoft .NET.
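
To make the idea concrete, here is a tiny, engine-agnostic sketch (not tied to any of the products above; all names are illustrative) of how a shared order-pricing service might delegate the tenant-specific part – the tax calculation – to externally defined rules keyed by line of business or region:

// Engine-agnostic illustration: the shared service is common to all tenants, while the
// tenant/region-specific policy lives in a rule set that can change without redeploying code.
using System;
using System.Collections.Generic;

public interface ITaxRule
{
    decimal Apply(decimal orderTotal);
}

public class PercentageTaxRule : ITaxRule
{
    private readonly decimal rate;
    public PercentageTaxRule(decimal rate) { this.rate = rate; }
    public decimal Apply(decimal orderTotal) { return orderTotal * rate; }
}

public class OrderPricingService
{
    // In a real BRMS these rules would be authored and versioned outside the application;
    // here a dictionary keyed by tenant/region stands in for the rule repository.
    private readonly IDictionary<string, ITaxRule> rulesByTenant;

    public OrderPricingService(IDictionary<string, ITaxRule> rulesByTenant)
    {
        this.rulesByTenant = rulesByTenant;
    }

    public decimal PriceOrder(string tenant, decimal orderTotal)
    {
        ITaxRule rule;
        if (!rulesByTenant.TryGetValue(tenant, out rule))
            throw new ArgumentException("No rule set configured for tenant " + tenant);

        return orderTotal + rule.Apply(orderTotal);   // shared logic + tenant-specific policy
    }
}

A US and a European line of business would register different rule sets against the same service, which is the multi-tenancy effect described above.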

Summary

From the above discussion it is evident that enabling multi-tenancy and service sharing for existing applications is the toughest part for enterprises on their path toward private clouds. Business rules engines provide a viable option for enabling those applications to adapt to the needs of multiple stakeholders.


• Jerry Huang asserted “First of all, Intel Hybrid Cloud is a physical server that is going to be placed on-premise at the customer side” in a preface to his Intel Hybrid Cloud and Cloud Storage post of 1/12/2011 to the Gladinet blog:

image Nowadays, every computing technology has a cloud label and a cloud twist.

Last week, we attended an Intel Hybrid Cloud road show and saw an interesting twist of the local computing with a cloud management – the Intel Hybrid Cloud.

image First of all, Intel Hybrid Cloud is a physical server that is going to be placed on-premise at the customer side. Inside the physical server, there could be multiple virtual machines.

The cloud twist comes from the management services placed on the server and the virtual machines. With these management services, it allows you to manage the server box remotely from Intel web site with metering features for the licenses and other stuff.

There are other cloud twists you can add to the Intel Hybrid Cloud, for example, adding Cloud Storage. With Gladinet’s Cloud Storage Access Suite, it is pretty easy to attach cloud storage solutions to the virtual machines. For example, on the Windows 2008 server VM, you can install Gladinet CloudAFS so it turns the 2008 box into a cloud storage attached file server. You can also install Gladinet Cloud Backup to the Windows Small Business Server so it can do cloud storage based backup of the server.

The Intel Hybrid Server is an interesting concept of applying management capabilities over the cloud. It doesn’t stop you from adding other cloud capabilities to it. With Gladinet, you can add cloud storage capabilities to it however you want to, with hybrid storage (partial cloud and partial local).


<Return to section navigation list> 

Cloud Security and Governance

Graham Calladine explained Windows Azure Platform Security Essentials: Module 3 – Storage Access in a 00:22:35 Channel9 video segment posted on 1/15/2011:

image Graham Calladine, Security Architect with Microsoft Services partners with the Security Talk Series to describe the options for controlling access to information stored in Windows Azure Storage or in SQL Azure.

Related resources:

Check out Windows Azure Subscriptions and:


<Return to section navigation list> 

Cloud Computing Events

The HPC in the Cloud blog reported on 1/14/2011 Global Knowledge to Host Live Microsoft Cloud Computing Webinar on 1/19/2011 at 9 AM PST:

image Global Knowledge … will host a free, live webinar entitled “Microsoft's Cloud Architecture in the Enterprise” at 12 pm EST on Wednesday, January 19, 2011. In the hour-long Microsoft cloud computing webinar, Global Knowledge instructor Craig Brown will discuss the main aspects of the emerging cloud architecture solutions from Microsoft.

Webinar attendees will learn how to derive necessary information such as a subnet number, given certain situations in their own networks. Attendees will examine the application of modern cloud solutions to the original client server designs, and they will take a look at the full cloud designs Microsoft has been developing.

“Private cloud solutions have been part of the enterprise design in one form since the client server architecture has been deployed,” said Brown. “In the webinar, we will examine the hybrid public-private cloud solution designs that allow organizations to transition to the cloud while maintaining existing on-premise investments.”

Webinar: Microsoft's Cloud Architecture in the Enterprise

Live Presentation: Wednesday, January 19, 2011, 12:00–1:00 pm EST

Register for Microsoft's Cloud Architecture in the Enterprise on the Global Knowledge web site. [Link added.] The recorded version will be available after January 24, 2011.

About the Presenter

Craig Brown (MCT, MCSE, MCSA, MCITP, MCDST, CTT+, N+, Security+, Microsoft Voice Specialist) is the practice architect for Microsoft training at Global Knowledge. Craig has been a principal technical writer on the courseware development team at Global Knowledge, creating content on a variety of courseware topics related to cloud computing, certification, Active Directory, Group Policy, security, OCS, Exchange, and Voice Ignite. For the past four years, Craig has been the lead for emerging cloud technologies and the unified communication certification programs, which have provided cross training to telephony professionals with Microsoft cloud designs and telephony integration. Craig is a frequent speaker at conferences, such as Microsoft TechEd, GMIS, and CA World.


<Return to section navigation list> 

Other Cloud Computing Platforms and Services

Lydia Leong continued her series about Gartner’s recent Magic Quadrant on 1/16/2011 with Gartner is NOT dissing Amazon’s cloud:

image I’ve now seen a number of press reports and some related writing, about the Magic Quadrant for Cloud Infrastructure as a Service and Web Hosting, that I feel mischaracterize statements made on the MQ in ways that they were certainly not intended to be taken, and in some cases, mischaracterize the nature of a Magic Quadrant itself. I feel compelled to attempt to make some things explicitly clear. Specifically:

image This is not just a Cloud IaaS MQ. As the title says, this is a Cloud IaaS AND Web Hosting MQ. You should not interpret a vendor who is a Leader in the MQ as necessarily being a “cloud leader”, for instance; they are simply a leader in the context of the overall market, of which we have forecasted 25% of the revenue to be cloud IaaS by the end of 2011. You should look at the execution axis as favoring vendors whose positioning fits well with the immediate, relatively conservative needs of typical Gartner clients, and the vision axis as favoring vendors whose strategy makes them likely to succeed in a more cloud-oriented future.

The Magic Quadrant is not tiered. Specifically, the Challengers quadrant is not “better” than the Visionaries quadrant, nor is the reverse true. Indeed, Visionaries may be better positioned to succeed in the future than Challengers are, since they tend to be companies who have good roadmaps and are evolving quickly. (And Niche Players might be highly focused and fantastic at what they do, note.)

The Magic Quadrant rates relative positions of vendors in an overall market. Importantly, it does not rate products or services; these are only a component of the overall rating (in this particular MQ, about a third). You should never judge a vendor’s position as indicating that it necessarily has a better service, especially with respect to your specific use case. It’s especially important in this particular MQ because most of the vendors are not pure-plays, and their cloud IaaS service might be much better or much worse than their portfolio as a whole.

Strengths and Cautions are statements about a vendor, not the reasons for their rating. The statements are things that we think it is important for a prospective customer to know when they’re thinking about working with this vendor. They are distinct from the criteria scores that underlie the graph. In many cases, the vendor has not lost or gained points specifically for the thing that is called out, but it’s something distinctive, or something readers might not be aware of, or a common misunderstanding from readers, or even just a best practice to point out when dealing with a particular vendor.

At no point do we say that Amazon’s cloud service is unproven. Amazon is positioned as a Visionary, and as a category, Visionaries are typically companies who have a relatively short track record in the evaluated market as a whole (yes, the boilerplate language for the category uses “unproven”). Pure-play cloud vendors are still emerging, which makes this characterization fit pretty well. While Amazon has obviously been at the pure-cloud-IaaS business longer than any other vendor on the Magic Quadrant, they are newcomers to the overall market assessed by the MQ, which is now about 15 years old. MQ Visionaries are pioneering new territory. That shouldn’t be regarded as a bad thing.

We are not “dissing” Amazon. Some writers have been trying to imply that we don’t think much of Amazon’s cloud service. Nowhere does the report state this. The report certainly attempts to present meaningful strengths and cautions for Amazon, as it does for every vendor. Amazon has by far the highest rating on vision. Its execution score is based on its ability to serve the whole host of evaluated enterprise use cases in the Magic Quadrant, which, if you think about it, indicates that Amazon must have scored well on the self-managed IaaS use case, since that is the only one of the three evaluated use cases that are considered by the Magic Quadrant, and Amazon doesn’t serve the other two use cases at all. Use of Amazon or any other vendor should be considered in light of your use case and requirements.

Obviously, there are plenty of people who are interested in understanding more about the thinking and market observations that led to this Magic Quadrant, and possibly some substantial confusion on the part of people who don’t have access to the larger body of research as to Gartner’s views on cloud IaaS and so forth. I’ve blogged a fair amount recently to try to clear up some points of confusion. However, I am mindful of Gartner’s policies for social media use by analysts, and believe that it would be inappropriate for me to methodically blog about market evolution, market segmentation, and the use cases and adoption patterns that we are seeing from our clients — the things that would be necessary in order to fully lay out an explanation of our market view and how it led to this particular MQ. Instead, I should be writing research notes for paying clients, and I intend to do exactly that.

If you are a Gartner client, you are welcome to place an inquiry to discuss the Magic Quadrant and any other related matters; I’m happy to discuss these at length. And do please read the full Magic Quadrant and related research. (Non-clients can only read the non-interactive document.)

As I’ve mentioned previously, I was quite surprised to see Amazon Web Services positioned below seven other IaaS and/or Web hosting providers in the “ability to execute” axis. I’m also of the opinion that “The lady doth protest too much.” (Hamlet Act 3, scene 2, 222–230.)

The way I read the tea leaves, combining IaaS and traditional Web hosting in a single MQ turned out to be inappropriate.


• Chris Czarnecki described Amazon VMImport: A Step Towards Cloud Interoperability in a 1/14/2011 post to the Learning Tree blog:

image Amazon have just announced a new feature for Amazon AWS known as VM Import. What VM import provides is the ability to import virtual machine images from existing corporate environments and run them on Amazon EC2. This then enables organisations to benefit from EC2′s pay per use, elastically scalable Infrastructure as a Service (IaaS).

image Currently, VM Import supports VMware VMDK images for Windows Server 2008 SP2, with plans to provide support for other operating systems and image formats in the near future. There are also plans to make VM Import available from the VMware vSphere console.
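
As a rough illustration, the import is driven from the EC2 API command-line tools; the flags below follow the pattern in Amazon's announcement (bucket, key, and file names are placeholders – check the current AWS documentation before relying on them):

# Upload and convert a local VMDK into a running EC2 instance (placeholder values shown)
ec2-import-instance WinSvr2008-disk1.vmdk -f VMDK -t m1.xlarge -a x86_64 \
    -b my-import-bucket -o $AWS_ACCESS_KEY -w $AWS_SECRET_KEY

# Track the upload and conversion progress
ec2-describe-conversion-tasks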

image What is exciting about this development from Amazon is not only the ability to potentially move existing infrastructure to Amazon EC2, but importantly in a format that is consistent with other cloud providers. A VMWare image can thus be used on EC2 or on any of the VMWare vSphere cloud providers, a list of which can be found here. This is a big step forward in ‘Cloud interoperability’.

One of the major concerns I hear raised every time I teach Learning Tree’s Cloud Computing course is that of vendor lock-in. VM Import now enables organisations to consider using EC2 without the concern of being locked-in to Amazon’s IaaS. In addition, the development of hybrid clouds just became a whole lot more straightforward too. If you would like to know more about Amazon EC2, Cloud computing in general and the way your organisation may benefit from it, why not enrol[l] in the next class.

This feature seems to me to be very similar to Windows Azure’s new VM Role.


• Alex Williams posted a Weekly Poll: Why are Amazon Web Services and Rackspace Popular with Startups? to the ReadWriteCloud on 1/14/2011:

image We've been writing this past week about the overwhelming interest in startups using Amazon Web Services and Rackspace.

Reasons are numerous, but there is a consensus that the plain vanilla offering such as what Amazon EC2 offers makes for lower costs, more productivity and better access to talent pools.

But there is lots more to this story than those factors alone. Google App Engine has been cited with performance issues and there are feature limitations on Azure that make it less flexible for developers. But Heroku is a platform that is popular for startups and there are others such as Engine Yard that are popular for developers.

image And there is the fact that Windows Azure has co[m]e down a lot in price. At PDC late last year they announced an Extra Small Windows Azure Instance priced at 5 cents per compute hour[*] in order to make the process of development, testing and trial easier and more affordable for developers. It's also worth noting the Microsoft BizSpark program, designed to provide software startups the resources they need to build successful companies and connect them with a community of experts.

And Google Apps has made numerous upgrades to improve performance.

What do you think?

Why are Amazon Web Services and Rackspace So Much More Popular with Startups?

* See my Windows Azure Compute Extra-Small VM Beta Now Available in the Cloud Essentials Pack and for General Use post of 1/9/2011 for more details about the lower-cost instances and freemium offers.


See also Alex Williams asked (and partially answered) Is Windows Azure Suitable for Startups? in a 1/14/2011 post to the ReadWriteCloud blog in the Live Windows Azure Apps, APIs, Tools and Test Harnesses section above.


• Bob Cringely posted his 2011 prediction #9: Apple’s Carolina strategy on 1/6/2011 (missed when published):

If you put together my 2011 predictions so far they create a world view of tech culture and business as I see it for the coming year.  Each prediction builds on the others until we get to these last two, which present a couple boffo conclusions, the big question being “What does Apple need with a 500,000 (soon to be one million) square foot data center in rural North Carolina?”

First we have Apple working to kill small hard drives.  We’ll shortly see Apple also killing optical drives in its notebooks. This is to save money, space, and weight, sure, but it is mainly to limit local storage.  We need local storage, but Steve doesn’t want us to have too much or we won’t need his big data center, which is about to open. In that respect Apple likes it that flash storage is still too expensive to have in unlimited amounts.  His various App Stores, too, are intended to change the mechanics of software storage and distribution.

The iPad has no real file system, why is that? Because its intended file system is that late-in-opening Apple data center in Maiden, NC.  Also look for the new MacBooks to take advantage of the same cloud filing system through OS X 10.7 Lion. Steve said the new MacBook air was the future of Apple laptops. But with low capacity solid state drives, where will all the media be stored? The answer, of course, is in North Carolina. Look for updated versions of iWork to also take advantage of the data center for storage and collaboration.

Apple is heading toward a world of thin client computing networked out of the box.  It’s the new MobileMe. Content creation will take place on solid state drive MacBooks/iMacs and content consumption will take place on iPads and iPhones.

This is also their corporate strategy. Apple sees companies abandoning IT departments in favor of simply passing out iPads and MacBooks – already networked out of the box  to secure storage, email, and collaborative services. In typical Apple fashion, this strategy completely does an end-run around the status quo, revolutionizing the way businesses think about computing. Why else would Apple abandon their corporate server (xServe) strategy?

Steve has always seen Apple as a solutions provider, giving customers completely finished functionality. The data center connects subnet functionally to other subnets, extending the reach of Apple computing devices by connecting them to as many subnets as required.


Brian Gracely continued the Apple Data Center story with an Apps Stores coming to your Enterprise - "iTunes IT" - Part II post of 1/16/2011 to his Clouds of Change blog:

Following up on my initial post, it seems that one of the areas that resonated with readers was the idea of "app store" like functionality coming to their businesses, internally. In doing some more research, it appears that this concept is already starting to gain some traction from very large players in the market, such as - Restricted Enterprise App Store (Apple patent via FastCompany), VMware Project Horizon, and Citrix OpenAccess.

Now by no means is this a new idea, as others have written about it and companies like Google and Salesforce.com have been delivering this concept via SaaS offerings for a couple of years. But businesses are often slow to adopt change, so seeing major vendors adopt a model that is closer to enterprises (and the mid-market), giving them a greater level of control over certain elements, is a step in the right direction to effect this type of change. It allows end-users and lines-of-business to leverage new services (internal or external), while continuing to allow a level of control/trust/security for the IT organization.

This brings up some interesting questions for makers of Enterprise software.

  • Who is your customer now and over the next few years if this trend gains traction? 
  • Do you know how to target the end-users and ISVs that will be the consumers and suppliers of these Enterprise App Stores?
  • Are you learning anything from consumer marketing or social media to better target or influence you future customers?
  • Are you ready for communities of user-feedback to potentially have greater influence over future sales than licensing lock-in?
  • Are you continuing to shift your user-experience to work seamlessly with the new types of devices that expect to gain productivity via Enterprise App Stores?



<Return to section navigation list>
