Thursday, March 15, 2012

Windows Azure and Cloud Computing Posts for 3/14/2012+

A compendium of Windows Azure, Service Bus, EAI & EDI, Access Control, Connect, SQL Azure Database, and other cloud-computing articles.


• Updated 3/15/2012 with new articles marked by Neil Mackenzie, Glenn Gailey, Jim O’Neil, Michael Collier, Bruce Kyle, Scott M. Fulton, III, Michael Washam, Wely Lau and Rich Miller

Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections:


Azure Blob, Drive, Table, Queue and Hadoop Service


No significant articles today.


<Return to section navigation list>

SQL Azure Database, Federations and Reporting

No significant articles today.


<Return to section navigation list>

MarketPlace DataMarket, Social Analytics, Big Data and OData

Jim O’Neil reported on 3/15/2012 a .NET Bio Workshop at Cornell (Mar 22-23):

The great thing about working for Microsoft is that there is SO much going on in so many disparate areas. Just this week I learned that we have an open source API and SDK focused on bioinformatics research.

A quick peek at the Programming Guide gives this overview:

Application developers can use .NET Bio Framework to perform a wide range of tasks, including:

  • Import DNA, RNA, or protein sequences from files with a variety of standard data formats, including FASTA, FASTQ, GenBank, GFF, and BED.
    This document focuses on DNA sequences, but you can use similar procedures for the other sequence types (see the sketch after this list).
  • Construct sequences from scratch.
  • Manipulate sequences in various ways, such as adding or removing elements or generating a complement.
  • Analyze sequences using algorithms such as Smith-Waterman and Needleman-Wunsch.
  • Submit sequence data to remote Web sites—such as a Basic Local Alignment Search Tool (BLAST) Web site—for analysis.
  • Output sequence data in any supported file format, regardless of the input format.
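
As a quick taste of the API, here is a minimal sketch of reading sequences from a FASTA file. It is not part of the original post, and it assumes the FastAParser class and ISequence interface behave as described in the Programming Guide (the file path is made up):

using System;
using System.Linq;
using Bio;          // ISequence
using Bio.IO.FastA; // FastAParser

class FastaDemo
{
    static void Main()
    {
        // Parse every sequence in a FASTA file (path is hypothetical)
        var parser = new FastAParser(@"C:\data\sample.fasta");

        foreach (ISequence sequence in parser.Parse())
        {
            // An ISequence enumerates its symbols as bytes
            string symbols = new string(sequence.Select(b => (char)b).ToArray());
            Console.WriteLine("{0}: {1} symbols", sequence.ID, symbols.Length);
        }
    }
}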

Register for .NET Bio Workshop!

OK, so they pretty much lost me at “RNA,” but despite that, I did the quick walkthrough without a hitch and played with the cool WPF visualizations in the Sequence Assembler sample. For those of you in the bioinformatics field, it seems like an awesome tool.

And it gets awesomer… there’s a free two-day workshop being held at Cornell University next Wednesday and Thursday from 9 a.m. – 5 p.m. to enable attendees to build their own bioinformatics applications on Windows.

When: Mar 22-23, 2012 9am - 5 pm

Where: Rm. 655, Rhodes Hall

Register now

The sessions will be a combination of lectures and hands-on labs, so you’re encouraged to bring your laptop with Visual Studio 2010 installed (check out DreamSpark if you’re a student, or just download Visual C# 2010 Express free). A detailed agenda and directions are available at the registration page.

Gene (DNA) sequencing was one of the first big data applications.

NCBI BLAST is a Microsoft Research project that runs on Windows Azure. Their NCBI BLAST on Windows Azure page begins as follows:

Making bioinformatics data more accessible to researchers worldwide

BLAST on Windows Azure enables cloud-based analysis of vast proteomics and genomic data.

Built on Windows Azure, NCBI BLAST on Windows Azure enables researchers to take advantage of the scalability of the Windows Azure platform to perform analysis of vast proteomics and genomic data in the cloud.

BLAST on Windows Azure is a cloud-based implementation of the Basic Local Alignment Search Tool (BLAST) of the National Center for Biotechnology Information (NCBI). BLAST is a suite of programs that is designed to search all available sequence databases for similarities between a protein or DNA query and known sequences. BLAST allows quick matching of near and distant sequence relationships, providing scores that allow the user to distinguish real matches from background hits with a high degree of statistical accuracy. Scientists frequently use such searches to gain insight into the function and biological importance of gene products.

BLAST on Windows Azure extends the power of the BLAST suite of programs by allowing researchers to rent processing time on the Windows Azure cloud platform. The availability of these programs over the cloud allows laboratories, or even individuals, to have large-scale computational resources at their disposal at a very low cost per run. For researchers who don’t have access to large computer resources, this greatly increases the options to analyze their data. They can now undertake more complex analyses or try different approaches that were simply not feasible before. …

Read more.


Glenn Gailey (@ggailey777) began a new series with Running WCF Data Services on Windows 8 Consumer Preview: Part 1 on 3/15/2012:

Windows 8 Desktop

I have a Samsung Slate 7, which is the device that was given to the //Build/ conference attendees to run the developer preview of Windows 8 to best demonstrate the new Metro interface. However, I ended up waiting for the Consumer Preview release to really get going on Win8. To that end, I dedicated one of my laptops, in addition to the Slate, to running this new OS. Overall I find Win8 to be an interesting hybrid of classical Win7 desktop and the new Metro interface, which is awesome for touch on the Slate.


To be honest, the new set of interfaces took a bit of getting used to (especially using a mouse on a laptop), but I jumped right in with the beta release of Visual Studio 11 to start creating some OData apps, and especially Metro apps. I thought that at this point, I would take a short break and report what I have been doing with WCF Data Services on Win8.

In this first post of the series, I will focus on using WCF Data Services on the Win8 desktop. While Win8 has the cool new Metro stuff, it also still runs regular Windows desktop apps. I will save the Metro discussion for the next post.

Consuming OData Feeds

The Windows 8 desktop with the Visual Studio 11 beta is basically the traditional Visual Studio development experience. However, Visual Studio 11 has been redesigned to be a bit more “Metro-ish,” including an entirely new set of black-and-white icons, which honestly took a bit of getting used to. Still, I was able to very easily create a new client application using C# to consume the public sample Northwind data service. Add Service Reference in Visual Studio still worked correctly to generate the desktop client proxy classes based on the .NET Framework 4 version of the WCF Data Services client. Everything worked as you would expect when creating a regular Windows 8 desktop application. Good to go there.
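
As an illustration (not from the original post), a few lines of LINQ against the generated proxy are enough to pull data from the public Northwind sample service; the NorthwindEntities class name below is whatever Add Service Reference generated for you:

using System;
using System.Linq;
// add a using for the namespace that Add Service Reference generated

class Program
{
    static void Main()
    {
        // NorthwindEntities is the DataServiceContext generated by Add Service Reference
        var context = new NorthwindEntities(
            new Uri("http://services.odata.org/Northwind/Northwind.svc/"));

        // The LINQ query is translated into an OData URI by the WCF Data Services client
        var customers = from c in context.Customers
                        where c.Country == "Germany"
                        select c;

        foreach (var customer in customers)
            Console.WriteLine("{0} - {1}", customer.CustomerID, customer.CompanyName);
    }
}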

Creating OData Endpoints

While it may not be the ideal use of a Windows 8 client computer to host an OData endpoint, I wanted to try to create a new WCF Data Services ASP.NET application on this new OS. The first step was to use Visual Studio 11 to create the ubiquitous Northwind service running on IIS Express (the welcome replacement for the development server). The only difficult issue that I found here was trying to get Northwind running on SQL Server 2012; more about that later. Otherwise, creating this OData endpoint was as easy as the quickstart. Up next, the always challenging real IIS deployment.
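
(For reference, the data service class produced by the quickstart is only a few lines; this sketch is not from the original post and assumes the Entity Framework model was named NorthwindEntities.)

using System.Data.Services;
using System.Data.Services.Common;

// Code-behind for Northwind.svc: exposes the Entity Framework model as an OData feed
public class Northwind : DataService<NorthwindEntities>
{
    public static void InitializeService(DataServiceConfiguration config)
    {
        // Read-only access to every entity set; lock this down for anything beyond a demo
        config.SetEntitySetAccessRule("*", EntitySetRights.AllRead);
        config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;
    }
}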

Running a New OData Endpoint on IIS

Getting an ASP.NET-based OData endpoint running in the context of the developer (usually a box admin) using Visual Studio has always been very easy. Perhaps misleadingly so compared to getting the same data service running on IIS. This is why WCF Data Services has had the topic How to: Develop a WCF Data Service Running on IIS. However, this topic is not as useful on Windows 8 because a) web hosting components are not installed by default as they might be in Windows Server and b) the IIS account is different.

The following are the basic steps needed to deploy the OData endpoint to IIS running on a Windows 8 desktop:

  1. Install IIS and the following components/features:
    • ASP.NET 3.5 and 4.5
    • IIS Management Console
    • IIS 6 Management Console
    • WCF Services HTTP Activation (.NETFx 4.5 Advanced Services)

    These were the Windows components that I needed to turn on to get the service running.

  2. Run Visual Studio 11 as administrator—otherwise you can’t create the virtual directory from VS.
  3. Create the WCF Data Service endpoint as a new ASP.NET Web application.
  4. On the Web project page, use IIS for hosting (by unchecking the Express box) and create a virtual directory for the application.
  5. Add a login for the “IIS APPPOOL\DefaultAppPool” account in SQL Server and create a user for this account in the Northwind database (see below), using this TSQL script:
    CREATE LOGIN [IIS APPPOOL\DefaultAppPool] FROM WINDOWS;
    GO  
    
    USE Northwind
    GO
    
    CREATE USER [IIS APPPOOL\DefaultAppPool] 
    FOR LOGIN [IIS APPPOOL\DefaultAppPool] WITH DEFAULT_SCHEMA=[dbo];
    GO
    
    ALTER LOGIN [IIS APPPOOL\DefaultAppPool] 
    WITH DEFAULT_DATABASE=[Northwind]; 
    GO
    
    EXEC sp_addrolemember 'db_datareader', 'IIS APPPOOL\DefaultAppPool'
    GO
    
    EXEC sp_addrolemember 'db_datawriter', 'IIS APPPOOL\DefaultAppPool'
    GO 

At this point, everything should run correctly.

Getting Northwind to Run on SQL Server 2012

As I mentioned, the biggest pain point that I had was trying to get Northwind to run on SQL Server 2012 Express Edition. Apparently, the old Northwind DB .mdf file is not compatible with this latest version of SQL Server. You are supposed to be able to just attach the old Northwnd.mdf file and the server will upgrade it for you, but I was getting errors when I tried this. Instead, I tried the Northwind install script. It also didn’t run because of the deprecation and removal of sp_dboption. Luckily, it was a pretty easy fix to remove those calls and replace them with (what I hope are) equivalent ALTER DATABASE calls. If you are interested, I have attached my updated version of the Northwind install script to this post (it's big ~1MB).

In the next post, I’ll share with you where I am and what I have seen with the Metro-side of the Win8 client.



<Return to section navigation list>

Windows Azure Access Control, Service Bus and Workflow

No significant articles today.


<Return to section navigation list>

Windows Azure VM Role, Virtual Network, Connect, RDP and CDN

No significant articles today.


<Return to section navigation list>

Live Windows Azure Apps, APIs, Tools and Test Harnesses

• Michael Washam (@MWashamMS) reported the availability of Windows Azure PowerShell Cmdlets (v2.2.2) on 3/15/2012:

We have a new release of the Windows Azure PowerShell cmdlets that we hope will make getting started and scripting with the cmdlets a much easier task.
The new release can be downloaded from its CodePlex project site here.

Getting Started Improvements
In 2.2.2 we have added a start menu link that starts a PowerShell session with the Windows Azure cmdlets already loaded. We have also added a Start Here link that shows how to complete the setup and gives a short tour of the capabilities of the Windows Azure PowerShell cmdlets and the release changes.


Subscription Management Improvements
We have taken the subscription management improvements from the 2.2 release and made them much better.

Specifically, we have added the ability to persist your subscription settings into your user profile. This functionality allows you to set the subscription data once and then in new scripts or PowerShell sessions just select the subscription you want to use without the need to specify the subscription ID, Certificate and storage accounts each time.

Code Snippet One: Setting Subscription Data

$subid = "{subscription id}"

$cert = Get-Item cert:\CurrentUser\My\CERTTHUMBPRINTUPPERCASE

# Persisting Subscription Settings

Set-Subscription -SubscriptionName org-sub1 -Certificate $cert -SubscriptionId $subid

# Setting the current subscription to use

Select-Subscription -SubscriptionName org-sub1

Call the Set-Subscription cmdlet with your certificate and subscription ID. Set-Subscription will persist the certificate thumbprint and subscription ID to C:\Users\{username}\AppData\Roaming\Windows Azure PowerShell Cmdlets\DefaultSubscriptionData.xml, associated with the subscription name.

This functionality supports adding multiple subscriptions to your configuration so you can manage each individually within the same script simply by calling Select-Subscription with the subscription name.

Code Snippet Two: Setting the Default Subscription

Set-Subscription -DefaultSubscription org-sub1

Snippet two demonstrates setting the default subscription to use if you do not set one with Select-Subscription.

Code Snippet Three: Associating Storage Accounts with your Subscription

# Associate two storage accounts with the org-sub1 subscription

Set-Subscription -SubscriptionName org-sub1 -StorageAccountName stname1 -StorageAccountKey mystoragekey1

Set-Subscription -SubscriptionName org-sub1 -StorageAccountName stname2 -StorageAccountKey mystoragekey2

# Specify the default storage account to use for the subscription

Set-Subscription -SubscriptionName org-sub1 -DefaultStorageAccount stname1

Snippet three shows that you can associate multiple storage accounts with a single subscription. All it takes to use the correct storage account is to set the default before calling a cmdlet that requires a storage account.

Code Snippet Four: Specifying the Subscription Data File Location

# overriding the default location to save subscription settings

Set-Subscription -SubscriptionName org-sub1 -Certificate $cert -SubscriptionId $subid -SubscriptionDataFile c:\mysubs.xml

# retrieving a list of subscriptions from an alternate location

Get-Subscription -SubscriptionDataFile c:\mysubs.xml

Each of the subscription cmdlets take a -SubscriptionDataFile parameter that allows you to specify which XML file to use for operations.

Code Snippet Five: MISC Subscription Management

# Returns all persisted settings

Get-Subscription

# Removes mysub2 from persisted settings

Remove-Subscription -SubscriptionName org-sub2

# Removing a storage account from your persisted subscription settings

Set-Subscription -SubscriptionName org-sub1 -RemoveStorageAccount stname1

Other Usability Improvements
We have made many of the cmdlets simpler to use by allowing more parameters to be optional with default values.

  • Label parameter is now optional in New-AffinityGroup, Set-AffinityGroup, New-HostedService, New-StorageAccount, New-Deployment and Update-Deployment.
  • Slot parameter is now optional in New-Deployment and Update-Deployment (Production slot is used by default).
  • Name parameter is now optional in New-Deployment (a Globally Unique Identifier value is used by default).

In addition to the defaults we provided some needed fixes to unblock certain scenarios.

  • Get-Deployment now returns $null if no deployment was found in the specified slot (an error was thrown in previous versions).
  • -Package and -Configuration parameters now accept UNC paths in New-Deployment and Update-Deployment.

Breaking Changes
Usability improvements like these did require some sacrifices. Before you download the latest build please review the list below because we have a few breaking changes.

  • -DefaultStorageAccountName and -DefaultStorageAccountKey parameters were removed from Set-Subscription. Instead, when adding multiple accounts to a subscription, each one needs to be added with -StorageAccountName and -StorageAccountKey or -ConnectionString. To set a default storage account, use Set-Subscription –DefaultStorageAccount {account name}.
  • -SubscriptionName is now mandatory in Set-Subscription.
  • In previous releases, the subscription data was not persisted between PowerShell sessions. When importing subscription settings from a publishsettings file downloaded from the management portal, the Import-Subscription cmdlet optionally saved the subscription information to a file that could then be restored using Set-Subscription thereafter. This behavior has changed. Now, imported subscription data is always persisted to the subscription data file and is immediately available in subsequent sessions. Set-Subscription can be used to update these subscription settings or to create additional subscription data sets.
  • Renamed -CertificateToDeploy parameter to -CertToDeploy in Add-Certificate.
  • Renamed -ServiceName parameter to -StorageAccountName in all Storage Service cmdlets (added “ServiceName” as a parameter alias for backward compatibility).

Summary
In the 2.2.2 release we have made a number of fixes such as accepting UNC paths and fixing Get-Deployment to not throw an error on empty slots. We have also substantially improved the getting started experience and how you can manage your Windows Azure subscriptions from PowerShell.
The new release can be downloaded here.


• Wely Lau (@wely_live) described Applying Config Transformation app.config in Windows Azure Worker Role in a 3/14/2012 post:

Background

In many cases, we need to have two different sets of configuration settings (say, one for the development environment and another for the production environment). What we normally do is change the settings one by one manually before deploying to the production server and change them back again for development. This is very annoying, especially when you have many settings.

Web.config transformation is an awesome technique to transform the original web.config into another one with slightly changed settings.

You can find more detail about how to configure and use it in a common ASP.NET project here.

Transforming App.Config with some trick

The bad news is that the technique is only built in for web.config in ASP.NET Web projects, not others like Windows Forms, console apps, etc.!

The good news is we can do some tricks to make it work. The idea is to perform some modifications on its project file as illustrated in this post.

Config Transformation in Windows Azure

Since SDK 1.5 (if I remember correctly), VS Tools for Windows Azure enables us to select service configuration and build configuration.


Service Configuration is essentially configuration for Windows Azure services. You can have two or more different configurations, let say one for local (ServiceConfiguration.Local.cscfg) and another one for cloud environment (ServiceConfiguration.Cloud.cscfg).

Build configuration applies to either your web.config (for a Web Role) or app.config (for a Worker Role). Say, one for debug (Web.Debug.config) and another for release (Web.Release.config).

App.Config in Windows Azure Worker Role

For web.config, it certainly works well. Unfortunately, it isn’t applicable to app.config (Worker Role project). Even if you try to apply the technique above to the App.config inside your Worker Role, it still won’t work.

That is the reason why I am writing this article.

Using SlowCheetah – XML Transforms

The idea is to utilize a Visual Studio add-on, SlowCheetah – XML Transforms, to help us perform the XML transformation. This is an awesome tool (not only for Windows Azure projects) that can help us add and preview transforms on config files. Thanks to JianBo for recommending this tool to me!

How to?

Let’s see how it will be done …

1. Download and install SlowCheetah – XML Transforms. You might need to restart your Visual Studio after the installation.

2. Prepare your Windows Azure Worker Role project. I named my Windows Azure project: WindowsAzureWorkerConfigDemo and my Worker Role: WorkerRole1.


3. Open the app.config file and add the following value:

<?xml version="1.0" encoding="utf-8" ?>
<configuration>
  <appSettings>
    <add key="setting1" value="original"/>
  </appSettings>
    <system.diagnostics>
        <trace>
            <listeners>
                <add type="Microsoft.WindowsAzure.Diagnostics.DiagnosticMonitorTraceListener, Microsoft.WindowsAzure.Diagnostics, Version=1.0.0.0,Culture=neutral,PublicKeyToken=31bf3856ad364e35"
                    name="AzureDiagnostics">
                    <filter type="" />
                </add>
            </listeners>
        </trace>
    </system.diagnostics>
</configuration>

Remember to save the file after adding that value.
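
For what it’s worth, here is how the worker role can read that value back at run time (a minimal sketch, not from the original post; it needs a reference to System.Configuration):

using System.Configuration;
using System.Diagnostics;
using Microsoft.WindowsAzure.ServiceRuntime;

public class WorkerRole : RoleEntryPoint
{
    public override void Run()
    {
        // Reads from WorkerRole1.dll.config, i.e. the (possibly transformed) app.config
        string setting1 = ConfigurationManager.AppSettings["setting1"];
        Trace.WriteLine("setting1 = " + setting1, "Information");

        base.Run();
    }
}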

4. Right-click on app.config and select Add Transform. (This Add Transform menu will only appear if you’ve successfully installed SlowCheetah.) If Visual Studio prompts you for Add Transform Project Import, click on Yes to proceed.


5. You will then see child files (app.Debug.config and app.Release.config) below your app.config.


6. Double-click on the app.Release.config and add the following snippet:

<?xml version="1.0" encoding="utf-8" ?>
<!-- For more information on using transformations
     see the web.config examples at http://go.microsoft.com/fwlink/?LinkId=214134. -->
<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
  <appSettings>
    <add key="setting1" value="new value" xdt:Transform="SetAttributes" xdt:Locator="Match(key)" />
  </appSettings>
</configuration>

As you can see, I’ve changed the value of setting1 to “new value”.

The xdt:Transform="SetAttributes" attribute indicates the action that will be performed. In this case, it sets the attributes of the entry.

The xdt:Locator="Match(key)" attribute indicates the condition under which the transform is applied. In this case, when the “key” attribute matches.

You can refer to this post to see the other possible values for xdt:Transform and xdt:Locator.

Remember to save the file after adding the settings.

7. Now, right-click on the app.Release.config and click on Preview Transform. (Again, this will only appear if SlowCheetah is properly installed.)


8. Now, you can see the comparison between the original app.config and app.Release.config.


9. Right-click your Windows Azure project and click “Unload Project”. Right-click on it again and select Edit [your Windows Azure project].ccproj file.


10. Scroll down to the end of the file and add the following snippet before the closing </Project> tag.

  <Import Project="$(CloudExtensionsDir)Microsoft.WindowsAzure.targets" />
  <Target Name="CopyWorkerRoleConfigurations" BeforeTargets="AfterPackageComputeService">
    <Copy SourceFiles="..\WorkerRole1\bin\$(Configuration)\WorkerRole1.dll.config"
          DestinationFolder="$(IntermediateOutputPath)WorkerRole1" OverwriteReadOnlyFiles="true" />
  </Target>
</Project>

What it does is basically perform a task each time before packaging the Windows Azure service. The task is to copy the WorkerRole1.dll.config file to the IntermediateOutputPath.

Save and close the file. Right-click and select Reload Project again on the Windows Azure project.

11. Alright, we should package it now and see if it really works. To do that, right-click on the Windows Azure project and select Package. Choose Release for the build configuration. Click on Package to package the file.


When Release is selected, we expect the value of “setting1” would be “new value” as we set inside the app.Release.config.

12. Verification

As the service is successfully packaged, you can see two files as usual (one is ServiceConfiguration.Cloud.cscfg and another one is WindowsAzureWorkerConfigDemo.cspkg).

To verify that the correct configuration is included, change the extension of the .cspkg file to .zip and unzip it. Inside the directory, look for the biggest file (it starts with WorkerRole1, since I named my Worker Role project “WorkerRole1”).


Change its extension to .zip and unzip it again. Navigate inside that directory and look for the “approot” directory. You will see the WorkerRole1.dll.config file inside.

13. Open that file and check whether it contains the correct value set in our “release” build.


Mine is correct, how about yours?


Ronnie Hoogerwerf reported “Data Transfer” and “Cloud Numerics” better together in a 3/14/2012 post to the Microsoft Codename “Cloud Numerics” blog:

Check out a blog post from our fellow SQL Azure Lab Microsoft Codename “Data Transfer”! You can now upload data through the Data Transfer service from on premises to Azure and use that data directly from a “Cloud Numerics” application: http://blogs.msdn.com/b/data_transfer/archive/2012/03/12/support-added-for-cloud-numerics-format.aspx:

Microsoft Codename "Cloud Numerics" is a SQL Azure Lab that lets you model and analyze data at scale. Now when you want to upload a file with Microsoft Codename "Data Transfer" to Windows Azure Blob storage for use in Cloud Numerics, you can choose to have the file converted to the Numerics Binary Format. This only applies to CSV and Excel files that contain numerical data ready for analysis with “Cloud Numerics.”

To run the example application in the blog post you will need the implementation of the SequenceReader used in the example. You can download the sample reader from the “Microsoft Numerics” Connect Site (note that you need to sign up to our Connect program first). See links for Sign-up / Download.


Nathan Totten (@ntotten) posted Node.js on Windows Azure News & Updates (March 2012) on 3/14/2012:

Node.js on Windows Azure continues to get better. The Windows Azure SDK for Node.js is already on its third release in just a few short months, the PowerShell Cmdlets for Node.js make deploying to Windows Azure super easy, and improvements to Windows Azure continue to make the platform even more attractive to Node.js developers.

This post is a summary of some of the most recent news and updates around Node.js and Windows Azure.

Windows Azure SDK for Node.js Version 0.5.2

The Windows Azure SDK for Node.js continues to be updated rapidly. The most recent update, version 0.5.2, adds support for Windows Azure Service Bus in addition to numerous bug fixes.

The addition of Windows Azure Service Bus support to the Node.js SDK enables a whole new set of scenarios for Node.js developers on Windows Azure. The Service Bus enables developers to build large decoupled systems through relays and pub/sub queues. For more information on version 0.5.2 of the SDK as well as some examples of using Service Bus with Node.js I would recommend reading this post by Glenn Block.

Reduced Pricing for Windows Azure Compute and Storage

This next bit of news isn’t explicitly for Node.js developers, but it is still big. Recently, we announced that the price for Windows Azure Compute and Storage would be reduced. The most notable reduction is that the Extra-Small Compute instance is now priced at $0.02 per hour. This means that you can run two servers with a 99.95% SLA for only about $30 per month.

Combining two extra-small instances with the power of Node.js makes for some serious computing for a great price. I am working on a few demos that will show what you can do with only 2 extra-small instances using Node.js that I will post shortly, but for now I challenge you to build something and see how much you can accomplish for $30 a month. Remember – if you are a startup you can also use BizSpark to get even more Windows Azure for FREE.

Using Windows Azure Access Control Services with Node.js

The last bit of news is a great post that Matias Woloski did showing how you can use Windows Azure Access Control Services (ACS) with Node.js. The process is really straightforward, so if you are building an application that requires multiple identity providers I recommend giving that a read.

Coming Soon

Keep an eye out here for more posts like these. Additionally, I am starting a series of tutorials on Node.js and Windows Azure that you will see on my blog shortly.

Let me know if you have any questions or feedback.


Liam Cavanagh (@liamca) continued his series with What I Learned Building a Startup on Microsoft Cloud Services: Part 9 – Sending Email Notifications from Windows Azure on 3/14/2012:

I am the founder of a startup called Cotega and also a Microsoft employee within the SQL Azure group where I work as a Program Manager. This is a series of posts where I talk about my experience building a startup outside of Microsoft. I do my best to take my Microsoft hat off and tell both the good parts and the bad parts I experienced using Azure.

A key feature of the Cotega monitoring service is the ability to send email notifications when there are issues with a user’s database. The Windows Azure worker role executes a specified set of jobs against a user’s database and if there are issues such as connection drops or query performance degradation, then the service will send a notification to the administrator.

Currently there are no SMTP servers available within Windows Azure to allow me to send email notifications. Luckily, there are a huge number of third party email services that work really well with Windows Azure and are extremely cheap.

Sending Email Using Free Email Services

To get started, I first built a prototype that would send notifications from my worker roles. To do this I started with Hotmail as the SMTP server. The code to do this is pretty simple and there are a number of good examples on how to do this, such as here and here. These services worked pretty well. Ultimately I decided not to move forward with them given the sending restrictions that free email services like Hotmail, Office365, GMail, Yahoo mail have. For example, many of them limit the number of recipients you can send to within an hour or limit the total number of emails you could send over a specified period of time. I suspected it would be a long time before the Cotega service reached these limits but I preferred to avoid them if possible.
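
For reference, a prototype along those lines only needs System.Net.Mail; this sketch is not from the original post, and the Hotmail SMTP host, port, and credentials are placeholders you would confirm and replace:

using System.Net;
using System.Net.Mail;

public static void SendNotificationViaHotmail(string to, string subject, string body)
{
    // smtp.live.com on port 587 with SSL was the commonly documented Hotmail setup (assumption)
    var client = new SmtpClient("smtp.live.com", 587)
    {
        EnableSsl = true,
        Credentials = new NetworkCredential("your-account@hotmail.com", "your-password")
    };

    using (var message = new MailMessage("your-account@hotmail.com", to, subject, body))
    {
        client.Send(message);
    }
}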

3rd Party Paid Email Services

I really did not want to host my own SMTP service in Windows Azure. In fact, I have heard (but not confirmed) that sending emails using your own SMTP servers can have issues where emails will frequently be bounced back or will be tagged inaccurately as spam. Many third party email services have techniques to minimize this which was very attractive to me. So for these reasons, I started researching other third party email services. Some of the most promising ones that I found were Elastic Email, SendGrid and Amazon SES (Simple Email Service). Each of these is a paid service. I ended up using Amazon SES primarily because I wanted to get an opportunity to learn more about Amazon’s services and their cost was attractive at $0.10 per thousand email messages. The other interesting thing about Amazon SES is that they start you out at a limited number of outbound emails. This limit is quite high at 10,000 over 24 hours and increases as Amazon learns to trust that you are not sending spam. Plus you can request an increase if needed. The dashboard for monitoring your email traffic is pretty nice and allows you to visually see the number of delivered, rejected, bounced, and complaint emails.

After setting up my Amazon SES account, I needed to install the Amazon SDK and include the Amazon.SimpleEmail.Model namespace, which would be deployed as a reference to the worker role. Here is a snippet of code that I used which is based on a sample Amazon provides. If you use it, remember to update the [CODE] sections with your SES keys:

public static Boolean SendEmailSES(String From, String To, String Subject, String Text = null, String HTML = null, String emailReplyTo = null, String returnPath = null)
{
    if (Text != null && HTML != null)
    {
        String from = From;

        List<String> to
            = To
            .Replace(", ", ",")
            .Split(',')
            .ToList();

        Destination destination = new Destination();
        destination.WithToAddresses(to);
        //destination.WithCcAddresses(cc);
        //destination.WithBccAddresses(bcc);

        Content subject = new Content();
        subject.WithCharset("UTF-8");
        subject.WithData(Subject);

        Content html = new Content();
        html.WithCharset("UTF-8");
        html.WithData(HTML);

        Content text = new Content();
        text.WithCharset("UTF-8");
        text.WithData(Text);

        Body body = new Body();
        body.WithHtml(html);
        body.WithText(text);

        Message message = new Message();
        message.WithBody(body);
        message.WithSubject(subject);

        AmazonSimpleEmailService ses = AWSClientFactory.CreateAmazonSimpleEmailServiceClient("[CODE]", "[CODE]");

        SendEmailRequest request = new SendEmailRequest();
        request.WithDestination(destination);
        request.WithMessage(message);
        request.WithSource(from);

        if (emailReplyTo != null)
        {
            List<String> replyto
                = emailReplyTo
                .Replace(", ", ",")
                .Split(',')
                .ToList();

            request.WithReplyToAddresses(replyto);
        }

        if (returnPath != null)
        {
            request.WithReturnPath(returnPath);
        }

        try
        {
            SendEmailResponse response = ses.SendEmail(request);
            SendEmailResult result = response.SendEmailResult;

            Console.WriteLine("Email sent.");
            Console.WriteLine(String.Format("Message ID: {0}",
                result.MessageId));

            return true;
        }
        catch (Exception ex)
        {
            Helper.LogException("Worker - SendEmailSES", ex.Message.ToString());
            return false;
        }
    }

    Console.WriteLine("Specify Text and/or HTML for the email body!");

    return false;
}
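
A hypothetical call from the worker role's notification job might look like this (the addresses and message text are made up for illustration):

// Text and HTML must both be supplied, since the method checks for both
bool sent = SendEmailSES(
    From: "notifications@example.com",
    To: "dba@example.com",
    Subject: "Cotega alert: connection issue detected",
    Text: "Your database dropped several connections in the last hour.",
    HTML: "<p>Your database dropped <b>several</b> connections in the last hour.</p>");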

Brian Swan (@brian_swan) asked What is Microsoft Doing at DrupalCon Denver? in 3/14/2012 post:

Microsoft will be at DrupalCon Denver next week, and I have the good fortune of being one of the Microsoft representatives that will be attending. The program looks great – it’s packed with great speakers and sessions, and there are lots of fun events planned. I’m excited about going for those reasons, but also because I’m curious about how this conference will be different than the last DrupalCon I attended (DrupalCon San Francisco, 2010). At that conference, I was frequently asked “What is Microsoft doing here?” You can read more about that in the post I wrote after the conference, What was Microsoft Doing at DrupalCon? (be sure to read the comments), but suffice it to say that I hope the fact that we will be at a Drupal conference (as a sponsor, no less) isn’t as surprising as it was then. And, because of that post, I’m going to Denver with great interest in the community reaction to us today. Essentially, I said that our commitment to Drupal would (and should) be judged by our continued involvement with and contributions to the community. Now that two years have passed since my last DrupalCon, I hope that our actions do speak to our continued involvement and contribution.

I am, of course, very eager to hear what folks at the conference have to say, but going in, I feel good about our level of commitment. At DrupalCon 2010 in S.F., we announced the beta release of our PDO driver for SQL Server and our engagement with Commerce Guys to build integrated support for SQL Server into Drupal 7. Since then, we have continued to engage the Drupal community by continuing to sponsor and be engaged at subsequent DrupalCons and other Drupal events worldwide, sponsoring and providing guidance for various integrations, tools, training, and much more. I’m sure I’m not capturing everything in this list, but here are some of the results:

Note: I need to point out that the results called out above were, in part, due to the ongoing work of many people at Microsoft over the last couple of years.

What do you think? I’d love to hear your thoughts at the conference or in comments below.

OK, so now let’s talk about DrupalCon Denver. What are we doing there? At a high-level, I hope we are continuing to collaborate with people and support efforts to broaden opportunities for Drupal deployments. Specifically, here are the Microsoft-related events and people to watch for:

  • Monday, March 19, 9:00-6:00 (pre-conference training): Deploying Drupal at Scale on the Microsoft Platform. This training is for anyone “who wants to explore the IIS ecosystem and gain knowledge on optimizing Drupal and PHP on Windows…” This training will be run by Alessandro Pilotti, CEO of Cloudbase Solutions Srl, and will be full of useful technical information. (I’ll be there!)
  • Tuesday, March 20, 3:45-4:45: Lightning Talk: Mobile Essentials for Drupal. Alessandro Pilotti will present Windows Phone: a Platform for Drupal Mobile Apps. During this session, Alessandro will demonstrate how to get the best out of Drupal and the next generation of Windows Phone devices, including some practical Javascript debugging tips.
  • Throughout the conference:
    • Microsoft booth. Stop by, say hi, and ask questions. We’ll have some goodies as long as they last.
    • Giveaways. We’ll be giving away three LG Quantum Windows Phones. One of these will be given away on Tuesday at Alessandro Pilotti’s lightning talk (details above), and the other two will be given away on Wednesday and Thursday via Twitter (look for the #windowsphone, #drupal, and #drupalcon hash tags).
  • The folks from Microsoft who will be at the conference (and who will be happy to answer any questions) are Grace Francisco (@gracefr), Jerry Nixon (@jerrynixon), and myself (@brian_swan). You can read Jerry’s pre-conference thoughts here: Drupal and Microsoft? Yes!

Looking forward to it!


Brian Hitney reported on 3/14/2012 a Webcast: Intro to @home with Windows Azure scheduled for 3/15/2012 at 9:00 AM PDT:

Tomorrow (Thursday, 3/15/2012) at noon ET or 9am PT, we have our first screencast in the @home series: an introduction to the @home distributed computing project!

This is the first in a series where we’ll dive into various aspects of Windows Azure – in this first webcast, we’ll keep it 100 level, discussing the platform, how to get started, and what the project is about. From the abstract page:

In this 100-level webcast, we introduce Windows Azure. We look at signing up a new account, evaluate the offers, and give you a tour of the platform and what it's all about. Throughout this workshop, we use a real-world application that uses Windows Azure compute cycles to contribute back to Stanford's Folding@home distributed computing project. We walk through the application, how it works in a Windows Azure virtual machine and makes use of Windows Azure storage, and deploying and monitoring the solution in the cloud.

If you can’t make this one, be sure to check out the rest in the series by watching the @home website – we’ll be diving deeper into various features as the weeks progress, and we’ll post links to the recordings as they become available.


Himanshu Singh (@himanshuk) recommended that you Watch “@home with Windows Azure” on Channel 9 and Learn About Windows Azure While Contributing to Scientific Research in a 3/14/2012 post:

Does learning about cloud computing while contributing to scientific research sound good to you? Then be sure to check out the new four-part video series, “@home with Windows Azure” on Channel 9. In addition to learning how to build applications for the cloud on Windows Azure, you’ll deploy a solution that will contribute to Stanford University’s Folding @home distributed computing project to study protein folding.

By simply running a piece of software, you can help scientists learn more about diseases such as Alzheimer’s, ALS, Huntington’s, Parkinson’s disease and many cancers by banding together to make one of the largest supercomputers in the world. Every participant takes the project closer to understanding how protein folding is linked to certain diseases.

  • @home with Windows Azure – Part 1 of 4: Getting your Windows Azure 90-day Free Trial Account

  • @home with Windows Azure – Part 2 of 4: Setting up the @home App

  • @home with Windows Azure – Part 3 of 4: Configuring Windows Azure Storage

  • @home with Windows Azure – Part 4 of 4: Deploying to Windows Azure

In addition to contributing directly to this project, Microsoft will also donate $10 per participant (up to $5,000 maximum) to Stanford University to help the cause. You can learn more about the Folding @home project here.

The @home with Windows Azure project is brought to you by Brian Hitney, Jim O'Neil, and Peter Laudati. You can stay in touch with them at US Cloud Connection.


Bruno Terkaly (@brunoterkaly) asked Occasionally Connected Scenarios-How much power is too much power for the client? on 3/14/2012:

Introduction

The biggest challenge in occasionally connected scenarios is empowering the client when they are not connected. Even when the network is down, the client still has to do serious work.

This means the client application will occasionally make mistakes because it doesn’t have the most up-to-date information. Let’s face it – that is just a fact of life – sometimes you make decisions with limited information. The question ultimately becomes, how bold are your decisions and how long has it been since you got the latest information.

Concrete Example-Conference Software

Imagine that you create software to run conferences. This software handles all aspects of running a variety of conference types, handling such things as registration, badge printing, scheduling, calendars, and so on.

For example, how would you handle the situation where the network is down and a conference registrant wants to upgrade their registration and attend the panel discussion?

The client software will need to update the badge to allow entry into the event.

But because the network is down, you inadvertently sell an extra ticket for an event that was already sold out.

You can also call these occasionally connected scenarios.

Questions that you’ll have to answer along the way of developing a solution.


Power to the client!

Assuming the client is disconnected, are you going to allow seat purchases? Isn’t that risky since the event may have sold out? Maybe you decide that if the client connected within the last hour and there were at least 10% of seats left at that time, then it would be OK to sell additional seats while disconnected from the registration database.
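
Such a rule is easy to express in code. Here is a minimal sketch of my own (the names are hypothetical) of the "connected within the last hour and at least 10% of seats remaining" check:

using System;

public class OfflineSalesPolicy
{
    // Thresholds taken from the rule described above
    private static readonly TimeSpan MaxTimeSinceLastSync = TimeSpan.FromHours(1);
    private const double MinimumSeatsRemainingRatio = 0.10;

    public bool CanSellWhileDisconnected(DateTime lastSyncUtc, int seatsRemainingAtSync, int totalSeats)
    {
        bool syncedRecently = (DateTime.UtcNow - lastSyncUtc) <= MaxTimeSinceLastSync;
        bool enoughSeatsLeft = seatsRemainingAtSync >= totalSeats * MinimumSeatsRemainingRatio;
        return syncedRecently && enoughSeatsLeft;
    }
}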


The client will need to queue outgoing events when disconnected

The client will need to update the server after re-connecting. Changes to local data will need to be persisted up to the cloud.

Interestingly, there are some challenging decisions here.


The client will adjust its local store based on incoming events

Two types of Queues - Incoming and Outgoing

But this is only possible when there is a connection. Notice there are queues to support incoming and outgoing events. This event mechanism is used to synchronize clients and servers. You can leverage the Windows Azure Service Bus to implement this pattern. I will do this in a future post.
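
To make the queuing half of the pattern concrete, here is a rough sketch (my own illustration, not from the post); ISendChannel is a placeholder for whatever transport you choose, such as a Service Bus queue, not an actual API:

using System.Collections.Generic;

public class SeatEvent
{
    public string EventId { get; set; }
    public string Type { get; set; }   // "SeatSold" or "SeatAvailable"
}

// Placeholder for the real transport, e.g. a Windows Azure Service Bus queue client
public interface ISendChannel
{
    void Send(SeatEvent seatEvent);
}

public class OutgoingEventQueue
{
    private readonly Queue<SeatEvent> pending = new Queue<SeatEvent>();
    private readonly ISendChannel channel;

    public OutgoingEventQueue(ISendChannel channel)
    {
        this.channel = channel;
    }

    // Called whenever the client sells or releases a seat, connected or not
    public void Enqueue(SeatEvent seatEvent)
    {
        pending.Enqueue(seatEvent);
    }

    // Called when connectivity is restored: drain the local queue to the server
    public void Flush()
    {
        while (pending.Count > 0)
        {
            channel.Send(pending.Dequeue());
        }
    }
}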


Sequence Diagram Exploration

How does the client manage event processing when occasionally disconnected? Get ready to view this in detail in a little while.


We will allow clients to oversell seats at events

The caveat is that there is only a one-hour window in which an oversell can take place. We could even add more rules. For example, we could say that client workstations can sell tickets assuming they have connected within the hour, plus there were at least 10% of seats left last time we checked.


Two types of events

Seat Sold and Seat Available events. These events could be sent by the client or received by the client from the server.


Here is a workflow that demonstrates how seats are sold

Service interruption may occur anywhere in the timeline. Notice that we have a Client A and a Client B.

The sequence diagram is fairly basic.

Client A reconnects and needs to merge events

The key question to ask is: “What will the client do once it realizes it oversold seats to an event?”


The client is allowed one hour to override oversold scenarios

The client may need to notify onsite staff for oversold scenarios.


Lingering Questions

Overselling an event is not an easy problem to resolve. If a mistake is made, it may not be resolvable at all.

Pick your poison – some decisions aren’t easy

There is a discrete number of behaviors the occasionally connected client can pursue once it discovers an event is oversold.


Erin Maloney asserted Microsoft Poised to Lead in Social Business in a 3/14/2012 post:

imageMark Fidelman of harmon.ie wrote an article for Forbes today outlining Microsoft’s current position, in which they are poised for a launch into social business tools and solutions for the enterprise. He argues that Microsoft, with their business technology foothold in the enterprise, is best positioned to become a primary leader in the social business space.

Here are some key highlights from his post:

  • “At least 75% of the organizations surveyed have deployed or plan to implement a social solution to increase information sharing this year. Yet according to Gartner, through 2012, over 70% of these social business initiatives will fail. “
  • “So what does the Social Enterprise look like according to Microsoft? It is mainly comprised of four things: the cloud, social technologies, mobile and analytics. Each helping to power a connected, contextual user experience where people getting things done is the focus – technology is NOT.”
  • “For Microsoft, Azure and Office 365 will power the vision, enabling companies to easily and cost effectively support the next generation knowledge worker.”

I noticed that Fidelman does not outline what specifically Microsoft will now be offering in social business solutions beyond the solutions they’ve already created in cloud, big data and collaboration tools. But his article certainly makes me excited to see what comes next.

FINALLY, Microsoft Embraces Social — And It’s Going to be Big


MarketWire (@MarketWire) asserted “Ariett® ReqNet® Purchase Request to Invoice Solution Is Now Available on Microsoft Windows Azure” in a deck for its Ariett Announces a Complete, New Cloud eProcurement Solution press release of 3/13/2012:

Ariett, a leading provider of purchasing, travel and expense software solutions that helps companies control spending and generate savings, announces today that Ariett ReqNet is available in the Cloud. Ariett ReqNet Software as a Service (SaaS) is the complete requisition to purchase order to invoice solution that is fast to configure and more cost effective than anything available today.

Ariett ReqNet, along with Ariett Travel, Expense and AP Invoice, are built on a common approval workflow model, with vendor and chart of account structures in the Cloud -- making it possible for companies to control spending on a single platform. ReqNet manages the complete purchase cycle from the purchase request, to the approval with audit trail, to creating a purchase order, with electronic documents, and offers the ability to receive goods, services and create and approve the AP Invoice.

"Ariett's eProcurement cloud solution captures critical information to analyze spending across corporate entities, driving down costs with a powerful single solution backed by the Microsoft™ Windows Azure platform," said Glenn Brodie, President of Ariett. "This allows Ariett to provide complete, sophisticated functionality at a low transaction price -- with webservice or API integration to leading financial systems."

ReqNet supports the way you do business with vendor catalogs, electronic documents, and supplier Punch-out that eliminates paper, rekeying of requests and improves operational efficiency. ReqNet's unique 360 transaction view combines the requisition, the PO, the receipt and the invoice with approval audit trail and all electronic documents into a single view.

image"Ariett offers a flexible solution that makes it easy for companies to manage all non-payroll expense transactions, multiple approval workflows, and employee access from one platform," said Brodie. "Companies can rest assured that Ariett provides the scalability, security and functional capabilities, with mobile approval from smart devices, electronic document management and SQL Azure Reporting to meet the most demanding business processes."

Real-time visibility and on-the-fly query, when combined with data analytics through SQL Azure Reporting, provides your team with the tools to eliminate bottlenecks, manage vendors, control potential cost overruns and improve your bottom line. Ariett products have successfully completed the Microsoft ™ Windows Azure compatibility testing.

Ariett ReqNet Purchase Request to Invoice Solution is also available on Premise

Ariett makes the submittal, review and approval process across spending categories and companies easy and painless. From anywhere at any time, Ariett solutions support your business policies and compliance, whether you need to request approval for corporate travel, search for available flights and pricing, re-order supplies, enter your expense report, select your corporate credit card charges or manage your department's expenses, capital expenditures or IT purchases. With options to deploy on premise or through web-service integration for Microsoft Dynamics ERP and API's for leading financial systems, Ariett makes your data integration to Accounts Payable seamless.

About Ariett

Ariett provides Purchasing and Expense software solutions for global companies that automate Requisition and Purchase Management, AP Invoice Automation and Employee Expense Reporting with Pre-Travel Approval and Booking. Ariett solutions offer enterprise class functionality with flexible workflows, analytics and reporting, electronic document management, & corporate credit card integration. Ariett's solutions are offered in the Cloud, or on premise, with API or web-service integration to leading ERP systems or direct integration to Microsoft Dynamics ERP. Visit www.ariett.com for more information.


<Return to section navigation list>

Visual Studio LightSwitch and Entity Framework 4.1+

The Visual Studio LightSwitch Team (@VSLightSwitch) announced LightSwitch Extensibility Toolkit for Visual Studio 11 Beta Released in a 3/14/2012 post:

We've just uploaded the updated version of the Extensibility Toolkit to the MSDN Extensions Gallery: LightSwitch Extensibility Toolkit for VS11

The toolkit provides project types for creating new LightSwitch Extension Libraries that target Visual Studio 11 and includes templates for creating the following LightSwitch extensions: LightSwitch Business Type, LightSwitch Control, LightSwitch Data Source, LightSwitch Screen Template, LightSwitch Shell, and LightSwitch Theme.

Updated versions of the eight extensibility samples have been uploaded to the LightSwitch Extensibility Samples for VS 11 as well.


<Return to section navigation list>

Windows Azure Infrastructure and DevOps

• Neil Mackenzie (@mknz) reviewed Patterns & Practices Books on Windows Azure on 3/14/2012:

Getting Started with Windows Azure

The Windows Azure Platform is Microsoft’s platform-as-a-service (PaaS) cloud service. The best way to start Windows Azure development is to:

  1. Sign up for a free trial of Windows Azure.
  2. Download the Windows Azure SDK.

And now you have to learn how everything works, and that is where the books written by Microsoft’s Patterns & Practices group come into play.

Developing Applications for Windows Azure

I’m a big fan of the Patterns & Practices books. The team is developing a nice set of Windows Azure books that explain various development patterns and practices. The books are all written in the same very breezy style, which makes them an easy read. They are available from the usual online booksellers in either paper or pixel form. Some of them are also available as downloadable PDFs.

The books are usually accompanied by a large-scale functional sample, the design of which is explained in the book. The source code for the sample can be downloaded from the Patterns & Practices Windows Azure Guidance area on Codeplex.

The two original books explain how to perform basic green-field and brown-field development:

A third book describes hybrid applications:

Web-formatted versions of these books are available on the MSDN website.

The most recent book in the series describes how to use the Enterprise Library Integrations Pack for Windows Azure:

  • Building Elastic and Resilient Cloud Applications

Specifically, it describes how to use the Wasabi Autoscaling Application Block and the Transient Fault Handling Application Block. The book provides background information on autoscaling and transient fault handling which makes it useful even if you don’t want to use the Application Blocks. Note that the book is not yet available in your favorite bookstore.

Patterns & Practices has published other books which are not specifically about Windows Azure but which are very relevant to anyone developing on the platform:

These books provide solid introductions to complex topics.

And then, there is always my book: Microsoft Windows Azure Development Cookbook.

And don’t forget Bruce Kyle’s just-completed Windows Azure Best Practices series in the Cloud Security and Governance section below.


Brian Gracely (@bgracely) asked Shouldn't DevOps really be DesignOps? in a 3/14/2012 post:

There is a lot of buzz these days about DevOps, the movement to blur the lines between application development and IT operations. The thinking goes - if there is direct linkage between the functions (or if they are a single group), then how the applications are operated is always top-of-mind and things like security, automation and scalability can be designed-in from Day 1.

But as we move towards a world that is heavily dominated by touch-screen devices (gestures instead of clicks), apps replacing applications (UI + API + Data) and API integration between various services, I'm starting to wonder if the value of the automation begins to get overshadowed if Design isn't considered as the #1 factor.

By "Design", I'm not talking about the application design or the infrastructure design - none of that technical stuff. Not even just the UI experience. What I'm referring to is the end-to-end design that directly effects the experience of the user.

  • How simple do you make it to get the app?
  • How simple do you make it to launch the app, or sign up for the service?
  • How many additional services are implicitly linked in intuitive ways to your app?
  • Does the design get in the way of using the app/service, or does it create an experience where the user immediately has a positive opinion of your group/company?

Signing up for the NetFlix streaming service recently gave me that type of experience. I know in the back-end that Adrian Cockcroft (@adrianco) and their team are doing all sorts of cool DevOps things (part I, part II, part III), but the experience I had was the pure simplicity of getting an account and experiencing the first video. The 1st month is free, but they don't bother getting a credit card up front; that would take time. They ask some "learn about you" questions before subjecting you to a search box. They deliver the same experience across all screens. It's pretty darn close to the simplicity of flopping myself on the couch and clicking a button on the TV remote. I suspect that might have been the goal.

A similar experience can be had by using the Square service for mobile payments. It is extremely complicated on the back-end to connect merchants, shoppers and banks in a way that's simple and yet avoids fraud and other ugliness. Yet the company is guided by the principle that better design can be the differentiation in a crowded, well-established market.

I know it's sacrilegious to mention the word marketing in the same context as something as engineering-driven as DevOps, so ignore that previous mention, but I do believe that the DevOps groups that figure out how to embrace customer-centric design will be the models of success. It's more than just a nice UI, it's thinking about how to deliver anything-as-a-service with great experience, which doesn't just happen because the Chef is leading the Puppets by pulling all the right API strings.


Julia Talevski (@JuliaTalevski) asserted “The cloud platform will offer applications and services such as Exchange, SQL, SharePoint as well as storage hosting capabilities” in a deck for her Australia gets Azure cloud in April article of 2/24/2012 for ARN (missed when published):

Australians will get a taste of Microsoft’s own cloud platform, Azure, in April.

It will offer applications and services such as Exchange, SQL, SharePoint and storage hosting capabilities. To help boost its launch, the software giant is offering five promotions to encourage partners to join up. Incentives include an introductory special, where companies can try out the platform at no charge and access a base level of compute hours, storage, data transfers, a SQL Azure database and .NET Services messages.


The Development Accelerator Core promotion, which aims to support partners developing on top of the Azure platform, provides similar tools, but partners have to commit to it for six months, which attracts a discounted monthly price. Azure will also be offered under a pay-as-you-go pricing plan (Consumption offer). Partners will receive a 5 per cent discount across all offerings.

The cost of using the cloud service has not been revealed for the Australian market, but Microsoft senior director for the Azure product team, Diane O’Brien, said local prices would be similar to what is offered in the US.

For example, storage is charged at $US0.15 per gigabyte per month or $US0.10 for 10,000 storage transactions. The Web version of SQL databases will cost $US9.99 per month and the business version is priced at $US99.99 per month.

Microsoft has six datacentres set up for Azure across Hong Kong, Singapore, Europe and the US. Whether Microsoft will have its own datacentre in Australia for local users to store their data on remains to be seen.

So far companies like Adslot, Ajiliti, Joomla, Dataract, Avanade, Fujitsu Australia, MYOB, Object Consulting and Soul Solutions have become early adopters of Azure.


<Return to section navigation list>

Windows Azure Platform Appliance (WAPA), Hyper-V and Private/Hybrid Clouds

No significant articles today.


<Return to section navigation list>

Cloud Security and Governance

• Bruce Kyle completed his Windows Azure Security series with Windows Azure Security Best Practices – Part 7: Tips, Tools, Coding Best Practices on 3/15/2012:

While writing the series of posts, I kept running into more best practices. So here are a few more items you should consider in securing your Windows Azure application.

Here are some tools, coding tips, and best practices:

  • Running on the Operating System
    • Getting the latest security patches
    • If you can, run in partial trust
  • Error handling
    • How to implement your retry logic
    • Logging errors in Windows Azure
  • Access to Azure Storage
    • Access to Blobs
    • Storing your connection string
    • Gatekeeper patterns
    • Rotating your storage keys
    • Crypto services for your data security
Running on the Operating System
Get the Latest Security Patches

When creating a new application with Visual Studio the default behavior is to set the Guest OS version like this in the ServiceConfiguration.cscfg file:

osFamily="1" osVersion="*"

This is good because you will get automatic updates, which is one of the key benefits of PaaS. It is less than optimal because you are not using the latest OS. In order to use the latest OS version (Windows Server 2008 R2), the setting should look like this:

osFamily="2" osVersion="*"

Many customers unfortunately decide to lock to a particular version of the OS in the hopes of increasing uptime by avoiding guest OS updates. This is only a reasonable strategy for enterprise customers that systematically test each update in staging and then schedule a VIP Swap to their mission-critical application running in production. For everyone else that does not test each guest OS update, not configuring automatic updates is putting your Windows Azure application at risk.

-- from Troubleshooting Best Practices for Developing Windows Azure Applications

If You Can, Run in Partial Trust

By default, roles deployed to Windows Azure run under full trust. You need full trust if you are invoking non-.NET code, using .NET libraries that require full trust, or doing anything that requires admin rights. Restricting your code to run in partial trust means that anyone who might have access to your code is more limited in what they can do.

If your web application gets compromised in some way, using partial trust will limit your attacker in the amount of damage he can do. For example, a malicious attacker couldn’t modify any of your ASP.NET pages on disk by default, or change any of the system binaries.

Because the user account is not an administrator on the virtual machine, using partial trust adds even further restrictions than those imposed by Windows. This trust level is enforced by .NET’s Code Access Security (CAS) support.

Partial trust is similar to the “medium trust” level in .NET. Access is granted only to certain resources and operations. In general, your code is allowed only to connect to external IP addresses over TCP, and is limited to accessing files and folders only in its “local store,” as opposed to any location on the system. Any libraries that your code uses must either work in partial trust or be specially marked with an “allow partially trusted callers” attribute.
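
As a concrete illustration (my own minimal sketch, not from Bruce's post), one of your own class libraries can opt in to being callable from partial-trust code with the assembly-level attribute:

    // C# sketch: mark your own class library as callable from partially trusted code.
    // Add this to AssemblyInfo.cs (or any source file) of the library project.
    using System.Security;

    [assembly: AllowPartiallyTrustedCallers]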

You can explicitly configure the trust level for a role within the service definition file. The service definition schema provides an enableNativeCodeExecution attribute on the WebRole element and the WorkerRole element. To run your role under partial trust, you must add the enableNativeCodeExecution attribute on the WebRole or WorkerRole element and set it to false.

But partial trust does restrict what your application can do. Several useful libraries (such as those used for accessing the registry or accessing a well-known file location) don’t work in such an environment, sometimes for trivial reasons. Even some of Microsoft’s own frameworks don’t work in this environment because they don’t have the “partially trusted caller” attribute set.

See Windows Azure Partial Trust Policy Reference for information about what you get when you run in partial trust.

Handling Errors

Windows Azure automatically heals itself, but can your application?

Retry Logic

Transient faults are errors that occur because of some temporary condition such as network connectivity issues or service unavailability. Typically, if you retry the operation that resulted in a transient error a short time later, you find that the error has disappeared.

Different services can have different transient faults, and different applications require different fault handling strategies.

While it may not appear to be security related, it is a best practice to build retry logic into your application.

Azure Storage

The Windows Azure Storage Client Library that ships with the SDK already has retry behavior that you need to switch on. You can set this on any storage client by setting the RetryPolicy Property.
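
For example, here is a minimal C# sketch (mine, not from Bruce's post) of switching on an exponential retry policy with the 1.x Storage Client Library; the development storage account and the three-attempt, two-second values are illustrative only:

    // C# sketch: enable retries on a blob client with the Windows Azure Storage Client Library (SDK 1.x).
    using System;
    using Microsoft.WindowsAzure;
    using Microsoft.WindowsAzure.StorageClient;

    public static class BlobClientFactory
    {
        public static CloudBlobClient CreateClientWithRetries()
        {
            // Development storage keeps the sketch self-contained; use your own account in production.
            CloudStorageAccount account = CloudStorageAccount.DevelopmentStorageAccount;

            CloudBlobClient blobClient = account.CreateCloudBlobClient();

            // Retry failed requests up to 3 times with an exponential back-off around 2 seconds.
            blobClient.RetryPolicy = RetryPolicies.RetryExponential(3, TimeSpan.FromSeconds(2));

            return blobClient;
        }
    }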

SQL, Service Bus, Cache, and Azure Storage

But SQL Azure doesn’t provide a default retry mechanism out of the box, since it uses the SQL Server client libraries. Neither does Service Bus provide a retry mechanism.

So the Microsoft patterns & practices team and the Windows Azure Customer Advisory Team developed the Transient Fault Handling Application Block. The block provides a number of ways to handle specific SQL Azure, Storage, Service Bus and Cache conditions.

The Transient Fault Handling Application Block encapsulates information about the transient faults that can occur when you use the following Windows Azure services in your application:

  • SQL Azure
  • Windows Azure Service Bus
  • Windows Azure Storage
  • Windows Azure Caching Service

The block now includes enhanced configuration support and enhanced support for wrapping asynchronous calls, integrates the block's retry strategies with the Windows Azure Storage retry mechanism, and works with the Enterprise Library dependency injection container.
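
A minimal C# sketch of the block in use against SQL Azure follows (my example, not from the book or the block's documentation; namespace names vary slightly between releases of the block, and the connection string and query are placeholders):

    // C# sketch: retry a SQL Azure command with the Transient Fault Handling Application Block.
    using System;
    using System.Data.SqlClient;
    using Microsoft.Practices.TransientFaultHandling;
    using Microsoft.Practices.EnterpriseLibrary.WindowsAzure.TransientFaultHandling;

    public static class OrderRepository
    {
        public static int CountOrders(string connectionString)
        {
            // Retry up to 5 times, waiting 2 seconds between attempts.
            var retryStrategy = new FixedInterval(5, TimeSpan.FromSeconds(2));
            var retryPolicy = new RetryPolicy<SqlAzureTransientErrorDetectionStrategy>(retryStrategy);

            return retryPolicy.ExecuteAction(() =>
            {
                using (var connection = new SqlConnection(connectionString))
                using (var command = new SqlCommand("SELECT COUNT(*) FROM Orders", connection))
                {
                    connection.Open();
                    return (int)command.ExecuteScalar();
                }
            });
        }
    }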

Catch Your Errors

Unfortunately, systems fail, and Windows Azure is designed with failure in mind. Even with retry logic, you will occasionally experience a failure. You can add your own custom error handling to your ASP.NET Web applications. Custom error handling can ease debugging and improve customer satisfaction.

Eli Robillard, a member of the Microsoft MVP program, shows in his article Rich Custom Error Handling with ASP.NET how you can create an error-handling mechanism that shows a friendly face to customers and still provides the detailed technical information developers will need.

If an error page is displayed, it should serve both developers and end-users without sacrificing aesthetics. An ideal error page maintains the look and feel of the site, offers the ability to provide detailed errors to internal developers—identified by IP address—and at the same time offers no detail to end users. Instead, it gets them back to what they were seeking—easily and without confusion. The site administrator should be able to review errors encountered either by e-mail or in the server logs, and optionally be able to receive feedback from users who run into trouble.
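
As one hedged illustration of the idea (mine, not Eli's code), a Global.asax handler can log the exception and send users to a friendly page:

    // C# sketch: minimal Application_Error handler in Global.asax.cs.
    using System;
    using System.Diagnostics;
    using System.Web;

    public class Global : HttpApplication
    {
        protected void Application_Error(object sender, EventArgs e)
        {
            Exception error = Server.GetLastError();

            // Trace output can be routed to Windows Azure Diagnostics (table storage).
            Trace.TraceError(error.ToString());

            // Show end users a friendly page instead of the default error screen.
            Server.ClearError();
            Response.Redirect("~/Error.aspx");   // placeholder page name
        }
    }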

Logging Errors in Windows Azure

ELMAH (Error Logging Modules and Handlers) itself is extremely useful, and with a few simple modifications can provide a very effective way to handle application-wide error logging for your ASP.NET web applications. My colleague Wade Wegner describes the steps he recommends in his post Using ELMAH in Windows Azure with Table Storage.

Once ELMAH has been dropped into a running web application and configured appropriately, you get the following facilities without changing a single line of your code:

  • Logging of nearly all unhandled exceptions.
  • A web page to remotely view the entire log of recorded exceptions.
  • A web page to remotely view the full details of any one logged exception, including colored stack traces.
  • In many cases, you can review the original yellow screen of death that ASP.NET generated for a given exception, even with customErrors mode turned off.
  • An e-mail notification of each error at the time it occurs.
  • An RSS feed of the last 15 errors from the log.

To learn more about ELMAH, see the MSDN article Using HTTP Modules and Handlers to Create Pluggable ASP.NET Components by Scott Mitchell and Atif Aziz. And see the ELMAH project page.
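
ELMAH also lets you record exceptions you have already handled. Here is a hedged C# sketch (the page, button handler, and ProcessOrder method are placeholders of mine, not from Wade's post):

    // C# sketch: signal a handled exception to ELMAH so it lands in the configured error log.
    using System;
    using Elmah;

    public partial class CheckoutPage : System.Web.UI.Page
    {
        protected void SubmitOrder_Click(object sender, EventArgs e)
        {
            try
            {
                ProcessOrder();   // placeholder for your own business logic
            }
            catch (Exception ex)
            {
                // Record the exception without letting it surface as an unhandled error.
                ErrorSignal.FromCurrentContext().Raise(ex);
                Response.Redirect("~/OrderError.aspx");   // placeholder friendly page
            }
        }

        private void ProcessOrder()
        {
            // ...
        }
    }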

Accessing Your Errors Remotely

There are a number of scenarios where it is useful to have the ability to manage your Windows Azure storage accounts remotely. For example, during development and testing, you might want to be able to examine the contents of your tables, queues, and blobs to verify that your application is behaving as expected. You may also need to insert test data directly into your storage.

In a production environment, you may need to examine the contents of your application's storage during troubleshooting or view diagnostic data that you have persisted. You may also want to download your diagnostic data for offline analysis and to be able to delete stored log files to reduce your storage costs.

A web search will reveal a growing number of third-party tools that can fulfill these roles. See Windows Azure Storage Management Tools for some useful tools.

Access to Storage
Keys

One thing to note right away is that no application should ever use any of the keys provided by Windows Azure as keys to encrypt data. An example would be the keys provided by Windows Azure for the storage service. These keys are configured to allow for easy rotation for security purposes or if they are compromised for any reason. In other words, they may not be there in the future, and may be too widely distributed.

Rotate Your Keys

When you create a storage account, your account is assigned two 256-bit account keys. One of these two keys must be specified in a header that is part of the HTTP(S) request. Having two keys allows for key rotation in order to maintain good security on your data. Typically, your applications would use one of the keys to access your data. Then, after a period of time (determined by you), you have your applications switch over to using the second key. Once you know your applications are using the second key, you retire the first key and then generate a new key. Using the two keys this way allows your applications access to the data without incurring any downtime.

See How to View, Copy, and Regenerate Access Keys for a Windows Azure Storage Account to learn how to view and copy access keys for a Windows Azure storage account, and to perform a rolling regeneration of the primary and secondary access keys.

Restricting Access to Blobs

By default, a storage container and any blobs within it may be accessed only by the owner of the storage account. If you want to give anonymous users read permissions to a container and its blobs, you can set the container permissions to allow public access. Anonymous users can read blobs within a publicly accessible container without authenticating the request. See Restricting Access to Containers and Blobs.
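
In C#, locking a container down (or opening it up) is a one-call change with the 1.x Storage Client Library; a minimal sketch of mine, with the container name as a placeholder:

    // C# sketch: make a blob container private so only authenticated requests can read it.
    using Microsoft.WindowsAzure.StorageClient;

    public static class ContainerSetup
    {
        public static void MakeContainerPrivate(CloudBlobClient blobClient)
        {
            CloudBlobContainer container = blobClient.GetContainerReference("invoices");
            container.CreateIfNotExist();

            // Off = no anonymous access; switch to Blob or Container for public read access.
            container.SetPermissions(new BlobContainerPermissions
            {
                PublicAccess = BlobContainerPublicAccessType.Off
            });
        }
    }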

A Shared Access Signature is a URL that grants access rights to containers and blobs. A Shared Access Signature grants access to the Blob service resources specified by the URL's granted permissions. Care must be taken when using Shared Access Signatures in certain scenarios with HTTP requests, since HTTP requests disclose the full URL in clear text over the Internet.

By specifying a Shared Access Signature, you can grant users who have the URL access to a specific blob or to any blob within a specified container for a specified period of time. You can also specify what operations can be performed on a blob that's accessed via a Shared Access Signature. Supported operations include:

  • Reading and writing blob content, block lists, properties, and metadata
  • Deleting a blob
  • Leasing a blob
  • Creating a snapshot of a blob
  • Listing the blobs within a container

Both block blobs and page blobs support Shared Access Signatures.

If a Shared Access Signature has rights that are not intended for the general public, then its access policy should be constructed with the least rights necessary. In addition, a Shared Access Signature should be distributed securely to intended users using HTTPS communication, should be associated with a container-level access policy for the purpose of revocation, and should specify the shortest possible lifetime for the signature.

See Creating a Shared Access Signature and Using a Shared Access Signature (REST API).
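
Pulling those recommendations together, here is a hedged C# sketch (my example, Storage Client Library 1.x) that issues a short-lived, read-only Shared Access Signature backed by a container-level access policy so it can be revoked:

    // C# sketch: issue a revocable, short-lived, read-only Shared Access Signature.
    using System;
    using Microsoft.WindowsAzure.StorageClient;

    public static class SasIssuer
    {
        public static string GetReadOnlyBlobUrl(CloudBlobContainer container, string blobName)
        {
            // 1. Register a container-level policy; removing it later revokes every SAS that references it.
            BlobContainerPermissions permissions = container.GetPermissions();
            permissions.SharedAccessPolicies.Clear();
            permissions.SharedAccessPolicies.Add("read-15min", new SharedAccessPolicy
            {
                Permissions = SharedAccessPermissions.Read,
                SharedAccessExpiryTime = DateTime.UtcNow.AddMinutes(15)
            });
            container.SetPermissions(permissions);

            // 2. Issue a SAS that references the policy instead of embedding its own rights.
            CloudBlob blob = container.GetBlobReference(blobName);
            string sasToken = blob.GetSharedAccessSignature(new SharedAccessPolicy(), "read-15min");

            // Distribute this URL over HTTPS only.
            return blob.Uri.AbsoluteUri + sasToken;
        }
    }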

Storing the Connection String

If you have a hosted service that uses the Windows Azure Storage Client Library to access your Windows Azure Storage account, it is recommended that you store your connection string in the service configuration file. Storing the connection string in the service configuration file allows a deployed service to respond to changes in the configuration without redeploying the application.

Examples of when this is beneficial are:

  • Testing – If you use a test account while you have your application deployed to the staging environment and must switch it over to the live account when you move the application to the production environment.
  • Security – If you must rotate the keys for your storage account due to the key in use being compromised.

For more information on configuring the connection string, see Configuring Connection Strings.

For more information about using the connection strings, see Reading Configuration Settings for the Storage Client Library and Handling Changed Settings.
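
A common way to wire this up in SDK 1.x is a minimal sketch like the following (mine, not from the linked topics); the setting name "DataConnectionString" is a placeholder:

    // C# sketch: read the storage connection string from ServiceConfiguration.cscfg and
    // pick up changes (for example, after a key rotation) without redeploying.
    using Microsoft.WindowsAzure;
    using Microsoft.WindowsAzure.ServiceRuntime;

    public class WebRole : RoleEntryPoint
    {
        public override bool OnStart()
        {
            CloudStorageAccount.SetConfigurationSettingPublisher((configName, configSetter) =>
            {
                // Read the current value from the service configuration.
                configSetter(RoleEnvironment.GetConfigurationSettingValue(configName));

                // Re-read the value whenever the configuration is updated in the portal.
                RoleEnvironment.Changed += (sender, args) =>
                    configSetter(RoleEnvironment.GetConfigurationSettingValue(configName));
            });

            // Elsewhere in the role: CloudStorageAccount.FromConfigurationSetting("DataConnectionString")
            return base.OnStart();
        }
    }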

Gatekeeper Design Pattern

A Gatekeeper is a design pattern in which access to storage is brokered so as to minimize the attack surface of privileged roles by limiting their interaction to communication over private internal channels and only to other web/worker roles.

The pattern is explained in the paper Security Best Practices For Developing Windows Azure Applications from Microsoft Download.

These roles are deployed on separate VMs.

In the event of a successful attack on a web role, privileged key material is not compromised. The pattern can best be illustrated by the following example which uses two roles:

  • The GateKeeper – A Web role that services requests from the Internet. Since these requests are potentially malicious, the Gatekeeper is not trusted with any duties other than validating the input it receives. The GateKeeper is implemented in managed code and runs with Windows Azure Partial Trust. The service configuration settings for this role do not contain any Shared Key information for use with Windows Azure Storage.
  • The KeyMaster – A privileged backend worker role that only takes inputs from the Gatekeeper and does so over a secured channel (an internal endpoint, or queue storage – either of which can be secured with HTTPS). The KeyMaster handles storage requests fed to it by the GateKeeper, and assumes that the requests have been sanitized to some degree. The KeyMaster, as the name implies, is configured with Windows Azure Storage account information from the service configuration to enable retrieval of data from Blob or Table storage. Data can then be relayed back to the requesting client. Nothing about this design requires Full Trust or Native Code, but it offers the flexibility of running the KeyMaster in a higher privilege level if necessary.
Multiple Keys

In scenarios where a partial-trust Gatekeeper cannot be placed in front of a full-trust role, a multi-key design pattern can be used to protect trusted storage data. An example case of this scenario might be when a PHP web role is acting as a front-end web role, and placing a partial trust Gatekeeper in front of it may degrade performance to an unacceptable level.

The multi-key design pattern has some advantages over the Gatekeeper/KeyMaster pattern:

  • Providing separation of duty for storage accounts. In the event of Web Role A’s compromise, only the untrusted storage account and associated key are lost.
  • No internal service endpoints need to be specified. Multiple storage accounts are used instead.
  • Windows Azure Partial Trust is not required for the externally-facing untrusted web role. Since PHP does not support partial trust, the Gatekeeper configuration is not an option for PHP hosting.

See Security Best Practices For Developing Windows Azure Applications from Microsoft Download.

Crypto Services

You can use encryption to help secure application-layer data. Cryptographic Service Providers (CSPs) are implementations of cryptographic standards, algorithms and functions presented in a system program interface.

An excellent article that will provide you insight into how you can provide these kinds of services in your application is by Jonathan Wiggs in his MSDN Magazine article, Crypto Services and Data Security in Windows Azure. He explains, “A consistent recommendation is to never create your own or use a proprietary encryption algorithm. The algorithms provided in the .NET CSPs are proven, tested and have many years of exposure to back them up.”

There are many you can choose from among the providers Microsoft supplies in the .NET Framework’s System.Security.Cryptography namespace.

Key Storage

The data security provided by encrypting data is only as secure as the keys used, and this problem is much more difficult than people may think at first. You should not use your Azure Storage keys as encryption keys. Instead, you can create your own using the providers described in the previous section.

Storing your own key library within the Windows Azure Storage services is a good way to persist some secret information since you can rely on this data being secure in the multi-tenant environment and secured by your own storage keys. This is different from using storage keys as your cryptography keys. Instead, you could use the storage service keys to access a key library as you would any other stored file.

Key Management

To start, always assume that the processes you’re using to decrypt, encrypt and secure data are well-known to any attacker. With that in mind, make sure you cycle your keys on a regular basis and keep them secure. Give them only to the people who must make use of them and restrict your exposure to keys getting outside of your control.

Cleanup

And while you are using keys, it is recommended that such data be stored in buffers such as byte arrays. That way, as soon as you’re done with the information, you can overwrite the buffer with zeroes or any other data that ensures the data is no longer in that memory.
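
To make that concrete, here is a hedged C# sketch (mine, not Jonathan's code) that encrypts a payload with a .NET crypto service provider and then scrubs the key buffer; key generation, storage, and rotation are out of scope here:

    // C# sketch: encrypt a byte array with AES and clear the key material when finished.
    using System;
    using System.IO;
    using System.Security.Cryptography;

    public static class PayloadProtector
    {
        public static byte[] Encrypt(byte[] plaintext, byte[] key, out byte[] iv)
        {
            using (var aes = new AesCryptoServiceProvider())
            {
                aes.Key = key;       // key must be a legal AES size, e.g. 32 bytes
                aes.GenerateIV();
                iv = aes.IV;

                using (ICryptoTransform encryptor = aes.CreateEncryptor())
                using (var output = new MemoryStream())
                using (var cryptoStream = new CryptoStream(output, encryptor, CryptoStreamMode.Write))
                {
                    cryptoStream.Write(plaintext, 0, plaintext.Length);
                    cryptoStream.FlushFinalBlock();
                    return output.ToArray();
                }
            }
        }

        public static void ScrubKey(byte[] key)
        {
            // Overwrite the buffer so the key no longer lingers in memory.
            Array.Clear(key, 0, key.Length);
        }
    }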

Again, Jonathan’s article Crypto Services and Data Security in Windows Azure is a great place to study and learn how all the pieces fit together.

Summary

Security is not something that can be added on as the last step in your development process. Rather, it should be made part of your ongoing development process. And you should make security decisions based on the needs of your own application.

Have a methodology where every part of your application development cycle considers security. Look for places in your architecture and code where someone might have access to your data.

Windows Azure makes security a shared responsibility. With Platform as a Service, you can focus on your application and your own security needs in deeper ways than before.

In a series of blog posts, I provided you a look into how you can secure your application in Windows Azure. This seven-part series described the threats, how you can respond, what processes you can put into place for the lifecycle of your application, and prescribed a way for you to implement best practices around the requirements of your application.

I also showed ways for you to incorporate user identity and some of the services Azure provides that will enable your users to access your cloud applications in new ways.

Here are links to the articles in this series:


Bruce Kyle continued his Azure Security series with Windows Azure Security Best Practices – Part 6: How Azure Services Extends Your App Security on 3/14/2012:

Several Windows Azure services help you extend your application security into the cloud.

Three services can help you in providing identity mapping between various providers, connections between an on-premises data center and the cloud, and abilities for applications (wherever they reside) to send messages to each other:

  • With Windows Azure Active Directory you can create single sign-on applications with authentication brokered by an application residing in the cloud. Using the Access Control Service feature, you can map identities from various providers to the claims that your application understands.
  • Windows Azure Connect uses industry-standard end-to-end IPSEC protocol to establish secure connections between on-premise machines and roles in the cloud. This allows you to connect to your cloud app as if it were inside the firewall.
  • With Service Bus you can use secure messaging and relay capabilities to enable your distributed and loosely-coupled applications.
Windows Azure Active Directory

Windows Azure Active Directory is a cloud service that provides identity and access capabilities for applications on Windows Azure and Microsoft Office 365. Windows Azure Active Directory is the multi-tenant cloud service on which Microsoft Office 365 relies for its identity infrastructure.

Windows Azure Active Directory utilizes the enterprise-grade quality and proven capabilities of Active Directory, so you can bring your applications to the cloud easily. You can enable single sign-on, security-enhanced applications, and simple interoperability with existing Active Directory deployments using Access Control Service (ACS), a feature of Windows Azure Active Directory.

Access Control Services

Access Control Service (ACS) allows you to integrate single sign on (SSO) and centralized authorization into your web applications. It works with most modern platforms, and integrates with both web and enterprise identity providers.

ACS is a cloud-based service that provides an easy way of authenticating and authorizing users to gain access to your web applications and services while allowing the features of authentication and authorization to be factored out of your code. Instead of implementing an authentication system with user accounts that are specific to your application, you can let ACS orchestrate the authentication and much of the authorization of your users. ACS integrates with standards-based identity providers, including enterprise directories such as Active Directory, and web identities such as Windows Live ID, Google, Yahoo!, and Facebook.

Access Control Service is a key part of building out a single sign on strategy for applications that use claims.

ACS enables authorization decisions to be pulled out of the application and into a set of declarative rules that can transform incoming security claims into claims that applications and services understand. These rules are defined by using a simple and familiar programming model, resulting in cleaner code.

ACS can also be used to manage client permissions, thus saving the effort and complexity of developing these capabilities.

ACS v2 Web Scenario and Solution

In the scenario shown in the diagram above, an end user is using a browser to access the application. The browser accepts credentials from various identity providers – your user can log into your application using Windows Live ID, Google, Yahoo!, Facebook, or your customer’s Active Directory. Once it gets the token from the identity provider, ACS transforms the token using rules you provide. For example, the identity provider can pass through the email and you can change the email in the token to a claim named “electronicmail” if you so desire.

The application trusts ACS to provide the claims in a manner that the application understands.

The following diagram shows the steps between each of the parts of a Web application. A Web Services application is similar.

Your application is shown as the relying party.

[Diagram: the message flow between the browser, the identity provider, ACS, and the relying party application]

ACS is compatible with most popular programming and runtime environments, and supports many protocols including Open Authorization (OAuth), OpenID, WS-Federation, and WS-Trust.

The following features are available in ACS:

  • Integration with Windows Identity Foundation (WIF)
  • Out-of-the-box support for popular web identity providers including Windows Live ID, Google, Yahoo, and Facebook
  • Out-of-the-box support for Active Directory Federation Services (AD FS) 2.0
  • Support for OAuth 2.0 (draft 10), WS-Trust, and WS-Federation protocols
  • Support for the SAML 1.1, SAML 2.0, and Simple Web Token (SWT) token formats
  • Integrated and customizable Home Realm Discovery that allows users to choose their identity provider
  • An Open Data Protocol (OData)-based management service that provides programmatic access to the ACS configuration
  • A browser-based management portal that allows administrative access to the ACS configuration

ACS is compatible with virtually any modern web platform, including .NET, PHP, Python, Java, and Ruby.
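
On the .NET side, a relying party built with Windows Identity Foundation sees the transformed claims as a claims identity. A minimal C# sketch of mine (the claim type URI is illustrative):

    // C# sketch: read a claim that ACS issued to the relying party application (WIF).
    using System.Threading;
    using Microsoft.IdentityModel.Claims;

    public static class ClaimsHelper
    {
        public static string GetEmailAddress()
        {
            var identity = Thread.CurrentPrincipal.Identity as IClaimsIdentity;
            if (identity == null)
            {
                return null;
            }

            foreach (Claim claim in identity.Claims)
            {
                // ACS maps whatever the identity provider sent into claim types your app expects.
                if (claim.ClaimType == "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress")
                {
                    return claim.Value;
                }
            }

            return null;
        }
    }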

Getting Started with Access Control Service

ACS Fast Track – A Guide For Getting Started.

Access Control Service 2.0 Samples and Documentation is available through a CodePlex project that contains code samples and documentation for the production release of ACS 2.0.

Windows Azure Connect

Windows Azure Connect provides an easy way to set up network-level connectivity between Windows Azure services and on-premise resources such as database servers and domain controllers, allowing each access to the other as if they were on the same network.

Here are two scenarios where Windows Azure Connect help you extend your application into the cloud:

  • Distributed applications. For cloud applications hosted in a hybrid environment, Windows Azure Connect maintains secure connections with on-premise infrastructure without requiring custom code. For example, a web application hosted in Windows Azure can securely access an on-premise SQL Server database server or authenticate users against an on-premise Active Directory service.
  • Remote debugging. With Windows Azure Connect, you can create a direct connection between your local development machine and applications hosted in Windows Azure, which allows you to troubleshoot and debug them using the same tools you would use for on-premise applications.

Windows Azure Connect uses industry-standard end-to-end IPSEC protocol to establish secure connections between on-premise machines and roles in the cloud. Unlike a traditional Virtual Private Network (VPN), which establishes secure connectivity at gateway level, Windows Azure Connect offers more granular control by establishing secure connections at a machine and role level.

Getting Started with Windows Azure Connect

See Getting Started with Windows Azure Connect.

Service Bus

Service Bus provides secure messaging and relay capabilities that enable building distributed and loosely-coupled applications in the cloud. These messaging scenarios can be used to secure applications that are running on premises to clients in the cloud. Or they can support endpoints on Windows Azure.

Relayed and Brokered Messaging. The relay service provides a variety of different relay connectivity options and can even help negotiate direct peer-to-peer connections when it is possible. The relay service supports traditional one-way messaging, request/response messaging, and peer-to-peer messaging. It also supports event distribution at Internet-scope to enable publish/subscribe scenarios and bi-directional socket communication for increased point-to-point efficiency. In contrast to the relayed messaging scheme, brokered messaging can be thought of as asynchronous, or “temporally decoupled.” Producers (senders) and consumers (receivers) do not have to be online at the same time.

New features introduced in September 2011 enhance Service Bus with improved pub/sub messaging by supporting features such as Queues, Topics, and Subscriptions. This release also enables new scenarios on the Windows Azure platform, such as:

  • Asynchronous Cloud Eventing – Distribute event notifications to occasionally connected clients (for example, phones, remote workers, kiosks, and so on)
  • Event-driven Service Oriented Architecture (SOA) – Building loosely coupled systems that can easily evolve over time
  • Advanced Intra-App Messaging – Load leveling and load balancing for building highly scalable and resilient applications
Service Bus Relayed Messaging

Suppose you had an application running in a customer data center on premises (or within a private cloud). You could expose the application to users on the Internet without moving the application itself to the cloud. A centralized “relay” service running in the cloud supports a variety of different transport protocols and Web services standards, including SOAP, WS-*, and REST.

Using Service Bus Relayed Messaging you can create a basic Windows Communication Foundation (WCF) service application that is configured to register an endpoint for publication with the Service Bus and a WCF client application that invokes it through the Service Bus endpoint. Both the host and client applications are executed on a Windows server or desktop computer (that is, they are not hosted in Windows Azure) and use a common standard protocol and security measures to access the Service Bus.

For a tutorial that describes how to build an application that uses the Service Bus “relayed” messaging capabilities, see Service Bus Relayed Messaging Tutorial.
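
A hedged C# sketch of an on-premises host registering a relay endpoint follows (mine, assuming the September 2011 or later Service Bus client; the namespace, issuer, key, and contract are placeholders):

    // C# sketch: expose an on-premises WCF service through the Service Bus relay.
    using System;
    using System.ServiceModel;
    using Microsoft.ServiceBus;

    [ServiceContract]
    public interface IEchoService
    {
        [OperationContract]
        string Echo(string text);
    }

    public class EchoService : IEchoService
    {
        public string Echo(string text) { return text; }
    }

    public static class RelayHost
    {
        public static void Run()
        {
            Uri address = ServiceBusEnvironment.CreateServiceUri("sb", "yourNamespace", "echo");
            var host = new ServiceHost(typeof(EchoService), address);

            var credentials = new TransportClientEndpointBehavior
            {
                TokenProvider = TokenProvider.CreateSharedSecretTokenProvider("owner", "yourIssuerKey")
            };

            // The relay binding registers the endpoint with Service Bus; no inbound firewall ports are opened.
            var endpoint = host.AddServiceEndpoint(typeof(IEchoService), new NetTcpRelayBinding(), address);
            endpoint.Behaviors.Add(credentials);

            host.Open();
            Console.WriteLine("Listening on {0}; press Enter to exit.", address);
            Console.ReadLine();
            host.Close();
        }
    }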

Service Bus Brokered Messaging

Service Bus Brokered Messaging capabilities can be thought of as asynchronous, or decoupled messaging features that support publish-subscribe, temporal decoupling, and load balancing scenarios using the Service Bus messaging infrastructure. Decoupled communication has many advantages; for example, clients and servers can connect as needed and perform their operations in an asynchronous fashion.

For tutorials on how to implement brokered messaging in .NET or using REST, see Service Bus Brokered Messaging Tutorials.
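
For contrast, a hedged C# sketch of brokered messaging through a queue (mine, not from the tutorials; it assumes an existing queue named "orders" and placeholder credentials):

    // C# sketch: send and receive a brokered message through a Service Bus queue.
    using System;
    using Microsoft.ServiceBus;
    using Microsoft.ServiceBus.Messaging;

    public static class OrderMessaging
    {
        public static void SendAndReceive()
        {
            var tokenProvider = TokenProvider.CreateSharedSecretTokenProvider("owner", "yourIssuerKey");
            var address = ServiceBusEnvironment.CreateServiceUri("sb", "yourNamespace", string.Empty);

            MessagingFactory factory = MessagingFactory.Create(address, tokenProvider);
            QueueClient client = factory.CreateQueueClient("orders");

            // Producer and consumer do not need to be online at the same time.
            client.Send(new BrokeredMessage("New order: 12345"));

            BrokeredMessage received = client.Receive();
            if (received != null)
            {
                Console.WriteLine(received.GetBody<string>());
                received.Complete();   // remove the message from the queue once it is processed
            }
        }
    }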

Getting Started with Service Bus

See:

Next Up

Windows Azure Security Best Practices – Part 7: Tips, Coding Best Practices. I kept running into more best practices. So here are a few more items you should consider in securing your Windows Azure application.



<Return to section navigation list>

Cloud Computing Events

• Michael Collier (@MichaelCollier) reported on 3/15/2012 a Windows Azure Kick Start Tour of the US Midwest:

When I talk with people about Windows Azure and show them some of the really cool things you can do with Windows Azure, one of the first questions I’ll get is “how do I get started?”

Microsoft is also holding a series of Windows Azure Kick Start events to help you get ramped up on working with Windows Azure. These will be day-long events where you will learn how to build web applications that run in Windows Azure. You will learn how to sign up for Windows Azure (if you haven’t already) and how to build an app using the tools and techniques you’re already familiar with. You’ll also get to learn more about Windows Azure web roles, storage, SQL Azure, and other common tasks and scenarios with Windows Azure.


If you don’t already have a Windows Azure account, now is a great time to get one!

Windows Azure Kick Start Schedule

Here is the current schedule for the Windows Azure Kick Start events. Click the location to get more details and to register. These are free events! Act fast before seats are gone!!

  • Edina, MN – Mar. 30
  • Independence, OH – Apr. 3
  • Columbus, OH – Apr. 5 (I’ll be here)
  • Overland Park, KS – Apr. 10
  • Omaha, NE – Apr. 12
  • Mason, OH – Apr. 13 (I’ll be here)
  • Southfield, MI – Apr. 19
  • Houston, TX – Apr. 25
  • Creve Coeur, MO – May 1
  • Downers Grove, IL – May 1 (I’ll be here)
  • Franklin, TN – May 2
  • Chicago, IL – May 3
  • Edina, MN – May 8

What To Bring
Windows Azure Kick Starts will be hands-on events (after all, actually using the bits is the best way to learn). You’ll want to bring your favorite laptop with the required components installed and ready to go.

  • A computer or laptop: Operating Systems Supported: Windows 7 (Ultimate, Professional, and Enterprise Editions); Windows Server 2008; Windows Server 2008 R2; Windows Vista (Ultimate, Business, and Enterprise Editions) with either Service Pack 1 or Service Pack 2
  • Your favorite IDE, like Visual Studio 2010.
  • SQL Server Express (or SQL Server)
  • Install the Windows Azure SDK.
  • Consider bringing a power strip or extension cord – you’ll be using your laptop most of the day.


David Chou announced Microsoft DevCamps for Phone, Cloud and Web in a 3/14/2012 post:


ATTEND A CAMP, BUILD AN APP
Developer Camps (DevCamps for short) are free, fun, no-fluff events for developers, by developers. You learn from experts in a low-key, interactive way and then get hands-on time to apply what you've learned. Check out the four different DevCamps currently being offered and register today!

CLOUD CAMP


Join us to learn about the exciting new Windows Azure developer platform! Windows Azure will open up a host of new application and multi-screen development opportunities, and this briefing will give you a jumpstart in understanding how to take advantage of them.

HTML5 Web Camp
As developers, you keep hearing a lot about HTML5, but many don't know what it actually means or is truly capable of. If you want to learn about it, your choices are limited. The HTML5 Web Camp is an opportunity to connect with designers and developers and show you what's possible, and how you can start using it today. Space is limited.

PHONE CAMP
Take your apps to the next level at the Windows Phone Dev Camp. We'll be covering how to bring the cloud to your app by consuming external APIs, add UX polish with Expression Blend, reach new markets through globalization and localization, and keep your app running at peak performance through advanced debugging and testing. In the afternoon, we'll break into labs to reinforce what we covered, and offer a chance to present your application for a chance to win in our application competition.

STUDENT PREP NIGHT
If you're a student, interested in attending DevCamp (Phone Camp, particularly), we'd love to have you join us for a night of preparation and pizza! We want to be sure you have all the tools you need to be successful with the professional developer community.

DATES and LOCATIONS

  • March 30, 2012 | Los Angeles | University of Southern California – DevCamps: Cloud, HTML5 Web, Phone; Student Prep Night (March 29)

  • April 20, 2012 | Irvine | UC Irvine Extension – DevCamps: Cloud, HTML5 Web, Phone; Student Prep Night (April 19)

  • April 27, 2012 | Redmond | Microsoft Conference Center – DevCamps: Cloud, HTML5 Web, Phone

  • May 18, 2012 | Denver | University of Colorado at Denver – DevCamps: Cloud, HTML5 Web, Phone; Student Prep Night (May 17)

  • May 25, 2012 | Phoenix | Desert Willow Conference Center – DevCamps: Cloud, HTML5 Web, Phone


Andy Cross (@AndyBareWeb) reported UKWAUG: 3rd April 7 pm Liverpool Street, London with Cerebrata’s Gaurav Mantri in a 3/14/2012 post:

Thanks to Yossi Dahan of Microsoft for an inspiring talk on mobile and cloudy development. It’s been a while since we last posted; we have been busy trying to scale elastically with the cloud but failing miserably and working 18-hour days! Yossi is a great speaker and I would recommend that any other user groups snap him up, but not on the same day that he’s speaking for us …


Simon Hart gave us a great talk and a fantastic reference architecture on the service bus which will be showcased in our new book “Enterprise Developer’s Guide to Windows Azure” as Simon is our resident service bus expert. We ran out of time so Simon will be back next week to explain core concepts of service bus subscriptions and topics.

The last talk was an extended tip of the month by Becca Martin and John Mitchell covering agile and Azure. These two are real “notes from trenches” experts having put together a great architecture and test framework for Azure. They will be speaking next month at the Manchester Azure user group along with Michael Royster from Microsoft.

Yossi’s slides on mobile development in the cloud can be found here:

http://lwaugbe.blob.core.windows.net/talks/Building%20Device%20And%20Cloud%20Applications%20Mar%2012.pptx

We are extremely pleased to be able to host Gaurav Mantri, founder of Cerebrata Software, a company that has proved that developing software for the cloud pays! Cerebrata have provided us with invaluable tools such as Cloud Storage Studio (which I used to upload the above Blob!), Diagnostics Manager (which we never start a project without) and CmdLets, which, as Becca and John showed last month, are invaluable for a scripted deployment. I urge everyone to attend this meeting, as Gaurav is not often in the UK and this event will prove to be something special.

Register @ http://www.ukwaug.net



<Return to section navigation list>

Other Cloud Computing Platforms and Services

• Rich Miller reported a 21-minute outage in his Amazon EC2 Recovers After Brief Downtime post of 3/15/2012 to his Data Center Knowledge blog:

Amazon’s Elastic Compute Cloud (EC2) service is back online after a brief outage earlier this morning that affected customers in its US-East region. “Between 2:22 AM and 2:43 AM PST internet connectivity was impaired in the US-EAST-1 region,” Amazon reported. “Full connectivity has been restored. The service is operating normally.”

The incident also impacted DNS resolution for Amazon Virtual Private Cloud customers. For more, see Amazon’s Service Health Dashboard and a thread on Hacker News.

The Hacker News thread is especially interesting.


• Scott M. Fulton, III (@SMFulton3) reported Next, Salesforce Aims to Obsolete the CMS with Site.com Launch in a 3/15/2012 post to the ReadWriteCloud blog:

Here's the proposition: If your business fronts a marketing Web site, perhaps with a digital storefront and probably with additional content on Facebook, Salesforce.com is now offering a service - not a software package, but a cloud-based system - for you to compose the entire site, including layout template and content, and host the site including the database on the Force.com platform, for a flat fee of $1,500 per month.

It is exactly the type of business model that Salesforce is aiming directly at another huge competitor with dominant market share: this time, WordPress. Salesforce is betting that businesses give WordPress its 50-plus-percent market share in the content management system category because it's the most convenient product to adopt, not because it's best suited to the task. And just like before, Salesforce is doubling down all its chips on a simple domain name: this time, Site.com.

[Screenshot: an example Web site built and hosted on Site.com]

If this picture looks like an ordinary Web site... well, frankly that's the point. It is - it's Salesforce's example of taking information that it ordinarily delivers to internal users of a company, and presenting it to customers externally.

"We are extending the social enterprise out to all of your customers, all of your partners, and all of your prospects," says Andrew Leigh, director of product management for the Force.com platform, in an interview with ReadWriteWeb, "by allowing you through a single cloud-based platform to be able to basically publish any data or any content out to an external audience."

Leigh demonstrated a front end for Site.com that would be generally familiar to anyone who has ever used a forms or site layout tool. Although the user can access the CSS style sheets directly, the front end would prefer to let him drag-and-drop components where they should generally appear on the page. Some components, like "Menu," are smart enough to know the layout of the site, so they can present the right menu to the user at the right time. And as Leigh tells us, Site.com manages the process of selecting the right layout template for the end user's browser and device, so the same site appears on a PC as on a tablet as on a smartphone.


"If you look at the Web sites that are built and run on Site.com, they use all the latest social widgets, all the latest multimedia, they have the freshest and most compelling content - it's coming instantly from the back-office systems of the company. They're the most compelling Web sites on the Internet today," remarks Leigh. One live example, he tells us, will be HP's promotional site - some 3,000 pages which have already gone live using the Site.com beta, and which have already increased HP's site traffic, according to Leigh, by 30%.


The structure of the site is determined through a simple menu system, where classes of "Site Map" pages are assigned to specific templates just as any site designer would expect. "Landing Pages" pertains to resources whose URLs use specific filenames, as opposed to general classes. Obviously from this angle, Site.com is more geared toward publishing static content. However, the dynamic components you drag into place do gather dynamic content from elsewhere in the customer's Force.com stream of assets, including from Salesforce.com itself and from its Data.com resource.

"If you look at the platform that runs the social enterprise, you'll see an amazing amount of common data that's being shared across that enterprise, both with the internal employees and the external customers, partners, and prospects," explains Force.com's Leigh. The roles that employees play in an organization, he adds, may be published externally as descriptions of possible future careers, for a Web site directed toward prospective employees. All the products managed and maintained by a company, and the retail pricing attributed to it, may be integrated into the external site. "Just about anything, whether it's shipping information, order information, billing information - any kind of information you're tracking and managing inside your company, is at some point in time being exposed out to your customers and your prospects to communicate what your business is doing. And that's what Site.com is all about."

Leigh tells RWW that some surcharges may apply in extreme circumstances, to a minority of users for whom bandwidth use explodes. But from now until April 30, all charter customers can sign up two publishers and two contributors for the first site, for a discounted rate of $825 per month. The regular price is $1,500 for that package, plus $125 per month for each additional publisher, and $20 per month for each additional contributor. He reminds us that this is not a beta; Site.com is generally released today.

Might be an idea for Microsoft to poach for Windows Azure and Dynamics CRM.


Jeff Barr (@jeffbarr) described The Next Type of EC2 Status Check: EBS Volume Status in a 3/14/2012 post:

We’ve gotten great feedback on the EC2 Instance Status checks that we introduced back in January. As I said at the time, we expect to add more of these checks throughout the year. Our goal is to get you the information that you need in order to understand when your EC2 resources are impaired.

Status checks help identify problems that may impair an instance’s ability to run your applications. These status checks show the results of automated tests performed by EC2 on every running instance that detect hardware and software issues. Today we are happy to introduce the first Volume Status check for EBS volumes. In rare cases, bad things happen to good volumes. The new status check is updated when the automated tests detect a potential inconsistency in a volume’s data. In addition, we’ve added API and Console support so you can control how a potentially inconsistent volume will be processed.

Here's what's new:

  • Status Checks and Events - The new DescribeVolumeStatus API reflects the status of the volume and lists an event when a potential inconsistency is detected. The event tells you why a volume’s status is impaired and when the impairment started. By default, when we detect a problem, we disable I/O on the volume to prevent application exposure to potential data inconsistency.
  • Re-Enabling I/O – The “IO Enabled” status check fails when I/O is blocked. You can re-enable I/O by calling the new EnableVolumeIO API.
  • Automatically Enable I/O – Using the ModifyVolumeAttribute/DescribeVolumeAttribute APIs you can configure a volume to automatically re-enable I/O. We provide this for cases when you might favor immediate volume availability over consistency. For example, in the case of an instance’s boot volume where you’re only writing logging information, you might choose to accept possible inconsistency of the latest log entries in order to get the instance back online as quickly as possible.

Console Support
The status of each of your volumes is displayed in the volume list (you may have to add the “Status Checks” column to the table using the selections accessed via the Show/Hide button):

(I don't have that many volumes; this screen shot came from a colleague's test environment).

The console displays detailed information about the status check when a volume is selected:

And you can set the volume attribute to auto-enable I/O by accessing this option in the volume actions drop-down list:

To learn more, go to the Monitoring Volume Status section of the Amazon EC2 User Guide.

We’re happy to be delivering another EC2 resource status check to provide you with information on impaired resources and the tools to take rapid action on them. As I noted before, we look forward to providing more of these status checks over time.

Help Wanted
If you are interested in helping us build systems like EBS, we’d love to hear from you! EBS is hiring software engineers, product managers, and experienced engineering managers. For more information about positions please contact us at ebs-jobs@amazon.com.


Susan Hall reported HP Offers Developer Tools for New Cloud Platform in a 3/12/2012 post to the Dice blog:

Hewlett-Packard’s plan to compete with Amazon Web Services [and Windows Azure] entails offering a number of tools for developers.

Zorawar “Biri” Singh, senior vice president and general manager of H.P.’s cloud services, told The New York Times:

We’re not just building a cloud for infrastructure. Amazon has the lead there. We have to build a platform layer, with a lot of third-party services.

As econsultancy.com put it:

In other words, H.P. will be going beyond infrastructure-as-a-service (IAAS) and platform-as-a-service (PAAS) and melding those with software-as-a-service (SAAS). To add additional appeal, the company plans to create an ecosystem of third-party solutions that are available through its platform, and to provide more hand-holding on the sales and support side.

Among the first applications: structured and unstructured databases, and data analytics as a service. HP plans to woo developers with tools using popular online software languages such as Ruby, Java, and PHP, and for its customer companies to provision and manage workloads remotely. It also will operate an online store where people can offer or rent software for its public cloud.

ZDNet notes that before HP can truly compete with Amazon’s cloud services, it will have to significantly beef up data center support beyond its current offerings.

Singh also was upfront with HP’s desire to make it difficult for IBM, Oracle or other vendors to come in. Making it about more than just cost makes sense, since Amazon continues to cut prices. But as econsultancy.com points out, Amazon also doesn’t send sales people around hawking other products.


<Return to section navigation list>
