Wednesday, September 28, 2011

Windows Azure and Cloud Computing Posts for 9/26/2011+

A compendium of Windows Azure, SQL Azure Database, AppFabric, Windows Azure Platform Appliance and other cloud-computing articles.


• Updated 9/28/2011 1:00 PM PDT with new articles marked by Valery Mizonov, Glenn Gailey, Michael Washam, Steve Marx, Neil MacKenzie, Joe Brockmeier, Amazon Silk Team, Sudhir Hasbe and Me.

Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections:


Azure Blob, Drive, Table and Queue Services

• Michael Washam explained Windows Azure Storage Analytics with PowerShell in a 9/28/2011 post:

Windows Azure Storage Analytics allows you to log very detailed information about how a storage account is being used. Each service (Blob/Table/Queues) has independent settings, allowing you granular control over what data is collected. To enable or disable each setting, just add or omit the corresponding argument (-LoggingDelete, for example). One of the great things about this service is the ability to set a retention policy so the data is automatically deleted after a set number of days.

Enabling Storage Analytics per Service

Set-StorageServicePropertiesForAnalytics -ServiceName "Table" `
		-StorageAccountName $storageAccount -StorageAccountKey $storagekey `
		-LoggingDelete -LoggingRead -LoggingWrite -MetricsEnabled -MetricsIncludeApis `
		-MetricsRetentionPolicyDays 5 -LoggingRetentionPolicyEnabled -LoggingRetentionPolicyDays 5 `
		-MetricsRetentionPolicyEnabled 

Set-StorageServicePropertiesForAnalytics -ServiceName "Queue" `
		-StorageAccountName $storageAccount -StorageAccountKey $storagekey `
		-LoggingDelete -LoggingRead -LoggingWrite -MetricsEnabled -MetricsIncludeApis `
		-MetricsRetentionPolicyDays 5 -LoggingRetentionPolicyEnabled -LoggingRetentionPolicyDays 5 `
		-MetricsRetentionPolicyEnabled 

Set-StorageServicePropertiesForAnalytics -ServiceName "Blob" `
		-StorageAccountName $storageAccount -StorageAccountKey $storagekey `
		-LoggingDelete -LoggingRead -LoggingWrite -MetricsEnabled -MetricsIncludeApis `
		-MetricsRetentionPolicyDays 5 -LoggingRetentionPolicyEnabled -LoggingRetentionPolicyDays 5 `
		-MetricsRetentionPolicyEnabled

Retrieving the Current Storage Analytics Settings

Get-StorageServicePropertiesForAnalytics -ServiceName "Table" `
	-StorageAccountName $storageAccount -StorageAccountKey $storagekey | Format-List

Get-StorageServicePropertiesForAnalytics -ServiceName "Blob" `
	-StorageAccountName $storageAccount -StorageAccountKey $storagekey | Format-List

Get-StorageServicePropertiesForAnalytics -ServiceName "Queue" `
	-StorageAccountName $storageAccount -StorageAccountKey $storagekey | Format-List

Downloading the storage analytics data requires a bit more explanation. Each service has two fundamental types of data (except blob storage, which has three): log data, which contains all of the requests made against that service (depending on which settings you have enabled), and transactions. The transactions data contains numerous metrics that give you a deep understanding of how the storage service is performing; metrics such as % Success or Avg E2E Latency are extremely useful for understanding your application. Blob storage also has a “Capacity” set of data that tells you how much storage space blob storage is using, broken down by analytics and application data.

Downloading Storage Analytics Data

Get-StorageAnalyticsLogs -ServiceName "Blob" `
	-LocalPath "c:\DiagData\SALogsBlob.log"  `
	-StorageAccountName $storageAccount -StorageAccountKey $storagekey  

Get-StorageAnalyticsMetrics -DataType "Capacity" -ServiceName "Blob" `
    -LocalPath "c:\DiagData\SAMetricsBlob-Capacity.log" `
    -StorageAccountName $storageAccount -StorageAccountKey $storagekey 

Get-StorageAnalyticsMetrics -DataType "Transactions" -ServiceName "Blob" `
	-LocalPath "c:\DiagData\SAMetricsBlob-Transactions.log" `
	-StorageAccountName $storageAccount -StorageAccountKey $storagekey 

Get-StorageAnalyticsLogs -ServiceName "Table" `
	-LocalPath "c:\DiagData\SALogsTable.log"  `
	-StorageAccountName $storageAccount -StorageAccountKey $storagekey  

Get-StorageAnalyticsMetrics -DataType "Transactions" -ServiceName "Table" `
	-LocalPath "c:\DiagData\SAMetricsTable-Transactions.log" `
	-StorageAccountName $storageAccount -StorageAccountKey $storagekey 

Get-StorageAnalyticsLogs -ServiceName "Queue" `
	-LocalPath "c:\DiagData\SALogsQueue.log" `
	-StorageAccountName $storageAccount -StorageAccountKey $storagekey  

Get-StorageAnalyticsMetrics -DataType "Transactions" -ServiceName "Queue" `
	-LocalPath "c:\DiagData\SAMetricsQueue-Transactions.log" `
	-StorageAccountName $storageAccount -StorageAccountKey $storagekey

For more information on storage analytics and details on understanding the metrics you can use see the following:
http://msdn.microsoft.com/en-us/library/windowsazure/hh343268.aspx


<Return to section navigation list>

SQL Azure Database and Reporting

• Neil Mackenzie (@mkkz) explained Handling Transient Connection Failures in SQL Azure in a 9/28/2011 post:

This post is one of the recipes in my book Microsoft Windows Azure Development Cookbook. The recipe describes how to use the Transient Fault Handling Framework to handle transient connection failures when using SQL Azure.

The Windows Azure Customer Advisory Team, which supports the framework, describes it as follows:

The Transient Fault Handling Framework solution provides a reusable framework for building extensible retry policies capable of handling different types of transient conditions in applications leveraging SQL Azure, Windows Azure storage (queues, blobs, tables), Windows Azure AppFabric Service Bus and Windows Azure AppFabric Caching Service.

Although the post is specifically concerned with SQL Azure, the general idea can be implemented when using the Windows Azure Storage service, the Windows Azure AppFabric Service Bus and the Windows Azure AppFabric Caching Service.

I highly recommend the Windows Azure Customer Advisory Team blog. It has many posts showing real-world best practices for using the various features of the Windows Azure Platform.

Handling connection failures to SQL Azure

SQL Azure database is a distributed system in which each physical server hosts many databases. This sharing of resources leads to capacity constraints on operational throughput. SQL Azure handles these capacity constraints by throttling operations and closing connections that are using too many resources. SQL Azure also closes connections when it alleviates operational hot spots by switching from a primary SQL Azure database to one of its two backup copies. Furthermore, connectivity to a SQL Azure database is likely to be less reliable than connectivity to a Microsoft SQL Server database on a corporate LAN. It is imperative therefore that applications using SQL Azure be designed to withstand the connection failures that are far more likely to occur than with Microsoft SQL Server.

One of the mantras of cloud development is design for failure. It is important that applications using SQL Azure be designed to handle failures appropriately. There are two kinds of error: permanent errors indicating a general failure of part of the system and transient errors existing only for a brief time. Permanent errors perhaps indicate a logical problem with the application—and handling them may require code changes. However, an application should handle transient errors gracefully by retrying the operation that led to the error in the hope that it does not recur. A dropped connection should be regarded as transient, and an application should respond to a dropped connection by opening a new connection and retrying the operation.

There remains the problem of distinguishing permanent from transient errors. This can be done by comparing the error returned from a failed operation with a known list of transient errors. An application can therefore include a retry mechanism that checks the status of operations and retries any operations that experienced a transient error.
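
To illustrate the idea, here is a minimal C# sketch of such a check (assuming using System and System.Data.SqlClient). The error numbers are a representative subset of codes commonly treated as transient for SQL Azure (throttling, failover and connection loss), not an authoritative list:

// A small, illustrative list of SQL Azure error numbers commonly treated as transient.
private static readonly int[] transientErrorNumbers = { 40197, 40501, 40613, 10053, 10054, 10060 };

private static bool IsTransientSqlError(SqlException ex)
{
    // The exception is considered transient if any of its errors matches a known transient code.
    foreach (SqlError error in ex.Errors)
    {
        if (Array.IndexOf(transientErrorNumbers, error.Number) >= 0)
        {
            return true;
        }
    }
    return false;
}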

The Windows Azure AppFabric Customer Advisory Team has made available on the MSDN Code Gallery the source code and pre-compiled assemblies for the Transient Fault Handling Framework for Azure Storage, Service Bus, and SQL Azure. This comprises a set of classes that can be used to detect transient failures and retry SQL operations. It contains an extensible way to identify transient failures, with various examples including one that compares an error with a list of known transient failures. The Transient Fault Handling Framework provides various built-in retry backoff techniques that specify how often and frequently an operation should be retried following a transient failure. These include both a fixed interval and an exponential delay between retries. The classes in the Transient Fault Handling Framework include various extension methods that simplify the use of the framework, thereby minimizing the work required to add the handling of dropped connections and other transient failures to an application using SQL Azure.

In this recipe, we will learn how to use the Transient Fault Handling Framework for Azure Storage, Service Bus, and SQL Azure to handle dropped connections and other transient failures when using SQL Azure.

Getting ready

The recipe uses the Transient Fault Handling Framework for Azure Storage, Service Bus, and SQL Azure. It can be downloaded from the following URL:

http://code.msdn.microsoft.com/Transient-Fault-Handling-b209151f

This download is a Visual Studio solution with precompiled output assemblies that are referenced in the project used in the recipe.

How to do it…

We are going to connect to SQL Azure using ADO.NET and perform various DDL and DML operations taking advantage of the transient-error handling provided by the Transient Fault Handling library. We do this as follows:

1. On the Project Properties dialog in Visual Studio, set the Target Framework to .NET Framework 4.

2. Add the following assembly references to the project:

Microsoft.AppFabricCAT.Samples.Azure.TransientFaultHandling.dll
System.configuration.dll

3. Add a new class named RetryConnectionExample to the project.

4. Add the following using statements to the top of the class file:

using System.Data;
using System.Data.SqlClient;
using Microsoft.AppFabricCAT.Samples.Azure.TransientFaultHandling;
using Microsoft.AppFabricCAT.Samples.Azure.TransientFaultHandling.SqlAzure;
using Microsoft.AppFabricCAT.Samples.Azure.TransientFaultHandling.Configuration;

5. Add the following private members to the class:

String connectionString;
RetryPolicy connectionRetryPolicy;
RetryPolicy commandRetryPolicy;

Neil continues with steps 6 through 15.

How it works…

In Step 1, we modify the output target of the project to make it consistent with the requirements of the Transient Fault Handling Framework. In Step 2, we add references to the Transient Fault Handling Framework assembly and to the System.configuration assembly used to access the Transient Fault Handling configuration in the app.config file.

In Steps 3 and 4, we set up the class. In Step 5, we add private members for the connection string and two RetryPolicy instances. In the constructor, which we add in Step 6, we initialize the connection string using a SqlConnectionStringBuilder instance. Configuring a connection string for SQL Azure is precisely the same as for Microsoft SQL Server apart from the way in which the DataSource is specified—with the fully qualified host name. We turn encryption on, as this is required, and set TrustServerCertificate to false, so that the server certificate is validated. Instead of building the connection string like this, we could have loaded it from a configuration file.

For demonstration purposes, we initialize the RetryPolicy private members using different techniques. We create the connectionRetryPolicy member directly by providing initialization values in its constructor. We associate the RetryOccurred callback method with the connectionRetryPolicy member. We create the commandRetryPolicy member by retrieving a FixedIntervalDefault policy from the app.config file. We associate the RetryOccurred callback method with the commandRetryPolicy member. In both cases, we use SqlAzureTransientErrorDetectionStrategy to identify transient errors. This compares an error with a list of pre-defined transient errors.
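
As a rough guide, that constructor might look like the following sketch. The server, database and credential values are placeholders, the retry count and interval are arbitrary, and the configuration-based policy is only described in a comment because its exact loading code depends on the version of the framework you download:

public RetryConnectionExample(string server, string database, string login, string password)
{
    // Build the connection string; SQL Azure requires encryption and a fully qualified host name.
    var builder = new SqlConnectionStringBuilder()
    {
        DataSource = String.Format("{0}.database.windows.net", server),
        InitialCatalog = database,
        UserID = login,
        Password = password,
        Encrypt = true,
        TrustServerCertificate = false
    };
    connectionString = builder.ToString();

    // Create the connection retry policy directly by providing initialization values:
    // up to 4 retries with a 2-second interval, using the SQL Azure error detection strategy.
    connectionRetryPolicy = new RetryPolicy<SqlAzureTransientErrorDetectionStrategy>(4, TimeSpan.FromSeconds(2));

    // commandRetryPolicy is instead loaded from the FixedIntervalDefault policy defined in
    // app.config (see Step 15), and both policies get the RetryOccurred callbacks added in Step 7.
}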

In Step 7, we add two RetryOccurred callback methods to the class. These have a trivial implementation that, in a real application, could be replaced by logging that a retry occurred.

In Step 8, we create and open a ReliableSqlConnection which we use to create a SqlCommand. The connection is closed automatically when we exit the using block. We use SqlCommand to retrieve the session tracing ID for the connection. This is a GUID, identifying a particular connection, which can be provided to SQL Azure Support when its help is sought in debugging a problem. We use the default RetryPolicy when we open the connection and when we invoke the ExecuteScalarWithRetry() extension method. Note that the default RetryPolicy identifies all errors as being transient.
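
For orientation, here is a minimal sketch of that step, assuming the default retry policy throughout; the CONTEXT_INFO query is the standard way to read the session tracing ID on SQL Azure:

using (var connection = new ReliableSqlConnection(connectionString))
{
    // Open the connection; failures identified as transient are retried automatically.
    connection.Open();

    var command = connection.CreateCommand();
    command.CommandText = "SELECT CONVERT(NVARCHAR(36), CONTEXT_INFO())";

    // Retrieve the session tracing ID with retries; keep it handy for SQL Azure Support.
    var sessionTracingId = (string)command.ExecuteScalarWithRetry();
    Trace.TraceInformation("Session tracing ID: {0}", sessionTracingId);
}   // the connection is closed automatically when the using block exits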

In Step 9, we invoke a CREATE TABLE operation on SQL Azure to create a table named Writer. The table has three columns: the Primary Key is the Id column; the remaining columns store the name of a writer and the number of books they wrote. We use the connectionRetryPolicy, configured in the constructor, when the connection is opened and the default RetryPolicy when we invoke the ExecuteNonQueryWithRetry() extension method.

In Step 10, we invoke a DROP TABLE operation on SQL Azure to drop the Writer table. We use the default RetryPolicy when the connection is opened and the commandRetryPolicy when we invoke the ExecuteNonQueryWithRetry() extension method.

In Step 11, we retrieve all rows from the Writer table and then iterate over them to examine the content of each column. We use the connectionRetryPolicy when the connection is opened and the commandRetryPolicy when we invoke the ExecuteCommand<IDataReader>() extension method.

We insert three rows in the Writer table in Step 12. We invoke the OpenWithRetry() and ExecuteNonQueryWithRetry() extension methods to use the default RetryPolicy when we open and use the connection respectively. In Step 13, we use the same extension methods when we update a row in the Writer table. In this case, however, we parameterize them, so that we use the DefaultExponential retry policy when we open and use the connection. This default policy identifies all errors as transient.
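
As a rough illustration of the parameterized form used in Step 13 (the column names and values below are invented for the sketch, since only the general shape of the Writer table is described above):

using (var connection = new SqlConnection(connectionString))
using (var command = connection.CreateCommand())
{
    // Hypothetical column names for the Writer table created in Step 9.
    command.CommandText = "UPDATE Writer SET Books = @books WHERE Id = @id";
    command.Parameters.AddWithValue("@id", 1);
    command.Parameters.AddWithValue("@books", 47);

    // Pass an explicit policy to the extension methods; DefaultExponential treats all errors as transient.
    connection.OpenWithRetry(RetryPolicy.DefaultExponential);
    command.ExecuteNonQueryWithRetry(RetryPolicy.DefaultExponential);
}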

In Step 14, we add a method that invokes the methods added earlier. We need to provide the server name, the database name, the login, and the password.

In Step 15, we add the configuration used to configure a RetryPolicy instance in Step 6. In doing so, we need to add a configSections element specifying the assembly used to access the configuration, and then we add a RetryPolicyConfiguration element in which we specify a configuration we name FixedIntervalDefault.

Note that, with an appropriately configured connection string, all the code in this recipe can be run against Microsoft SQL Server—with the exception of the retrieval of the session tracing ID in Step 8.

There’s more…

The Transient Fault Handling Framework for Azure Storage, Service Bus, and SQL Azure can also be used for retrying operations against the Windows Azure Storage Service and the Windows Azure Service Bus.

See also

Valery Mizonov (@TheCATerminator) of the Windows Azure AppFabric Customer Advisory Team has written a blog post on Best practices for handling transient conditions in SQL Azure client applications. He explains how to use the Transient Fault Handling Framework. The post is available at the following URL:

http://windowsazurecat.com/2010/10/best-practices-for-handling-transient-conditions-in-sql-azure-client-applications/


• Michael Washam described Resetting the Passwords on all of Your SQL Azure Servers in a 9/28/2011 post:

Managing the passwords of your SQL Azure servers is now 100% scriptable. It’s so easy to reset them it’s almost like magic :).

The code below takes your subscription ID and your management certificate and returns all of the SQL Azure servers in your subscription. It then pipes that collection to the Set-SqlAzurePassword cmdlet along with the new password.

Resetting the Passwords of all SQL Azure Servers in your Subscription:

$subscriptionId = "Your-ID-Goes-Here"
$cert = Get-Item cert:\CurrentUser\My\YOURCERTTHUMBPRINTGOESHERE
Get-SqlAzureServer -Certificate $cert -SubscriptionId $subscriptionid | `
	Set-SqlAzurePassword -NewPassword "abracadabra0!"

• Michael Washam explained Creating a new SQL Azure Server and Firewall Rule with PowerShell in a 9/27/2011 post:

Creating SQL Azure Servers and Configuring firewall rules is surprisingly easy with the new Windows Azure PowerShell Cmdlets 2.0.

Step 1: Add the WAPPSCmdlets Snapin or Module

Add-PsSnapin WAPPSCmdlets

Step 2: Create a Few Needed Variables

$subscriptionid = "your subscription id"
$cert = Get-Item cert:\CurrentUser\My\yourcertthumbprint
$adminLogin = "sqlAzureLogin"
$adminPassword = "sqlAzurePassword"

Step 3: Call the Cmdlets With Your Variables

$newServer = New-SqlAzureServer -AdministratorLogin $adminLogin -AdministratorLoginPassword $adminPassword `
			       -Location "North Central US" -Certificate $cert `
				   -SubscriptionId $subscriptionid

$newServer | New-SqlAzureFirewallRule -RuleName "EveryBody" `
			-StartIpAddress "0.0.0.0" -EndIpAddress "255.255.255.255"

Obviously, you would want to change the firewall rule if you did not want EVERY IP address to be able to connect to your server, but you get the idea.


• Michael Washam described Updating SQL Azure Firewall Rules with Windows Azure PowerShell Cmdlets 2.0 in a 9/27/2011 article:

You’ve deployed a few SQL Azure servers and, through no fault of your own, the requirement comes up to update all of the firewall rules for each of them.

No Problem!

Adding new SQL Azure Firewall Rules

Get-SqlAzureServer -Certificate $cert -SubscriptionId $subscriptionid | foreach {
  $_ | New-SqlAzureFirewallRule -RuleName "NewRule1" -StartIpAddress "0.0.0.0" -EndIpAddress "1.1.2.2"
  $_ | New-SqlAzureFirewallRule -RuleName "NewRule2" -StartIpAddress "100.1.0.0" -EndIpAddress "100.15.0.0"
}

Removing Rules is Just as Easy

Get-SqlAzureServer -Certificate $cert -SubscriptionId $subscriptionid | foreach {
  $_ | Remove-SqlAzureFirewallRule -RuleName "OldRule1"
  $_ | Remove-SqlAzureFirewallRule -RuleName "OldRule2"
}

This, of course, requires the Windows Azure PowerShell Cmdlets 2.0.


Brian Swan (@brian_swan) reported Version 3.0 (beta) of the SQL Server Drivers for PHP Released! in a 9/22/2011 post (missed when published):

A Community Technology Preview (a beta release) of v3.0 of the SQL Server Drivers for PHP was released today (see the announcement on the team blog). You can download it here: Download v3.0 of the SQL Server Drivers for PHP. In this release, there are three new features: buffered queries, support for LocalDB, and support for high availability and disaster recovery. It’s important to note that the latter two features are dependent on the next version of SQL Server (code named “Denali”). A preview of Denali can be downloaded for free here (see notes later in this article about the installation process): Download SQL Server Denali CTP 3. More detail about each new feature is in the sections below. We’re hoping to get feedback from you; please comment on this post or reach out to me (@brian_swan) and/or Jonathan Guerin (@kop48, the Program Manager for the drivers) on Twitter.

Buffered Queries

Perhaps a more descriptive name for “buffered queries” would be “buffered result sets”. With this feature, you can execute a query and bring the entire result set into memory. This allows you to easily get the row count and move back and forth through rows. Prior to this feature, enabling scrolling through a result set required a scrollable cursor. Using a scrollable cursor is still the best option if you are dealing with large result sets, but if you have small to medium-sized result sets, the buffered queries option may improve your application's performance.

SQLSRV

To bring an entire result set into memory with the SQLSRV driver, supply an options array to sqlsrv_query or sqlsrv_prepare with the following key=>value pair: “Scrollable”=>”buffered”. Then, when retrieving rows, you can call sqlsrv_num_rows to get the row count and you can use the scroll options with sqlsrv_fetch, sqlsrv_fetch_array, or sqlsrv_fetch_object:

$serverName = '.\sqlexpress';
$connectionInfo = array("UID"=>"username", "PWD"=>"password", "Database"=>"ExampleDB");
$conn = sqlsrv_connect( $serverName, $connectionInfo);
 
$sql = "SELECT * FROM CUSTOMERS";
$stmt = sqlsrv_query($conn, $sql, null, array("Scrollable"=>"buffered"));
echo "Row count: " . sqlsrv_num_rows($stmt) . "<br />";
 
$row = sqlsrv_fetch_array($stmt, SQLSRV_FETCH_ASSOC, SQLSRV_SCROLL_ABSOLUTE, 10);
print_r($row);

For more information, see Cursor Types (SQLSRV Driver) in the documentation (included in the download).

PDO_SQLSRV

To bring an entire result set into memory with the PDO_SQLSRV driver, specify cursor options on the PDO::prepare method as shown below. Then, to retrieve data, use the fetch options on the PDOStatement::fetch method:

$serverName = '.\sqlexpress';
$conn = new PDO( "sqlsrv:server=$serverName ; Database = ExampleDB", "", "");
 
$sql = "SELECT * FROM CUSTOMERS";
 
$stmt = $conn->prepare( $sql, array(PDO::ATTR_CURSOR => PDO::CURSOR_SCROLL, PDO::SQLSRV_ATTR_CURSOR_SCROLL_TYPE => PDO::SQLSRV_CURSOR_BUFFERED));
$stmt->execute();
echo "Row count: " . $stmt->rowCount() . "<br />";
 
$row = $stmt->fetch( PDO::FETCH_ASSOC, PDO::FETCH_ORI_ABS, 1 );
print_r($row);

For more information, see Cursor Types (PDO_SQLSRV Driver) in the documentation.

Note that the default buffer size is set to 10240 KB (i.e. 10 MB). You can change this value using the sqlsrv_configure function or by editing your php.ini file (the setting name is ClientBufferMaxKBSize).

LocalDB

LocalDB is a “serverless” database (available in SQL Server “Denali”) designed specifically for developers. It is easy to install and requires no management while offering the same T-SQL language, programming surface and client-side providers as the regular SQL Server Express. (For more information, see this blog post: Introducing LocalDB, An Improved SQL Express.) Basically, LocalDB allows you to connect directly to a SQL Server database file (an .mdf file).

After you have LocalDB installed (see notes on installing SQL Server “Denali” below), you can connect to a database file by using “(localdb)\v11.0” as your server name and by supplying the path to the .mdf file in your connection options:

SQLSRV
$serverName = '(localdb)\v11.0';
$connectionInfo = array( "Database"=>"ExampleDB", "AttachDBFileName"=>'c:\Temp\ExampleDB.mdf');
$conn = sqlsrv_connect( $serverName, $connectionInfo);

For more information, see SQL Server Driver for PHP Support for LocalDB in the documentation.

PDO_SQLSRV
$serverName = '(localdb)\v11.0';
$conn = new PDO( 'sqlsrv:server=(localdb)\v11.0; Database=TestDB; AttachDBFileName=c:\Temp\TestDB.mdf', NULL, NULL);

For more information, see SQL Server Driver for PHP Support for LocalDB in the documentation.

Keep in mind that LocalDB is still a beta feature of SQL Server “Denali” (i.e. FEEDBACK IS WELCOME!). Here’s one “gotcha” that I ran into when using LocalDB with the SQLSRV driver: LocalDB only supports Windows Integrated Authentication. If you look closely at the code snippets above, you’ll notice that no user name or password is supplied to connect. The SQLSRV driver attempts to connect using Windows authentication when no username and password are supplied. I found it easy to run PHP from the command line to connect to LocalDB (a command prompt runs under my identity). However, when running PHP as a FastCGI module in IIS, permissions are not so straightforward. You’ll need to configure IIS to use Windows authentication (see this article for background information: SQL Server Driver for PHP: Understanding Windows Authentication).

High Availability and Disaster Recovery

SQL Server high availability and disaster recovery features (collectively called SQL Server Always On) provide near-zero downtime and make it possible to optimize hardware usage. For more information about SQL Server Always On, see Always On – New in SQL Server Code Named “Denali” CTP 3. To understand how you can leverage these features through the SQL Server Drivers for PHP, see SQL Server Driver for PHP Support for High Availability and Disaster Recovery in the documentation.

Notes on Installing SQL Server “Denali”

You can download SQL Server “Denali” CTP 3 from here. I just want to point out two things to make sure your installation goes smoothly:

1. When selecting an edition, be sure to choose Express. This option will install LocalDB.


2. When selecting features to install, be sure to include LocalDB.


As I said in the introduction, we would love to get your feedback…so please let us know what you think!


<Return to section navigation list>

MarketPlace DataMarket and OData

• Glenn Gailey (@ggailey777) reported OData Quickstart for Windows Phone Updated—Now with VB! in a 9/27/2011 post:

I’m not sure if I’ve mentioned it here on my blog yet, but the forthcoming release of Windows Phone 7.5 (“Mango”) and the Windows Phone SDK 7.1 feature huge improvements for consuming OData feeds. This means that, in Mango, OData support on the phone is basically equivalent to Silverlight 4 and .NET Framework 4 (asynchronous only).

Here’s a list of what has gotten (much) better:

  • Add Service Reference—it works now! No more having to use DataSvcUtil.exe (in most cases) and manually add your references. This is sure a welcome sight when writing a Windows Phone app.
  • LINQ is back!—they (finally) added LINQ support to phone, so welcome back to DataServiceQuery<T> and the ease of composing queries against entity sets returned by a strongly-typed DataServiceContext. Now, no more having to manually compose URIs for queries (see the sketch after this list).
  • DataServiceState* works much better—if you ever tried to use the Save and Restore methods on the old DataServiceState object, they weren’t really ready for prime time. The new DataServiceState object has methods that are explicitly named Serialize and Deserialize, which do just what they say. Serialize returns you a true string-based serialization of everything being tracked by the context, and Deserialize now returns a correctly re-hydrated context, including nested collections.
  • Authenticated requests—new support for attaching credentials to the DataServiceContext using the Credentials property (like you can do in Silverlight 4). The client uses these credentials to set the Authentication header in the request. Before this, you had to set this header yourself.
  • Compression* works—well, technically it now CAN work, but there is no “in the box” support and you need to track down your own compression provider. However, this is cool because I wasted an entire day trying to make compression work in WP7—totally blocked. For more info on how to make this work, see this topic (until I can get something published).
  • Now, available in your SDK!—the previous version of the library was published as a separate download. Now, the OData library is a first-class citizen and in the Windows Phone SDK 7.1.
    * This denotes a Windows Phone-only OData functionality.
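
To make the first two items above concrete, here is a hedged sketch of a Mango-style query. The service URI and the NorthwindEntities/Customer types are placeholders standing in for whatever Add Service Reference generates for your own feed, and the Credentials line only applies to secured services:

var context = new NorthwindEntities(new Uri("https://services.odata.org/Northwind/Northwind.svc/"));

// Authenticated requests: the client uses these credentials to set the request's
// authentication header for you (only needed when the feed requires it).
context.Credentials = new NetworkCredential("user", "password");

// LINQ instead of hand-built URIs such as /Customers?$filter=Country eq 'Germany'.
var query = (DataServiceQuery<Customer>)
            (from c in context.Customers
             where c.Country == "Germany"
             orderby c.CompanyName
             select c);

// Windows Phone still requires the asynchronous pattern.
query.BeginExecute(result =>
{
    var customers = query.EndExecute(result).ToList();
    // Marshal back to the UI thread before binding the results.
}, null);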

To better highlight these most excellent OData improvements in Mango, I am in the process of getting the “official” OData on Windows Phone quickstart updated for the pending release. While I wait for these updates to go live, I went ahead (to help out the folks rushing to write Mango-based apps) and updated the existing sample project to the new version, which you can find here:

Attention Visual Basic programmers!
Please notice that there is now also a VB version of this sample published (for a total of about four VB samples for phone). I am definitely not primarily a VB guy, but I am proud to say that I DID NOT use a converter to create this VB app. So, if you find code in there that looks like it was definitely written by a C# guy, or is just bad, please leave me a note on the sample project’s Q and A page.

As you will see, this update to Windows Phone and the SDK makes writing OData apps for phone tremendously more fun.


• Sudhir Hasbe (@shasbe) posted Announcing NEW Data Offerings and International Availability - Windows Azure Marketplace on 9/27/2011:

imageYesterday at BUILD, Microsoft Server and Tools Business President, Satya Nadella made two announcements around the Windows Azure Marketplace and shared details on how Ford Motor Company and eBay are using the Marketplace to add further value to their business. This post will dive deeper into both of these announcements.

International Availability

imageMicrosoft announced the upcoming availability of the Windows Azure Marketplace in 25 new markets around the world, including: Austria, Belgium, Canada, Czech, Denmark, Finland, France, Germany, Hungary, Ireland, Italy, Netherlands, Norway, Poland, Portugal, Spain, Sweden, Switzerland, UK, Australia, Hong Kong, Japan, Mexico, New Zealand, and Singapore. Customers in these new markets will be able to discover, explore and subscribe to premium data and applications on the Marketplace starting next month.

Starting today, partners can submit their applications & datasets to publish on the marketplace. Interested partners can learn how to get started here.

BING Data Available on Windows Azure Marketplace

Microsoft also announced the coming availability of a number of exciting data offerings on the Windows Azure Marketplace. The first of these, the Microsoft Translator APIs, are available today, alongside a fast-growing collection of data sets and applications, with more being introduced through the remainder of the year. The Microsoft Translator APIs, which were previously available here, allow developers and webmasters to provide translation and language services in more than 35 languages as part of their applications, websites or services. This is the same cloud service that delivers translations to millions of users every day via Bing, Microsoft Office and other Microsoft products.

Through the Windows Azure Marketplace, Microsoft will make available both a free, limited throughput version of the Microsoft Translator APIs, as well as a number of paid, higher throughput versions of the APIs. Starting today, Microsoft is offering customers a 3 month promotional period during which the higher throughput versions of the APIs will be available free of charge.

Developers can now start using the Microsoft Translator APIs through the Windows Azure Marketplace in web or client applications to perform machine translations from or to any of the following languages (list updated regularly).

[The original post includes a table of the supported translation languages at this point.]

How are others using the Windows Azure Marketplace?

Ford Motor Company

Ford will launch its first battery-powered electric passenger vehicle at the end of the year. Fully charging the vehicle at home or a business should take just over 3 hours to complete; however, because the cost of electricity can vary by the time of day, when you charge the vehicle can have an important impact on the cost of ownership. So, every new Focus Electric will offer the Value Charging system powered by Microsoft, to help owners in the US charge their vehicles at the cheapest utility rates, lowering the cost of ownership. To do this, Ford will rely on an electric utility rates dataset on the Windows Azure Marketplace that currently has information from 100 utilities covering more than 10,000 US zip codes and 1,500 Canadian postal codes.

eBay

eBay has a popular mobile application on Windows Phone 7 called eBay mobile, with more than 300k downloads to date. In the coming weeks, eBay will release a major update including faster payment flows and selling capabilities as well as the ability to have listing details automatically translated to and from 37 different languages. This is accomplished by leveraging the Microsoft Translator API, which is now available in the Windows Azure Marketplace. By leveraging the Translator API, eBay is able to create a more global product - delivering product listings in multiple languages to a broad global audience.

ESRI

Esri, a leading provider of geospatial software and services, is extending its ArcGIS system to the Windows Azure Platform. With ArcGIS Online, customers can create “intelligent maps” (starting with Bing, topography, ocean and other base maps) to visualize, access, consume and publish datasets from the Windows Azure Marketplace and their own data services. This will make a rich set of geographic tools, once only available to geographic information professionals, broadly available to anyone interested in working with geospatial data (e.g., environmental scientists interested in visualizing air quality metrics against specific geographies). These maps can then be served up to the cloud and shared between individuals and their defined groups, across organizations and devices. This solution is available today and can be accessed here.

To read more about all of the Windows Azure-related announcements made at BUILD, please read the blog post, "JUST ANNOUNCED @ BUILD: New Windows Azure Toolkit for Windows 8, Windows Azure SDK 1.5, Geo-Replication for Windows Azure Storage, and More". For more information about BUILD or to watch the keynotes, please visit the BUILD Virtual Press Room. And follow @WindowsAzure and @STBNewsBytes for the latest news and real-time talk about BUILD.

Visit the Windows Azure Marketplace to learn more.

Sudhir appears to be behind the times. Satya’s keynote wasn’t yesterday.


<Return to section navigation list>

Windows Azure AppFabric: Apps, Access Control, WIF and Service Bus

• Valery Mizonov (@TheCATerminator) updated his Best Practices for Leveraging Windows Azure Service Bus Brokered Messaging API article for the Windows Azure Customer Advisory Team (CAT) blog on 9/25/2011:

This article offers practical guidance for developers working with the brokered messaging .NET managed API in the Windows Azure Service Bus. The recommendations supplied in this article come directly from recent customer projects. While building real-world solutions with the Service Bus, we learned about some key best practices and little-known secrets that will help increase reliability and improve the performance of the solutions leveraging the new brokered messaging capabilities in the Service Bus. This article intends to share these learnings with the developer community.

Relayed versus Brokered Messaging

The Windows Azure Service Bus provides two comprehensive messaging solutions. The first solution is available through a centralized, highly load-balanced “relay” service running in the cloud that supports a variety of different transport protocols and Web services standards, including SOAP, WS-*, and REST. The relay service supports direct one-way messaging, request/response messaging, and peer-to-peer messaging. The pattern associated with this type of messaging solution is referred to as “relayed” messaging. In the relayed messaging pattern, an on-premises or cloud-based service connects to the relay service through an outbound port and creates a bi-directional socket for communication tied to a particular rendezvous address. The client doesn’t need to know where the service resides, and the on-premises service does not need any inbound ports open on the firewall. Relayed messaging provides many benefits, but requires the server and client to both be online at the same time in order to send and receive messages. Relayed messaging has been available since the initial release of the Service Bus.

The second messaging solution, introduced in the latest version of the Service Bus, enables “brokered” messaging capabilities. The brokered messaging scheme can also be thought of as asynchronous or “temporally decoupled” messaging. Producers (senders) and consumers (receivers) do not have to be online at the same time. The messaging infrastructure reliably stores messages until the consuming party is ready to receive them. This allows the components of the distributed application to be disconnected, either voluntarily; for example, for maintenance, or due to a component crash, without affecting the whole system. Furthermore, the receiving application may only have to come online during certain times of the day, such as an inventory management system that only is required to run at the end a business day.

The core components of the Service Bus brokered messaging infrastructure are queues, topics, and subscriptions. These components enable new asynchronous messaging scenarios, such as temporal decoupling, publish/subscribe, load leveling, and load balancing. For more information about these scenarios, see the Additional Resources section.

Brokered Messaging API Overview

Throughout this guidance, you will see many references to various components, classes and types available in the brokered messaging .NET managed API. To put things into context, let’s start off by highlighting some of the key API artifacts that deliver and support the brokered messaging capability in the Service Bus.

The following classes are the most frequently used API members from the Microsoft.ServiceBus and Microsoft.ServiceBus.Messaging namespaces, often involved when developing a brokered messaging solution:

BrokeredMessage: Represents the unit of communication between Service Bus clients. Serialized BrokeredMessage instances are transmitted over the wire when messaging clients communicate via queues and topics.
QueueClient: Represents a messaging object that enables sending and receiving messages from a Service Bus queue.
QueueDescription: Represents a metadata object describing a Service Bus queue, including the queue path, behavioral settings (such as lock duration, default TTL and duplicate detection) and informational data points (such as current queue length and size).
TopicClient: Represents a messaging object that enables sending messages to a Service Bus topic.
TopicDescription: Represents a metadata object describing a Service Bus topic, including the topic path, behavioral settings (such as duplicate detection) and informational data points (such as current size and maximum topic size).
SubscriptionClient: Represents a messaging object that enables receiving messages from a Service Bus subscription.
SubscriptionDescription: Represents a metadata object describing a Service Bus subscription, including the subscription name, owning topic path, behavioral settings (such as session support, default TTL and lock duration) and informational data points (such as current message count).
NamespaceManager: Represents a management object responsible for runtime operations on Service Bus messaging entities (queues, topics, subscriptions, rules), including creating, retrieving, deleting and asserting their existence.
MessagingFactory: Represents a factory object responsible for instantiating, tracking and managing the lifecycle of messaging entity clients such as TopicClient, QueueClient and SubscriptionClient.
MessageReceiver: Represents an abstract messaging object that supports rich messaging functionality with a particular focus on the message receive operations.
MessageSender: Represents an abstract messaging object that supports rich messaging functionality with a particular focus on the message send operations.
MessageSession: Represents a message session that allows grouping of related messages for processing in a single transaction.
Filter: Represents an abstract metadata object comprised of a filter expression and an associated action that is executed in the Service Bus subscription evaluation engine. The Filter class serves as a base class for TrueFilter, FalseFilter, SqlFilter and CorrelationFilter, which implement the metadata object for each filter type.
TokenProvider: Represents a factory object that provides access to the different types of security token providers responsible for the acquisition of SAML, Shared Secret and Simple Web tokens.
It is recommended that you familiarize yourself with the above API artifacts to get a head start on building your first brokered messaging solution with the Service Bus. Please note that the above is not an exhaustive list of all classes found in the brokered messaging API. For a complete landscape of all API members, please refer to the MSDN documentation.

Best Practices in Brokered Messaging API

The topics in this section are intended to share specific recommendations that were derived from hands-on experience with the .NET managed brokered messaging API. The goal of these recommendations is to encourage developers to apply the techniques and patterns discussed below, in order to be able to deliver robust messaging solutions.

Managing the Messaging Object Lifecycle

Messaging objects such as TopicClient, QueueClient and SubscriptionClient are intended to be created once and reused whenever possible. These objects are thread-safe, enabling you to send or receive messages in parallel from multiple threads. There is some small overhead associated with authenticating a client request in the Access Control Service (ACS) when creating a messaging object; therefore, it is recommended that you cache TopicClient, QueueClient and SubscriptionClient object instances. For optimal resource utilization, consider limiting the cache scope to the lifetime of the messaging component that uses the respective Service Bus messaging objects.

The lifetime of a messaging object begins when a new instance is retrieved from the MessagingFactory object:

// The following actions are often performed upon initialization of an application-level messaging component.

string issuerName = "Issuer Name is retrieved from configuration file";
string issuerSecret = "Issuer Secret also comes from configuration file";
string serviceNamespace = "contoso-cloud";
string queuePath = "PurchaseOrders";

var credentials = TokenProvider.CreateSharedSecretTokenProvider(issuerName, issuerSecret);
var address = ServiceBusEnvironment.CreateServiceUri("sb", serviceNamespace, String.Empty);
var messagingFactory = MessagingFactory.Create(address, credentials);
var queueClient = messagingFactory.CreateQueueClient(queuePath, ReceiveMode.ReceiveAndDelete);

As mentioned earlier, a single QueueClient object can be reused for sending or receiving messages to or from a given queue. There is no need to create a new instance of the QueueClient object for every message that is being sent or received:

for (int i = 0; i < 5; i++)
{
    using (var msg = new BrokeredMessage(String.Format("Message #{0}", i)))
    {
        queueClient.Send(msg);
    }
}

It is important to note that the messaging objects maintain an active connection back to the Service Bus messaging infrastructure hosted on the Windows Azure platform. As with many other types of multi-tenant services, the brokered messaging services provided by the Service Bus are subject to quotas with respect to how many active concurrent connections a single messaging entity (such as a queue, topic or subscription) can support. To minimize the number of concurrent connections, it is advisable to explicitly control the lifetime of the Service Bus messaging objects and close them if you don’t plan to re-use them at a later stage. You should also close the messaging factory object upon a graceful termination of your messaging solution.

Note: Closing a messaging object does not close the underlying connection, since multiple messaging objects share the connection at the factory level. In turn, closing a messaging factory object will close the underlying connection to the Service Bus messaging infrastructure.

To close a given messaging object, you must invoke its Close() method using one of the following techniques:

// When a queue client is no longer required, let's close it so that it doesn't consume a connection.

// Option #1: Closing a specific messaging object instance.
queueClient.Close();

// Option #2: Closing all messaging objects tracked by the messaging factory.
messagingFactory.Close();

It is also worth noting that in some rare cases, the messaging objects may end up in a state that prevents them from being closed gracefully. Should such a situation occur, the brokered messaging API will ensure that appropriate actions will be taken, including aborting a connection if it cannot be closed successfully. You do not need to perform a status check to decide whether to call the Abort() or Close() methods. This is performed internally by the API. Please note that the Close() method is not guaranteed to complete without throwing an exception. Therefore, if you want to ensure that closing a messaging object is always safe, an additional layer of protection in the form of a try/catch construct is recommended.

Note: Although closing a messaging object is not the same as disposing it, a closed object cannot be re-opened if you later decide to reuse the instance. If you attempt to invoke an operation against a closed messaging object, you may receive a self-explanatory exception such as “This messaging entity has already been closed, aborted, or disposed”. There is no public Open() method that can be called from the client to restore a messaging object to an opened state; you must create a new instance of the messaging object. This recommendation also applies to MessagingFactory objects.

The lifetime of a messaging object ends upon calling the Close() method. The easiest way to ensure that all messaging objects used by a solution are gracefully terminated is by explicitly closing the MessagingFactory object used to create messaging clients for queues, topics and subscriptions. An explicit close on MessagingFactory implicitly closes all messaging objects created and owned by the class. For example, you may want to close the factory object inside the Dispose() method of your messaging component, inside the OnStop() method provided by RoleEntryPoint or from within the UnhandledException event handler.
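
Putting those recommendations together, a minimal sketch of a worker role shutdown path might look like the following; messagingFactory is assumed to be the MessagingFactory field your role initialized at startup:

public override void OnStop()
{
    try
    {
        // Closing the factory implicitly closes every queue, topic and subscription client it created.
        if (messagingFactory != null)
        {
            messagingFactory.Close();
        }
    }
    catch (Exception ex)
    {
        // Close() is not guaranteed to succeed; log the failure and continue shutting down.
        Trace.TraceWarning("Could not close the messaging factory gracefully: {0}", ex.Message);
    }

    base.OnStop();
}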

Dealing with Faulted Messaging Objects

It is widely known among WCF developers that a WCF communication object requires special precautions to handle its internal state transitions; in particular, situations in which the object ends up in a faulted state. Often, the WCF communication stack must be reset, for instance by recreating a client channel, in order to recover from this condition.

The brokered messaging API provides “out-of-the-box” resilience against the faulted communication objects by handling and recovering from conditions that can make the underlying communication objects unusable. Unlike traditional WCF clients, Service Bus messaging clients that leverage the brokered messaging API don’t need to implement any special logic in order to deal with faulted communication objects. All communication objects such as MessageFactory, QueueClient, TopicClient, SubscriptionClient, MessageSender, and MessageReceiver will automatically detect and recover from exceptions that could potentially bring the communication stack into a non-operational state.

Certain messaging operations such as Complete, Abandon and Defer will not be able to provide a seamless automatic recovery. If Complete() or Abandon() fail with the MessagingCommunicationException exception, the only recourse is to receive another message, possibly the same one that failed upon completion, provided a competing consumer didn’t retrieve it in the meantime.
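
For example, a receive loop might treat a failed completion as a signal to simply move on. The sketch below assumes a queue client created with ReceiveMode.PeekLock, plus application-specific keepRunning and ProcessMessage members:

while (keepRunning)
{
    BrokeredMessage message = queueClient.Receive(TimeSpan.FromSeconds(10));
    if (message == null)
    {
        continue;   // nothing arrived within the wait interval
    }

    try
    {
        ProcessMessage(message);    // application-specific processing (assumed)
        message.Complete();         // remove the message from the queue
    }
    catch (MessagingCommunicationException ex)
    {
        // Completion could not be confirmed. Do not retry Complete(); the lock will expire and
        // the same message may be redelivered, here or to a competing consumer, so the
        // processing logic must tolerate duplicates.
        Trace.TraceWarning("Complete failed; the message may reappear: {0}", ex.Message);
    }
    finally
    {
        message.Dispose();
    }
}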

Handling Transient Communication Errors

To improve the reliability of a solution that uses the Service Bus brokered messaging managed API, it is recommended that you adopt a consistent approach to handling transient faults and intermittent errors that could manifest themselves when the solution communicates to the highly multi-tenant cloud-based queuing and publish/subscribe messaging service infrastructure provided by the Service Bus.

When considering a specific technique for detecting transient conditions, you may want to reuse existing technical solutions such as the Transient Fault Handling Framework or build your own. In both cases, you will need to ensure that only a subset of communication exceptions is treated as transient before attempting to recover from the respective faults.

The following exceptions can be compensated for by implementing retry logic:

ServerBusyException: This exception can be caused by an intermittent fault in the Service Bus messaging service infrastructure, which is unable to process a request due to point-in-time abnormal load conditions. The client can attempt to retry with a delay. A back-off delay is preferable, to avoid adding unnecessary pressure to the server.
MessagingCommunicationException: This exception signals a communication error that can manifest itself when a connection from the messaging client to the Service Bus infrastructure cannot be successfully established. In most cases, provided network connectivity exists, this error can be treated as transient. The client can attempt to retry the operation that resulted in this type of exception. It is also recommended that you verify whether the domain name resolution service (DNS) is operational, as this error may indicate that the target host name cannot be resolved.
TimeoutException: This exception indicates that the Service Bus messaging service infrastructure did not respond to the requested operation within the specified time, which is controlled by the OperationTimeout setting. The requested operation may still have completed; however, due to network or other infrastructure delays, the response may not have reached the client in a timely fashion. Compensating for this type of exception must be done with caution: if a message has been delivered to a queue but the response timed out, resending the original message will result in duplication.

The following code snippet demonstrates how to asynchronously send a message to a Service Bus topic while ensuring that all known transient faults will be compensated by a retry. Please note that this code sample maintains a dependency on the Transient Fault Handling Framework.

var credentials = TokenProvider.CreateSharedSecretTokenProvider(issuerName, issuerSecret);
var address = ServiceBusEnvironment.CreateServiceUri("sb", serviceNamespace, String.Empty);
var messagingFactory = MessagingFactory.Create(address, credentials);
var topicClient = messagingFactory.CreateTopicClient(topicPath);
var retryPolicy = new RetryPolicy<ServiceBusTransientErrorDetectionStrategy>(RetryPolicy.DefaultClientRetryCount);

// Create an instance of the object that represents message payload.
var payload = XDocument.Load("InventoryFile.xml");

// Declare a BrokeredMessage instance outside so that it can be reused across all 3 delegates below.
BrokeredMessage msg = null;

// Use a retry policy to execute the Send action in an asynchronous and reliable fashion.
retryPolicy.ExecuteAction
(
    (cb) =>
    {
        // A new BrokeredMessage instance must be created each time we send it. Reusing the original BrokeredMessage instance may not 
        // work as the state of its BodyStream cannot be guaranteed to be readable from the beginning.
        msg = new BrokeredMessage(payload.Root, new DataContractSerializer(typeof(XElement)));

        // Send the event asynchronously.
        topicClient.BeginSend(msg, cb, null);
    },
    (ar) =>
    {
        try
        {
            // Complete the asynchronous operation. This may throw an exception that will be handled internally by the retry policy.
            topicClient.EndSend(ar);
        }
        finally
        {
            // Ensure that any resources allocated by a BrokeredMessage instance are released.
            if (msg != null)
            {
                msg.Dispose();
                msg = null;
            }
        }
    },
    (ex) =>
    {
        // Always dispose the BrokeredMessage instance even if the send operation has completed unsuccessfully.
        if (msg != null)
        {
            msg.Dispose();
            msg = null;
        }

        // Always log exceptions.
        Trace.TraceError(ex.Message);
    }
);

The next code sample shows how to reliably create a new or retrieve an existing Service Bus topic. This code also maintains a dependency on the Transient Fault Handling Framework which will automatically retry the corresponding management operation if it fails to be completed successfully due to intermittent connectivity issues or other types of transient conditions:

public TopicDescription GetOrCreateTopic(string issuerName, string issuerSecret, string serviceNamespace, string topicName)
{
    // Must validate all input parameters here. Use Code Contracts or build your own validation.
    var credentials = TokenProvider.CreateSharedSecretTokenProvider(issuerName, issuerSecret);
    var address = ServiceBusEnvironment.CreateServiceUri("sb", serviceNamespace, String.Empty);
    var nsManager = new NamespaceManager(address, credentials);
    var retryPolicy = new RetryPolicy<ServiceBusTransientErrorDetectionStrategy>(RetryPolicy.DefaultClientRetryCount);

    TopicDescription topic = null;
    bool createNew = false;

    try
    {
        // First, let's see if a topic with the specified name already exists.
        topic = retryPolicy.ExecuteAction<TopicDescription>(() => { return nsManager.GetTopic(topicName); });

        createNew = (topic == null);
    }
    catch (MessagingEntityNotFoundException)
    {
        // Looks like the topic does not exist. We should create a new one.
        createNew = true;
    }

    // If a topic with the specified name doesn't exist, it will be auto-created.
    if (createNew)
    {
        try
        {
            var newTopic = new TopicDescription(topicName);

            topic = retryPolicy.ExecuteAction<TopicDescription>(() => { return nsManager.CreateTopic(newTopic); });
        }
        catch (MessagingEntityAlreadyExistsException)
        {
            // A topic under the same name was already created by someone else, perhaps by another instance. Let's just use it.
            topic = retryPolicy.ExecuteAction<TopicDescription>(() => { return nsManager.GetTopic(topicName); });
        }
    }

    return topic;
}

In summary, it is advisable to assess the likelihood of a failure occurring, and determine the feasibility of adding additional resilience. Virtually all messaging operations can be subject to transient conditions. When calling into the brokered messaging API, it is therefore recommended that you take appropriate actions to always provide recovery from intermittent issues.

Sending Messages Asynchronously

In order to take advantage of advanced performance features in the Service Bus such as client-side batching, you should always consider using the asynchronous programming model when implementing a messaging solution using the brokered messaging managed API. The asynchronous messaging pattern will enable you to build solutions that can generally avoid the overhead of I/O-bound operations such as sending and receiving messages.

When you invoke an API method asynchronously, control returns immediately to your code and your application continues to execute while the asynchronous operation is being executed independently. Your application either monitors the asynchronous operation or receives notification by way of a callback when the operation is complete. At this time, your application can obtain and process the results.

It is important to note that when you invoke synchronous operations, for example the Send() or Receive() methods in the QueueClient class (or other synchronous methods provided by Service Bus brokered messaging API), internally the API code flows through the asynchronous versions of the respective methods, albeit in a blocking fashion. However, using the synchronous versions of these methods may not render the full spectrum of performance-related benefits that you can expect when calling the asynchronous versions directly. This is particularly apparent when you are sending or receiving multiple messages and want to perform other processing while the respective messaging operations are being executed.

Double Quote Note

A BrokeredMessage object represents a message, and is provided for the purpose of transmitting data across the wire. As soon as a BrokeredMessage object is sent to a queue or topic, it is consumed by the underlying messaging stack and cannot be reused for further operations. This is due to the fact that once the message body is read, the stream that projects the message data cannot be rewound. You should retain the source data used to construct a BrokeredMessage instance until you can reliably assert the success of the messaging operation. If a failed messaging operation requires a retry, you should construct a new BrokeredMessage instance using that source data.

The following code snippet demonstrates how to send multiple messages asynchronously (as well as reliably) while maintaining the order in which the messages are being sent:

// This sample assumes that a queue client is declared and initialized earlier.

// Declare the list of messages that will be sent.
List<XElement> messages = new List<XElement>();

// Populate the list of messages.
for (int i = 0; i < msgCount; i++)
{
    messages.Add(XDocument.Load(new StringReader(String.Format(@"<root><msg num=""{0}""/></root>", i))).Root);
}

// Declare a list in which sent messages will be tracked.
var sentMessages = new List<XElement>();

// Declare a wait object that will be used for synchronization.
var waitObject = new ManualResetEvent(false);

// Declare a timeout value during which the messages are expected to be sent.
var sentTimeout = TimeSpan.FromMinutes(10);

// Declare and initialize an action that will be calling the asynchronous messaging operation.
Action<XElement> sendAction = null;
sendAction = ((payload) =>
{
    // Use a retry policy to execute the Send action in an asynchronous and reliable fashion.
    retryPolicy.ExecuteAction
    (
        (cb) =>
        {
            // A new BrokeredMessage instance must be created each time we send it. Reusing the original BrokeredMessage instance may not 
            // work as the state of its BodyStream cannot be guaranteed to be readable from the beginning.
            BrokeredMessage msg = new BrokeredMessage(payload, new DataContractSerializer(typeof(XElement)));

            // Send the message asynchronously.
            queueClient.BeginSend(msg, cb, Tuple.Create<XElement, BrokeredMessage>(payload, msg));
        },
        (ar) =>
        {
            // Obtain the state object containing the brokered message being sent.
            var state = ar.AsyncState as Tuple<XElement, BrokeredMessage>;

            try
            {
                // Complete the asynchronous operation. This may throw an exception that will be handled internally by the retry policy.
                queueClient.EndSend(ar);

                // Track sent messages so that we can determine what was actually sent.
                sentMessages.Add(state.Item1);

                // Get the next message to be sent.
                var nextMessage = sentMessages.Count < messages.Count ? messages[sentMessages.Count] : null;

                // Make sure we actually have another message to be sent.
                if (nextMessage != null)
                {
                    // If so, call the Send action again to send the next message.
                    sendAction(nextMessage);
                }
                else
                {
                    // Otherwise, signal the end of the messaging operation.
                    waitObject.Set();
                }
            }
            finally
            {
                // Ensure that any resources allocated by a BrokeredMessage instance are released.
                if (state != null && state.Item2 != null)
                {
                    state.Item2.Dispose();
                }
            }
        },
        (ex) =>
        {
            // Always log exceptions.
            Trace.TraceError(ex.Message);
        }
    );
});

// Start with sending the first message.
sendAction(messages[0]);

// Perform other processing while the messages are being sent.
// ...

// Wait until the messaging operations are completed.
bool completed = waitObject.WaitOne(sentTimeout);
waitObject.Dispose();

if (completed && sentMessages.Count == messages.Count)
{
    // Handle successful completion.
}
else
{
    // Handle timeout condition (or a failure to send all messages).
}

Whenever possible, avoid parallelizing the messaging operations using the default scheduling and work-partitioning algorithms provided by the Task Parallel Library (TPL) and Parallel LINQ (PLINQ). The TPL is primarily designed for adding parallelism and concurrency to compute-bound workloads; using it "as is" to improve the performance of I/O-bound code such as networking calls and messaging operations may not produce the improvements you would expect. The best way to leverage the TPL for asynchronous operations is through the TPL patterns that conform to the asynchronous programming model.
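As a minimal sketch of that last point (assuming the same queueClient and XElement payload used in the sample above, plus a using directive for System.Threading.Tasks), the Begin/End pair can be wrapped with Task.Factory.FromAsync so that the send integrates with the TPL without blocking a thread pool thread:

BrokeredMessage message = new BrokeredMessage(payload, new DataContractSerializer(typeof(XElement)));

// Wrap the APM-style BeginSend/EndSend pair in a TPL task.
Task sendTask = Task.Factory.FromAsync<BrokeredMessage>(queueClient.BeginSend, queueClient.EndSend, message, null);

sendTask.ContinueWith(t =>
{
    // Release the resources held by the brokered message once the send has finished.
    message.Dispose();

    if (t.IsFaulted)
    {
        // Log the failure; a production implementation would also apply the retry policy here.
        Trace.TraceError(t.Exception.GetBaseException().Message);
    }
});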

Receiving Messages Asynchronously

Similar to sending messages asynchronously, and from a practical point of view, you can also extend the use of the asynchronous programming model to receiving messages from the Service Bus.

While waiting for new messages either on a Service Bus queue or subscription, your solution will often be issuing a polling request. Fortunately, the Service Bus offers a long-polling receive operation which maintains a connection to the server until a message arrives on a queue or the specified timeout period has elapsed, whichever occurs first. If a long-polling receive is performed synchronously, it will block the CLR thread pool thread while waiting for a new message, which is not considered optimal. The capacity of the CLR thread pool is generally limited; hence there is good reason to avoid using the thread pool for particularly long-running operations.

To build a truly effective messaging solution using the Service Bus brokered messaging API, you should always perform the receive operation asynchronously. Whether your solution receives one message at a time or fetches multiple messages, you begin the receive operation using the BeginReceive() method with the specified timeout. In the current API, the maximum receive timeout value is 24 days. While the Service Bus messaging client is waiting on your behalf for a new message, your solution can proceed with performing any other work. Upon completion, your callback method will be notified and the message that was received (if any) will be available for processing.

Double Quote Note

Once a message is received from a queue or subscription, its body can only be read once. Due to the nature of network protocols, message data streams are not always “rewindable”, because they do not often support a seek operation. You should secure the message data by placing it into an object after calling the GetBody() method, then keep that object for as long as you need it. Attempting to invoke the GetBody() method more than once is not supported by the brokered messaging API.
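For example (a trivial sketch, assuming a message just received into a variable named receivedMessage that carries a string payload):

// Read the body exactly once and keep the result around for as long as it is needed.
string payload = receivedMessage.GetBody<string>();

// Work only with the 'payload' variable from this point on; calling GetBody<T>() again
// on the same BrokeredMessage instance is not supported.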

The code sample below shows an example of a programming method that asynchronously receives the specified number of messages from a Service Bus queue:

public static IEnumerable<T> Get<T>(QueueClient queueClient, int count, TimeSpan waitTimeout)
{
    // Use a wait semaphore object to report on completion of the async receive operations.
    var waitObject = new ManualResetEvent(false);

    // Use a retry policy to improve reliability of messaging operations.
    var retryPolicy = new RetryPolicy<ServiceBusTransientErrorDetectionStrategy>(RetryPolicy.DefaultClientRetryCount);

    // Create an internal queue of messages we received from the Service Bus queue.
    var queueMessages = new ConcurrentQueue<T>();

    try
    {
        for (int i = 0; i < count; i++)
        {
            // Use a retry policy to execute the Receive action in an asynchronous and reliable fashion.
            retryPolicy.ExecuteAction
            (
                (cb) =>
                {
                    // Start receiving a new message asynchronously.
                    queueClient.BeginReceive(waitTimeout, cb, null);
                },
                (ar) =>
                {
                    // Complete the asynchronous operation. This may throw an exception that will be handled internally by retry policy.
                    BrokeredMessage msg = queueClient.EndReceive(ar);

                    // Check if we actually received any messages.
                    if (msg != null)
                    {
                        try
                        {
                            // Retrieve the message body. We can only consume the body once. Once consumed, it's no longer retrievable.
                            T msgBody = msg.GetBody<T>();

                            // Add the message body to the internal list.
                            queueMessages.Enqueue(msgBody);

                            // With PeekLock mode, we should mark the processed message as completed.
                            if (queueClient.Mode == ReceiveMode.PeekLock)
                            {
                                // Mark brokered message as completed at which point it's removed from the queue.
                                msg.Complete();
                            }
                        }
                        catch
                        {
                            // With PeekLock mode, we should mark the failed message as abandoned.
                            if (queueClient.Mode == ReceiveMode.PeekLock)
                            {
                                // Abandons a brokered message. This will cause Service Bus to unlock the message and make it available 
                                // to be received again, either by the same consumer or by another competing consumer.
                                msg.Abandon();
                            }

                            // Re-throw the exception so that we can report it in the fault handler.
                            throw;
                        }
                        finally
                        {
                            // Ensure that any resources allocated by a BrokeredMessage instance are released.
                            msg.Dispose();
                        }

                        // If we have received the requested number of messages, signal completion.
                        if (queueMessages.Count == count)
                        {
                            // Signal the end of the messaging operation.
                            waitObject.Set();
                        }
                    }
                },
                (ex) =>
                {
                    // Always log exceptions.
                    Trace.TraceError(ex.Message);
                }
            );
        }

        // Wait until all async receive operations are completed.
        waitObject.WaitOne(waitTimeout);
    }
    catch (Exception ex)
    {
        // We intend to never fail when fetching messages from a queue. We will still need to report an exception.
        Trace.TraceError(ex.Message);
    }
    finally
    {
        if (waitObject != null)
        {
            waitObject.Dispose();
        }
    }

    return queueMessages;
}

In line with the recommendation supplied in the previous section, it is best to use the asynchronous programming model integration provided by the Task Parallel Library for parallelizing the asynchronous message receive operation.

Implementing Reliable Message Receive Loops

Through observations from several customer projects leveraging the brokered messaging API, we noticed that message receive logic is often implemented as the same repeated boilerplate, without a sound approach to handling potential anomalies. Generally, such logic doesn't allow for edge cases, for example expired message locks, and can be error-prone if it is not implemented in a robust fashion. The purpose of this section is to provide some specific recommendations around the implementation of reliable message receive logic.

First, it is important to note the two distinct modes in which messages can be received from the Service Bus. These modes are provided by the brokered messaging API to support message delivery using either “At Most Once” (with ReceiveMode.ReceiveAndDelete) or “At Least Once” (with ReceiveMode.PeekLock) semantics.

The first mode is ReceiveAndDelete, which is the simplest model and works best for scenarios in which the application can tolerate a failure in message processing. When using the ReceiveAndDelete mode, the receive action is a single-hop operation during which a message delivered to the client is marked as being consumed and subsequently removed from the respective queue or subscription.

The second mode is PeekLock, which prescribes that a received message is to remain hidden from other consumers until its lock timeout expires. With the PeekLock mode, the receive process becomes a two-stage operation making it possible to support applications that cannot tolerate failed messages. In addition to issuing a request to receive a new message (first stage), the consuming application is required to indicate when it has finished processing the message (second stage). After the application finishes processing the message, or stores (defers) it reliably for future processing, it completes the second stage of the receive process by calling the Complete() method on the received message.
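The receive mode is chosen when the messaging client is created. A sketch, reusing the address and credentials objects from the earlier provisioning sample and hypothetical queue names:

var messagingFactory = MessagingFactory.Create(address, credentials);

// "At Least Once": the message stays locked until Complete() or Abandon() is called.
var peekLockClient = messagingFactory.CreateQueueClient("orders", ReceiveMode.PeekLock);

// "At Most Once": the message is removed from the queue as soon as it is delivered.
var receiveAndDeleteClient = messagingFactory.CreateQueueClient("audit", ReceiveMode.ReceiveAndDelete);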

When you specify PeekLock mode, you should always finalize the successful processing of a message by calling the Complete() method, which tells the Service Bus to mark the message processing as completed. Failure to call the Complete() method on a message received in PeekLock mode will result in the message re-appearing in a queue or subscription after its lock timeout expires. Consequently, you will receive the previously processed message again, and this may result in a duplicate message being processed.

In addition, in relation to PeekLock mode, you should tell the Service Bus if a message cannot be successfully processed and therefore must be returned for subsequent redelivery. Whenever possible, your messaging solution should handle this situation by calling the Abandon() method, instead of waiting until a lock acquired for the message expires. Ideally, you will call the Abandon() method from within a catch block that belongs to the try/catch exception handling construct serving the messaging handling context.

It is important to ensure that message processing happens strictly within the designated lock period. In the brokered messaging functionality introduced with the current release of the Service Bus, the maximum message lock duration is 5 minutes, and this duration cannot currently be extended at runtime. If a message takes longer to process than the lock duration set on a queue or subscription, its visibility lock will time out and the message will again become available to the consumers of the queue or subscription. If you attempt to complete or abandon such a message, you may receive a MessageLockLostException error that indicates there is no valid lock found for the given message.
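If message processing regularly approaches that limit, the lock duration can be raised up to the 5-minute maximum when the queue is created. A sketch, assuming the NamespaceManager and retry policy from the earlier sample and a hypothetical queue name:

var queueDescription = new QueueDescription("orders")
{
    // Request the maximum lock duration currently allowed by the service.
    LockDuration = TimeSpan.FromMinutes(5)
};

var queue = retryPolicy.ExecuteAction<QueueDescription>(() => { return nsManager.CreateQueue(queueDescription); });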

In order to implement a robust message receive loop, it is recommended that you build resilience against all known transient errors as well as any abnormalities that can manifest themselves during or after message processing. This is especially important when receiving messages using PeekLock mode. Because there is always a second stage involved in PeekLock mode, you should never assume that a message successfully processed on the client can be reliably marked as completed in the Service Bus backend. For example, a fault in the underlying network layer may prevent you from completing message processing successfully. Such an implication requires that you handle idempotency edge cases, as you may receive the same message more than once. This behavior is in line with many other messaging solutions that operate in the “At Least Once” message delivery mode.
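One simple way to handle such duplicates (a sketch only; a production solution would need to bound or persist the set of processed identifiers) is to key de-duplication on the MessageId property:

private static readonly HashSet<string> processedMessageIds = new HashSet<string>();

public static bool TryMarkAsProcessed(BrokeredMessage msg)
{
    lock (processedMessageIds)
    {
        // Returns false if this MessageId has already been handled, so the caller can simply
        // complete the duplicate without re-running its business logic.
        return processedMessageIds.Add(msg.MessageId);
    }
}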

You can add additional resilience when calling the Complete() and Abandon() methods by using extension methods. For example:

public static bool SafeComplete(this BrokeredMessage msg)
{
    try
    {
        // Mark brokered message as complete.
        msg.Complete();

        // Return a result indicating that the message has been completed successfully.
        return true;
    }
    catch (MessageLockLostException)
    {
        // It's too late to compensate for the loss of a message lock. We should just ignore it so that it does not break the receive loop.
        // We should be prepared to receive the same message again.
    }
    catch (MessagingException)
    {
        // There is nothing we can do as the connection may have been lost, or the underlying topic/subscription may have been removed.
        // If Complete() fails with this exception, the only recourse is to prepare to receive another message (possibly the same one).
    }

    return false;
}

public static bool SafeAbandon(this BrokeredMessage msg)
{
    try
    {
        // Abandons a brokered message. This will cause the Service Bus to unlock the message and make it available to be received again, 
        // either by the same consumer or by another competing consumer.
        msg.Abandon();

        // Return a result indicating that the message has been abandoned successfully.
        return true;
    }
    catch (MessageLockLostException)
    {
        // It's too late to compensate for the loss of a message lock. We should just ignore it so that it does not break the receive loop.
        // We should be prepared to receive the same message again.
    }
    catch (MessagingException)
    {
        // There is nothing we can do as the connection may have been lost, or the underlying topic/subscription may have been removed.
        // If Abandon() fails with this exception, the only recourse is to receive another message (possibly the same one).
    }

    return false;
}

A similar approach can be extended to shield other messaging operations, such as Defer, from potential failures; a sketch of such an extension appears immediately below. The pattern in which these extension methods can be used is then reflected in the receive loop code fragment that follows, which demonstrates how to implement a receive loop while taking advantage of the additional resilience they provide.
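A possible SafeDefer extension (a sketch only, mirroring the SafeComplete and SafeAbandon methods above):

public static bool SafeDefer(this BrokeredMessage msg)
{
    try
    {
        // Defer the brokered message so that it can be retrieved again later by its receiver.
        msg.Defer();

        // Return a result indicating that the message has been deferred successfully.
        return true;
    }
    catch (MessageLockLostException)
    {
        // It's too late to compensate for the loss of a message lock. We should just ignore it so that it does not break the receive loop.
    }
    catch (MessagingException)
    {
        // There is nothing we can do as the connection may have been lost, or the underlying queue/subscription may have been removed.
    }

    return false;
}

And here is the receive loop itself: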

var waitTimeout = TimeSpan.FromSeconds(10);

// Declare an action acting as a callback whenever a message arrives on a queue.
AsyncCallback completeReceive = null;

// Declare an action acting as a callback whenever a non-transient exception occurs while receiving or processing messages.
Action<Exception> recoverReceive = null;

// Declare a cancellation token that is used to signal an exit from the receive loop.
var cts = new CancellationTokenSource();

// Declare an action implementing the main processing logic for received messages.
Action<BrokeredMessage> processMessage = ((msg) =>
{
    // Put your custom processing logic here. DO NOT swallow any exceptions.
});

// Declare an action responsible for the core operations in the message receive loop.
Action receiveMessage = (() =>
{
    // Use a retry policy to execute the Receive action in an asynchronous and reliable fashion.
    retryPolicy.ExecuteAction
    (
        (cb) =>
        {
            // Start receiving a new message asynchronously.
            queueClient.BeginReceive(waitTimeout, cb, null);
        },
        (ar) =>
        {
            // Make sure we are not told to stop receiving while we were waiting for a new message.
            if (!cts.IsCancellationRequested)
            {
                // Complete the asynchronous operation. This may throw an exception that will be handled internally by retry policy.
                BrokeredMessage msg = queueClient.EndReceive(ar);

                // Check if we actually received any messages.
                if (msg != null)
                {
                    // Make sure we are not told to stop receiving while we were waiting for a new message.
                    if (!cts.IsCancellationRequested)
                    {
                        try
                        {
                            // Process the received message.
                            processMessage(msg);

                            // With PeekLock mode, we should mark the processed message as completed.
                            if (queueClient.Mode == ReceiveMode.PeekLock)
                            {
                                // Mark brokered message as completed at which point it's removed from the queue.
                                msg.SafeComplete();
                            }
                        }
                        catch
                        {
                            // With PeekLock mode, we should mark the failed message as abandoned.
                            if (queueClient.Mode == ReceiveMode.PeekLock)
                            {
                                // Abandons a brokered message. This will cause Service Bus to unlock the message and make it available 
                                // to be received again, either by the same consumer or by another competing consumer.
                                msg.SafeAbandon();
                            }

                            // Re-throw the exception so that we can report it in the fault handler.
                            throw;
                        }
                        finally
                        {
                            // Ensure that any resources allocated by a BrokeredMessage instance are released.
                            msg.Dispose();
                        }
                    }
                    else
                    {
                        // If we were told to stop processing, the current message needs to be unlocked and returned to the queue.
                        if (queueClient.Mode == ReceiveMode.PeekLock)
                        {
                            msg.SafeAbandon();
                        }
                    }
                }
            }

            // Invoke a custom callback method to indicate that we have completed an iteration in the message receive loop.
            completeReceive(ar);
        },
        (ex) =>
        {
            // Invoke a custom action to indicate that we have encountered an exception and
            // need further decision as to whether to continue receiving messages.
            recoverReceive(ex);
        });
});

// Initialize a custom action acting as a callback whenever a message arrives on a queue.
completeReceive = ((ar) =>
{
    if (!cts.IsCancellationRequested)
    {
        // Continue receiving and processing new messages until we are told to stop.
        receiveMessage();
    }
});

// Initialize a custom action acting as a callback whenever a non-transient exception occurs while receiving or processing messages.
recoverReceive = ((ex) =>
{
    // Just log an exception. Do not allow an unhandled exception to terminate the message receive loop abnormally.
    Trace.TraceError(ex.Message);

    if (!cts.IsCancellationRequested)
    {
        // Continue receiving and processing new messages until we are told to stop regardless of any exceptions.
        receiveMessage();
    }
});

// Start receiving messages asynchronously.
receiveMessage();

// Perform any other work. Messages will keep arriving asynchronously while we are busy doing something else.

// Stop the message receive loop gracefully.
cts.Cancel();

The above example implements an advanced approach to receiving messages asynchronously in the order in which they appear on a queue. It ensures that any errors encountered during processing result in the message being abandoned and returned to the queue so that it can be reprocessed. The extra code is justified by its support for graceful cancellation of the message receive loop.

Conclusion

The cloud-based messaging and communication infrastructure provided by the latest release of the Service Bus supports reliable message queuing and durable publish/subscribe messaging capabilities. Because such “brokered” messaging services provided by the Service Bus may be subject to quotas with respect to active concurrent connections maintained by messaging entities (such as a queue, topic or subscription), this article detailed some best practices in managing the specifics involved in the lifecycle of such entities and messaging objects, and provided guidance on building your applications with an awareness of resource efficacy.

Of equal importance, when building solutions that have dependencies on such cloud-based technologies, it’s important to build an element of reliability and resilience into your code, and this article has imparted guidance and some real-world examples on how to do so. Such robust practices help cloud-based applications deal with anomalies that may be out of your control, such as transient network communication errors that can manifest themselves in multi-tenant, cloud-based environments.

Finally, this article has provided best practices to help you design code that is robust and efficient, while leveraging some of the advanced features of the Service Bus, such as sending and receiving messages asynchronously, and implementing reliable message loops as part of that process.

Check back here often as we continue to post more best practices guidance on our blog. As always, please send us your comments and feedback.

Additional Resources/References

For more information on the topic discussed in this blog post, please refer to the following resources:

Authored by: Valery Mizonov
Contributed by: Seth Manheim, Paolo Salvatori, James Podgorski, Eric Lam, Jayu Katti

The original version of Valery’s article was included in my Windows Azure and //BUILD/ Posts for 9/12/2011+ post.


Yves Goeleven (@YvesGoeleven) described Hosting options for NServiceBus on Azure – Shared Hosting in a 9/25/2011 post:

imageYesterday, I discussed the dedicated hosting model for NServiceBus on Windows Azure. Today I would like to introduce to you the second model, which allows you to host multiple processes on the same role.

imageIn order to set up shared hosting, you start by creating a dedicated Azure role which represents the host's controller.

public class Host : RoleEntryPoint{}

The NServiceBus role that you need to specify is a special role, called AsA_Host.

public class EndpointConfiguration : IConfigureThisEndpoint, AsA_Host { }

This role will not start a UnicastBus; instead it will load other roles from Azure blob storage and spin off child processes in order to host them.

The only profile that makes sense to specify in this case is the Production or Development profile, which controls where the logging output is sent. Other behaviors belong strictly to the UnicastBus and its parts, so they cannot be used in this context. Besides this profile, you can also set a few additional configuration settings that control the behavior of the host in more detail.

  • DynamicHostControllerConfig.ConnectionString – specifies the connection string to a storage account containing the roles to load, it defaults to development storage.
  • DynamicHostControllerConfig.Container – specifies the name of the container that holds the assemblies of the roles to load, as .zip files, it defaults to ‘endpoints’.
  • DynamicHostControllerConfig.LocalResource – specifies the name of the local resource folder on the Windows Azure instance that will be used to drop the assemblies of the hosted roles; it defaults to a folder called 'endpoints'.
  • DynamicHostControllerConfig.RecycleRoleOnError – specifies how the host should behave in case there is an error in one of the child processes – by default it will not recycle when an error occurs.
  • DynamicHostControllerConfig.AutoUpdate – specifies whether the host should poll the storage account for changes to its child processes; it defaults to false.
  • DynamicHostControllerConfig.UpdateInterval – specifies how often the host should check the storage account for updates expressed in milliseconds, it defaults to every 10 minutes.
  • DynamicHostControllerConfig.TimeToWaitUntilProcessIsKilled – if there are updates, the host will first kill the original process before provisioning the new assemblies and run the updated process. I noticed that it might take a while before the process dies, which could be troublesome when trying to provision the new assemblies. This setting allows you to specify how long the host is prepared to wait for the process to terminate before it requests a recycle.

Changes to the hosted roles

Any dedicated role can be used as a child process given a few minor changes.

The first change is that all configuration has to be specified in the app.config file. This is intentional; I have removed the ability for the child processes to read from the service configuration file. Why? Well, otherwise every child process would have been configured the same way, as they would share configuration settings: same storage account, same queue, same everything… that's not what you want. Hence the only option is to limit yourself to using the role's app.config. Note that the RoleEnvironment is available from the child processes, so if you want to read the configuration settings from it, just specify the AzureConfigurationSource yourself, but I advise against it.

The second change is that you need to add a reference to NServiceBus.Hosting.Azure.HostProcess.exe to the worker's project. This process will be started by the host, and your role will run inside its process space. Why does NServiceBus force you to specify the host process, can't it do it itself? Well it could, but that could get you into trouble in the long run. If it would decide on the host process for you, you would be tied to a specific version of NServiceBus and future upgrades might become challenging. When you specify it yourself you can just continue to run any version, or customize it if you like; only the name of the process matters to the host.

The provided host process is a .net 4.0 process, but it still uses some older assemblies, so you need to add an NServiceBus.Hosting.Azure.HostProcess.exe.config file to your project, which specifies the useLegacyV2RuntimeActivationPolicy attribute

<?xml version="1.0" encoding="utf-8" ?>
<configuration>
    <startup useLegacyV2RuntimeActivationPolicy="true">
      <supportedRuntime version="v4.0"/>
      <requiredRuntime version="v4.0.20506"/>
    </startup>
</configuration>

The final step is to put the output of the role into a zip file and upload the zip file to the container which is being monitored by the host.

If the host process is running and its AutoUpdate property is set to true, then it will automatically pick up your zip file, extract it and run it in a child process. If the property is set to false, you will have to recycle the instances yourself.

Up until now I've only discussed worker roles and how to host message handlers inside them; next time I'll take a closer look at hosting in a web role and show you how you can create a so-called web worker, where both a website and a worker role are hosted in the same Windows Azure instance.


<Return to section navigation list>

Windows Azure VM Role, Virtual Network, Connect, RDP and CDN

Brian Hitney explained how to Use the Windows Azure CDN for Assets in a 9/26/2011 post:

imageThe most common response to running stuff in the cloud (Azure, Amazon in particular) is that it's too expensive for the little guy. And generally, hosting VMs when a small shared site or something similar will suffice is a tough argument.

imageThere are aspects to Azure, though, that are very cost effective, as they do "micro-scale" very well. A good example of this is the Azure CDN, or more simply, Azure Blob Storage. It's effective for exchanging files, it's effective at fast delivery, and it even provides lightweight security using shared access signatures (links that essentially only work for a period of time). It's durable: not just redundant internally, but externally as well, automatically creating a backup in another datacenter.

For MSDN subscribers, you already have Azure benefits, but even going out of pocket on Blob storage isn’t likely to set you back much: $0.15/GB of storage per month, $0.01/10,000 transactions, and $0.15/GB outbound bandwidth ($0.20 in Asia; all inbound free). A transaction is essentially a “hit” on a resource, so each time someone downloads, say, an image file, it’s bandwidth + 1 transaction.

Because these are micro transactions, for small apps, personal use, etc., it’s quite economical … often adding up to pennies per month. A few typical examples are using storage to host files for a website, serve content to mobile devices, and to simply offload resources (images/JS files) from website code.
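To put rough numbers on that: serving a 50 KB image 100,000 times in a month works out to roughly 4.8 GB of outbound bandwidth (about $0.72 at $0.15/GB) plus 100,000 transactions (about $0.10), with the storage for the file itself costing a fraction of a cent – well under a dollar for the month. (That is an illustrative back-of-the-envelope calculation using the prices above.)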

Depending on usage, the Azure Content Delivery Network (CDN) can be a great way to enhance the user experience. It may not always be the case (and I’ll explain why) but essentially, the CDN has dozens of edge servers around the world. While your storage account is served from a single datacenter, having the data on the edge servers greatly enhances speed. Suppose an app on a phone is retrieving documents/text to a worldwide audience … enabling CDN puts the content much closer.

I created a test storage account in North Europe (one of the Azure datacenters) to test this, using a small graphic from RPA: http://somedemoaccount.blob.core.windows.net/images/dicelogo.png

Here’s the same element via the CDN (we could be using custom DNS names, but for demo purposes we’re not): http://az32338.vo.msecnd.net/images/dicelogo.png

Here’s a trace to the storage account in the datacenter – from North Carolina, really not that bad all things considered:

image

You can see we’re routed to NY, then on across the pond, and total latency of about 116ms. And now the CDN:

image

MUCH faster, chosen not only by physical distance but also network congestion. Of course, I won’t see a 100ms difference between the two, but if you’re serving up large documents/images, multiple images, or streaming content, the difference will be noticeable.

imageIf you’re new to Azure and have an account, creating a storage account from the dashboard is easy. You’d just click on your storage accounts, and enter a name/location:

image

You’d typically pick someplace close to you or where most of your users are. To enable CDN, you’d just click the CDN link on the left nav, and enable it:

image

Once complete, you'll see it on the main account screen with the HTTP endpoint:

image

So why wouldn’t you do this?

Well, it’s all about cacheability. If an asset is frequently changing or infrequently used, it’s not a good candidate for caching. If there is a cache miss at a CDN endpoint, the CDN will retrieve the asset from the base storage account. This will incur an additional transaction, but more importantly it’s slower than if the user just went straight to the storage account. So depending on usage, it may or may not be beneficial.


<Return to section navigation list>

Live Windows Azure Apps, APIs, Tools and Test Harnesses

• Steve Marx (@smarx) described The Flatterist: a Windows Azure App to Stroke Your Ego in a 9/27/2011 post:

logoFor my talk at the BUILD conference last week, I built a fun application in Windows Azure that doles out compliments in my voice with soothing piano music in the background. Lots of people have really enjoyed the application, so I thought I’d share the full source code on GitHub: https://github.com/smarx/flatterist.

You’ll want a modern browser (one with audio tag support) to get the full experience. Note that you can click on the little hash symbol below the compliment to get a permalink for that specific compliment.

If you’re so inclined, please also click the “suggest a compliment” link in the footer and submit your own compliments!

image


• Michael Washam reported Windows Azure PowerShell Cmdlets 2.0 Update Web Cast Now Available in a 9/27/2011 post:



Avkash Chauhan described Uploading Certificate to Windows Azure Management Portal using CSUPLOAD Error - "Key not valid for use in specified state" in a 9/26/2011 post:

imageRecently I was working with someone on Windows Azure SDK 1.5 and VM Role deployment. While trying to upload certificate to Windows Azure Management Portal, the error occurred as below:

C:\Program Files\Windows Azure SDK\v1.5\bin>csupload add-servicecertificate -Connection "SubscriptionID=<Subscription_ID>;CertificateThumbprint=<MGMT_CERT_THUMBPRINT>" -HostedServiceName "testcodewp" -Thumbprint "b28daea93e520d85391987c6a6efb52be9278195"
Windows(R) Azure(TM) Upload Tool version 1.5.0.0
for Microsoft(R) .NET Framework 3.5
Copyright (c) Microsoft Corporation. All rights reserved.

arg[0]="add-servicecertificate"
arg[1]="-Connection"
arg[2]="******************"
arg[3]="-HostedServiceName"
arg[4]="testcodewp"
arg[5]="-Thumbprint"
arg[6]="b28daea93e520d85391987c6a6efb52be9278195"
Uploading service certificate to 'testcodewp'.
Error trying to access certficate. The most likely cause is the private
key is not exportable. Please reimport the certficate with the
private key marked exportable or use the -PublicKeyOnly option if you
do not wish to upload the private key.
Detail: Key not valid for use in specified state.

System.Security.Cryptography.CryptographicException: Key not valid for use in specified state.

at System.Security.Cryptography.CryptographicException.ThrowCryptogaphicException(Int32 hr)
at System.Security.Cryptography.X509Certificates.X509Utils._ExportCertificatesToBlob(SafeCertStoreHandle safeCertStoreHandle, X509ContentType contentType, IntPtr password)
at System.Security.Cryptography.X509Certificates.X509Certificate.ExportHelper(X509ContentType contentType, Object password)
at Microsoft.WindowsAzure.ServiceManagementClient.CloudManagmentClient.<>c__DisplayClass49.<AddCertificate>b__48(IServiceManagement channel, String subId)
at Microsoft.WindowsAzure.ServiceManagementClient.CloudManagmentClient.<>c__DisplayClass4f.<DoAsyncOperation>b__4e(IServiceManagement x, String y)
at Microsoft.WindowsAzure.ServiceManagementClient.CloudManagmentClient.DoOperation[T](Func`3 f, String& trackingId)
at Microsoft.WindowsAzure.ServiceManagementClient.CloudManagmentClient.DoAsyncOperation(Action`2 act)
at Microsoft.WindowsAzure.Tools.CsUpload.ProgramCommands.<>c__DisplayClass2a.<AddServiceCertificateAction>b__25(CloudManagmentClient client)
at Microsoft.WindowsAzure.Tools.CsUpload.ProgramCommands.TryClientAction(CloudManagmentAccount account, Action`1 act)
at Microsoft.WindowsAzure.Tools.CsUpload.ProgramCommands.AddServiceCertificateAction(IList`1 args, IDictionary`2 switches)

imageBased on the error message it was clear that the certificate I had does not support private key export. So, just to test, I used the -PublicKeyOnly option as below and it did work:

C:\Program Files\Windows Azure SDK\v1.5\bin>csupload add-servicecertificate -Connection "SubscriptionID=<SUBSCRIPTION_ID>;CertificateThumbprint=<MGMT_CERT_THUMBPRINT>" -HostedServiceName "testcodewp"
-Thumbprint "b28daea93e520d85391987c6a6efb52be9278195" -PublicKeyOnly
Windows(R) Azure(TM) Upload Tool version 1.5.0.0
for Microsoft(R) .NET Framework 3.5
Copyright (c) Microsoft Corporation. All rights reserved.

arg[0]="add-servicecertificate"
arg[1]="-Connection"
arg[2]="*********************"
arg[3]="-HostedServiceName"
arg[4]="testcodewp"
arg[5]="-Thumbprint"
arg[6]="b28daea93e520d85391987c6a6efb52be9278195"
arg[7]="-PublicKeyOnly"
Uploading service certificate to 'testcodewp'.
Service certificate upload complete.
FriendlyName :
Thumbprint : B28DAEA93E520D85391987C6A6EFB52BE9278195
Subject : CN=Avkash Windows Azure Account
IssuedBy : CN=Avkash Windows Azure Account
ValidFrom : 12/31/2010 11:00:00 PM
ValidTo : 12/31/2014 11:00:00 PM
HasPrivateKey : False

When you use the Windows Azure tools (in the publish wizard) to create a certificate, the private key is exportable in all the certificates it generates. This error can happen only:

- When you created your own certificate and missed the option to make the private key exportable

- You got a certificate which does not have an exportable private key

If you have created your own certificate using makecert, then please add the -pe option.
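For example, a self-signed certificate with an exportable 2048-bit private key can be generated with a command along these lines (an illustration only; substitute your own subject and file names):

makecert -sky exchange -r -n "CN=AzureMgmtCert" -pe -a sha1 -len 2048 -ss My "AzureMgmtCert.cer"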

Learn more @ http://blogs.msdn.com/b/avkashchauhan/archive/2011/09/21/how-to-generate-2048-bit-certificate-with-makecert-exe.aspx


The Bytes by MSDN Team reported Bytes by MSDN: September 27 - Scott Hanselman in a 9/26/2011 post:

Join Tim Huckaby, Founder of InterKnowlogy and Actus, and Scott Hanselman, Principal Program Manager at Microsoft, for a discussion around NuGet, a package manager for the .NET Framework. This free, open source, developer-focused package management system simplifies the process of incorporating third-party libraries into a .NET application during development. This system makes the developer's life so much easier and that much more powerful. Tune in to hear how you can get started with NuGet today!

Open attached file: HDI_ITPro_BytesbyMSDN_mp3_Scott_Hanselman.mp3


Michael Washam described Deconstructing the Hybrid Twitter Demo at BUILD in a 9/22/2011 post (missed when published):

imageMany of you may have watched the Windows Server 8 Session at BUILD and were awed by how cool the Twitter demo delivered by Stefan Schackow was. (Well maybe not awed but at least impressed!) I will admit that it was a fun demo to build because we were using such varied technologies as IIS8 Web Sockets, AppFabric Topics and Subscriptions and of course Windows Azure Worker Roles.
The demo was built by myself and Paul Stubbs.

Here is a screenshot of the application as it ran live:

image

Here is a slide depicting the architecture:

image

Essentially, the demo shows how you could take data from the cloud and pass it to an application behind your firewall for additional processing (Hybrid Cloud Scenario). The method of transferring data we chose was to use Windows Azure AppFabric Topics (a really powerful queue) as the data transport.

One of the beauties of using Windows Azure AppFabric Topics is the ability for multiple clients to receive messages across the topic with a different view of the data using filtered subscriptions.

In our demo we could have client A view our Twitter feed with certain tags enabled while client B had a completely different set enabled.
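As an illustration (this is not the demo's elided source, and the names here are hypothetical), two such filtered subscriptions over the custom "LowerText" property that the worker role stamps on each message (described below) could be provisioned like this:

// Client A only sees tweets tagged #windowsazure.
namespaceManager.CreateSubscription("twitterfeed", "ClientA",
    new SqlFilter("LowerText LIKE '%#windowsazure%'"));

// Client B sees a completely different set of tags.
namespaceManager.CreateSubscription("twitterfeed", "ClientB",
    new SqlFilter("LowerText LIKE '%#build%' OR LowerText LIKE '%#win8%'"));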

So on to the source code!

Within the Windows Azure Worker Role I am using a modified version of Guy Burstein’s example to search Twitter based on a set of hashtags.

Sending Twitter Search Results to a Topic

[Source code elided for brevity.]

That covers the meat of the service but let’s go a bit deeper on how the Topic is created.

The constructor of the TwitterSubscriptionClient is where the initialization of all of the Service Bus classes takes place.

Initialize and Authenticate the Service Bus Classes

[Source code elided for brevity.]

The code below simply tests to see if the topic has already been created and, if it has not, creates it.
As with all things in the cloud, the code below uses some basic retry logic on each operation that operates on a Windows Azure service.

Creating the Topic

[Source code elided for brevity.]

The method below takes the search results from Twitter and sends them individually into the topic.
Note I'm adding an additional property, "LowerText", onto the BrokeredMessage so that on the receiving end I can easily filter for various tags on my subscription.
As the text is all lower case I don’t care whether the tag is #Microsoft or #microsoft.
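As a rough illustration of that idea (not the elided demo code itself; topicClient and the tweet object are assumed):

// Copy the tweet text into a custom property, lower-cased so that subscription
// filters do not have to worry about the casing of hash tags.
var brokeredMessage = new BrokeredMessage(tweet);
brokeredMessage.Properties["LowerText"] = tweet.Text.ToLower();

topicClient.Send(brokeredMessage);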

Sending Tweets through the Topic

[Source code elided for brevity.]

So now we have built a feed that searches tweets looking for specific hash tags (see the project for the Twitter integration code) and we send the results out on the ServiceBus through a topic. How does our on-premises application consume the data?

The code below is client-side JavaScript code that opens a WebSocket to our IIS8 server. The socket.onmessage handler essentially waits on data to be sent back from the server, which it then parses into a JSON object. If the JSON object has a .Key property I update the Stream Insight UI; if not, I update the Twitter feed.
There is also a send() method. The only time I send data to the server is when the user has clicked on one of the hash tags on the UI. This updates a structure that holds the current hash tags filter that I send back to the server via the socket.

Initialize the Web Socket

[Source code elided for brevity.]

When a client logs in to our application, we first verify that it is indeed a web socket request and, assuming it is, create a unique ClientHandler object to facilitate state/communication with the client.

Client Setup

[Source code elided for brevity.]

The implementation of ClientHandler is fairly straightforward. ConfigureServiceBus() sets up the subscriptions to the topic (one for the live feed and one for Stream Insight). The ProcessTopicMessages method is an async method that just waits for input from the user. The only input from the client is the update of the hash tag filter.
If we receive data DeserializeFilter() is called which uses the JSON serializer to deserialize the passed in data to a C# class (TwitterFilter).

Receiving Data from a Web Sockets Client

[Source code elided for brevity.]

GetSubscriptionMessages() is called by ProcessMessagesfromTopic. The method sits in a loop testing to see if the user's filter has changed and, if it has, it will dynamically update the SQLFilterExpression for both subscriptions (removing the existing rules in the process). If not, it queries the Topic for the next Tweet available and passes it back to the user over the web socket using SendMessage(). The GetFilterString method is below as well. It dynamically creates the query for the SQLFilterExpression for the Stream Insight client and the Live Feed client.

Receive Tweets from the Service Bus Topic and Send them over the Web Socket.

[Source code elided for brevity.]

GetIncomingTweet waits for a short period of time looking for Tweets. Once it receives one it serializes it into a JSON object to be processed by the browser client.

Pull the Tweet off the Topic and pass the JSON-serialized version back.

[Source code elided for brevity.]

Stream Insight is configured in a method called StartStreamInsight. We have an adapter for Stream Insight in the project that reads events from the Service Bus Topic, which allows SI to perform calculations on the events we pass in. The query it is executing is the LINQ query below, which basically tells it to calculate how many times each hash tag has been tweeted in the last 60 seconds and to recalculate that value every 1 second. Very powerful!

Stream Insight LINQ Query

[Source code elided for brevity.]

Finally, the method to send the data back to the client over the web socket.

Using socket.SendAsync to send data over the web socket.

[Source code elided for brevity.]

If you would like to try this demo out for yourself, I've uploaded the TwitterFeed project (Dev 10) and the Web Sockets Demo (Dev 11). Since this project uses Web Sockets it requires IIS 8 – tested with the Windows Server 8 Developer Preview build along with the Visual Studio Developer Preview. Both can be downloaded from MSDN.

You will also need to install Stream Insight from http://www.microsoft.com/download/en/details.aspx?id=26720; download and install the 1033\x64\StreamInsight.msi package. Use all defaults and "StreamInsightInstance" as the instance name.


<Return to section navigation list>

Visual Studio LightSwitch and Entity Framework 4.1+

Paul Patterson delivered Microsoft LightSwitch – The Business Analyst Perspective in a 9/27/2011 post:

imageIn every conversation I have with others regarding LightSwitch, the question of value comes up. People want to know what the value is in using LightSwitch relative to using some other tools. Most conversations end up with the other person drawing comparisons to what they are familiar with using today. Well, here is what I commonly express to others when this context of how LightSwitch compares to their favourite tool comes up…

My Story…

image222422222222I have many years of business analysis experience, including some years with an official title of Business Analyst (BA). In fact, the last 20 or so years have included roles and responsibilities that, if not directly titled and scoped as a BA, have crossed many of the roles and domains that a BA would typically be assigned to.

Interestingly, however, my career path has taken me from roles that are more analytical in nature to roles that are more technical. It seems that most people go in the opposite direction: starting out as a lower-level software developer and then progressing into more high-level roles as a business analyst, for example. The path I am on is a hilly one: analytical, then technical, and now more analytical but with a very technical context.

The route I have taken, I believe, has conditioned me to view problems from many different perspectives. As a BA I would take a top-down approach and look at the big picture when analyzing a problem. As a software developer I would analyse something by breaking it down into small logical chunks and then assembling them to solve the problem – a bottom-up approach. As a BA I would ensure that my deliverables were measurable at the macro level. As a technologist I would make sure that my deliverables achieved very specific and granular expectations and requirements.

It is that multi-domain breadth of knowledge and experience that provides me the insight into things that some technologists may not have. Problem solving has become a much more intuitive approach for me, as opposed to a methodical approach – at first anyway. I take in as much information as I can and then let the information stew in my brain for a little bit. Then, I listen to what my gut is telling me (but usually not after eating something greasy). It is usually that instinctive approach that tells me where a root cause exists. That been-there-and-done-that past has absolutely contributed to that instinctive conditioning.

Money Talks!

So why am I telling you this? Well, because I think it is important to understand why I have taken a keen interest in LightSwitch. When I first started learning about LightSwitch, it was my gut that was telling me that this product will be something to keep an eye on. The value proposition that LightSwitch offers is one that makes complete sense – build professional quality software quickly for the desktop or the cloud. My gut instinct was saying, "Hey, I can create a software solution for someone in less time than what I have done in the past. That should be an easy sell – and I can make more money, more quickly."

Here is an argument for you subjective techno-monkeys out there…

Let's say you are a technology professional who is very experienced and proficient at creating line of business software solutions. For argument's sake, let's say that you are one of those professionals who are at the top 10% of the scale of your industry. You can create a solution that exceeds all industry best practice expectations, and can deliver a solution well within budget and time constraints.

A customer has asked you to submit a proposal for a solution that you have estimated to cost around $200,000, and to be delivered in 6 months. You've taken a good look at the RFP and you have carefully estimated your costs based on the required function points. You even have an "in" with the CIO of the publicly traded organization, whose daughter goes to the same pre-school as yours. Your proposal gets submitted to the organization.

Now let's say I come along with a proposed solution that utilizes LightSwitch as the technology used for the solution development. Understanding that all the same requirements can be met with LightSwitch, I determine that the solution can be delivered in 2 months, at a cost of $45,000.

Interesting. That high-level executive you know may be the one to champion your cause, but how would you realistically expect that person to objectively argue that the organization would have to spend more money and more time to deliver a solution that would take longer to provide an ROI? Come on, really? I have experienced situations where people have tried to champion a subjective “favourite”, only to fall flat on their faces when being accountable for making the “investment” – and seen some lose their jobs because of it!

No Favouritism Here

It is certainly hard not to be objective when considering the value proposition of LightSwitch. We all have favourites, but when it comes down to making those business investment decisions, like in the scenario above, money talks! With LightSwitch, my objective thinking is telling me that LightSwitch will give me that competitive advantage that most of my competitors will not have.

Since following the evolution of LightSwitch, from pre-beta to the official release this year, my objective arguments bin has been filling up. And with my experience at actually using the product, my subjective favouritism for the tool has also increased. I enjoy how easy it is to use, and how quickly I can get things done.

Notwithstanding, I still believe in using the right tools for the job. I still use other tools where necessary, but only if necessary. Again, I want to get things done as quickly and cheaply as possible while still meeting and exceeding customer expectations. So far, LightSwitch has enabled me to achieve that.

My current role is one of a solution architect. My most recent solution is a simple Windows Forms one. I could have easily met most of the requirements with LightSwitch, but not all requirements. LightSwitch fell a little short of solving just a few of the functional requirements. However, give it a couple of years and I am sure that the evolution of the tool could have solved those requirements.

Acceptance

Even at its early stage of evolution, LightSwitch has already provided a tremendous value proposition. The next couple of years will be very interesting though. The already huge community of people developing for LightSwitch suggests that the LightSwitch ecosystem is quickly growing and evolving. Extensions are being created every day and before long just about any kind of requirement scenario may be solved by simply enabling something that someone else has already created.

I am not saying that you should get on yet another bandwagon. I am, however, saying that this product should not be lightly perceived nor thought of as just another developer tool. I anticipate that this product will have a relatively big impact on the development industry. How can I tell? Well, I have presented LightSwitch to a number of groups of professional developers. Most often I see looks of wide-eyed bewilderment and "…this is the first I have heard of this…" reactions. Even more surprising is the look of "…holy crap! I think I missed the boat on something…". Some people even look like they are threatened by it!

Regardless of the reaction to hearing about LightSwitch, I always ask people to keep an open mind when reviewing LightSwitch. Again, it is about the value proposition that the tool provides now, and in the near future. I am certainly not going to miss the boat on this one, and I think that if you look at all things objectively, your gut will tell you the same.

What say you?


Glenn Gailey (@ggailey777) reported OData Quickstart for Windows Phone Updated—Now with VB! on 9/26/2011:

imageI'm not sure if I've mentioned it here on my blog yet or not, but the forthcoming release of Windows Phone 7.5 ("Mango") and the Windows Phone SDK 7.1 feature huge improvements for consuming OData feeds. This means that, in Mango, OData support on the phone is basically equivalent to Silverlight 4 and .NET Framework 4 (asynchronous only).

Here’s a list of what has gotten (much) better:

  • Add Service Reference—it works now! No more having to use DataSvcUtil.exe (in most cases) and manually add your references. This is sure a welcome sight when writing a Windows Phone app:
    image
  • LINQ is back!—they (finally) added LINQ support to phone, so welcome back to DataServiceQuery<T> and the ease of composing queries against entity sets returned by a strongly-typed DataServiceContext. Now, no more having to manually compose URIs for queries; check it out (a code sketch also follows this list):
    image
  • DataServiceState* works much better—if you ever tried to use the Save and Restore methods on the old DataServiceState object, they weren’t really ready for prime time. The new DataServiceState object has methods that are explicitly named Serialize and Deserialize, which do just what they say. Serialize returns you a true string-based serialization of everything being tracked by the context, and Deserialize now returns a correctly re-hydrated context, including nested collections.
  • Authenticated requests—new support for attaching credentials to the DataServiceContext using the Credentials property (like you can do in Silverlight 4). The client uses these credentials to set the Authorization header in the request. Before this, you had to set this header yourself.
  • Compression* works—well, technically it now CAN work, but there is no “in the box” support and you need to track down your own compression provider. However, this is good news because I wasted an entire day trying to make compression work in WP7—totally blocked. For more info on how to make this work see this topic (until I can get something published).
  • Now, available in your SDK!—the previous version of the library was published as a separate download. Now, the OData library is a first-class citizen and ships in the Windows Phone SDK 7.1.
    * This denotes a Windows Phone-only OData functionality.
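To make the LINQ and authentication bullets above concrete, here is a minimal sketch of the Mango-era client pattern. It is my own illustration rather than code from the official quickstart; NorthwindEntities and Customer stand in for whatever proxy classes Add Service Reference generates for your feed, and the public Northwind sample service is used as a stand-in target.

using System;
using System.Data.Services.Client;
using System.Linq;
using System.Net;

public class CustomerLoader
{
    // NorthwindEntities and Customer are hypothetical Add Service Reference proxy types.
    private readonly NorthwindEntities context =
        new NorthwindEntities(new Uri("http://services.odata.org/Northwind/Northwind.svc/"));

    public void LoadGermanCustomers()
    {
        // Optional: attach credentials; the client sends them in the request header for you.
        // context.Credentials = new NetworkCredential("user", "password");

        // Compose the query with LINQ instead of hand-building a URI.
        var query = (DataServiceQuery<Customer>)
            (from c in context.Customers
             where c.Country == "Germany"
             orderby c.CompanyName
             select c);

        // Everything executes asynchronously on the phone.
        query.BeginExecute(OnQueryCompleted, query);
    }

    private void OnQueryCompleted(IAsyncResult result)
    {
        var query = (DataServiceQuery<Customer>)result.AsyncState;
        foreach (Customer customer in query.EndExecute(result))
        {
            // Marshal to the UI thread and bind to your view model in a real app.
        }
    }
}

The same context instance is also what you hand to the new DataServiceState Serialize method when your app is tombstoned, so everything it is tracking survives the round trip.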

To better highlight these most excellent OData improvements in Mango, I am in the process of getting the “official” OData on Windows Phone quickstart updated for the pending release. While I wait for these updates to go live, I went ahead (to help out the folks rushing to write Mango-based apps) and updated the existing sample project to the new version, which you can find here:

Attention Visual Basic programmers!
Please notice that there is now also a VB version of this sample published (for a total of something like four VB samples for phone). I am definitely not primarily a VB guy, but I am proud to say that I DID NOT use a converter to create this VB app. So, if you find code in there that looks like it was definitely written by a C# guy or is just plain bad, please leave me a note on the sample project’s Q and A page.

As you will see, this update to Windows Phone and the SDK makes writing OData apps for phone tremendously more fun.


Frans Bouma (@FransBouma) posted LLBLGen Pro, Entity Framework 4.1, and the Repository Pattern on 9/26/2011:

imageNormally I don't use this blog to post short messages with a link to other blogposts, but as this blogpost is worth reading, I make an exception ;)

Matt Cowan has written an in-depth post about using LLBLGen Pro with Entity Framework to create a system using the repository pattern. Highly recommended for every LLBLGen Pro and/or Entity Framework user!


<Return to section navigation list>

Windows Azure Infrastructure and DevOps

• My (@rogerjenn) Configuring Windows 8 Client Developer Preview Features post of 9/27/2011 describes how to set up Windows 8 for developing Windows Azure apps:

imageIf you plan to create Windows Azure, Windows Phone 7+, or other desktop and Web applications with Visual Studio 2010 or 2011 under Windows 8 (Client) Developer Preview, you’ll need to add a substantial number of Windows Features beyond those Setup installs by default.

imageThe feature addition process for Windows 8 (client) is quite similar to that of Windows 7, but Windows 8 offers a slightly different set of default and optional features. This post also covers installation of:

  • Windows Azure SDK v1.5 and its Tools for Visual Studio 2010
  • Windows Azure Toolkit for Windows 8
  • Windows Azure Platform Training Kit (WAPTK), September 2011 release
  • Windows Phone 7.1 Toolkit (RC) for Visual Studio 2010* (see below)

Note: There is a serious bug (cycle) in the WAPTK’s Dependency Checker detection and installation process with Windows 8 running in a VM created with Windows Server 2008 R2 Enterprise Hyper-V OS. This bug prevents running the WAPTK Dependency Checker. See steps 28 through 37 at the end of this post.


Update 9/27/2011 12:45 PM PDT: Attempts to run the WAPTK Dependency Checker on a second WAPTK installation on Windows 8 Developer Preview on a physical client (not a VM) failed as above. This bug was reported to the Microsoft Connect Windows Ecosystem Readiness Program’s BUILD Attendees group (WERP: BUILD Attendees) on 9/27/2011.

* The Windows Phone 7.1 Emulator won’t run under Windows 8:

image

The KB article linked by the dialog relates to problems during installation of the WP 7.1 Emulator, not running it. …

The post continues with 38 illustrated steps to get ready for cloud development with Visual Studio 2010 running under Windows 8 Developer Preview.


• My (@rogerjenn) Problems Connecting to Windows 8 VMs Isn’t Related to Developer Previews; Windows 2008 R2 SP1 Is at Fault of 9/26/2011 begins:

imageI reported inability to connect to VMs created by Windows 2008 R2 SP1’s Hyper-V subsequent to a Windows Update of 9/23/2011 in my Unable to Connect to Windows 8 Client or Server Virtualized with Windows 2008 R2 Hyper-V post of 9/24/2011, updated 9/24 and 9/25. All machines are members of the oakleaf.org domain managed by a domain controller running Windows 2003 Server R2 Enterprise Edition with SP2.

imageThe problem isn’t related to the Windows Developer Preview OSes. It’s an issue with how Windows 2008 Server R2’s Hyper-V hypervisor started handling default user credentials after a Windows Update of 9/23/2011.

The workaround is to edit the machine’s Local Computer Policy/Computer Configuration/Administrative Templates/System/Credentials Delegation template with GpEdit.msc to:

  • Allow Delegating Default Credentials with NTLM-only Server Authentication
  • Allow Delegating Default Credentials
  • Allow Delegating Saved Credentials
  • Allow Delegating Saved Credentials with NTLM-only Server Authentication

For details, see the “Workaround” section near the end of [the original] post.

The post continues with illustrated instruction for applying the fix.


Nathan Totten (@ntotten) described Running Processes in Windows Azure in a 9/27/2011 post:

imageOne of the little known features of the Windows Azure SDK 1.5 (September 2011) Release is the ability to directly run executables on startup without writing custom code in your WorkerRole or WebRole entry point.

image

This feature is facilitated by a new section in the service definition for the roles. You can see the new section and subsections below.

<Runtime executionContext="[limited|elevated]">
  <Environment>
	 <Variable name="<variable-name>" value="<variable-value>">
		<RoleInstanceValue xpath="<xpath-to-role-environment-settings>"/>
	  </Variable>
  </Environment>
  <EntryPoint>
	 <NetFxEntryPoint assemblyName="<name-of-assembly-containing-entrypoint>" targetFrameworkVersion="<.net-framework-version>"/>
	 <ProgramEntryPoint commandLine="<application>" setReadyOnProcessStart="[true|false]" />
  </EntryPoint>
</Runtime>

For purposes of this example we are going to duplicate the simple NodeJS worker role that was created in a previous post, but we are going to use the new ProgramEntryPoint functionality.

To begin we create a standard Windows Azure project with a single Worker Role.

SNAGHTML1c4d83

SNAGHTML1d1c08

Our starting point is a standard Worker Role.
image

The first thing we need to do is delete the WorkerRole.cs file. We won’t be using that file or any other C# code for this project.

image

Next, we add our node.exe and app.js files. Make sure to set the Build Action to “Content” and Copy to Output Directory to “Copy if newer”.

image

Next, we need to tell Windows Azure to run our node application after the role starts. To do this open the ServiceDefinition.csdef file.

<?xml version="1.0" encoding="utf-8"?>
<ServiceDefinition name="WindowsAzureProject11" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
  <WorkerRole name="WorkerRole1" vmsize="Small">
    <Imports>
      <Import moduleName="Diagnostics" />
    </Imports>
  </WorkerRole>
</ServiceDefinition>

Now add the following lines to the csdef file. These lines tell Windows Azure that your role entry point is “node.exe app.js” and that when that process starts the role is ready. Note that we also added an endpoint for NodeJS on port 80 and removed the diagnostics import.

<?xml version="1.0" encoding="utf-8"?>
<ServiceDefinition name="WindowsAzureProject11" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
  <WorkerRole name="WorkerRole1" vmsize="Small">
    <Runtime executionContext="limited">
      <EntryPoint>
        <ProgramEntryPoint commandLine="node.exe app.js" setReadyOnProcessStart="true" />
      </EntryPoint>
    </Runtime>
    <Endpoints>
      <InputEndpoint name="NodeJS" protocol="tcp" port="80" />
    </Endpoints>
  </WorkerRole>
</ServiceDefinition>

Finally, we are ready to deploy our project. After you deploy the project you will see your NodeJS server running all without writing a single line of .Net code.

Another thing to note is that if your NodeJS server shuts down for any reason the Worker Role will recycle and restart the process. This way you don’t have to worry about restarting the services yourself.

While our example used NodeJS, you can use pretty much any process like this. There are also ways of using .Net code in this fashion to make it even easier to transition your existing application to Windows Azure.

You can see my working demo deployed here: http://simplenodejs.cloudapp.net/


David Linthicum (@DavidLinthicum) asserted “Cloud services save on greenhouse gas emissions, but on its own, the environmental rationale won't cut it in business” in a deck for his Green is good -- but it's no reason to go to the cloud article of 9/27/2011 for InfoWorld’s Cloud Computing blog:

imageNew research from the CDP (Carbon Disclosure Project) tells us again what most of us already know: Cloud computing can slash CO2 emissions. The report suggests U.S. companies could save 85.7 million metric tons of CO2 annually by moving to the cloud. That's the equivalent of using 200 million barrels of oil.

imageHowever, most interesting is the fact that many of the companies participating in the research said improving their environmental performance was not their primary motivation for moving to the cloud. Indeed, in my experience, it's never the primary reason. If anything, it is a secondary argument that IT rolls out when asking for the money to make the migration. I call this the "you love the planet, don't you?" argument for cloud computing.

I'm going to come right out and say it: The motivation in moving to the cloud these days is more around cost savings and hype. Green computing is a by-product, but never the objective. I even suspect that if cloud computing was proven to increase CO2 emissions, we'd be moving to public clouds anyway.

This is not to say corporations are greedy entities that don't care about the environment, but they have to be true to their mission in returning shareholder equity. This means they will use the most effective and cost-efficient computing resources. If those resources happen to be "green," all the better. But the greenness of cloud computing is never on the critical path.

That's a good thing. The only way technology will be viable is if it provides clear and quick value to the business. If that technology is green, great. If not, then be sure not to mention that fact.


Chris Czarnecki described Microsoft Release Azure SDK 1.5 in a 9/25/2011 post to the Learning Tree blog:

imageAnybody who has been working with Azure from its earliest release will appreciate that both the platform and the associated development tools have improved beyond recognition from the early days. It's good to know that the rate of development is being maintained by Microsoft, and the recent release of the Windows Azure SDK 1.5 and the associated Azure tools for Visual Studio 2010 has added some great new features.

image

Considering the SDK first, it adds a number of new features, including:

  • A new architecture for the emulator which more closely resembles the target Azure cloud
  • Improved performance of emulator
  • Support for uploading service certificates

imagePerformance of the emulator has always been a criticism of the SDK, so this is a particularly welcome development. For the Visual Studio 2010 Azure tools, improvements include:

  • New project type for ASP.NET MVC 3 roles
  • Profile applications running on Azure
  • Add an Azure deployment project to standard MVC, Web Forms and Web Service projects

The new features provide significant improvements in development productivity, in the types of applications that can be built for Azure, and in the management tools for monitoring deployed applications. All combined, these provide an incredibly powerful platform for anybody developing Web applications or Services using .NET technology. The benefits, both technical and commercial, to organisations are significant. If you would like to find out more about developing for Azure, why not consider attending the four-day Learning Tree course developed by Azure authority Doug Rehnstrom? I am sure you will learn a lot.


<Return to section navigation list>

Windows Azure Platform Appliance (WAPA), Hyper-V and Private/Hybrid Clouds

image

No significant articles today.


<Return to section navigation list>

Cloud Security and Governance

No significant articles today.


<Return to section navigation list>

Cloud Computing Events

Jim O’Neil (@jimoneil) reported on his TechWave 2011: PowerBuilder and the Cloud presentation on 9/26/2011:

imageIt was great to see so many former colleagues and customers at SAP TechEd and Sybase TechWave a little over a week ago in Las Vegas. Thanks especially to those that sat in on my two ‘cloudy’ PowerBuilder talks; I hope they were informative and gave you some ideas about how various features of the Windows Azure cloud can be relevant to your PowerBuilder applications, even those you may have built years ago.

Sybase TechWave 2011My slides and demos are available – where else – in the cloud at: http://azurehome.blob.core.windows.net/presentations/TechWave2011. This post provides a quick overview of each of the demos, in case you’d like to try to run them on your own using the code in the download. You’ll be able to run some of the samples out-of-the-box, but if you really want to kick the tires of Windows Azure, I recommend downloading the Windows Azure Tools for Visual Studio 2010 and getting your free trial account. Then you can run my samples as well as experiment with your own code. Let me know what great applications you come up with, and if you run into issues with getting my samples to run, I’d be happy to help.

PowerBuilder Connection to Windows Azure

There’s wasn’t really any code to this, I just demonstrated how to create a database on SQL Azure and then connect to it via the PowerBuilder Database painter. We used the pipeline to create the EAS Demo DB Customer table in SQL Azure and then view the data in the painter via the ODBC driver. It’s really that easy to connect, since SQL Azure uses the same native Tabular Data Stream (TDS) protocol that SQL Server does.

The trickiest part is specifying the correct server name (which ends in database.windows.net) and the user name, which must be in the format user@server; you can always get the ODBC connection string (see below) from the Windows Azure portal via the View Connection String… button in the Properties pane for the selected database. Note: you will not be able to connect to the database noted below since the firewall rules are not set up to support it, and I’ve not given you my password! If you provision a Windows Azure trial account though, it will take you only minutes to get to this point.

SQL Azure Connection Strings

Upon connecting in the Database painter, you will get an error that the catalog tables cannot be created; this occurs because one or more of them do not include a clustered index, which is a requirement of all tables in SQL Azure. The tables can be created via the standalone scripts available with the PowerBuilder installation after you update them to specify a clustered index on each table.

If you try the SQL Native Client driver (SNC), you’ll get an error in the Database painter because the driver makes use of cross-database references to tempdb to populate the table list (and likely other metadata), and that construct is not supported in SQL Azure. If you build your DataWindows with the ODBC driver and execute them with the native driver, that seems to work. Note, I haven’t tested all data types thoroughly, and this is currently an unsupported database as far as PowerBuilder is concerned.

One last thing to keep in mind is that if you connect from a PowerBuilder Windows application to a SQL Azure database, you’re making a trip to one of the six data centers across the earth (you pick which one when you create the server). Latency and timeouts are therefore more likely to be an issue than with your on-premises, client-server setups. Keep that in mind if you design desktop applications to access SQL Azure, and be sure to code defensively!
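For .NET readers following along, here is a minimal ADO.NET sketch of the same connection conventions discussed above — the tcp: prefix, the server name ending in database.windows.net, and the user@server login format. The server, database, and credential values are placeholders, not the ones from the session:

using System;
using System.Data.SqlClient;

class SqlAzureSample
{
    static void Main()
    {
        // Placeholder values; SQL Azure speaks the same TDS protocol as SQL Server,
        // so the standard SqlClient provider connects just as it would on-premises.
        const string connectionString =
            "Server=tcp:myserver.database.windows.net;" +
            "Database=EASDemoDB;" +
            "User ID=myuser@myserver;" +   // note the user@server format
            "Password=<your-password>;" +
            "Encrypt=True;";

        using (var connection = new SqlConnection(connectionString))
        {
            connection.Open();
            using (var command = new SqlCommand("SELECT COUNT(*) FROM Customer", connection))
            {
                Console.WriteLine("Customers: {0}", command.ExecuteScalar());
            }
        }
    }
}

The same latency and timeout caveats apply here, so wrap calls in retry logic and keep the defensive-coding advice above in mind.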

PowerBuilder and Windows Azure Blob Data

In this sample (PB_and_BlobStorage), I built a simple PowerBuilder 12.5 Classic application to access my Windows Azure blob storage account. A number of the containers in this account are marked for public access, so it’s very easy to use the GetUrl method in PowerBuilder (which has been around since PowerBuilder 5!) to retrieve the data and populate a DataWindow (I used an XML import template here).

Windows Azure blob viewer

Try changing the directory to “presentations” in the window above, and you’ll see a list of PowerPoint and other files that I’ve hosted for distribution via Windows Azure.

PowerBuilder Web Service Deployed to Windows Azure

Faux Windows Phone Client to Windows Azure
In this demo (PB_WebService) I faked out a Windows Phone 7 interface with PowerBuilder 12.5 Classic to access a PowerBuilder Web Service that was deployed to Windows Azure. The service code in pokerhand.pbl was deployed as a .NET Web Service (the older .asmx type, not a WCF service in this case – though that would be possible in the .NET version of PowerBuilder 12.5). The client code in fauxphoneclient.pbl uses the SOAP client capability of PowerBuilder to invoke that Web Service hosted in Windows Azure.

Unfortunately, at this point PowerBuilder doesn’t have a strong deployment story for Windows Azure, so there’s a bit of manual intervention needed to get the service running in the cloud. The approach I used involved the following steps, which are ok for demos but not really viable for a production system:

  1. Create an MSI for the PowerBuilder Web Service (an option in the PowerBuilder project painter).
  2. Generate an MSI using the PowerBuilder Runtime Packager to collect the required PowerBuilder runtime files, which, of course, will not be present by default in the cloud.
  3. Incorporate the two MSI files as content files in a Visual Studio 2010 Web Role project (see the vs directory for this demo). Note, you don’t need to write a lick of C# code, the project is merely the vehicle to move your PowerBuilder files to Windows Azure.
  4. Deploy the Web Role project from Visual Studio, which also pushes the two MSI files to a Virtual Machine in the cloud.
  5. Remote Desktop to the Web Role in the cloud and run the two MSI installers interactively. This is currently required since the Web Service MSI installer cannot be run in silent mode.

The demo code and service should be operational, so you can certainly run the application as is; if you have problems let me know. If you are interested in following the steps I mentioned above, you’ll need your own Windows Azure account to perform them.

Note that with a few tweaks to the PowerBuilder MSI installers – primarily the support of silent installation – the deployment can be completely automatic as well as resilient in the event of VM reboots or fail-overs in Windows Azure (which is not the case using the mechanism above). If you just can’t wait for native support in PowerBuilder and would like further details about a more production-ready approach to setting up a PowerBuilder Web Service in the cloud (using the Windows Azure VM role), please contact me and I can elaborate.

PowerBuilder and the Service Bus

This demo (PB_ServiceBus), as I expected, is where all audience control was lost! The scenario here involves a PowerBuilder 12.5 .NET application that hosts a small WCF service (implemented by n_AlertService) with a simple interface to accept a color value and a message. When that service is invoked (via a separate ASP.NET application running at http://alertjim.cloudapp.net/ in Windows Azure), a new window pops up on the client machine displaying the message within a window of the desired background color.

Service Bus example output

That may sound underwhelming, but the key here is that the service can be invoked from anywhere even though the machine hosting the service is behind a firewall and not otherwise publically accessible! For instance, if you run the PowerBuilder application on your home network behind your firewall, behind your ISP’s firewalls, etc., and then text your buddy across the country to visit the public site http://alertjim.cloudapp.net/, he can send a message directly to your machine!

If you look at the PowerBuilder code, behind the cb_listen button on w_window, you’ll see the code to host the service with an endpoint URI constructed as follows:

endpointUri[1] = ServiceBusEnvironment.CreateServiceUri("sb", "techwave", "AlertService")

That translates to a URL of sb://techwave.servicebus.windows.net/AlertService, which is the same endpoint that the Visual Studio web application talks to as a client with the following code (note especially Line 4):

image
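The code screenshot doesn’t survive in this excerpt, so purely as an illustration, here is a hypothetical sketch of what a Service Bus relay client along those lines might look like with the 1.x AppFabric SDK. The contract name, issuer name, and key are my placeholders rather than what the TechWave demo actually used; the point to notice is the line that builds the same sb://techwave.servicebus.windows.net/AlertService address:

using System;
using System.ServiceModel;
using Microsoft.ServiceBus;

[ServiceContract]
public interface IAlertService
{
    [OperationContract]
    void ShowAlert(string color, string message);
}

public static class AlertClient
{
    public static void SendAlert(string color, string message)
    {
        // Same relay endpoint the PowerBuilder application listens on.
        Uri endpointUri = ServiceBusEnvironment.CreateServiceUri("sb", "techwave", "AlertService");

        // Shared-secret credentials issued by the Windows Azure portal (placeholder values).
        var credentials = new TransportClientEndpointBehavior
        {
            CredentialType = TransportClientCredentialType.SharedSecret
        };
        credentials.Credentials.SharedSecret.IssuerName = "owner";
        credentials.Credentials.SharedSecret.IssuerSecret = "<issuer-key>";

        var factory = new ChannelFactory<IAlertService>(
            new NetTcpRelayBinding(), new EndpointAddress(endpointUri));
        factory.Endpoint.Behaviors.Add(credentials);

        IAlertService channel = factory.CreateChannel();
        channel.ShowAlert(color, message);

        ((IClientChannel)channel).Close();
        factory.Close();
    }
}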

That endpoint is simply a namespace hosted in the Windows Azure cloud and created declaratively using the Windows Azure Portal – yes it’s that easy!

Service Bus configuration in Windows Azure portal

Note that only one application can establish the endpoint listener at a time, so if someone else happens to be running the PowerBuilder example at exactly the same time you try, you’ll be greeted with the following dialog. Accommodating this scenario isn’t hard, but it will require that you set up a Service Bus endpoint in your own account to experiment further.

Exception raised when trying to create another listener


PowerBuilder and RESTful data

My PowerBuilder and RESTful data application (PB_Netflix) focused on using the new functionality to create a REST client in PowerBuilder 12.5 .NET. My good friend and PowerBuilder guru, Yakov Werde, has written some excellent articles on how to exercise this functionality, so I’ll refer you to that for the mechanics. The specific service I used is publically available from Netflix, http://odata.netflix.com/

PowerBuilder Netflix sample


PowerBuilder and OData

While Netflix is an OData source, the mechanism I used for the prior demo didn’t really exploit the fact that it is, in fact, OData. OData itself is a huge gateway to an amazing number of data providers including SharePoint 2010, SAP Netweaver Gateway, and a number of free and subscription data services at the Windows Azure DataMarket. Although PowerBuilder itself does not yet have explicit OData support (that was cited as a proposed feature for PowerBuilder 15), it’s *just* HTTP and XML (or JSON), so there are no real technical barriers.

One way to incorporate an OData source today in your PowerBuilder .NET application is to:

  • Create a .NET Class Library and generate a proxy class using the Add Service Reference.. functionality for the REST endpoint (e.g., http://odata.netflix.com/v2/Catalog).
  • Add methods to this class library to perform the desired queries on the underlying data source.
  • Add the assembly generated from Visual Studio as a reference to your PowerBuilder 12.5 .NET application.
  • Instantiate the .NET class you created in the second step above, and invoke the desired query method.
  • Access the data returned via the .NET proxy classes.

I’ve done just that with a demo (PB_DataMarket) that leverages crime statistics from Data.gov. Note to run the sample, you’ll have to get your own (free) account on the Windows Azure DataMarket, subscribe to the data set, and modify the code behind the command button to refer to your Live ID and account key that is assigned when you subscribe to the DataMarket.

OData sample

The code in the Visual Studio project is pretty straightforward, and of course, you can add additional query functionality and parameters. When PowerBuilder .NET supports extension methods, you should be able to write the code below directly in PowerScript (at the moment, I’m finding PowerBuilder crashes whenever I try to write this analogous code directly in the IDE).

image

The PowerBuilder code to populate the WPF graph DataWindow looks like this (after pulling in a reference to the .NET assembly created by the Visual Studio project). Note how Line 6 invokes the method defined at Line 19 in the C# script above:

image
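The two code screenshots above aren’t reproduced in this excerpt. As a rough, hypothetical sketch of the kind of helper the Visual Studio class library would expose to PowerBuilder, the C# might look something like the following. CrimeDataContainer and CityCrime stand in for whatever proxy types Add Service Reference generates for the Data.gov crime dataset, and the service root URL, Live ID, and account key are placeholders you get from your own DataMarket subscription:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Net;

public class CrimeDataHelper
{
    // CrimeDataContainer and CityCrime are hypothetical generated proxy types.
    private readonly CrimeDataContainer context;

    public CrimeDataHelper(string liveId, string accountKey)
    {
        // Placeholder service root; use the URL shown on your DataMarket subscription page.
        context = new CrimeDataContainer(
            new Uri("https://api.datamarket.azure.com/data.gov/Crimes/"));

        // DataMarket accepts your account key as the password for basic authentication.
        context.Credentials = new NetworkCredential(liveId, accountKey);
    }

    // Returns crime statistics for one state, ready to bind to a DataWindow.
    public List<CityCrime> GetCrimesByState(string state)
    {
        return context.CityCrime
                      .Where(c => c.State == state)
                      .ToList();
    }
}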

Windows Azure DataMarket
Of all the technologies I covered during my two talks, I have to say this one excites me the most. There are a host of free services in the DataMarket, and you can use a similar process for pulling any of that data in and mashing it up with your own data to offer some incredible new functionality and value in your applications. If you are still using PowerBuilder Classic, it should be possible to access this same .NET proxy assembly with a COM Callable Wrapper (CCW).

Alternatively, OData is really just HTTP and Atom/JSON, so it can be consumed ‘natively’ in PowerBuilder Classic as well. Unfortunately, there isn’t a convenient wrapper class to handle all of the HTTP calls necessary, so it would be a bit of work to pull it off, but it would be a great open-source project to contribute back to the PowerBuilder and OData community! There are similar libraries already out there for Python, Ruby, Objective C and other languages.

If you try some of these demos out, let me know what you think. None are of production quality at this point, but hopefully they’ll get you started exploring what PowerBuilder and Windows Azure can accomplish together.

I thought PowerBuilder had gone the way of dBASE and Clipper (into oblivion).


<Return to section navigation list>

Other Cloud Computing Platforms and Services

The Amazon Silk Team (@AmazonSilk) posted Introducing Amazon Silk on 9/28/2011:

imageToday in New York, Amazon introduced Silk, an all-new web browser powered by Amazon Web Services (AWS) and available exclusively on the just announced Kindle Fire. You might be asking, “A browser? Do we really need another one?” As you’ll see in the video below, Silk isn’t just another browser. We sought from the start to tap into the power and capabilities of the AWS infrastructure to overcome the limitations of typical mobile browsers. Instead of a device-siloed software application, Amazon Silk deploys a split-architecture. All of the browser subsystems are present on your Kindle Fire as well as on the AWS cloud computing platform. Each time you load a web page, Silk makes a dynamic decision about which of these subsystems will run locally and which will execute remotely. In short, Amazon Silk extends the boundaries of the browser, coupling the capabilities and interactivity of your local device with the massive computing power, memory, and network connectivity of our cloud.

imageWe’ll have a lot more to say about Amazon Silk in the coming weeks and months, so please check back with us often. You can also follow us on Twitter at @AmazonSilk. Finally, if you’re interested in learning more about career opportunities on the Amazon Silk team, please visit our jobs page.


Joe Brockmeier (@jzb) riffed on The Implications of Amazon's Silk Web Browser in a 9/28/2011 post to the ReadWriteCloud blog:

imageJeff Bezos wasn't just rambling today when he was talking about Amazon's cloud services in the middle of the consumer-focused Kindle triple-launch. Amazon's Kindle has massive implications for the tablet market, but the Silk browser has some implications for the Web at large. And don't expect the Silk browser to stay confined to the Kindle Fire.

imageBy funneling traffic through Amazon's own servers, it may create some privacy implications and security concerns for individuals and businesses. It also changes the landscape a bit for cloud computing providers.

Technical Implications

From a technical perspective, it seems Amazon has come up with a fairly creative solution for dealing with the problem of Web browsing for mobile devices.

As Amazon says, modern Web sites are getting more and more complex. Rendering a single Web page for many sites requires hitting tens of domains and upwards of 100 files. That can be sluggish even on modern desktops, and they have a lot more horsepower than the Kindle Fire's 7-inch package can hold.

Focusing on EC2 means that Amazon is putting out a clarion call for companies to host their sites on AWS infrastructure. The promise is that if you host there, you're going to be reaching your customers that much faster. Granted – right now you're only going to be reaching your customers that happen to have a Kindle Fire tablet.

But does it seem likely that Amazon will put that much emphasis on Silk just for the Fire? I don't think that's likely. Amazon has several jobs posted for Silk engineers, and while mobile is mentioned, it's not exclusive. I strongly suspect that Amazon is going to be releasing a Silk desktop browser eventually. Probably not in the near future – Amazon needs to make sure that its infrastructure can handle the onslaught of all the Kindle users before trying to scale to an unknown number of desktop users.

Another side-effect of Silk is that Amazon is making AWS a household name. Sure, there are plenty of providers – but how many have a major consumer device to showcase their services?

As an aside, I wonder what happens when Amazon has another major EC2 outage as they did earlier this year? Does this mean that their customers using the Silk browser are going to be unable to reach the Web?

Note that Amazon isn't the first to do something like this. On a smaller scale, Opera lets users turn on server-side compression that goes through its servers. However, it does look like Amazon is doing it on a much larger scale, and certainly has a heftier infrastructure than Opera.

If Amazon is successful here, look for Google, Apple and Microsoft to quickly follow suit. If the split-browsing idea catches on, it puts Mozilla in an interesting position as the only major browser maker without the kind of infrastructure to deploy the cloud-side services.

Privacy, Security and Content Integrity

What's of greater interest here is that Amazon is positioning itself to filter content viewed by millions of users – assuming the Fire sells well, of course.

From Amazon's press release about the Silk, "with each page request, Silk dynamically determines a division of labor between the mobile hardware and Amazon EC2 (i.e. which browser sub-components run where) that takes into consideration factors like network conditions, page complexity and the location of any cached content." Amazon goes on to say that Silk is going to be learning from the "aggregate traffic patterns" of Web users. In short, Amazon is watching you.

silk.jpg

And not just in aggregate. Each Kindle is tied to an Amazon ID, which gives Amazon a great deal of information about you already. Introduce Silk into the mix and Amazon is going to be in a position to know a great deal about your Web browsing habits along with your buying habits and media habits. Now Amazon is in a position to know what books you buy, what shows you watch, the Web sites you visit and much more. I'm curious to see how Silk handles things like corporate intranets where it has no access to the sites in question.

Granted, Amazon isn't the only one with insight into your browsing habits – so is your ISP. But this introduces a new relationship between Amazon and its customers that bears noticing. And Amazon is taking a more active role in the experience by trying to pre-fetch and deliver data ahead of time.

I asked the EFF what they thought about the implications of the Silk browser. Given the fact that they've had no time to look at it closely, they declined to give a specific comment – but pointed to the reader privacy act that they're supporting in California. While that's targeted at records for booksellers and relates to digital and physical books, it might be time to expand the scope a bit.

Then there's the question of content integrity. Amazon has indicated that it plans "optimized content delivery," and the example given is compressing images for display on the Fire. On one hand, this makes sense – you may not want a 3MB image when you're going to be viewing it on a 7-inch screen. Then again, maybe you do want the uncompressed image. With Amazon caching so much content, how do you know whether you're getting the latest content and not something that's 10 minutes old or worse? In the majority of situations, it may not matter if you get cached content that's slightly out of date, but it may matter a lot to publishers.

Until the Kindle Fire ships, there are more questions than answers. I'm eager to get hands on a Fire so I can test out Silk and see for myself how it works. I'm not yet concerned about the privacy issues, but I do think they bear watching. What do you think? Is the Silk model something you're excited about, or is Amazon a middle-man you'd rather do without when browsing the Web?


Bruno Terkaly continued his Android series with Supporting Billions of entities/rows for Mobile – Android Series - Part 4–Building a Cloud-based RESTful service for our Android, iOS, and Windows Phone 7 Clients on 9/28/2011. [Also, see below.]


Bruno Terkaly (@BrunoTerkaly) continued his series with Supporting Billions of entities/rows for Mobile – Android Series - Part 2 - What are some high level cloud offerings? on 9/27/2011:

imageHere are a few examples of cloud – based data offerings

Although I work for Microsoft, I want to present as balanced a picture as I can. That means I will address other options that are available to Android developers today, which can be considered to be outside of the Microsoft ecosystem.
There are somewhere around 200,000 Android applications out in the wild. The types of back-end data stores are many and varied.
I presented several vendors here and I’m sure I missed a few. Feel free to let me know what is critically overlooked (bterkaly@microsoft.com).

Push or Pull

There are two ways mobile applications read data. First, a mobile application can simply request data. This is called "Pull," because the app is pulling data in.

The second type is "Push," which means that data is sent to the mobile application without the mobile application requesting it.

MyImage

For example, it may be necessary for the cloud to notify mobile applications when new data is ready.

A "Pull" scenario often means your are providing access to the cloud data with a RESTful architecture.

RESTful services


RESTful services are a style of software architecture based on the underpinnings of the World Wide Web.

image
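As a minimal sketch of the "pull" model described above (the URL is a placeholder, not a real service), the client simply issues an HTTP GET against a RESTful endpoint and reads the response:

using System;
using System.IO;
using System.Net;

class PullClient
{
    static void Main()
    {
        // Request a single resource from a hypothetical RESTful endpoint.
        var request = (HttpWebRequest)WebRequest.Create(
            "http://myservice.cloudapp.net/customers/42");
        request.Accept = "application/json";   // or application/atom+xml for an OData feed

        using (var response = (HttpWebResponse)request.GetResponse())
        using (var reader = new StreamReader(response.GetResponseStream()))
        {
            // In a real mobile app you would parse this and bind it to the UI.
            Console.WriteLine(reader.ReadToEnd());
        }
    }
}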

Below is a rough landscape of cloud-based data providers

The list below is not meant to be fully comprehensive. It is meant as a rough guide about the different types of cloud vendors out there. There are clearly dozens of other data providers that I did not bother to add the list below. As I said earlier, just let me know through e-mail what you think is critical and why.

SimpleGeo

You can think of this offering as an interesting collection of business listings and points of interest. This is an offering that allows you to harness the power of the location-aware aspect of your Android device. In other words it leverages your GPS System on the device.

SimpleGeo offers a geographically aware database that you can query and where you can store location data.

The SDK is available for many environments, such as Objective C, Java/Android, JavaScript, Python, Ruby, and .Net.

For many applications it doesn’t make sense to build all this from scratch yourself.

UrbanAirship

This is a company that offers Push Notification Services (discussed previously). As stated earlier, this allows developers to send out notifications to their users, such as sending them important messages and updates, breaking news, current weather, and so on.

UrbanAirship supports almost all versions of the Android device, which can simplify support for developers.

MongoLabs

MongoDB is an open source database that MongoLabs will host for you. MongoLabs makes it easy for you to expose your data through a RESTful API. The underlying MongoDB is a schema-free, document-oriented database that manages your data as collections of JSON-like documents. The data can be constructed and nested into complex hierarchies that can be indexed and queried.

Google

Google’s data store and cloud products are bundled as App Engine. You can program in a variety of languages including Java and Python. A Java 6 runtime is available, so you can also run JRuby, Groovy, and Scala. For these languages to operate in App Engine, however, there is a set of incompatible libraries and frameworks that you must pay attention to.

They also now offer an experimental language called Go. There is also a query language called GQL, which allows you to execute queries from the Python runtime or from the Admin Console.

Amazon Web Services

If you look at the developer offerings from Amazon, you will note Amazon Web Services as one of the first options at Developer.Amazon.Com. Amazon offers infrastructure as a service capabilities, based on their own back-end technology platform.

I perceive Amazon as an Infrastructure as a Service company, which means you are typically booting your own customized virtual machines. Typically, developers will set up their own custom security and network access configuration. I consider Infrastructure as a Service to leave you wanting more, because you’ll end up writing and configuring a lot of plumbing code.

Users of the service can access a web service API, which allows you to programmatically add or reduce capacity.

Amazon offers both hierarchical data stores as well as relational data stores.

Amazon offers multiple data center locations as well as service level agreements.

Microsoft Windows Azure

I think that when you look at the spectrum of the offerings mentioned above, Microsoft products do offer a unique approach. Clearly, we offer a full line of comprehensive products, both in terms of Compute and Storage, which include service level agreements and multiple data center support.

When I think about how I would separate Microsoft's offerings from the ones above, here is what comes to mind:

image

The next post is about building a RESTful service hosted in the cloud.
The post after that will be about consuming the RESTful data from Android.


Cloud Times asserted Piston Cloud to Create World’s Most Secure OpenStack Distribution in a 9/27/2011 post:

Piston Cloud Computing, Inc., the enterprise OpenStack™ company, officially entered the market today to introduce pentOS™, an easy, secure and open cloud operating system for managing enterprise private cloud environments.

Building and Betting on OpenStack
Piston Cloud CEO and co-founder, Joshua McKenty, was technical lead and cloud architect of NASA’s Nebula Cloud Computing Platform, which formed the cornerstone of the OpenStack project, and currently holds an appointed seat on the OpenStack Project Policy Board. Christopher MacGown, co-founder and CTO, was technical lead at Slicehost, acquired by Rackspace® in 2008. The growing Piston Cloud team boasts experienced engineering talent from NASA and the broader OpenStack developer community.

OpenStack is the fastest-growing open source project in the world, with over 1,550 contributors and 110 participating companies including Rackspace, NASA, Citrix, Intel, Cisco, Arista Networks, Microsoft and Dell. OpenStack is proven and ready for enterprise prime time, and Piston Cloud is committed to driving the technology forward as a key member of the community – sharing knowledge, guidance, vision and code.

Introducing pentOS, Built for Easy, Secure and Open Enterprise Private Clouds
Piston Enterprise OS™ (pentOS) is the first enterprise OpenStack cloud operating system specifically focused on security and easy operation of private clouds. Piston’s patent-pending Null-Tier Architecture™ offers storage, compute and networking on every node for massive scalability.

Custom built to address regulatory requirements, pentOS represents the first enterprise implementation of CloudAudit, a cross-industry standard launched in early 2010 to automate the process of auditing cloud service providers, which has become the security standard for OpenStack.


Joe Brockmeier (@jzb) asserted Amazon's Linux AMI is All Grown Up in a 9/27/2011 post to the ReadWriteCloud blog:

imageAmazon has declared its Linux Amazon Machine Image (AMI) production ready. With the update, Amazon is introducing a security center to track security and privacy issues, providing 50 new packages for the distribution and adding access to Extra Packages for Enterprise Linux (EPEL).

imageThe Linux AMI provides a Linux image for use on Amazon EC2, so that users have a way to get started with EC2 without having to create their own image or use one of the paid images from Red Hat or SUSE.

imageThe Linux AMI is a minimal install, and looks to be a clone of Red Hat Enterprise Linux (RHEL). It's available in all Amazon regions, formats, and for all architectures supported on EC2. The update adds Puppet to the repositories, as well as Varnish, Pssh, Dash and the AWS command line tools.

Should Red Hat and SUSE Worry?

In essence, Amazon is acting as a Linux distributor with its Linux AMI. Technically, it has been doing that for some time – but with the removal of the beta tag, Amazon is also providing support for its images as well. It's also providing tight integration with its cloud services, and it's undercutting the competition when it comes to price. There's no charge for running Amazon's Linux AMI, aside from the normal charges for using EC2 and other Amazon Web Services. Red Hat and SUSE, however, have additional costs built in.

AWS-Management-Console.jpg

Running a small instance of Amazon's AMI is $0.085 an hour, whereas SUSE's pricing starts at $0.115 an hour and Red Hat is at $0.145 an hour for a small standard instance. Prices go up from there, of course – and SUSE and Red Hat don't provide free micro instances.

So cost-conscious folks might opt for the Amazon AMI, as long as they're not wanting to run ISV-certified software on top of the AMI. SUSE's Darrell Davis, director of ISV relations, says that there's not much threat to Red Hat and SUSE from Amazon's AMI.

First off, ISVs are unlikely to add Amazon to their list of supported platforms. Davis says that ISVs "do not like to have more Linuxes" to support.

Then there's the minor consideration of support outside AWS. Unless customers are happy living entirely inside the AWS universe, there's not much to compel customers to adopt Amazon versus SUSE or Red Hat. Davis says that Amazon's AMI will be a good choice for someone that's just building out a Web site, or other custom development, but for certified applications people will still be turning to the big two.

Users that have already deployed using the Linux AMIs can upgrade to the current release with Yum, or simply launch new 2011.09 AMIs, whichever is most convenient.


Alex Honor (@alexhonor) posted Puppet and Chef Rock. Doh. What about all these shell scripts ?! on 9/26/2011 to the dev2ops:delivering change blog:

imageIncorporating a next generation CM tool like Puppet or Chef into your application or system operations is a great way to throw control around your key administrative processes.

Of course, to make the move to a new CM tool, you need to adapt your current processes into the paradigm defined by the new CM tool. There is an upfront cost to retool (and sometimes to rethink) but later on the rewards will come in the form of great time savings and consistency.

Seems like an easy argument. Why can't everybody just start working that way?

If you are in a startup or a greenfield environment, it is just as simple as deciding to work that way and then individually learning some new skills.

chess

In an enterprise or legacy environment, it is not so simple. A lot of things can get in the way and the difficulty becomes apparent when you consider that you are asking an organization to make some pretty big changes:

  • It's something new: It's a new tool and a new process.
  • It changes the way people work: There's a new methodology on how one manages change through a CM process and how teams will work together.
  • Skill base not there yet: The CM model and implementation languages needs to be institutionalized across the organization.
  • It's a strategic technology choice: To pick a CM tool or not to pick a CM tool isn't just which one you choose (eg, puppet vs chef). It's about committing to a new way of working and designing how infrastructure and operations are managed.

Moving to a next generation CM tool like Chef or Puppet is big decision and in organizations already at scale it usually can't be done whole hog in one mammoth step. I've seen all too often where organizations realize that the move to CM is a more complicated task than they thought and subsequently procrastinate.

So what are some blocking and tackling moves you can use to make progress?

Begin by asking the question, how are these activities being done right now?

I bet you'll find that most activities are handled by shell scripts of various sorts: old ones, well written ones, hokey rickety hairballs, true works of art. You'll see a huge continuum of quality and style. You'll also find lots of people very comfortable creating automation using shell scripts. Many of those people have built comfortable careers on those skills.

tshirt

This brings me to the next question, how do you get these people involved in your movement to drive CM? Ultimately, it is these people that will own and manage a CM-based environment so you need their participation. It might be obvious by this point but I think someone should consider how they can incorporate the work of the script writers. How long will it take to build up expertise for a new solution anyway? How can one bridge between the old and new paradigms?

The pragmatic answer is to start with what got you there. Start with the scripts but figure out a way to cleanly plug them in to a CM management paradigm. Plan for the two styles of automation (procedural scripting vs CM). Big enterprises can't throw out all the old and bring in the new in one shot. From political, project management, education, and technology points of view, it's got to be staged.

To facilitate this pragmatic move towards full CM, script writers need:

  • A clean consistent interface. Make integration easy.
  • Modularity so new stuff can be swapped/plugged in later.
  • Familiar environment. It must be nice for shell scripters.
  • Easy distribution. Make it easy for a shell scripter to hand off a tool to a CM user (or anybody else for that matter).

Having these capabilities drives the early collaboration that is critical to the success of later CM projects. From the shell scripter's point of view, these capabilities put some sanity, convention and a bit of a framework around how scripting is done.

I know this mismatch between the old shell script way and the new CM way all too well. I've had to tackle this problem in several large enterprises. After a while, a solution pattern emerged.

rerun
Since I think this is an important problem that the DevOps community needs to address, I created a GitHub project to document the pattern and provide a reference implementation. The project is called rerun. It's extremely simple but I think it drives home the point. I'm looking forward to the feedback and hearing from others who have found themselves in similar situations.

For more explanation of the ideas behind this, see the "Why rerun?" page.


Jeff Barr (@jeffbarr) reported Amazon Route 53 - Now an Even Better Value in a 9/26/2011 post:

imageI never get tired of writing posts that announce price decreases on the various AWS services!

Today, we are reducing the price to host a set of DNS records for a domain (which we call a hosted zone) using Amazon Route 53. Here's the new pricing structure:

  • $0.50 per hosted zone per month for the first 25 zones per month.
  • $0.10 per hosted zone per month for all additional zones.

imageThe original pricing was $1.00 per hosted zone per month, with no volume discounts.

The per-query pricing ($0.50 per million for the first billion queries per month, and $0.25 per million afterward) has not changed.

Add it all up, multiply it all out, and you will see savings of between 50% and 90% when compared with the original prices. The AWS Simple Monthly Calculator's example shows that if you managed 100 hosted zones, your bill will drop from $100 to $20 (25 × $0.50 + 75 × $0.10 = $12.50 + $7.50 = $20). We enjoy making it easier and more cost-effective for our customers to use AWS and this is one more step in that direction.

In case you forgot, the Amazon Route 53 Service Level Agreement specifies 100% availability over the course of a month. Over the last couple of months we've seen a number of large-scale customers come on board.

With 19 locations world-wide, Route 53 is able to provide low-latency, high-quality service to all of your customers, regardless of their location.

As I described in a previous post, Route 53 also integrates with the Elastic Load Balancer to allow you to map the apex of any of your hosted zones directly to an Elastic Load Balancer.

We have additional enhancements to Route 53 on the drawing board, so stay tuned to this blog.


Jeff Barr (@jeffbarr) described Amazon Linux AMI - General Availability and New Features in a 9/26/2011 post:

imageWe introduced the Amazon Linux AMI in beta form about a year ago with the goal of providing a simple, stable, and secure Linux environment for server-focused workloads. We've been really happy with the adoption we've seen so far, and we continue to improve the product and further integrate it with other Amazon Web Services tools.

imageToday we are zapping the "beta" tag from the Amazon Linux AMI, and moving it to full production status. We are also releasing a new version (2011.09) of the AMI with some important new features. Here's a summary:

  • The Message of the Day now tells you when updates to installed packages are available.
  • While the AMI’s default configuration is set to provide a smooth upgrade path from release-to-release, you can now lock the update repositories to a specific version to inhibit automatic updates to newer releases.
  • Security updates are automatically applied on the initial boot of the AMI. This behavior can be modified by passing user data into the AMI with cloud-init.
  • There's a new Amazon Linux AMI Security Center.
  • Puppet has been added to the repositories and is available for system configuration management.
  • Access to the Extra Packages for Enterprise Linux (EPEL) repository is configured, though not enabled by default. EPEL provides additional packages beyond those shipped in the Amazon Linux AMI repositories, but these third party packages are not supported.
  • The cfn-init daemon is installed by default to simplify CloudFormation configuration.
  • A total of 50 new packages are available including the command line tools for AWS, Dash, Dracut, Facter, Pssh, and Varnish. 227 other packages have been updated and 9 have been removed. For a full list of changes, refer to the Amazon Linux AMI Release Notes.

Users of existing Amazon Linux AMIs can either upgrade to the latest release with yum or launch new 2011.09 AMIs. The new AMIs are available in all AWS regions.


    <Return to section navigation list>
