Wednesday, December 07, 2011

Windows Azure and Cloud Computing Posts for 12/6/2011+

A compendium of Windows Azure, SQL Azure Database, AppFabric, Windows Azure Platform Appliance and other cloud-computing articles.


• Updated 12/7/2011 3:30 PM PST with many new articles marked •. Be sure to check out the post about the release of Codename “Data Explorer” and the immediate availability of the “Data Explorer” Desktop Client to all comers in the Marketplace DataMarket, Social Analytics and OData section. Also, check out Scott Guthrie’s Windows Azure keynote on 12/13/2011 at 9:00 AM, as described in the Cloud Computing Events section.

Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections:


Azure Blob, Drive, Table, Queue and Hadoop Services

The Datanami blog reported MapR Update Extends Hadoop Accessibility on 12/7/2011:

This week MapR released an updated version of its Hadoop distribution, adding some features that are targeted at improving overall performance and accessibility for a larger set of applications.

The release of Version 1.2 is focused on extending user access, both in terms of their diverse applications and environments, not to mention reaching out to those who are still waffling about which distribution is the best fit.

In addition to expanding API access, MapR says they are including support for MapReduce 2.0 for when it becomes available. While it’s still some time off before the newest release of Hadoop is production-ready, the company says “users will be able to take advantage of the combined benefits of MapReduce 2.0, such as backward-compatibility and scalability and MapR’s unique capabilities, such as HA (no lost tasks or jobs during a failure) and the high performance shuffle.”

MapR is working to address some of the stability issues that have plagued some Hadoop users, issues that have led some to keep the open source version out of production environments. In the new update they have upgraded a number of elements, including Hive, Pig and HBase. They also claim to have identified and fixed several “critical stability data corruption issues” in HBase.

For those who are still on the Hadoop distribution fence, the company announced that it is now possible to access an entire MapR cluster as a free VM to experiment with the platform and “try before buying” into the Hadoop distro. Using the test virtual cluster will allow potential users to play with some of the elements that make MapR a bit different, including its NFS capabilities and snapshots. They claim that testing out the distro can happen within minutes on a standard laptop.

The company plans on rolling out the new version later this week and will make the test cluster available at the same time. Many of the companies pitching Hadoop distributions are looking for ways to let users make the differentiations themselves; it seems that opening access and making onboarding simple is one of the only ways of accomplishing this.


Rohit Asthana reported on 12/6/2011 that he had earlier posted source code for a Silverlight Azure Blob Parallel Upload Control to CodePlex (missed when published):

Problem:

Traditionally, uploading files to Windows Azure blob storage involves one of the following approaches, assuming the keys of your account are not to be made available to the client uploading the file:

    1. Web role uploads file to temporary directory and then uses API to upload file to a blob in parallel.

    2. Web role uses client side script, generally jQuery, to split a client file and uses an intercepting WCF service to upload the file to the blob sequentially.

    3. Web role uses client side script and shared access signature to upload file sequentially to blob storage.

    4. Web role uses Silverlight control to upload files using shared access signature on the container.

None of these solutions is truly up to the mark for uploading files to blob storage from the client side (to make it faster) and in parallel (to utilize the parallel uploading capabilities of blob storage and make the process even faster).

Solution:

The solution to the problem is two staged:

    1. Use a client side application to manage file operations such as splitting the file in chunks and retrying in case of failure.

    2. Use a shared access signature on the container so the client-side application can upload the chunks directly to blob storage in parallel, without exposing the storage account keys.

Building such a solution is simple with Silverlight: it runs on the client and supports threading as well. Let’s proceed step by step to build such a solution (process flow diagram at the end of the document):

    1. Create a cross domain policy for access to blob storage through Silverlight application. This involves adding a policy file to the $root container of your storage account.

    2. Acquire shared access signature on container for a sufficient time in which the file may be uploaded.

    3. Pass this signature to the Silverlight application handling file uploads. I passed it through initialization parameter to Silverlight application.

    4. Inside the Silverlight application split the file into chunks of 1 MB each.

    5. Upload the file using a single PUT request if you get only a single chunk; otherwise upload the file chunks as block blobs in parallel using multi-threading. I used Portable TPL, which is an open source abstraction over threading in Silverlight 4. Silverlight 5 has the TPL built in, but the process would remain essentially the same.

    6. If any of the threads fails to upload its designated content, retry a finite number of times and fail the entire upload process if it keeps on failing.

    7. If you have successfully uploaded the file as block blobs, issue a PUT request to commit the block list.

    8. Exit the application.

Please refer to the documentation for more details; a rough sketch of the underlying REST calls is shown below.
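To make steps 4–7 more concrete, here is a minimal C# sketch of the REST calls involved: splitting a file into 1 MB blocks, issuing Put Block requests in parallel, and then committing the block list. This is not the code from the CodePlex control; the blobSasUrl parameter, the d6-formatted block IDs and the use of WebClient are my own assumptions, and a production version would add the retry logic described in step 6.

using System;
using System.Collections.Generic;
using System.IO;
using System.Net;
using System.Text;
using System.Threading.Tasks;

class BlockUploadSketch
{
    // blobSasUrl is a hypothetical blob URL carrying a container shared access signature, e.g.
    // https://myaccount.blob.core.windows.net/uploads/bigfile.zip?sv=2011-08-18&...&sig=...
    public static void UploadInBlocks(string blobSasUrl, string filePath)
    {
        const int blockSize = 1024 * 1024; // 1 MB chunks, as in the post
        byte[] file = File.ReadAllBytes(filePath); // loaded whole for brevity; stream it in real code
        var blockIds = new List<string>();
        var tasks = new List<Task>();

        for (int offset = 0, block = 0; offset < file.Length; offset += blockSize, block++)
        {
            // Block IDs must be Base64 strings of equal length within a blob.
            string blockId = Convert.ToBase64String(Encoding.UTF8.GetBytes(block.ToString("d6")));
            blockIds.Add(blockId);

            int length = Math.Min(blockSize, file.Length - offset);
            byte[] chunk = new byte[length];
            Array.Copy(file, offset, chunk, 0, length);

            string putBlockUrl = blobSasUrl + "&comp=block&blockid=" + Uri.EscapeDataString(blockId);
            tasks.Add(Task.Factory.StartNew(() =>
            {
                using (var client = new WebClient())
                {
                    client.UploadData(putBlockUrl, "PUT", chunk); // Put Block
                }
            }));
        }

        Task.WaitAll(tasks.ToArray());

        // Put Block List: commits the uploaded blocks so the blob becomes readable.
        var blockList = new StringBuilder("<?xml version=\"1.0\" encoding=\"utf-8\"?><BlockList>");
        foreach (string id in blockIds)
        {
            blockList.AppendFormat("<Latest>{0}</Latest>", id);
        }
        blockList.Append("</BlockList>");

        using (var client = new WebClient())
        {
            client.UploadData(blobSasUrl + "&comp=blocklist", "PUT", Encoding.UTF8.GetBytes(blockList.ToString()));
        }
    }
}

The Silverlight control performs the same operations with asynchronous requests and Portable TPL; the REST payloads are identical.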

Silverlight Blob Parallel Upload Control


Features Coming Soon

    1. Support to upload multiple files.

    2. A more descriptive upload progress tracker bar that would display the upload progress of the file MB by MB or KB by KB.

    3. Implement concurrency control with TaskScheduler.

    4. Support for even larger files (>200 MB).

References

1. http://blog.smarx.com/posts/uploading-windows-azure-blobs-from-silverlight-part-1-shared-access-signatures

2. http://portabletpl.codeplex.com/

3. http://watwp.codeplex.com/


<Return to section navigation list>

SQL Azure Database and Reporting

Haddy El-Haggan (@Hhaggan) described Managing SQL Azure ADO.NET in a 12/3/2011 post (missed when published):

In the previous post, we were talking about creating servers, databases and tables using the Windows Azure portal. Now we will talk about developing it using ADO.NET.

First of all what do we need to start the development?

Your username, password, server name and database name.

Create the first connection string to the master database to be able to create other databases like the following:

SqlConnectionStringBuilder constrbuilder;
constrbuilder = new SqlConnectionStringBuilder();
constrbuilder.DataSource = dataSource;
constrbuilder.InitialCatalog = "master";
constrbuilder.Encrypt = true;
constrbuilder.TrustServerCertificate = false;
constrbuilder.UserID = userName;
constrbuilder.Password = password;

Now you are able to create a new database like the one we did in the previous post, but using ADO.NET.

SqlConnection con = new SqlConnection(constrbuilder.ToString());
SqlCommand cmd = con.CreateCommand();
con.Open();
cmd.CommandText = string.Format("CREATE DATABASE {0}", databasename);
cmd.ExecuteNonQuery();
con.Close();

As in the previous example, as long as you have the credentials required to access the database, you can run commands just as you would against an on-premises SQL Server:

SqlConnection con = new SqlConnection(constrbuilder.ToString());
SqlCommand cmd = con.CreateCommand();
cmd.CommandText = "ENTER ANY COMMAND YOU WOULD LIKE TO BE DONE ON YOUR DATABASE";
con.Open();
cmd.ExecuteNonQuery();
con.Close();
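As a concrete illustration of the kind of commands you can run, here is a short sketch that creates a table and inserts a row with a parameter. It assumes a connection string builder like constrbuilder above, but pointed at the new database (InitialCatalog set to the database name rather than master); the Customers table and its columns are invented for the example.

// Requires: using System.Data.SqlClient;
constrbuilder.InitialCatalog = databasename; // connect to the new database, not master

using (SqlConnection con = new SqlConnection(constrbuilder.ToString()))
{
    con.Open();

    // SQL Azure requires a clustered index on every table; the primary key provides one.
    SqlCommand create = con.CreateCommand();
    create.CommandText = "CREATE TABLE Customers (Id int IDENTITY(1,1) PRIMARY KEY, Name nvarchar(100) NOT NULL)";
    create.ExecuteNonQuery();

    // Parameterized INSERT, exactly as you would do against an on-premises SQL Server.
    SqlCommand insert = con.CreateCommand();
    insert.CommandText = "INSERT INTO Customers (Name) VALUES (@name)";
    insert.Parameters.AddWithValue("@name", "Contoso");
    insert.ExecuteNonQuery();
}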

<Return to section navigation list>

MarketPlace DataMarket, Social Analytics and OData

The Codename “Data Explorer” Team (@DataExplorer) posted Announcing the Labs release of Microsoft Codename “Data Explorer” on 12/6/2011:

Today we are excited to announce the availability of the lab release of Microsoft Codename “Data Explorer”. This is an exciting milestone for our team and we would like to thank you for your continued interest over the past few weeks.

If you have already signed up to try our cloud service, you can expect to receive an email over the coming weeks with instructions for getting started with the service. We are gradually onboarding people as we described in this previous post.

However, if you are eager to try “Data Explorer” and do not want to wait until you get the invitation to the cloud service, you can install our Desktop Client. You can find the “Data Explorer” Desktop Client download here, available to anyone who wants to try it out. “Data Explorer” in the client currently offers integration with Excel along with many of the same capabilities available in the cloud.

Once you start using “Data Explorer”, you might consider visiting our Learning Page. Here you will find learning resources including step-by-step samples, how-to videos and the formula language and library specifications.

We would like to remind you that the main purpose of this lab release is to validate our ideas for “Data Explorer” as well as to learn from your feedback. You can use our forum to send us your thoughts and comments.

We look forward to hearing from you about your experiences with “Data Explorer”.

I received my Codename “Data Explorer” activation key today. Stay tuned for a “First Look at the Codename ‘Data Explorer’ Desktop Client” post and mashup examples.


Jamie Thomson (@jamiet) posted Data Explorer walkthrough – Parsing a Twitter list on 12/7/2011:

Yesterday the public availability of Data Explorer, a new data mashup tool from the SQL Server team, was announced at Announcing the Labs release of Microsoft Codename “Data Explorer”. Upon seeing the first public demos of Data Explorer at SQL Pass 2011 I published a blog post Thoughts on Data Explorer which must have caught someone’s attention because soon after I was lucky enough to be invited onto an early preview of Data Explorer, hence I have spent the past few weeks familiarising myself with it. In this blog post I am going to demonstrate how one can use Data Explorer to consume and parse a Twitter list.

I have set up a list specifically for this demo and suitably it is a list of tweeters that tweet about Data Explorer – you can view the list at http://twitter.com/#!/list/jamiet/data-explorer. Note that some of the screenshots in this blog post were taken prior to the public release and many of them have been altered slightly since then; with that in mind, here we go.

First, browse to https://dataexplorer.sqlazurelabs.com/ and log in

[Note: You’ll need an activation key to log in.]

When logged in select New to create a new mashup

image

Give your mashup a suitable name

image

You will be shown some options for consuming a source of data. Click on Formula

image

We’re going to be good web citizens and use JSON rather than XML to return data from our list. The URI for our Twitter API call is https://api.twitter.com/1/lists/statuses.json?slug=data-explorer&owner_screen_name=jamiet, note how I have specified the list owner (me) and the name of the list (what they call the slug) “data-explorer” as query parameters. If you go to that URL in your browser then you will be prompted to save a file containing the returned JSON document which, if all you want to do is see the document, isn’t very useful. In debugging my mashups I have found a service called JSON Formatter to be invaluable because it allows us to see the contents of a JSON document by supplying the URI of that document as a parameter like so: http://jsonformatter.curiousconcept.com/#https://api.twitter.com/1/lists/statuses.json?slug=data-explorer&owner_screen_name=jamiet. It might be useful to keep that site open in a separate window as you attempt to build the mashup below.

I’ve digressed a little, let’s get back to our mashup. We’re going to use a function called Web.Contents() to consume the contents of the Twitter API call and pass the results into another function, Json.Document(), which parses the JSON document for us. The full formula is:

= Json.Document(Web.Contents(“https://api.twitter.com/1/lists/statuses.json?slug=data-explorer&owner_screen_name=jamiet”))

image

When you type in that formula and simply hit enter you’re probably going to be faced with this screen:

image

It’s asking you how you want to authenticate with the Twitter API. Calls to the https://api.twitter.com/1/lists/statuses.json resource don’t require authentication, so anonymous access is fine; just hit Continue. When you do, you will see something like this:

image

The icon

image

essentially indicates a dataset, so each record of these results is in itself another dataset. We’ll come onto how we further parse all of this later on but before we do we should clean up our existing formula so that we’re not hardcoding the values “data-explorer” and “jamiet”.

The Web.Contents() function possesses the ability to specify named parameters rather than including them in the full URL. Change the formula to:

= Json.Document(Web.Contents("https://api.twitter.com/1/lists/statuses.json", [Query = [slug="data-explorer", owner_screen_name="jamiet"] ]))

image

That will return the same result as before but now we’ve broken out the query parameters {slug, owner_screen_name} into parameters of Web.Contents(). That’s kinda nice but they’re still hardcoded; instead what we want to do is turn the whole formula into a callable function, which we do by specifying a function signature and including the parameters of the signature in the formula like so:

= (slug,owner_screen_name) => Json.Document(Web.Contents("https://api.twitter.com/1/lists/statuses.json", [Query = [slug=slug, owner_screen_name=owner_screen_name] ]))

image

Let’s give our new function a more meaningful name by right-clicking on the resource name which is currently set as “Custom1” and renaming it as “GetTwitterList”:

image

image

We have now defined a new function within our mashup called GetTwitterList(slug, owner_screen_name) that we can call as if it were a built-in function.

image

Let’s create a new resource as a formula that uses our new custom function and pass it some parameter values:

= GetTwitterList("data-explorer", "jamiet")

image

We still have the same results but now via a nice neat function that abstracts away the complexity of Json.Document( Web.Contents() ).

As stated earlier each of the records is in itself a dataset each of which, in this case, represents lots of information about a single tweet. We can go a long way to parsing out the information using a function called IntoTable() that takes a dataset and converts it into a table of values:

image

Here is the result of applying IntoTable() to the results of GetTwitterList():

image

This is much more useful: we can now see lots of information about each tweet. However, notice that information about the user who wrote the tweet is wrapped up in yet another nested dataset called “user”.

All the time note how whatever data we are seeing and whatever we do to that data via the graphical UIs is always reflected in the formula bar; in the screenshot immediately above notice that we are selecting the “user” and “text” columns (the checkbox for “user” is off the screen but is checked).

We can now parse out the user’s screen_name using a different function – AddColumn(). AddColumn() takes an input and allows us to define a new column (in this case called “user_screen_name”) and specify an expression for that column based on the input. A picture speaks a thousand words so:

= Table.AddColumn(intoTable, "user_screen_name", each [user][screen_name])

image

There we have our new column, user_screen_name, containing the name of the tweeter that tweeted the tweet. At this point let’s take a look at the raw JSON to see where this got parsed out from:

image

Notice that the screen_name, UserEd_, is embedded 3 levels deep within the hierarchical JSON document.

We’re almost there now. The final step is to use the function SelectColumns() to select the subset of columns that we are interested in:

= Table.SelectColumns(InsertedCustom,{"text", "user_screen_name"})

image

Which gives us our final result:

image

At this point hit the Save button:

image

OK, so we have a mashup that pulls some data out of Twitter, parses it and then….well…nothing! It doesn’t actually do anything with that data. We have to publish the mashup so that it can be consumed, and we do that by heading back to the home page (which is referred to as “My Workspace”) by clicking the My Workspace button near the top of the page:

image

Back in My Workspace you can select your newly created mashup (by clicking on it) and options Preview, Snapshot & Publish appear:

image

We’ll ignore Preview and Snapshot for now; hit the Publish button instead, at which point we are prompted for a name that we will publish the mashup as:

image

Hitting Publish will do the necessary and make our data feed available at a public URI:

image

Head to that URL (https://ws41451459.dataexplorer.sqlazurelabs.com/Published/TwitterListDemo) and here’s what you see:

image

You can download the mashup output as a CSV file or an Excel workbook. You can also download the whole mashup so you can edit it as you see fit and, most importantly, you can access the output of the mashup via an OData feed at https://ws41451459.dataexplorer.sqlazurelabs.com/Published/TwitterListDemo/Feed/jamiet-dataexplorer-text_user_screen_name.

[Opening the page requires a user name and password.]

We have used Data Explorer’s JSON parsing and dataset navigation abilities to pull out the data that we are interested in and present it in a neat rectangular data structure that we are familiar with. Moreover we have done it without installing any software and we have made that data accessible via an open protocol; that’s pretty powerful and, in my wholly worthless opinion, very cool indeed.
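If you want to consume such a feed from code, here is a rough C# sketch that reads the Atom payload of the OData feed and prints the two columns. The feed URL is the one Jamie published above, the property element names are assumed to match the mashup’s output columns, and (as noted in this post) the feed may require Data Explorer credentials.

using System;
using System.Net;
using System.Xml.Linq;

class ODataFeedReader
{
    static void Main()
    {
        // Feed URL taken from the post above; property names assumed to match the published columns.
        string feedUrl = "https://ws41451459.dataexplorer.sqlazurelabs.com/Published/TwitterListDemo/Feed/jamiet-dataexplorer-text_user_screen_name";

        XNamespace m = "http://schemas.microsoft.com/ado/2007/08/dataservices/metadata";
        XNamespace d = "http://schemas.microsoft.com/ado/2007/08/dataservices";

        using (var client = new WebClient())
        {
            // client.Credentials = new NetworkCredential("user", "password"); // if the feed requires sign-in

            XDocument doc = XDocument.Parse(client.DownloadString(feedUrl));
            foreach (var props in doc.Descendants(m + "properties"))
            {
                Console.WriteLine("{0}: {1}",
                    (string)props.Element(d + "user_screen_name"),
                    (string)props.Element(d + "text"));
            }
        }
    }
}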

Have fun playing with Data Explorer. Feel free to download my Twitter List Demo mashup and mess about with it to your heart’s content.

Jamie said in a later tweet:

@rogerjenn You were correct (bit.ly/siuEpq) that Twitter List mashup reqd auth. It is fixed and all links are now publicly available.

[Above requires an activation code.]


Elisa Flasko (@eflasko) reported New Self-Service Publishing Wizard Eases Distribution of New Datasets on the Windows Azure Marketplace in a 12/6/2011 post to the Windows Azure blog:

Over the past year, the Windows Azure Marketplace team has worked directly with data providers to onboard new data into the Marketplace. Today we have over a hundred datasets, from sports statistics to historical weather, available to build apps and perform analytics.

Today we are making available tools so that anyone with valuable data can distribute it in the Marketplace through a simple self-service wizard. The publishing wizard allows you to design how your offer will appear on the Windows Azure Marketplace, including sample images, logos, and text. Providers can choose to publish any data stored in SQL Azure with this initial release, and the tool will automatically test your database and recommend performance and configuration changes to provide the best experience in the marketplace. You can even define your pricing options and enable free trials as part of the publishing process.

These new tools are part of the Publishing Portal, which provides capabilities to manage marketplace offers, view financial reports, and update tax and bank account information. If you have valuable data, you can visit our Publishing Portal today to distribute it on the Windows Azure Marketplace. We’re looking forward to exploring a wealth of new data from our publishers and to see the apps and analytics that are created with them!


<Return to section navigation list>

Windows Azure AppFabric: Apps, Access Control, WIF and Service Bus

• Christian Weyer (@thinktecture, pictured below) posted Claims-based security today - Can this get even better? Yes, it can: thinktecture IdentityServer 1.0 released on 12/6/2011:

Dominick Baier, maestro of all-things-security, has finally launched the official successor of our successful thinktecture StarterSTS.

Now, all new, all better. :)

thinktecture IdentityServer is an open source security token service based on Microsoft .NET, ASP.NET MVC, WCF and WIF.

High level features

  • Multiple protocols support (WS-Trust, WS-Federation, OAuth2, WRAP, JSNotify, HTTP GET)
  • Multiple token support (SAML 1.1/2.0, SWT)
  • Out of the box integration with ASP.NET membership, roles and profile
  • Support for username/password and client certificates authentication
  • Support for WS-Federation metadata
  • Support for WS-Trust identity delegation
  • Extensibility points to customize configuration and user management handling

Go and grab it, read the docs – and please give feedback.


Joy George K (@joymon) reported that he Successfully tried Windows Azure AppFabric Caching in Azure environment in a 12/6/2011 post:

You might wonder why I describe this as an achievement. It is just meant to cache data, so what is there to say about having “successfully tried” it? Yes, there are so many things to say about it, even though there are plenty of articles that explain how to cache in a step-by-step manner with images. Roughly speaking, it took me 3 days to test caching in the real Azure hosted environment. Below are the issues we faced during the process.
Issue 1 : Wrong SDK version ie Azure SDK to Azure Guest OS mapping


Initially I was using Azure SDK 1.6, which is not supported on the Azure production servers. More details are explained here. Most of the time we were getting a DataCacheException with ErrorCode<ERRCA0017>, which is nothing but RetryLater. The funny part is that when we reverted to SDK 1.5, the retries disappeared.


Issue 2 : Windows AppFabric v/s Azure AppFabric


The Azure emulator doesn’t have support to simulate Azure caching. So the idea was to use the Windows AppFabric cache, which can be installed on our local machines, and later change the configuration to use the actual Azure AppFabric. After configuring Windows AppFabric using Windows PowerShell and all, we were able to connect to the local cache. But after hosting the same code on Azure, we came to know that there are some differences between Windows Server AppFabric and Azure AppFabric. It made us rewrite our AppFabricCacheProvider class based on the Azure AppFabric caching API reference. The exception we got at this stage was NotSupportedException.

Below are some links which explain how to set up Windows Server AppFabric caching.

http://msdn.microsoft.com/en-us/library/ff637746.aspx
http://www.wadewegner.com/2010/08/getting-started-with-windows-server-appfabric-cache/


Issue 3 : Network issues to access caching clusters in your development environment


We tried to access the real Azure AppFabric cache from our development environment. The Azure caching machine was not reachable from our normal company network; maybe some firewalls were in the middle. We resolved it by using another external internet connection which doesn’t have any restrictions.
If you happen to face this issue, you will get an exception which says it cannot access some particular IP address, which you can easily verify by pinging or running tracert.


Issue 4 : AppFabric dlls to be included in the Azure publish package.


After 2 days we were able to successfully implement the caching framework and access the Azure AppFabric cache from services in our development environment. Everybody became happy for a moment. But we lost the mood when we hosted our services in Azure. Since we hadn’t implemented any logging framework, it was really difficult for us to get the exact exception. The exception was mainly happening on the line:

DataCacheFactoryConfiguration dataCacheFactoryConfiguration = new DataCacheFactoryConfiguration("clientName");

Finally we were able to track that down to assembly-not-found issues, i.e. "The system cannot find the file specified. Could not load file or assembly 'Microsoft.WindowsFabric.Common, Version=1.0.0.0". Yes, we were missing some DLLs in the package which we uploaded to Azure. They were from the C:\Program Files\Windows Azure AppFabric SDK\V1.5\Assemblies\NET4.0\Cache folder. We added all the DLLs present in this location to our package and it started working. Better yet, reference these DLLs in your web project / service host project so that they will be packaged automatically.

Microsoft.Web.DistributedCache.dll
Microsoft.WindowsFabric.Common.dll
Microsoft.WindowsFabric.Data.Common.dll

The question remains: how did it work in the emulator? The answer is simple. All these DLLs were in our GAC; on the Azure machines, the GAC doesn’t have these DLLs.
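For reference, the post configures the cache client through the <dataCacheClients> config section and the DataCacheFactoryConfiguration("clientName") constructor shown above. A programmatic equivalent looks roughly like the sketch below; the cache endpoint and authorization token are placeholders, and the exact DataCacheSecurity constructor overloads should be checked against the Azure AppFabric Caching API reference.

using System;
using System.Collections.Generic;
using System.Security;
using Microsoft.ApplicationServer.Caching;

public class AppFabricCacheProvider
{
    private static DataCacheFactory factory;

    public static DataCache GetDefaultCache()
    {
        if (factory == null)
        {
            // Placeholder values - substitute your cache service URL and authentication token.
            var endpoint = new DataCacheServerEndpoint("yournamespace.cache.windows.net", 22233);
            var config = new DataCacheFactoryConfiguration
            {
                Servers = new List<DataCacheServerEndpoint> { endpoint },
                SecurityProperties = new DataCacheSecurity(ToSecureString("your-authentication-token"))
            };
            factory = new DataCacheFactory(config);
        }
        return factory.GetDefaultCache();
    }

    private static SecureString ToSecureString(string token)
    {
        var secure = new SecureString();
        foreach (char c in token)
        {
            secure.AppendChar(c);
        }
        secure.MakeReadOnly();
        return secure;
    }
}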


There were so many minor issues, like the letter casing of <dataCacheClients>, whether we should use the <dataCacheClients> or <dataCacheClient> section in the config, port 22233 being blocked, etc. Below are some links if you are struggling to get Windows Azure AppFabric Caching working.


<Return to section navigation list>

Windows Azure VM Role, Virtual Network, Connect, RDP and CDN

Avkash Chauhan (@avkashchauhan) answered How does Remote Desktop works in Windows Azure? in a 12/6/2011 post:

As you may know, when you create any kind of role in your Windows Azure application (Web, Worker or VM) you have the ability to enable remote access to the role. That means you can have Remote Desktop access to all of the instances of any role which has RDP access enabled.

The Remote Desktop feature is composed of two imports:

  1. RemoteAccess
  2. RemoteForwarder.

That’s why, when you enable Remote Desktop access, your service definition shows the imports below:

<Imports>
<Import moduleName="RemoteAccess" />
<Import moduleName="RemoteForwarder" />
</Imports>

Remote Access:

RemoteAccess is imported on all roles you want to eventually be able to connect to. This import controls turning on RDP inside the Windows Azure virtual machines and creating the user account for you so that you can connect to the Windows Azure instance. RemoteAccess has four configuration settings (prefixed with Microsoft.WindowsAzure.Plugins.RemoteAccess):

  • Enabled – must be set to “true” then RDP will be turned on inside the VM.
  • AccountUsername – User account name to create.
  • AccountEncryptedPassword – a Base64 encoded PKCS#7 blob encrypted with the PasswordEncryption certificate that specifies the password for the user account to create.
  • AccountExpiration – a DateTime string that specifies the time the account expires. If left blank or improperly formatted, no user account is created.

The Service Configuration shows these configuration settings as below:

<ConfigurationSettings>
<Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.Enabled" value="true" />
<Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.AccountUsername" value="avkash" />
<Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.AccountEncryptedPassword" value="****************" />
<Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.AccountExpiration" value="RD_Access_Expiry_Date" />
<Setting name="Microsoft.WindowsAzure.Plugins.RemoteForwarder.Enabled" value="true" />
</ConfigurationSettings>

The RemoteAccess component tracks every user it creates by putting them in a special group reserved for RemoteAccess use only. When a configuration change occurs or the role starts, RemoteAccess searches for any existing user account in that group with the name specified in AccountUsername. If the user is found, its password and expiration date are updated. All other accounts in that group will be disabled for RemoteAccess. This way RemoteAccess ensures that only a single user account is active on the machine at any given time.
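If you want to confirm these settings from inside a running role (for example, while diagnosing why RDP isn’t working), a quick sketch using the ServiceRuntime API looks like this; the setting names are the same ones shown in the configuration above, and the tracing target is just an assumption.

using System;
using Microsoft.WindowsAzure.ServiceRuntime;

public static class RemoteAccessInfo
{
    public static void TraceSettings()
    {
        if (!RoleEnvironment.IsAvailable)
        {
            return; // not running under the Windows Azure fabric or emulator
        }

        string enabled = RoleEnvironment.GetConfigurationSettingValue(
            "Microsoft.WindowsAzure.Plugins.RemoteAccess.Enabled");
        string user = RoleEnvironment.GetConfigurationSettingValue(
            "Microsoft.WindowsAzure.Plugins.RemoteAccess.AccountUsername");
        string expiration = RoleEnvironment.GetConfigurationSettingValue(
            "Microsoft.WindowsAzure.Plugins.RemoteAccess.AccountExpiration");

        System.Diagnostics.Trace.TraceInformation(
            "RemoteAccess enabled={0}, user={1}, expires={2}", enabled, user, expiration);
    }
}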

RemoteForwarder:

  • RemoteForwarder is imported on a single role and takes care of dealing with the fact that Windows Azure only provides load-balanced input endpoints.
  • The forwarder runs on every instance of the role in which it is imported and listens on port 3389 for remote desktop connections.
  • When a connection is received, it searches the first packet for a load balancing cookie which you can see if you open the Portal-supplied .rdp file in a text editor.
  • Using this cookie data it then opens a connection internally to the desired instance and forwards all traffic to that instance.
  • The forwarder only has a single configuration setting:
    • Enabled – if set to “true” then listen for new connections and forward them appropriately.

<Return to section navigation list>

Live Windows Azure Apps, APIs, Tools and Test Harnesses

• Brian Swan (@brian_swan) described Running PHPUnit in Windows Azure in a 12/7/2011 post:

imageIn my last post I suggested 3 strategies for testing OSS/Azure applications. In this post, I’ll dive deeper into the first suggestion by showing you how to run unit tests (using PHPUnit) in Windows Azure. (I’ll assume that you have PHPUnit installed as a PEAR package.)

imageIf you read my last post, you may recall that the first option I suggested was this:

RDP to staging instance and run command-line tests. This approach involves enabling RDP access to a deployment (I’m assuming deployment to a staging slot), opening a command prompt, and running your tests as (many of you) normally would.

After figuring out how to run PHPUnit from the command line in a Windows Azure instance, I did find that a bit more configuration work than I anticipated was necessary. I’m not 100% certain that this is the best way to run PHPUnit in Windows Azure, but it is one way. I’d be interested in hearing better ways to do this.

In any case, here are the steps…

1. Build your application. Build your application as you normally would, running your PHPUnit tests locally.

2. Package your application together with your custom PHP installation. Instructions for doing this are here: Packaging a Custom PHP Installation for Windows Azure. As I stated earlier, I’m assuming that you have PHPUnit installed as a PEAR package. If you followed the default installation of PEAR and PHPUnit, they will be included as part of your custom PHP installation.

Note: Before you actually create the package (i.e. step 7 in the tutorial above), be sure to first make the edits in the .csdef and .cscfg files outlined in the next step.

3. Enable RDP access to your Azure application. Instructions for doing this are here: Windows Azure Remote Desktop Connectivity for PHP Applications. Note that this step is actually part of the previous step as it requires that you edit your .csdef and .cscfg files prior to creating a package for deployment.

4. Deploy your application to staging. Instructions for doing this are here:

5. Login to an instance of your application and find your PHP installation. After you login to an instance of your application, you’ll need to find your application (you can’t be sure it will always be in the same location). I found mine here: E:\approot (and so my PHP installation was here: E:\approot\bin\PHP).

6. Add your PHP directory to the Path environment variable. This will allow you to run php.exe and phpunit from any directory.

7. Add PEAR to your include_path. Chances are that your PEAR installation is no longer in the include_path you had set up when running PHP on your local machine, so you’ll need to update it. In my case, I simply had to do this in my php.ini file:

include_path = ".;E:\approot\bin\PHP\pear"

8. Edit phpunit.bat. Again, chances are that your PHPUnit configuration used hard coded paths that will need to change in the Azure environment:

if "%PHPBIN%" == "" set PHPBIN=E:\approot\bin\PHP\php.exe
if not exist "%PHPBIN%" if "%PHP_PEAR_PHP_BIN%" neq "" goto USE_PEAR_PATH
GOTO RUN
:USE_PEAR_PATH
set PHPBIN=%PHP_PEAR_PHP_BIN%
:RUN
"%PHPBIN%" "E:\approot\bin\PHP\phpunit" %*

9. Run your tests. Now you can open a command prompt, navigate to your application root directory, and run your tests.

Assuming all your tests pass, you are ready to move your application to production. One easy way to do this is through the Windows Azure Portal…all you have to do is select your deployment and click VIP Swap:

image

However, after you move your app to production, you will likely want to disable remote desktop access to all instances. You can do this by selecting your role and unchecking the Enable checkbox in the Remote Access section near the top of the portal web page:

image

If you wanted to be doubly sure that no one could remotely access your application, you could also remove the deployment certificate used for RDP access.

As I mentioned earlier, I’d be interested in hearing suggestions for improvements on this process.


David Makogon (@dmakogon) posted Introducing the Windows Azure ISV Blog Series on 12/7/2011:

Good day! I’m David Makogon, a senior cloud architect for the Windows Azure ISV team. This team has one primary objective: help ISVs worldwide architect and develop applications and services for Windows Azure.

Our team would like to share some of its experiences with you, and we’ll be highlighting some of the accomplishments from the ISVs we’ve worked with during their Windows Azure application development and deployment. We’ll take a look at specific applications from ISVs we’ve worked with, exploring a particular architecture or design challenge that needed to be solved or worked around as we integrated their application with Windows Azure. These stories will come from all over the globe, as we have more than 70 architects, evangelists, and advisors on our team worldwide. We have some popular bloggers such as David Gristwood, Ricardo Villalobos, Naoki Sato, and Anko Duizer, just to name a few. We have lots of good stuff to share.

Remember that there are often several ways to solve a particular problem. We’ll describe the way that the ISV ultimately chose, along with a reference architecture diagram, details of the solution and justification, and related caveats or tradeoffs. Feel free to incorporate these solution patterns into your own application, improve upon it, or take a completely different approach. Feel free to share your comments and suggestions here as well!


Kenneth van Surksum (@kennethvs) reported Release: Microsoft Assessment and Planning Toolkit 6.5 on 12/6/2011:

In November Microsoft released a public beta of the Microsoft Assessment and Planning Toolkit (MAP) version 6.5. Today Microsoft announced its release, which is the follow-up of version 6.0 which was released in July this year.

Version 6.5 provides the following new features:

  • Discovery of Oracle instances on Itanium-based servers with HP-UX for migration to SQL Server, including estimation of complexity when migrating to SQL server
  • Assessment for migration to Hyper-V Cloud Fast Track infrastructures, including computing power, network and storage architectures [Emphasis added]
  • Revamped Azure Migration feature [Emphasis added]
  • Software Usage Tracking, including assessment for planning implementation of Forefront Endpoint Protection which is now part of a Core CAL and Active Devices



Geva Perry (@gevaperry) reported BlazeMeter: Launching the JMeter Testing Cloud on 12/6/2011:

Today, BlazeMeter is publicly launching its cloud load testing service and announcing its $1.2 million funding round led by YL Ventures. I joined the BlazeMeter board of directors back in July and we've been preparing for this launch ever since, so it's exciting seeing it all come together.

Like several other startups that I've been involved with, BlazeMeter leverages open source software in a cloud service, making web application development a whole lot easier.

Specifically, BlazeMeter uses Apache JMeter, the popular open source performance testing
framework, to create massive volumes of realistic browser simulations. BlazeMeter also allows current JMeter users, who have an existing set of JMeter scripts, to instantly load those scripts to the cloud and run them without any changes. Alternatively, folks can simply enter a URL, choose a pre-defined test scenario and run it instantly (with appropriate security measures when requesting very large stress volumes).


The beauty of what Alon Girmonsky, BlazeMeter founder and CEO, and his team did with it, is that although BlazeMeter is extremely easy to use, it is an enterprise-grade performance testing tool, both in terms of scalability and in terms of the comprehensiveness of the reports and analysis it provides.

In addition, BlazeMeter's pricing model is extremely attractive with a combination of usage-based pricing and subscriptions. And you can start running tests for free.

The company has a lot of great plans in store and I will have a lot more to say about it, but for now, congrats to Alon, Daniela and the rest of the team! Alon is an incredible entrepreneur and I look forward to working with him on building BlazeMeter into a great company in the coming months and years.

Check out Alon's intro blog post, and don't forget to follow them on Twitter: @BlazeMeter for updates.

And see some additional coverage on the company today:


Avkash Chauhan (@avkashchauhan) described Installing Windows Azure SDK [1.6] returns - DSInit setup error in a 12/5/2011 post:

When installing Windows Azure SDK 1.6 on your machine, you might hit an error related to the Windows Azure storage emulator installation. This happens when the main SDK components are already installed and the installer is installing the Windows Azure storage emulator components.

Windows Azure Storage Emulator Initialization
System.InvalidOperationException: There is an error in XML document (0,0).
System.Xml.XmlException: Root element is missing….

In the Installer log you will see the following error logged:

  • “Error 1722. There is a problem with this Windows Installer package. A program run as part of the setup did not finish as expected. Contact your support personnel or package vendor. Action RunDSInit, location: C:\Program Files\Windows Azure Emulator\emulator\devstore\DSInit.exe, command: /SILENT /NOGUI /FASTTIMEOUT /INSTALL”

The root-cause of this problem is that:

  • The file DSInit is trying to access is C:\Users\<user>\AppData\Local\DevelopmentStorage\DevelopmentStorage.config, and based on the exception details it seems the file is corrupt.

To solve this problem:

  • Try deleting the DevelopmentStorage.config file and rerunning the setup to solve this problem.

If you try to load an existing Windows Azure project in Visual Studio, you might hit the following error:

  • “Unable to find file DFUI.exe. Please verify your install is correct.”

Please try creating a new Windows Azure solution in Visual Studio, add the existing projects to the solution, then add the web and worker roles to the solution, and finally build and run it again. This should get you going.


<Return to section navigation list>

Visual Studio LightSwitch and Entity Framework 4.1+

Paul Patterson described What’s New? with his LightSwitch development with custom controls in a 12/6/2011 post:

These past six or so months have been a bit crazy for me. With a new job it has been difficult for me to find the time to keep up with the blogging. Nevertheless, I have made a diligent effort to keep on top of a few interesting things.

I am still pushing forward with learning all the goodness of LightSwitch. Just a few weeks back I did a presentation to a group of .NET developers (VBAD SIG) where I demonstrated LightSwitch. In the presentation I created a simple application and then deployed it to Azure, which really opened some dialog about the opportunities for using LightSwitch.

As well I have been keeping my ears to the tracks on the many new and exciting extensions that are popping up weekly in the LightSwitch exosphere. And in keeping my curiosity and enthusiasm in check, I’ve been trying out a lot of these neat tools.

These past few months I have been learning about leveraging custom controls for LightSwitch, such as using Telerik and DevExpress. For example, I created a nice little tool for managing health and safety programme data for businesses…

Here are some gratuitous screen shots for you to gander at…

Whoa! A simple install of an OLAP for LightSwitch extension from the good folks at ComponentOne, and…

Fun times!


Beth Massi (@bethmassi) posted Beginning LightSwitch Part 1: What’s in a Table? Describing Your Data on 12/6/2011:

Welcome to Part 1 of the Beginning LightSwitch series! To get things started, we’re going to begin with one of the most important building blocks of a LightSwitch application, the table. Simply put, a table is a way of organizing data in columns and rows. If you’ve ever used Excel or another spreadsheet application you organize your data in rows where each column represents a field of a specific type of data you are collecting. For instance, here’s a table of customer data:

Customer table.

image

When you work with databases, the data is stored in a series of tables this way. You then create relationships between tables to navigate through your data properly. We’ll talk about relationships in the next post. For this post let’s concentrate on how to create and work with tables in LightSwitch.

Tables (Entities) in LightSwitch

Applications you build with LightSwitch are forms-over-data applications that provide user interfaces for viewing, adding, and modifying data. LightSwitch simplifies the development of these applications by using screens and tables. Because LightSwitch can work with other external data sources that do not necessarily have to come from a database, we sometimes call tables “Data entities” or just “entities” in LightSwitch. So whether you have a table in a database or a list in SharePoint, both the table and the list are entities in LightSwitch. Similarly, a field in a table or a column in a list is referred to as a “property” of the entity.

Entities are how LightSwitch represents data and are necessary to assemble an application. You create these data entities by using the built-in application database, or by importing data from an external database, a SharePoint list, or other data source. When you create a new project in LightSwitch, you need to choose whether you want to attach to an existing data source or create a new table. If you choose to create a new table, LightSwitch will create it in the built-in database, also referred to as the intrinsic database. You then design the table using the Data Designer.

When you create tables and relate them together you are designing a data model, or schema. Describing your data this way takes some practice if you’ve never done it before, however, you will see that it’s pretty intuitive using LightSwitch. The better you are at describing your data model, the more LightSwitch can do for you when you create screens later.

The LightSwitch Data Designer

The Data Designer is where all your data modeling happens in LightSwitch whether you’re attaching to an existing data source or creating a new database. By using the Data Designer, you can define properties on your entities and create relationships between them. LightSwitch handles many typical data management tasks such as field validation, transaction processing, and concurrency conflict resolution for you but you can also customize these tasks by modifying properties in the Properties window, and/or by writing code to override or extend them.

For a tour of the Data Designer, see Data: The Information Behind Your Application

For a video demonstration on how to use the Data Designer, see: How Do I: Define My Data in a LightSwitch Application?

Creating a “Contact” Entity

Let’s walk through a concrete example of creating an entity. Suppose we want to create an application that manages contacts, like an address book. We need to create an entity that stores the contact data. First open Visual Studio LightSwitch and create a new project called ContactManager.

image

After you click OK on the New Project dialog, the LightSwitch home page will ask you if you want to create a new table or attach to an external data source.

image

Click “Create new table” and this will open the Data Designer. Now you can start describing the contact entity. Your cursor will be sitting in the title bar of the entity window when it opens. Name it “Contact” and hit the Enter key.

image

Once you do this you will see “Contacts” in the Solution Explorer under the ApplicationData node in the Data Sources folder. ApplicationData represents the intrinsic database that LightSwitch creates for you. Contacts refers to the table in the database that stores all the contact rows (or records). You can also think of this as a collection of entities; that’s why LightSwitch makes it plural for you.

Now we need to start defining properties on our entity, which correlates to the columns (or fields) on the table. You should notice at this point that the Contact entity has a property called “Id” that you cannot modify. This is an internal field that represents a unique key to the particular row of data. When you model tables in a database, each row in the table has to have a unique key so that a particular row can be located in the table. This Id is called a primary key as indicated by the picture of the key on the left of the property name. It is always required, unique, and is stored as an integer. LightSwitch handles managing primary keys automatically for you.

So we now need to think about what properties we want to capture for a contact. We also will need to determine how the data should be stored by specifying the type and whether a value is required or not. I’ve chosen to store the following pieces of data: LastName, FirstName, BirthDate, Gender, Phone, Email, Address1, Address2, City, State and ZIP. Additionally, only the LastName is required so that the user is not forced to enter the other values.

image

Also notice that I selected types that most closely match the type of data I want to store. For Phone and Email I selected the “Phone Number” and “Email Address” types. These business types give you built-in validation and editors on the screens. The data is still stored in the underlying table as strings, but is formatted and validated on the screen automatically for you. Validation of user input is important for keeping your data consistent. From the Properties window you can configure rules like required values, maximum lengths of string properties, number ranges for numeric properties, date ranges for date properties, as well as other settings. You can also write your own custom validation code if you need.

For more information on validation rules see: Common Validation Rules in LightSwitch Business Applications
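As a rough illustration of what custom validation code looks like (LightSwitch generates a partial PropertyName_Validate method for you when you choose Write Code > Validation Methods), here is a sketch in C# for the Contact entity described in this post. The future-date check and the ZIP pattern are examples of my own, not rules from the article.

using System;
using Microsoft.LightSwitch;

namespace LightSwitchApplication
{
    public partial class Contact
    {
        // Runs whenever BirthDate changes; results surface on any screen showing the field.
        partial void BirthDate_Validate(EntityValidationResultsBuilder results)
        {
            if (BirthDate.HasValue && BirthDate.Value > DateTime.Today)
            {
                results.AddPropertyError("Birth date cannot be in the future.");
            }
        }

        partial void ZIP_Validate(EntityValidationResultsBuilder results)
        {
            if (!string.IsNullOrEmpty(ZIP) &&
                !System.Text.RegularExpressions.Regex.IsMatch(ZIP, @"^\d{5}(-\d{4})?$"))
            {
                results.AddPropertyError("ZIP must look like 12345 or 12345-6789.");
            }
        }
    }
}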

If you don’t see the Properties window hit F4 to open it. Select a property on the entity and you will see the related settings you can configure for it. Depending on the type of data you chose for the property, you will see different settings. All properties have an “Appearance” section in the property window that allow you specify the Display Name that will appear in field labels on screens in the application. By default, if you use upper camel case (a.k.a Pascal case) for your entity property names then LightSwitch will put a space between the phrases. For instance, the Display Name for the “LastName” property will become “Last Name” automatically. So it’s best practice to use this casing for your entity properties.

image

You can also enter a “Description” for properties when their names aren’t intuitive enough for the user, or you just want to display a standard help message. The Description is displayed on screens as a Tooltip when the user hovers their mouse over the data entry control on any screen displaying that field.

Settings you make here in the Data Designer affect all the screens in the application. Although you can make additional customizations on particular screens if needed, you will spend the bulk of your time configuring your data model here in the Data Designer. That way, you don’t have to configure settings every time you create a new screen. The better you can model your entities, the more LightSwitch can do for you automatically when creating the user interface.

For the Contact entity let’s set a few additional settings. First, select the Id field and in the Appearance section, uncheck “Display by default”. This makes it so that the property doesn’t show up anywhere in the user interface. As mentioned earlier, the primary key is an internal field used to locate a row in the table and isn’t modifiable so the user does not need to see it on any screens in the application.

For BirthDate, set the minimum value to 1/1/1900 so that users can’t enter dates before that.

image

For Gender, we want to display a fixed set of static values to the user: “Female”,“Male”. In order to do this in LightSwitch we can use a Choice List. Click on “Choice List…” on the Properties window and this will open a window that will let you define the values that are stored in the table and the display name you want the user to see. For our purposes, we just want to store an “F” or “M'” in the underlying database table. Therefore, also set the Maximum Length to 1.

image

By default, maximum lengths of strings are set to 255 characters and should handle most cases, but you can change this for your needs.

Using the Properties window you can also configure settings on the entity itself. Select the title bar of the Contact entity and notice that there is a setting called Summary Property. Summary properties are used to “describe” your entity and are used by LightSwitch to determine what to display when a row of data is represented on a screen. By default, LightSwitch selects the first string property you defined on your entity but you can change that here.

image

You can also create computed properties to use as the summary property when you want to format values or display values from multiple fields.

For more information on Summary Properties see: Getting the Most out of LightSwitch Summary Properties

Testing the Contact Entity

Now that we have the Contact entity designed, let’s quickly test it out by creating a screen. At the top of the Data Designer click the “Screen…” button to open the Add New Screen dialog. We’ll talk more about screens in a future post but for now just select the List and Details screen. Then drop down the Screen Data and select Contacts and then click OK.

image

To build and launch the application hit F5. Now you can enter information into the contact table using this screen. Click the “+” button on the top of the list box to add new contacts.

image

Notice that the labels are displayed properly with spaces and the Last Name is bolded to indicate it’s a required field. Also if you enter invalid data as specified by the settings we made, a validation error will be displayed. When you are done, click the Save button on the ribbon at the top left of the application shell. This will save the data back into your development database. This is just test data stored in your internal database while you develop the application. Real data doesn’t go into the system until you deploy the application to your users.

In the next post we’ll talk about relationships and build upon our data model. Until next time!


Return to section navigation list>

Windows Azure Infrastructure and DevOps

Scott Densmore reported New Versions of our Windows Azure Guidance Available on 12/7/2011:

Windows Azure is a moving platform. When we first released these guides, the SDK was young, and we decided we needed to refresh them to support the latest 1.6 SDK. You can now get these from MSDN:


David Linthicum asserted “Virtualization of resources means programmers can't tap deep features directly -- a potentially painful but positive change” in a deck for his Developers in the cloud lose access to the 'metal' post of 12/6/2011:

imageWhat's fun about software development is that you can leverage the deep features of whatever platform you're on. This includes direct access to the input/output subsystems, video memory, and even the stack. Thus, you can make the software do exactly what you want it to do, directly exploiting platform features.

imageHowever, as we move to cloud computing, including development platforms, that ability to leverage deep features could be coming to a rapid end. Consider the fact that cloud-based platforms are multitenant, and most resources are virtualized. The ability to program down to the metal is no longer there, both for legitimate reasons such as using a GPU for floating-point calculations and illegitimate reasons such as using a low-level hack.

Why? It's an architectural reality that you can't allow platform users and/or running software to directly access physical resources. These physical resources are shared with the other tenants through layers of technology that fake out users and software into thinking that the physical resources are dedicated to them. They are not -- they are abstracted virtual resources that must be accessed using specific and controlled interfaces.

Will developers tolerate the inability to program to "the metal"? Most will, but some won't.

Developers often gave up control for productivity in the past, and cloud computing is just another instance of that. On the downside, they won't be able to get "wiggy" with the software they create or use direct access to platform resources to gain an advantage in look and feel and in performance. However, this kind of "to the metal" programming often leads to platform issues down the road, such as when OSes are upgraded and physical resources change.

In many respects, the limited access to platform resources could be a positive change for users of software, if not for developers -- that is, if developers can get over the loss of control.

I believe that this is a non-issue.


Tom Hollander (@tomhollander) described Automated Build and Deployment with Windows Azure SDK 1.6 in a 12/5/2011 post:

imageA few months ago I posted on how to automate deployment of Windows Azure projects using MSBuild. While the approach documented in that post continues to work, Windows Azure SDK 1.6 has introduced some new capabilities for managing Windows Azure credentials and publishing settings which I wanted to leverage and build upon. With this new approach, you’ll no longer need to manually manage details such as Subscription IDs, hosted service names and certificates. Because this approach relies on a few tools that are too big to share in a blog post, I’ve also created the Windows Azure Build & Deployment Sample on MSDN which contains all of the tools and sample projects described in this post.

Before we go into details on how the build and deployment process works, let’s look at how Windows Azure SDK 1.6 manages credentials and publishing profiles:

  • The Visual Studio “Publish Windows Azure Application” dialog contains a link to a special page on the Windows Azure Portal that allows you to download a .publishsettings file. This file contains all of the details of your subscription(s), including subscription IDs and certificates.
  • The same “Publish Windows Azure Application” dialog allows you to import the .publishsettings file, which results in the certificate being installed on your local machine, and the subscription details imported into a Visual Studio file called Windows Azure Connections.xml (this lives in %UserProfile%\Documents\Visual Studio 2010\Settings). Note that after you import the .publishsettings file you should delete it (or at least protect it) as it contains the full certificate and private key that grants access to your Windows Azure subscription.
  • When you are ready to publish your Windows Azure application, you can create a new "publish profile” or use an existing one. A publish profile is saved in your Windows Azure project with a .azurePubxml extension, and contains various details such as your hosted service name, storage account name and deployment slot. The .azurePubxml file doesn’t contain your subscription details, but it does list the subscription name that must correspond to an entry in your Windows Azure Connections.xml file.

In updating my scripts for automated build and deployment on a build server, I wanted to leverage as much of this as possible, but I needed to build some tools that mirror some of the steps performed by Visual Studio’s “Publish Windows Azure Application” dialog, since you may not have Visual Studio installed on your build server.

The build and deployment solution contains the following components:

  1. The AzureDeploy.targets file, which is installed on your build server to tell MSBuild to package and deploy your solution to Windows Azure
  2. The ImportPublishSettings tool, to import a .publishsettings file onto a build server
  3. The AzureDeploy.ps1 PowerShell script, which also depends on a helper library called AzurePublishHelpers
  4. A TFS Build Definition that passes properties to MSBuild to initiate the build and deployment process.

The following diagram shows how all the components and files come together, and I’ll describe the details of each below.

[Diagram: SDK 1.6 deploy]

The AzureDeploy.targets file

In my previous post on this topic, I showed you how you can edit your .ccproj project file to define additional targets used in the build process. This approach is still an option, but this time I’ve changed my approach by creating a custom MSBuild .targets file which is installed on the build server. This is generally a better option, as you don’t need to hand-edit .ccproj files, the custom targets run only on the build server (not on development workstations), and the targets can be reused for multiple Windows Azure projects.

The code included in my AzureDeploy.targets file is shown below. This file needs to be copied to your build server to C:\Program Files\MSBuild\Microsoft\VisualStudio\v10.0\Windows Azure Tools\1.6\ImportAfter, and it will be automatically referenced by the main Windows Azure targets file.

<Project ToolsVersion="4.0" DefaultTargets="Build" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <PropertyGroup>
    <PackageName>$(AssemblyName).cspkg</PackageName>
    <PackageForComputeEmulator>true</PackageForComputeEmulator>
  </PropertyGroup>
  <Target Name="AzureDeploy" AfterTargets="Build" DependsOnTargets="Publish" Condition="$(AzurePublishProfile)!=''">
    <Message Text="Executing target AzureDeploy from AzureDeploy.targets file"/>
    <Exec WorkingDirectory="$(MSBuildProjectDirectory)" 
         Command="$(windir)\system32\WindowsPowerShell\v1.0\powershell.exe -f c:\builds\AzureDeploy.ps1 $(PublishDir) $(PackageName) &quot;Profiles\$(AzurePublishProfile)&quot;" />
  </Target>
</Project>

The purpose of this code is to tell MSBuild to package the project for Windows Azure (achieved with the dependency on the Windows Azure SDK’s Publish target) and then call a PowerShell script (you need to change the path depending on how you set up your build server). Note that this target only runs when the AzurePublishProfile MSBuild property is set, which we’ll do later on when we set up the TFS build definition.

Note that you may want to make some other customisations in a .targets or .ccproj file, for example to transform configuration files. I haven’t described this in this post, but there is some information on this in my previous post on this topic.

The ImportPublishSettings tool

The ImportPublishSettings tool (available from the Windows Azure Build & Deployment Sample) can be used to import Windows Azure credentials from a .publishsettings file into your build server. It has been designed to operate in the exact same way as the Visual Studio “Publish Windows Azure Application” dialog, so if you have Visual Studio installed on your build server you can use that instead of this tool. Whichever tool you use, this is a one-time process that is completed when you first set up your build process.

This is a simple command-line tool that takes three parameters (only the first of which is required); a sample invocation follows the list:

  • publishSettingsFilename: the .publishsettings file to import
  • certStoreLocation (optional): the certificate store to which the certificate should be imported. Possible values are CurrentUser (the default) or LocalMachine. You should use CurrentUser if your build process is running as a user account, or LocalMachine if you are running under a system account such as NETWORK SERVICE.
  • connectionsFileName (optional): the filename in which the imported settings should be stored. This defaults to “%UserProfile%\Documents\Visual Studio 2010\Settings\Windows Azure Connections.xml”. You may want to change this if your build process is running under a system account such as NETWORK SERVICE.
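As an example, importing a subscription for a build process that runs under NETWORK SERVICE might look something like the line below. The file names and paths are placeholders, and it is assumed the three parameters are passed positionally in the order listed above.

ImportPublishSettings.exe MySubscription.publishsettings LocalMachine "C:\Builds\Windows Azure Connections.xml"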

If you run the tool under the same account used for your build process, you shouldn’t need to do anything more. However, if you run it as a different user (for example, you run the script as yourself but your build process runs under NETWORK SERVICE), you will need to open the MMC Certificates snap-in and grant the build process’s account permissions on the certificate’s private key.

The AzureDeploy.ps1 PowerShell Script

The AzureDeploy.ps1 PowerShell script is responsible for taking a packaged Windows Azure application and deploying it to the cloud. The implementation included in the sample project is deliberately simple, and you may want to extend it to perform additional steps such as installing certificates, creating storage accounts, running build verification tests or swapping staging and production slots. There are also a couple of things you will need to customise depending on whether you’re running as a normal user account or a system account. Still, hopefully this script is a useful starting point for your build and deployment process.

The script takes three parameters, which will normally be passed to it by the TFS build process (but you can pass them in yourself for test purposes); a sample invocation follows the list:

  • BuildPath: The folder containing the Windows Azure project on the build server, for example “C:\Builds\1\ProjectName\BuildDefinitionName\Sources\AzureProjectName”.
  • PackageName: The unqualified name of the Windows Azure .cspkg package, for example AzureProjectName.cspkg
  • PublishProfile: The path to the .azurepubxml file that should be used for deploying the solution. Note that only some of the properties in this file are used for deployment, such as the subscription name, hosted service name, storage account name and deployment slot. Other settings in this file, such as EnableIntelliTrace, are not currently used by the sample scripts.
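For instance, to test the script outside TFS you can call it with the same three positional parameters that the AzureDeploy.targets file passes. The paths and the profile name below are placeholders.

powershell.exe -f c:\builds\AzureDeploy.ps1 "C:\Builds\1\ProjectName\BuildDefinitionName\Sources\AzureProjectName" AzureProjectName.cspkg "Profiles\Test.azurePubxml"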

The deployment script depends on a helper library called AzurePublishHelpers.dll (which is also used by the ImportPublishSettings tool), which knows how to read and write the various files used in the solution. In order to make this library available to PowerShell you will need to install it as a PowerShell module. To do this, first open the PowerShell modules folder, which is “C:\Windows\System32\WindowsPowerShell\v1.0\Modules” (replace System32 with SysWOW64 if you’re running your build as 32-bit on a 64-bit system). Then create a folder in it called AzurePublishHelpers and copy in the AzurePublishHelpers.dll file.
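A minimal sketch of that one-time setup from an elevated command prompt follows; the source location of the DLL is a placeholder.

rem Create the module folder (use SysWOW64 instead of System32 for a 32-bit build process on 64-bit Windows)
mkdir "C:\Windows\System32\WindowsPowerShell\v1.0\Modules\AzurePublishHelpers"
copy "C:\Downloads\AzurePublishHelpers.dll" "C:\Windows\System32\WindowsPowerShell\v1.0\Modules\AzurePublishHelpers"

rem Verify that PowerShell can now load the module by name
powershell.exe -command "Import-Module AzurePublishHelpers; Get-Module AzurePublishHelpers"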

The TFS Build Definition

The final piece of the puzzle is setting up one or more Build Definitions. I’m using Team Foundation Server for this, but if you’re using a different ALM toolset you should be able to accomplish something similar.

You can configure your Build Definitions however you want, for example to use different XAML workflows, different settings for running tests, versioning assemblies, etc. You should have at least one Build Definition for each Windows Azure environment you want to deploy to, for example Test, UAT or Production. To configure a Build Definition to deploy to Windows Azure, you’ll need to choose the Process tab and enter the name of your chosen Publish Profile (.azurepubxml file) in the MSBuild Arguments property as shown in the screenshot below:

[Screenshot: Build Definition Process tab showing the MSBuild Arguments property]
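For example, with a publish profile named Test.azurePubxml (a placeholder name), the MSBuild Arguments property would contain something like the following; the property name must match the one tested by AzureDeploy.targets:

/p:AzurePublishProfile=Test.azurePubxml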

Conclusion

I hope this post and the accompanying tools and samples help you automate most (if not all) of your Windows Azure build and deployment process. I’ll try to keep the post up-to-date as the platform continues to evolve. If you have questions or comments, please feel free to post here or on the Windows Azure Build & Deployment Sample page.


Bytes by MSDN posted a Bytes by MSDN: December 6 - Scott Guthrie video on 12/6/2011:

Join Tim Huckaby, Founder of InterKnowlogy & Actus Software, and Scott Guthrie, Corporate Vice President in Microsoft's Server & Tools Business Division, as they discuss the new features in the next release of Visual Studio 11 Dev Preview (Dev 11) and Windows Azure, announced at the Build 2011 Conference.

Scott takes us through the many new improvements in Dev 11, which include the next release of .NET, version 4.5. Dev 11 installs side-by-side with other releases of Visual Studio, and its solution and project files are compatible with older releases, making project sharing easier. Scott also discusses Windows Azure, its release cadence as a service-based platform, and the coming waves of innovation that we can expect in the near future. An awesome interview you don’t want to miss!

Open attached file: HDI-ITPro-MSDN-mp3-Scott_Guthrie_and_Tim_Huckaby.mp3

The interview appears to have been conducted during the //BUILD/ conference, based on Scott’s “we’re releasing today” comment about Dev 11.


Panagiotis Kefalidis (@pkefal) continued his JBoss series with Running JBoss 7 on Windows Azure — Part II on 12/6/2011:

Continuing [from] where I left off in my previous post, I’m going to explain how the Announcement service works and why we chose that approach.

The way JBoss and mod_proxy currently work, every time something changes in the topology (a proxy or a JBoss node is added or removed), the proxy list has to be updated, and the nodes and proxies have to be aware of each other’s existence.

mod_proxy uses multicast to announce itself to the cluster, but because multicast is not supported on Windows Azure, we created our own service that runs on both the proxy and the node. Each time a proxy or node is added or removed, the service notifies the rest of the instances that the topology has changed and that they should update their lists with the new record.

The service is not running under a dedicated WorkerRole; it’s part of the same deployment as the proxy and the JBoss node. It’s a WCF service hosted inside a Windows NT service, listening on a dedicated port. That approach gives us greater flexibility, as we keep a clear separation of concerns between the services in the deployment and we don’t mix proxy code and logic with the Announcement service. Originally, the approach of using an NT service raised some concerns about how the service would be installed on the machines and how we could keep a single code base for the service in both scenarios.

First of all, you should be aware that any port you open through your service configuration is only made available to the host process of the role. If the port is not also explicitly opened in Windows Firewall, your service won’t be able to communicate because the port is blocked. Once we realized that, we fixed it by adding an extra line to the startup task that installs the service on the machines: a command that opens the announcement service’s listening port in the firewall as part of the installer startup task.
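A minimal sketch of that kind of firewall rule follows. The rule name and the TCP port number are assumptions for illustration, not values from the original post.

rem Open the announcement service's listening port in Windows Firewall (rule name and port are assumptions)
netsh advfirewall firewall add rule name="JBossAnnouncementService" dir=in action=allow protocol=TCP localport=8080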

To make our service even more robust and secure, we introduced a couple of network traffic rules that only allow communication between the proxies and the JBoss nodes:
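The rules themselves were not reproduced here; the sketch below shows what such rules look like in ServiceDefinition.csdef, assuming an internal endpoint named Announcement, a role named JBossNode and a proxy role named Proxy (all three names are assumptions).

<!-- Sketch only: endpoint and role names are assumptions, not values from the original post. -->
<NetworkTrafficRules>
  <OnlyAllowTrafficTo>
    <Destinations>
      <RoleEndpoint endpointName="Announcement" roleName="JBossNode" />
    </Destinations>
    <WhenSource matches="AnyRule">
      <FromRole roleName="Proxy" />
    </WhenSource>
  </OnlyAllowTrafficTo>
</NetworkTrafficRules>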

Any kind of communication between the services is secured by certificate-based authentication and message-level encryption. The service is a vital component of our approach, and we want it to be as secure as possible.

The service monitors a couple of things that also help us collect telemetry data from the JBoss nodes, and it is wired to a couple of RoleEnvironment events, such as OnStopping and OnChanged. Every time an OnStopping fires, we send messages to all of the other service instances to de-register that proxy from their lists because it’s going down. The service also checks at specific intervals whether the other nodes are alive; if a node fails to respond three times, it is removed. We do this to handle possible crashes of a proxy as quickly as possible. Lastly, every time an OnChanged event fires, we verify that everything is as it should be (nodes available, etc.).

Next post in the series, the cluster setup.


<Return to section navigation list>

Windows Azure Platform Appliance (WAPA), Hyper-V and Private/Hybrid Clouds


No significant articles today.


<Return to section navigation list>

Cloud Security and Governance

No significant articles today.


<Return to section navigation list>

Cloud Computing Events

• Scott Guthrie (@scottgu) invited readers on 12/7/2011 to Learn Windows Azure Next Tuesday (Dec 13th) with a 90 minute keynote starting at 9:00 AM PST:

As some of you might know, I’ve spent much of my time the last 6 months working on Windows Azure – which is Microsoft’s Cloud Computing Platform (I also continue to work on ASP.NET, .NET, VS and a bunch of other products).

Next Tuesday, Dec 13th we’ll be holding a special Learn Windows Azure training event for developers. It will provide a great way to learn Windows Azure and what it provides. You can attend the event either by watching it streamed LIVE online, or by attending in person (on the Microsoft Redmond Campus). Both options are completely free.

Learn Windows Azure Event

During the Learn Windows Azure event, attendees will learn how to start building great cloud-based applications using Windows Azure.

I’ll be kicking off the day with a 90-minute keynote that will provide an overview of Windows Azure, during which I’ll explain the concepts behind it and the core features and benefits it provides. I’ll also walk through how to build applications for it using .NET, Visual Studio and the Windows Azure SDK (with lots of demos of it in action).

We’ll then spend the rest of the day drilling into more depth on Cloud Data and Storage, how to use the Visual Studio Windows Azure Tools, and how to Build Scalable Cloud Applications, and close off with a Q&A panel with myself, Dave Campbell and Mark Russinovich.

Register Now for Free

The free Learn Windows Azure event will start at 9am (PST) on Dec 13th. You’ll be able to watch the entire event live on Channel9 or attend it in person. Both options are completely free.

  • Register now to watch online or attend the event in person for FREE

I hope to get a chance to chat with you about Windows Azure there!

I’m registered to watch.


<Return to section navigation list>

Other Cloud Computing Platforms and Services

My (@rogerjenn) A First Look at HP Cloud Services post of 12/7/2011 begins:

On 12/7/2011 I received an invitation to test a Private Beta version of HP Cloud Services (@HPCloud). Following is an illustrated description of the sign-up and compute/object storage provisioning processes:

Setting Up an Account

1. I set up an account with the access code provided by e-mail and clicked the Dashboard button to open the following page:

[Screenshot: HP Cloud Services dashboard page]

2. I clicked the Activate Now button for Compute Services in the US West 2 - AZ1 Availability Zone, which opened a Set Up a Payment Method (credit card) information form:

[Screenshot: Set Up a Payment Method form]

HP says the credit card info is for “testing the billing system only” and you won’t incur charges when you use the Private Beta version. …

The post continues with “Configuring Compute Instances” and “Configuring Object Storage” sections. Stay tuned for a “Digging into HP Cloud Compute Services” post later this month.


Dana Gardner (@Dana_Gardner) asserted “A wide range of new Cloud Solutions designed to advance deployment of private, public and hybrid clouds” as a deck for his HP Hybrid Cloud to Enable Telcos and Service Providers post of 12/7/2011 to Briefings Direct blog:

HP at the Discover 2011 Conference in Vienna last week announced a wide range of new Cloud Solutions designed to advance deployment of private, public and hybrid clouds for enterprises, service providers, and governments. Based on HP Converged Infrastructure, the new and updated HP Cloud Solutions provide the hardware, software, services and programs to rapidly and securely deliver IT as a service.

I found these announcements a clearer indicator of HP's latest cloud strategy, with an emphasis on enabling a global, verticalized and marketplace-driven tier of cloud providers. I've been asked plenty about HP's public cloud roadmap, which has been murky. This now tells me that HP is going first to its key service provider customers for data center and infrastructure enablement for their clouds.

This makes a lot of sense. The next generation of clouds -- and I'd venture the larger opportunity once the market settles -- will be specialized clouds. Not that Amazon Web Services, Google, and Rackspace are going away. But one-size-fits-all approaches will inevitably give way to specialization and localization. Telcos are in a great position to step up and offer these value-add clouds and services to their business customers. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

And HP is better off providing the picks and shovels to them in spades than coming to market in catch-up mode with plain vanilla public cloud services under its own brand. It's the classic clone strategy that worked for PCs, right? Partnerships and ecosystem alliances are the better way. A good example is the partnership announced last week with Savvis.

HP’s new offerings address the key areas of client needs – building differentiated cloud offerings, consuming cloud services from the public domain, and managing, governing and securing the entire environment. This again makes sense. No need for channel conflict on cloud services between this class of nascent cloud providers and the infrastructure providers themselves.

Expanding the ecosystem
Among the announcements was an expansion of the cloud ecosystem with new partners, offerings and programs:

  • New HP CloudSystem integrations with Alcatel-Lucent will enable communications services providers to deliver high-value cloud services using carrier-class network and IT by automating the provisioning and management of cloud resources.
  • HP CloudAgile Service Provider Program offers service providers expanded sales reach, an enhanced services portfolio and an accelerated sales cycle through direct access to HP’s global sales force. HP has expanded the program with its first European partners and with new certified hosting options that enable service providers to deliver reliable, secure private hosted clouds based on HP CloudSystem.

    Clients want to understand, plan, build and source for cloud computing in a way that allows them to gain agility, reduce risk, maintain control and ensure security.

  • HP CloudSystem Matrix 7.0, the core operating environment that powers HP CloudSystem, enables clients to build hybrid clouds with push-button access to externally sourced cloud-based IT resources with out-of-the-box “bursting capability.” This solution also includes automatic, on-demand provisioning of HP 3PAR storage to reduce errors and speed deployment of new services to just minutes.
  • The HP Cloud Protection Program spans people, process, policies and technologies to deliver a comparable level of security for a hybrid cloud as a private internet-enabled IT environment would receive. The program is supported by a Cloud Protection Center of Excellence that enables clients to test HP solutions as well as partner and third-party products that support cloud and virtualization protection.

Enterprise-class services
New and enhanced HP services that provide a cloud infrastructure as a service to address rapid and secure sourcing of compute services include:

  • HP Enterprise Cloud Services – Compute which automates distribution of application workloads across multiple servers to improve application performance. Clients also can improve data protection through new backup and restore options while also provisioning and managing additional virtual local area networks within their cloud environment. A new HP proof-of-concept program allows clients to evaluate the service for existing workloads prior to purchase.
  • HP Enterprise Cloud Services for SAP Development and Sandbox Solution enable clients to evaluate and prototype functionality of SAP enterprise resource planning software via a virtual private cloud, using a flexible, consumption-based model.

Guidance and training
HP has also announced guidance and training to transform legacy data centers for cloud computing:

  • Three HP ExpertONE certifications – HP ASE Cloud Architect, HP ASE Cloud Integrator and HP ASE Master Cloud Integrator, which encompass business and technical content.
  • Expanded HP ExpertONE program that includes five of the industry’s largest independent commercial training organizations that deliver HP learning solutions anywhere in the world. The HP Institute delivers an academic program for developing HP certified experts through traditional two- and four-year institutions, while HP Press has expanded self-directed learning options for clients.
  • HP Cloud Curriculum from HP Education Services offers course materials in multiple languages covering cloud strategies. Learning is flexible, with online virtual labs, self study, classroom, virtual classroom and onsite training options offered through more than 90 HP education centers worldwide.

    The new offerings are the culmination of HP’s experience in delivering innovative technology solutions, as well as providing the services and skills needed to drive this evolution.

  • Driven by HP Financial Services, HP Chief Financial Officer (CFO) Cloud Roundtables help CFOs understand the benefits and risks associated with the cloud, while aligning their organizations’ technology and financial roadmaps.
  • HP Storage Consulting Services for Cloud, encompassing modernization and design, enable clients to understand their storage requirements for private cloud computing as well as develop an architecture that meets their needs.
  • HP Cloud Applications Services for Windows Azure accelerate the development or migration of applications to the Microsoft Windows Azure platform-as-a-service offering.

A recording of the HP Discover Vienna press conference and additional information about HP’s announcements at its premier client event is available at www.hp.com/go/optimization2011.



Herman Mehling asserted “GigaSpaces XAP employs a POJO-based programming model as part of its core” in a deck for his GigaSpaces XAP: The Java PaaS with .NET Support article of 12/6/2011 for DevX:

GigaSpaces XAP offers a Platform as a Service (PaaS) solution that allows developers and enterprises to deploy and scale existing applications as well as build new enterprise applications in the cloud. The solution supports everything from mission-critical applications demanding extreme performance to large-scale Web applications based on popular frameworks such as Java EE (JEE), .NET, Spring and Jetty.

The main components of the GigaSpaces PaaS are:

  1. GigaSpaces XAP is an application server that provides a complete environment for deploying and running enterprise applications, including multi-tenancy, enterprise-grade middleware and auto scaling -- either in or outside the cloud. XAP's architecture enables organizations -- including companies such as Dow Jones, Virgin Mobile and Sempra Energy -- to flexibly move between their data centers and the cloud or to use both simultaneously. This approach offers a clear migration path from a hosted or on-premise environment to the public cloud.
  2. GoGrid is a cloud hosting service that provides the underlying hardware infrastructure. ServePath, the providers of GoGrid, offer dedicated hosting for the enterprise market with the requisite uptime, security, and service level agreements.

Overcoming Enterprise Java Complexity

JEE has been criticized for its complexity. This complexity mostly relates to complex programming models, complex configurations that rely heavily on XML, and bloated specifications, which result in heavyweight and complex application servers.

Recent versions of JEE (Java EE 5 and Java EE 6, which hasn't been finalized yet) are taking the first steps to address such complexity through a POJO-based programming model and the use of profiles.

However, the development community has not waited for the Java Community Process (JCP) to keep up with current trends and needs. Alternative programming models and development frameworks have emerged, the most dominant one being the Spring Framework, which promotes dependency injection, a POJO-based programming model, and aspect-oriented programming. GigaSpaces XAP adopts this style of programming as part of its core OpenSpaces APIs.

Another complexity issue is that JEE and other application servers were designed to solve a set of very similar problems around business logic processing. Data storage/management and messaging issues are not integrated with these app servers. Consequently, the burden of integrating apps and systems falls onto development teams.

GigaSpaces XAP takes a different approach. Space-based Architecture (SBA) calls for viewing the problem from end to end and providing a complete solution without the need for complex integration within the boundaries of applications and systems.

Foundations of GigaSpaces XAP

At the core of GigaSpaces XAP is GigaSpaces' in-memory data grid (IMDG), also known as The Space.

The Space is a data grid implementation whose API is inspired primarily by the JavaSpaces specification and the powerful tuple space model. However, as one would expect from any modern data grid implementation, The Space contains richer functionality, supporting modern paradigms like POJO-based data objects, Java 5 generics, and dependency injection.

The GigaSpaces IMDG supports multiple clustering topologies (partitioned, replicated, master/local, and more) and enables developers to store vast amounts of data in the memory of data grid instances, while maintaining high availability through replication to peer instances in the cluster.

The Space integrates with all major relational databases via the JDBC API and the Hibernate ORM framework.

Read more: Next Page: Cross-Language Support and Interop.


<Return to section navigation list>
